selnor said:
That's not a lie; the theoretical numbers they gave are right. The fact that you usually wouldn't come close to those theoretical limits was well known to anyone who read a more in-depth analysis. I don't much care if it ended up deceiving the "Joe Public" who learns about gigaflops to taunt others in pissing contests but never learns what the numbers actually mean in practice. The same thing happened with the gigahertz race between Intel and AMD during the NetBurst era.
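For context on where those theoretical peaks come from: they are just clock rate times SIMD width times issue rate, with no allowance for stalls. Here is a rough sketch of the arithmetic for a Cell-class chip, using the commonly cited figures (3.2 GHz, 4-wide single-precision SIMD, one fused multiply-add per lane per cycle, 6 SPEs available to games); these are my assumptions, not numbers pulled from the article:

```c
#include <stdio.h>

int main(void)
{
    /* Commonly cited Cell/SPE figures (assumptions, not from the article). */
    const double clock_ghz  = 3.2;  /* SPE clock                          */
    const int    simd_lanes = 4;    /* single-precision lanes per SPE      */
    const int    flops_fma  = 2;    /* fused multiply-add = 2 FLOPs/cycle  */
    const int    spes       = 6;    /* SPEs available to PS3 game code     */

    double per_spe = clock_ghz * simd_lanes * flops_fma;  /* GFLOPS */
    double total   = per_spe * spes;

    printf("Per SPE: %.1f GFLOPS, total for %d SPEs: %.1f GFLOPS\n",
           per_spe, spes, total);
    /* Prints 25.6 GFLOPS per SPE and 153.6 GFLOPS total -- a peak that
       assumes every lane issues an FMA every single cycle with no
       stalls or memory waits, which real game code never sustains. */
    return 0;
}
```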
But my point was a different one: much of the "speculative" part of the article was simply written before developers were really put to the test at implementing their work efficiently on the hardware. And remember that the article mostly comes from the perspective of PC coders, used to fairly standard out-of-order processors and little SMP.
For example, the part about the SPEs being useless at AI rests on the assumption of standard algorithms that rely heavily on branch prediction. The same has been said about collision detection code. But different algorithms have been used that the SPEs can run efficiently (Uncharted offloaded AI calculations to the SPEs, and I'm pretty sure there are other cases).
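To illustrate the kind of restructuring involved (purely my own sketch, not code from Uncharted or the article): instead of branching per entity, SPE-friendly code tends to batch the data and turn comparisons into values rather than branches, so an in-order SIMD core never has to predict anything. A minimal branch-free sphere-overlap test in that style:

```c
#include <stddef.h>

/* Hypothetical batched sphere-vs-sphere overlap test, written in the
   branch-free, data-parallel style that suits an in-order SIMD core. */
typedef struct { float x, y, z, r; } Sphere;

void overlap_batch(const Sphere *a, const Sphere *b,
                   unsigned char *hit, size_t n)
{
    for (size_t i = 0; i < n; ++i) {
        float dx = a[i].x - b[i].x;
        float dy = a[i].y - b[i].y;
        float dz = a[i].z - b[i].z;
        float d2 = dx * dx + dy * dy + dz * dz;
        float rs = a[i].r + b[i].r;
        /* The comparison result is stored as data (0 or 1) instead of
           being taken as a branch, so the loop vectorizes cleanly and
           there is nothing for a branch predictor to get wrong. */
        hit[i] = (unsigned char)(d2 <= rs * rs);
    }
}
```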
So I'm just saying: the technical specs part is good enough, though not perfect, and it makes a good read. The speculation about how the CPUs and GPUs would cope with real cases of game programming was a bit academic at the time, and should be taken with many pinches of salt given what has actually happened in practice since 2006. For example, the writer was impressed with Heavenly Sword, and I think we can all agree we've gone well beyond that initial benchmark.