WereKitten said:
selnor said:
WereKitten said:
A word of warning: that article dates back to November 2006.

While I could only find a few problems with the "factual" parts of it (e.g., I don't think it's true that the eDRAM on the Xenos is enough to get HDR+4xAA for free, unless you go sub-HD like Halo 3 did), the speculation about how much of the CPUs' and GPUs' potential can be tapped is naturally quite outdated, as is the info about the memory footprint of the PS3 OS, which has since been reduced.


It is possible that the OS on both consoles has gotten better about resource usage. But all the factual info about the CPUs, memory and GPUs will NEVER change unless they actually physically update the components. So yes, you're right about the OS. But everything else will NOT change, including both companies' lies about GFLOPS. Even though Sony and M$ said "theoretical", they both knew they worded it so the average Joe Public would believe them. Truth is, no game will go over 80 GFLOPS on either console.

That's not a lie; the theoretical numbers they gave are right. The fact that you usually wouldn't come close to those theoretical limits was well known to anyone who read a more in-depth analysis. I don't care much if it turned out to be deceiving for a "Joe Public" who learns about gigaflops to taunt others during piss contests but never learns what they actually mean in practice. The same thing happened with the gigahertz race between Intel and AMD during the NetBurst era.

But my point was another: much of the "speculative" part of the article was simply written before developers were really put to the test at implementing their work efficiently on the hardware. And remember that the article mostly comes from the perspective of PC coders, used to certain kinds of standard out-of-order processors and little SMP.

For example, the part about the SPEs being useless at AI rests on the notion of standard algorithms that benefit heavily from branch prediction. The same thing has been said about collision detection code. But different algorithms have since been used that the SPEs can run efficiently (Uncharted offloaded AI calculations to the SPEs, and I'm pretty sure there are other cases).
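A concrete illustration of the kind of algorithm change being described: replacing data-dependent branches with arithmetic "select" operations, which maps well onto the SPE's select-bits instruction. This is a plain scalar C sketch of the idea, not actual SPE intrinsics code:

```c
/* Branchy version: the compiler may emit conditional jumps, which hurt
   on a core with no branch predictor. */
static int clamp_branchy(int x, int lo, int hi)
{
    if (x < lo) return lo;
    if (x > hi) return hi;
    return x;
}

/* Branchless select: mask is all-ones or all-zeros, so this computes
   (mask ? a : b) with pure bitwise ops, no jump. */
static int select_int(int mask, int a, int b)
{
    return (a & mask) | (b & ~mask);
}

/* Branchless clamp: -(cond) turns a 0/1 comparison result into a
   0/0xFFFFFFFF mask. */
static int clamp_branchless(int x, int lo, int hi)
{
    x = select_int(-(x < lo), lo, x);
    x = select_int(-(x > hi), hi, x);
    return x;
}
```

Inner loops rewritten in this style run at a steady rate regardless of how unpredictable the data is, which is why "SPEs can't do branchy AI code" turned out to be a statement about the algorithms, not the hardware.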

So I'm just saying: the technical specs part is good enough, though not perfect, and it makes a good read. The speculation about how the CPUs and GPUs would cope with real cases of game programming was a bit academic at the time, and should be taken with many pinches of salt, given what has actually happened in practice since 2006. For example, the writer was impressed with Heavenly Sword, and I think we can all agree we've gone way beyond that initial benchmark.

Multithreading has been around for 20 years, and the SPEs on Cell are not the first time PC coders have seen and tested this kind of design. Plenty of manufacturers researched it in the '90s.

But it's interesting to point out to everyone as well: the PS2 actually had a better CPU on the whole than the Xbox 1. And the gap between Cell and Xenon is actually closer than the gap between the PS2's and Xbox 1's CPUs. That is something that will shock many people.

You have to remember he does state that, with time and effective programming, developers can find uses for the SPEs. But at the end of the day it still comes back to spreading 512 KB of L2 cache between 6 SPEs and 1 PPE, as well as having to execute each instruction in order, rather than like a PC CPU, which can reorder the code as it needs to.
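The in-order point can be made concrete: a loop-carried dependency that an out-of-order PC CPU hides automatically has to be broken up by hand on an in-order core like the PPE or an SPE. A plain C sketch of the classic fix, multiple independent accumulators (no SPE intrinsics, illustrative only):

```c
/* Naive sum: every add depends on the previous one, so an in-order
   pipeline stalls for the full add latency on each iteration. */
static float sum_naive(const float *v, int n)
{
    float s = 0.0f;
    for (int i = 0; i < n; ++i)
        s += v[i];
    return s;
}

/* Four independent dependency chains: the core can keep issuing adds
   while earlier ones are still in flight, which is the reordering an
   out-of-order CPU would have done for you. */
static float sum_unrolled(const float *v, int n)
{
    float s0 = 0.0f, s1 = 0.0f, s2 = 0.0f, s3 = 0.0f;
    int i;
    for (i = 0; i + 4 <= n; i += 4) {
        s0 += v[i];
        s1 += v[i + 1];
        s2 += v[i + 2];
        s3 += v[i + 3];
    }
    for (; i < n; ++i)  /* remainder */
        s0 += v[i];
    return (s0 + s1) + (s2 + s3);
}
```

This is the sort of hand-scheduling the article's PC-coder perspective treats as a burden; whether it's a dealbreaker or just a different discipline is exactly what played out after 2006.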

There is actually very little speculation in the article, and when he does speculate, he tells you.