Procrastinato said:
Squilliam said:

I think you're slightly over-guesstimating.

Wouldn't the figure be closer to 20-25% overall, considering the extra PPU overhead from the two extra SPUs and the fact that no multicore processor scales linearly?

I'm not doubting your point, I'm just 'adjusting' it.

Although it seems that the performance-per-dollar (PP$) metric is currently being won by the PC architectures, at least with Nvidia GPUs, and that's with prebuilt systems running Windows Vista. Which is pretty amazing considering that at every level there are extra margins paid to the manufacturers of the different components and to retailers. One could argue that the GPUs are the Cell's biggest competitors in terms of comparable workloads, rather than the Intel x86 CPUs.


Actually, near-linear scaling is the primary benefit of the Cell architecture over architectures that share more direct resources than the Cell does. That's kinda the whole point. There wouldn't be any significant extra PPU overhead either, actually. Jobs can be launched on the SPEs/SPUs and run indefinitely, doing their own gathering and distribution of data, which is probably exactly what F@H does.

I'm not trying to make the Cell look like a superprocessor in the general sense here. I'm just pointing out that Sony really wasn't fibbing when they said it was years ahead of its time, especially since they were talking in the realm of "possibility" as opposed to the actual reality of games as a business. The PS3 Cell's "output" (meaning exclusive games' quality) will continue to grow over time, as devs get more used to it and their engines become more adept at using it. It honestly is a really good processor for simulation tasks, and the fact that it came out in 2006 is downright phenomenal compared to the other processors at the time -- especially game console processors. Sony PR talked it up, but claiming that the Cell didn't deserve at least some of the hype (a fair bit, even) would be just plain untrue.

It will never replace a standard multicore architecture in the home, however.

For an architecture developed in 2006, its performance per transistor is pretty much second to none when you consider the flexibility and performance it has overall. Sure, the GPUs give you more flops per mm^2, but those flops are hardly as flexible. So yep, you're 100% right here; my only real question is about contention on the ring bus and the off-die latency/bandwidth needed to feed another two SPUs.

Though to be honest, long term I wonder if it's going to be considered a brilliant one-off but a market failure. Kind of like how someone might look back in time and lament that the electric car didn't take off and was replaced by the internal combustion engine. I would be surprised if the architecture got 1/10th the software development budget and 1/5th the hardware development budget of your GPU/GPGPU architectures. I wonder how it can keep up with that level of investment long term, especially considering that DX11-compliant GPUs will number in the hundreds of millions in a few short years.



Tease.