DevilRising said: The Cell processor was a pain to develop for, and not a single developer ever really used "all of that power" to the max. So why would they go back?
Actually, devs did use "all the CELL's power". But in many ways it was weaker than people think, and even where it genuinely was strong, it was still hard to feed it the ideal workload.
The two biggest problems here are (in a simplified way):
Clock doesn't matter unless you compare the same architecture. Today's slowest Pentium Dual Core is light-years ahead of the fastest dual-core Pentium 4, even though the latter has a far higher clock.
FLOPS don't matter unless you compare the same architecture. Like Pemalite showed with that Radeon 5K vs. Radeon 7K example, or with current GeForce vs. Radeon GPUs, and yes, the same goes for different CPUs.
That cloth simulation example people show over and over again is one of the benchmarks that scale extremely well with floating-point performance and parallelization. It's just a benchmark though, and a very specific one, too.
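To see why a cloth-style workload scales so well with raw floating-point throughput and parallelization, here is a minimal toy sketch (not the actual benchmark, just an illustration): each particle's integration step depends only on that particle's own state, so the work divides cleanly across cores, SIMD lanes, or SPEs.

```python
def verlet_step(pos, prev_pos, accel, dt):
    """One Verlet integration step for a single particle (1D for brevity)."""
    new_pos = 2.0 * pos - prev_pos + accel * dt * dt
    return new_pos, pos  # new position; current position becomes the previous one

def simulate(particles, accel=-9.81, dt=0.016, steps=100):
    """particles: list of (pos, prev_pos) pairs.

    Every entry is updated independently of all the others, so this loop
    is embarrassingly parallel -- exactly the shape of workload that
    FLOPS-heavy hardware (SPEs, GPUs) chews through effortlessly.
    Most game code does NOT look like this.
    """
    for _ in range(steps):
        particles = [verlet_step(p, pp, accel, dt) for p, pp in particles]
    return particles

# 1000 particles, initially at rest (pos == prev_pos means zero velocity)
cloth = [(float(i), float(i)) for i in range(1000)]
result = simulate(cloth)
```

Real cloth simulation adds spring constraints between neighboring particles, but the per-particle arithmetic stays dominant, which is why the benchmark rewards peak FLOPS so strongly while telling you little about general-purpose performance.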
With the old Xbox One API you can see a massive hit in GPU-based simulation. RAM bandwidth, cache, and yes, those ACEs might not be entirely unimportant here either. The Asynchronous Compute Engines are another thing people heavily misunderstand, though.
Long story short, CELL is outdated. So is the concept behind Sony's Emotion Engine, which actually was a great concept at the time of its development, yet was already outdated a year after release.