torok said:
At this point, it's a ridiculously outdated and underpowered CPU. Your own slides show it losing to a netbook-level CPU at 1.6 GHz. It has zero advantages. Those 30 bucks could be spent on a better CPU.

Its only advantage was the SPEs, but modern GPGPU usage makes that irrelevant. What is the use of 6 SPEs when you have thousands of cores in a GPU? Even your slides show that offloading these tasks to the GPU resulted in massive gains. If you compare the Cell with the X360 processor (slides), it's just a 3x speedup. By GPGPU standards, that is beyond pathetic. I'm not even going to start on the nightmare of working with an x86 CPU, a GPU, an extra RISC core, and 8 SPUs.
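To put the "thousands of cores" point in concrete terms, here is a minimal CUDA sketch (sizes and values are just illustrative) of the kind of data-parallel work those slides are offloading. One SAXPY kernel, and the runtime fans it out over thousands of GPU threads; compare that with hand-managing DMA transfers into each SPE's 256 KB local store.

```
// Minimal CUDA sketch: SAXPY (y = a*x + y), one element per GPU thread.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;  // 1M elements, purely illustrative
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));  // unified memory: no manual DMA lists
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // 256 threads per block, enough blocks to cover all n elements
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);  // expect 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

That is the whole programming model: write the per-element function, pick a launch size, and the hardware schedules it across however many cores the GPU has.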

The Cell was an incredibly stupid idea, based on the assumption that it made sense to perform GPU-oriented tasks on the CPU. In the end, GPUs are far more massively parallel, and the Cell can't deliver speedups big enough to justify its cost. Add the terrible dev tools for the SPEs, compared with the mature toolkits for GPUs, plus the change in mindset required to work with this aberration, and it is simply not worth it.

To make things even worse, the Cell is also based on the PPC architecture, which is dying a slow death from lack of research while x86 advances at a quick pace. The smaller scale also means higher costs. There is no point in pouring money into researching a new Cell when existing CPUs and GPUs can do the job better.

About the CPU being the bottleneck now, where memory was the bottleneck before, here is the thing: any computer has a bottleneck somewhere. If we magically replaced the Jaguar with a Threadripper, the bottleneck would be the GPU. Throw in a 1070, and now it's the RAM. More RAM? Then the GPU would be the bottleneck again. It's a never-ending game.

Ridiculously outdated, yet it still beats the Jaguar CPUs at floating-point calculations.