
Why Sony should also use a Cell Processor for PS5 (x86+Cell coprocessor)

SegataSanshiro said:

I recall the PS2 chip cost Sony $27 to put in the PS3 at the time, so it's easy to see why they cut it.

That is just the chip cost, which doesn't account for the extra expenditure on the PCB and its traces, power delivery, controllers and so on, all of which adds up.

The PS1 chip in the PS2 was essentially free, as it took over the job of another chip anyway (it doubled as the PS2's I/O processor).



--::{PC Gaming Master Race}::--

Pemalite said:

It was also the most expensive console of all time if I remember correctly.

Actually I haven't.

Zen is fully backwards compatible with Jaguar from an ISA standpoint.

The PSX hardware was always being used though, even for PS2 games, as it handled the I/O of the entire system, so the costs of its inclusion could be justified.
That wouldn't be the case for Cell in a hypothetical future console.

 

There were a number of consoles more expensive than the PS3, though the PS3 was the most expensive ever to make it into the mainstream.

Obviously the Jaguar was chosen because of the cost.

The PSX hardware was removed from the PS2 slim model; the slim had a slightly higher clock rate, and the I/O controller was replaced with a cheaper one.

SegataSanshiro said:

I recall the PS2 chip cost Sony $27 to put in the PS3 at the time, so it's easy to see why they cut it.

Someone reading this is likely thinking "geez, that's next to nothing": 27 USD per unit makes 270 million USD per 10 million units, and that comes directly out of the profit Sony could make. Expecting the PS3 to sell 10 million units per year, the PS2 hardware would have needed to increase SCE's profit by more than 270 million per year, which obviously wasn't the case; Sony actually complained about people playing upscaled PS2 games on their backwards-compatible PS3s instead of buying PS3 games, which made the situation even worse for Sony.
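To put numbers on that back-of-envelope argument, here is a quick sketch. The $27 per-unit figure and the 10-million-units-per-year assumption come from the posts above; nothing here is an official number.

```python
# Back-of-envelope: what keeping the PS2 chip in every PS3 would cost,
# using only the figures quoted in this thread (not official numbers).
chip_cost_usd = 27            # claimed per-unit cost of the PS2 chip
units_per_year = 10_000_000   # assumed PS3 sales per year

added_cost = chip_cost_usd * units_per_year
print(f"Extra cost per year: ${added_cost / 1e6:.0f}M")  # $270M
```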



Ei Kiinasti.

Eikä Japanisti.

Vaan pannaan jalalla koreasti.

 

Nintendo games sell only on Nintendo system.

bdbdbd said:

There were a number of more expensive consoles than PS3, though PS3 was the most expensive ever that managed to make into mainstream.

Well, the PlayStation 3 here was like $1,000 on release, which is more than the Xbox One X at $650.
And that's not accounting for inflation either.

bdbdbd said:

The PSX hardware was removed from the PS2 slim model, and the slim had slightly higher clock rate and the I/O controller was replaced with a cheaper one.

You are right that the PSX hardware was removed, although I would actually need to see chip sizes and so on to be sure whether it was cheaper.
But by that point the I/O controller should have been a negligible cost anyway.

*****************************

Still waiting on Ruler. You have made me wait all year already.



--::{PC Gaming Master Race}::--

The Cell processor was a pain to develop for, and not a single developer ever really used "all of that power" to the max. So why would they go back?



DevilRising said:
The Cell processor was a pain to develop for, and not a single developer ever really used "all of that power" to the max. So why would they go back?

Actually, devs did use 'all the CELL's power'. But in multiple ways it was weaker than many people think, and where it actually was strong, it was still hard to get the ideal workload.

 

The two biggest problems here are (in a simplified way):
Clock doesn't matter unless you compare the same architecture. Today's slowest Pentium Dual Core is light-years ahead of the fastest Pentium 4 Dual Core, despite the latter having a way higher clock.

FLOPS don't matter unless you compare the same architecture. Like Pemalite showed with that Radeon 5K vs. Radeon 7K example, or with current GeForce vs. Radeon GPUs, and yes, even with different CPUs.
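To make both points concrete, here is a minimal sketch of why peak numbers only compare within an architecture. The core counts and clocks are the commonly cited PS3/PS4 figures (6 game-available SPEs at 3.2 GHz, 8 Jaguar cores at 1.6 GHz); the FLOPs-per-cycle values and especially the sustained-utilization fractions are illustrative assumptions, not measurements.

```python
# Peak throughput = cores x clock x FLOPs/cycle. What fraction of that
# peak real code sustains differs wildly between architectures, which is
# why raw clock or FLOPS comparisons across architectures are meaningless.

def peak_gflops(cores, clock_ghz, flops_per_cycle):
    return cores * clock_ghz * flops_per_cycle

cell_spes = peak_gflops(cores=6, clock_ghz=3.2, flops_per_cycle=8)  # 153.6
jaguar    = peak_gflops(cores=8, clock_ghz=1.6, flops_per_cycle=8)  # 102.4

# Utilization fractions below are invented for illustration: a hand-tuned
# SPE kernel can run near peak, but ordinary game code on the awkward SPE
# model often sustains far less, while out-of-order Jaguar cores hold a
# larger fraction of their (lower) peak on everyday branchy code.
for name, peak, util in [("Cell SPEs", cell_spes, 0.3),
                         ("Jaguar x8", jaguar, 0.6)]:
    print(f"{name}: peak {peak:.1f} GFLOPS, sustained ~{peak * util:.1f} (assumed)")
```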

 

That cloth simulation example people show over and over again is one of the benchmarks that scale extremely well with floating point performance and parallelization. It's just a benchmark though, and a very specific one too.

With the old Xbox One API you can see a massive hit in GPU-based simulation. RAM bandwidth, cache and, yes, those ACEs might not be totally unimportant here either. The Asynchronous Compute Engines, again, are something that people heavily misunderstand, though.

Long story short, CELL is outdated. So is the concept behind Sony's Emotion Engine, which actually was a great concept at the time of development, yet was already outdated a year after release.




At this point, it's a ridiculously outdated and underpowered CPU. Your slides just show it losing to a netbook-level CPU at 1.6 GHz. It has zero advantages. Those 30 bucks could be spent on a better CPU.

Its only advantage was the SPEs, but the modern usage of GPGPU negates that. What is the use of 6 SPEs when you have thousands of cores in a GPU? Even your slides show that offloading these tasks to the GPU resulted in massive gains. If you compare the Cell with the X360 processor (slides), it's just a 3x speedup; by GPGPU standards that is beyond pathetic. I'm not even going to start on the nightmare of working with an x86 CPU, a GPU, an extra RISC core, and 8 SPUs.
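As an aside on why a 3x speedup is underwhelming: offloading is governed by Amdahl's law, so the accelerator's speedup on the offloaded part is what buys you overall gains. A tiny sketch follows; the 3x is the figure quoted above, while the 80% offloadable fraction and the 50x GPU figure are invented purely for illustration.

```python
# Amdahl's law: overall speedup when a fraction p of the work is
# accelerated by a factor s, and the rest runs as before.
def amdahl(p, s):
    return 1 / ((1 - p) + p / s)

p = 0.8                      # assumed offloadable fraction (illustrative)
for s in (3, 50):            # 3x from the slides vs. an illustrative 50x GPU
    print(f"{s}x on the offloaded part -> {amdahl(p, s):.2f}x overall")
# 3x  -> 2.14x overall
# 50x -> 4.63x overall
```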

The Cell was an incredibly stupid idea, based on the assumption that it was intelligent to perform GPU-oriented tasks on the CPU. In the end, GPUs are far more massively parallel, and the Cell can't provide decent speedups to compensate for its cost. If you consider the terrible dev tools for the SPEs compared with the great toolkits for GPUs, plus the change in mindset needed to work with this aberration, it is simply not worth it.

To make things even worse, the Cell is also based on the PPC architecture, which is dying a slow death due to lack of research while x86 advances at a quick pace. The smaller scale also implies larger costs. There is no point in pouring money into researching a new Cell when you can just use existing CPUs and GPUs to do the job better.

About the CPU being a bottleneck while memory was the previous bottleneck, here is the thing: any computer will have a bottleneck. If we magically replaced the Jaguar with a Threadripper, the bottleneck would be the GPU. Throw in a 1070, and now it's the RAM. More RAM? Well, the GPU would be the bottleneck again. It's a never-ending game.
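The shifting-bottleneck point can be made with a toy model: treat the machine as a set of stages, with the system capped by the slowest one. The component names and per-part numbers below are invented for illustration.

```python
# Toy model: a system's throughput is capped by its slowest component,
# so upgrading one part just hands the bottleneck to the next.
def bottleneck(system):
    part = min(system, key=system.get)
    return part, system[part]

system = {"CPU": 30, "GPU": 60, "RAM": 90}  # arbitrary per-part "fps" caps
print(bottleneck(system))                   # ('CPU', 30)  -> CPU-bound

system["CPU"] = 200                         # magic Threadripper swap
print(bottleneck(system))                   # ('GPU', 60)  -> now GPU-bound

system["GPU"] = 144                         # throw in a faster GPU
print(bottleneck(system))                   # ('RAM', 90)  -> now RAM-bound
```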



torok said:
At this point, it's a ridiculously outdated and underpowered CPU. Your slides just show it losing to a netbook-level CPU at 1.6 GHz. It has zero advantages. Those 30 bucks could be spent on a better CPU.

Its only advantage was the SPEs, but the modern usage of GPGPU negates that. What is the use of 6 SPEs when you have thousands of cores in a GPU? Even your slides show that offloading these tasks to the GPU resulted in massive gains. If you compare the Cell with the X360 processor (slides), it's just a 3x speedup; by GPGPU standards that is beyond pathetic. I'm not even going to start on the nightmare of working with an x86 CPU, a GPU, an extra RISC core, and 8 SPUs.

The Cell was an incredibly stupid idea, based on the assumption that it was intelligent to perform GPU-oriented tasks on the CPU. In the end, GPUs are far more massively parallel, and the Cell can't provide decent speedups to compensate for its cost. If you consider the terrible dev tools for the SPEs compared with the great toolkits for GPUs, plus the change in mindset needed to work with this aberration, it is simply not worth it.

To make things even worse, the Cell is also based on the PPC architecture, which is dying a slow death due to lack of research while x86 advances at a quick pace. The smaller scale also implies larger costs. There is no point in pouring money into researching a new Cell when you can just use existing CPUs and GPUs to do the job better.

About the CPU being a bottleneck while memory was the previous bottleneck, here is the thing: any computer will have a bottleneck. If we magically replaced the Jaguar with a Threadripper, the bottleneck would be the GPU. Throw in a 1070, and now it's the RAM. More RAM? Well, the GPU would be the bottleneck again. It's a never-ending game.

Ridiculously outdated but still beating Jaguar CPUs at floating point calculations.



captain carot said:
DevilRising said:
The Cell processor was a pain to develop for, and not a single developer ever really used "all of that power" to the max. So why would they go back?

Actually, devs did use 'all the CELL's power'. But in multiple ways it was weaker than many people think, and where it actually was strong, it was still hard to get the ideal workload.

The two biggest problems here are (in a simplified way):
Clock doesn't matter unless you compare the same architecture. Today's slowest Pentium Dual Core is light-years ahead of the fastest Pentium 4 Dual Core, despite the latter having a way higher clock.

FLOPS don't matter unless you compare the same architecture. Like Pemalite showed with that Radeon 5K vs. Radeon 7K example, or with current GeForce vs. Radeon GPUs, and yes, even with different CPUs.

That cloth simulation example people show over and over again is one of the benchmarks that scale extremely well with floating point performance and parallelization. It's just a benchmark though, and a very specific one too.

With the old Xbox One API you can see a massive hit in GPU-based simulation. RAM bandwidth, cache and, yes, those ACEs might not be totally unimportant here either. The Asynchronous Compute Engines, again, are something that people heavily misunderstand, though.

Long story short, CELL is outdated. So is the concept behind Sony's Emotion Engine, which actually was a great concept at the time of development, yet was already outdated a year after release.

The Emotion Engine had the same architecture as the PSX and N64. It was already outdated before the PS2 was released, but Sony chose an architecture the developers were already familiar with (technically, so did everyone else: the Dreamcast had the same architecture as the Saturn, the Gamecube shared an architecture with Power Macs, and the Xbox had a Celeron inside).

Yeah, the problem with the Cell, as you point out, is optimising the workload; if you don't do it, the processor isn't going to perform well.

Errorist76 said:
torok said:
At this point, it's a ridiculously outdated and underpowered CPU. Your slides just show it losing to a netbook-level CPU at 1.6 GHz. It has zero advantages. Those 30 bucks could be spent on a better CPU.

Its only advantage was the SPEs, but the modern usage of GPGPU negates that. What is the use of 6 SPEs when you have thousands of cores in a GPU? Even your slides show that offloading these tasks to the GPU resulted in massive gains. If you compare the Cell with the X360 processor (slides), it's just a 3x speedup; by GPGPU standards that is beyond pathetic. I'm not even going to start on the nightmare of working with an x86 CPU, a GPU, an extra RISC core, and 8 SPUs.

The Cell was an incredibly stupid idea, based on the assumption that it was intelligent to perform GPU-oriented tasks on the CPU. In the end, GPUs are far more massively parallel, and the Cell can't provide decent speedups to compensate for its cost. If you consider the terrible dev tools for the SPEs compared with the great toolkits for GPUs, plus the change in mindset needed to work with this aberration, it is simply not worth it.

To make things even worse, the Cell is also based on the PPC architecture, which is dying a slow death due to lack of research while x86 advances at a quick pace. The smaller scale also implies larger costs. There is no point in pouring money into researching a new Cell when you can just use existing CPUs and GPUs to do the job better.

About the CPU being a bottleneck while memory was the previous bottleneck, here is the thing: any computer will have a bottleneck. If we magically replaced the Jaguar with a Threadripper, the bottleneck would be the GPU. Throw in a 1070, and now it's the RAM. More RAM? Well, the GPU would be the bottleneck again. It's a never-ending game.

Ridiculously outdated but still beating Jaguar CPUs at floating point calculations.

That's because the Jaguar can do so much more. If it were only about floating point units, everyone would be using nothing but GPUs, because that is what they're good at. And the way GPUs work is roughly how the Cell works as well. And it's pretty much outdated because it's not being developed any further.
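For what it's worth, the "GPUs work roughly like the Cell" point is about data parallelism: both are built to apply the same arithmetic across many elements at once, rather than to run branchy general-purpose code well. A generic numpy illustration, nothing Cell-specific:

```python
import numpy as np

# Data-parallel workload shape: the same float math over a big batch.
# This is what SPEs and GPU cores are built for; branchy, pointer-chasing
# code is where a general-purpose core earns its keep instead.
data = np.arange(1_000_000, dtype=np.float32)

scalar_style = [x * 2.0 + 1.0 for x in data[:4]]  # element at a time
vector_style = data * 2.0 + 1.0                   # whole batch at once
print(scalar_style, vector_style[:4])
```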



Ei Kiinasti.

Eikä Japanisti.

Vaan pannaan jalalla koreasti.

 

Nintendo games sell only on Nintendo system.

bdbdbd said:

The Emotion Engine had the same architecture as the PSX and N64. It was already outdated before the PS2 was released, but Sony chose an architecture the developers were already familiar with (technically, so did everyone else: the Dreamcast had the same architecture as the Saturn, the Gamecube shared an architecture with Power Macs, and the Xbox had a Celeron inside).

Yeah, the problem with the Cell, as you point out, is optimising the workload; if you don't do it, the processor isn't going to perform well.

I didn't mean the MIPS architecture, which wasn't any more outdated than x86 at that time because it still saw development. And no, even the MIPS core wasn't 'the same' as in the PS1, since it got massive enhancements to the instruction set (MIPS III plus major parts of MIPS IV), VLIW, SIMD, way more cache...

The most important part here is the two vector units that gave the Emotion Engine its 'insane' floating point performance of 6.2 GFLOPS. Thing is, Sony went for that power for graphics calculations: basically all the geometry work was done on the CPU, to do stuff that graphics accelerators in the late nineties couldn't. But at the same time others were working on vertex shaders, pixel shaders and so on.
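To picture what "geometry work on the CPU" means: the vector units spent their multiply-accumulate hardware transforming batches of vertices by 4x4 matrices, thousands of times per frame. A minimal generic sketch in numpy, not PS2 code; the matrix and vertices are made up.

```python
import numpy as np

# The bread-and-butter job of the EE's vector units: multiply every vertex
# by a 4x4 matrix. Each vertex costs 16 multiplies and 12 adds, which is
# exactly the multiply-accumulate work FMAC units are built for.
mvp = np.array([[1, 0, 0, 0],        # made-up model-view-projection matrix
                [0, 1, 0, 0],
                [0, 0, 1, -5],
                [0, 0, 0, 1]], dtype=np.float32)

vertices = np.array([[0, 0, 0, 1],   # homogeneous coordinates (x, y, z, w)
                     [1, 0, 0, 1],
                     [0, 1, 0, 1]], dtype=np.float32)

transformed = vertices @ mvp.T       # one matrix-vector product per vertex
print(transformed)                   # third column shows z translated by -5
```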

My English is failing me right now because I'm tired and have had too much/not enough beer, but long story short: while it was very flexible for 3D graphics when it was in development, Sony's approach as a whole was totally outdated a year after release.

CELL is a little different. Its approach is totally outdated as well, and people should finally accept that. It definitely wasn't a miracle machine, and no one needs it anymore. And no one needs it less than Sony.



captain carot said:

I didn't mean the MIPS architecture, which wasn't any more outdated than x86 at that time because it still saw development. And no, even the MIPS core wasn't 'the same' as in the PS1, since it got massive enhancements to the instruction set (MIPS III plus major parts of MIPS IV), VLIW, SIMD, way more cache...

The most important part here is the two vector units that gave the Emotion Engine its 'insane' floating point performance of 6.2 GFLOPS. Thing is, Sony went for that power for graphics calculations: basically all the geometry work was done on the CPU, to do stuff that graphics accelerators in the late nineties couldn't. But at the same time others were working on vertex shaders, pixel shaders and so on.

My English is failing me right now because I'm tired and have had too much/not enough beer, but long story short: while it was very flexible for 3D graphics when it was in development, Sony's approach as a whole was totally outdated a year after release.

CELL is a little different. Its approach is totally outdated as well, and people should finally accept that. It definitely wasn't a miracle machine, and no one needs it anymore. And no one needs it less than Sony.

It was outdated because hardware at the time was moving towards simpler designs that were easier to program for, whereas the PS2 stuck with the old design we had seen since the SNES. The MIPS architecture is still used and developed even today, but it is not the most practical processor to use unless you need cheap modularity.



Ei Kiinasti.

Eikä Japanisti.

Vaan pannaan jalalla koreasti.

 

Nintendo games sell only on Nintendo system.