
Cerny: Full hardware capabilities of PS4 may take 'three to four years' to be harnessed


Nintendo fans are so insecure. No one's ever said the Wii U was maxed out already, just that it's underpowered by modern standards, which is more or less factual.



petalpusher said:

Unlike the Wii U,

Because it's based on a derivative of a ten-year-old CPU architecture which has been worked around in every way for many years. It's still a very close relative of the GameCube CPU, just with more cache, more cores and a higher frequency. The GPU is nothing new here either; it's basically "Xenos, seven years later" with very few enhancements after all that time. The lack of main memory bandwidth is a step backward, even compared to the current generation. That's basically what makes multiplatform titles struggle on this console when they should have been a formality.

Input bandwidth, raw shading power, and pixel output (fillrate) are the three pillars of rendering performance, and the Wii U doesn't especially shine in any of them. It's still very close to PS3/360 levels, when not worse.

The PS4 and X1 have highly asynchronous compute architectures; they are very GPU-centric consoles with lots of potential waiting to be tapped in that particular field. They will be the reference point for the next five years or so.


Rubbish.
The Wii U's CPU architecture is vastly improved over its older brothers.
For starters, it's not just more caches, cores and frequency: it brings with it an out-of-order (OoO) execution engine which vastly improves efficiency, new SIMD instructions, and wider, faster caches and interconnects.
The difference would be like comparing an Intel Atom (in-order CPU) to AMD's Brazos (out-of-order) at the same clocks and core count. Put it this way: Brazos makes Atom look pathetic when software doesn't take advantage of more than two threads.

The GPU is vastly different too; it's capable of far higher quality effects such as tessellation, which, coincidentally, the PlayStation 3 lacks and the Xbox 360 was very limited with.

It will take time for developers to work around the nuances of the hardware; to expect anything different is silly.
Is it on the same level as an Xbox One or PlayStation 4? Hell no. It's more of a middle step between the Xbox 360 and the Xbox One.

Then again, if you are so worried about graphics and hardware, you wouldn't be using a console at all anyway.




www.youtube.com/@Pemalite

If Cerny has to say it, then that worries me. The PS4 is only a four-year-old PC. I think its power has been harnessed already, folks :(



Pemalite said:

Rubbish.
The Wii U's CPU architecture is vastly improved over its older brothers.
For starters, it's not just more caches, cores and frequency: it brings with it an out-of-order (OoO) execution engine which vastly improves efficiency, new SIMD instructions, and wider, faster caches and interconnects.
The difference would be like comparing an Intel Atom (in-order CPU) to AMD's Brazos (out-of-order) at the same clocks and core count. Put it this way: Brazos makes Atom look pathetic when software doesn't take advantage of more than two threads.


The Wii CPU was already OoO... that's a bad start for you. It's still a PowerPC (750x family) which no one on earth uses anymore, and which was basically designed in the previous century (1999). The Cell and Xenon, however, were in-order CPUs. OoO is better for poorly optimized CPU code in general, but it's not such a big deal.

There are no new SIMD instructions; it's still 2x32-bit SIMD + 1x32-bit integer, with the same half-hundred instruction set as the original Gekko. They've added L1/L2 cache and tripled the cores, which is no miracle on a 45nm process; the CPU remains a very small piece of silicon. It's a short-pipeline processor, which limits its frequency (less than twice Broadway's). It's a mere ~15 GFLOPS CPU, about ten times less powerful than Cell/Xenon; decent in general purpose, but against even a single Xenon core or the Cell's PPE it's still weaker. When devs said it was horrible, they weren't trolling.
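
For a sense of where that ~15 GFLOPS ballpark comes from, here is the peak-FLOPS arithmetic as a quick sketch (the ~1.24 GHz clock and the paired-single FMA throughput of 4 FLOPs/cycle/core are assumptions based on commonly reported Espresso figures, not official specs):

```c
#include <stdio.h>

int main(void) {
    /* Assumed figures: 3 cores, ~1.24 GHz, paired-single FMA unit
       (2 lanes x 2 ops per cycle = 4 FLOPs per cycle per core). */
    const double cores           = 3.0;
    const double clock_ghz       = 1.24;
    const double flops_per_cycle = 4.0;

    double peak_gflops = cores * clock_ghz * flops_per_cycle;

    /* Prints roughly 14.9 GFLOPS, i.e. around an order of magnitude
       below the commonly quoted Xenon/Cell peak figures. */
    printf("Espresso peak: ~%.1f GFLOPS\n", peak_gflops);
    return 0;
}
```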

And it's so much more of the same that it's fully backward compatible with the Gekko (GC) and the Broadway (Wii)...

 

About the GPU: it's still 8 ROPs / 16 TMUs with very limited main memory bandwidth at 12.8 GB/s. You can't expect much from that, nor aim for any kind of serious compute pipeline.
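
As a rough sanity check on those numbers (a sketch: the 64-bit DDR3-1600 interface is the commonly cited configuration, and the ~550 MHz GPU clock is an assumption):

```c
#include <stdio.h>

int main(void) {
    /* Assumed: 64-bit DDR3 bus at 1600 MT/s, 8 ROPs at ~550 MHz. */
    double bus_bytes = 64.0 / 8.0;                     /* 8 bytes/transfer */
    double transfers = 1600e6;                         /* 1600 MT/s        */
    double bandwidth = bus_bytes * transfers / 1e9;    /* GB/s             */

    double rops = 8.0, gpu_clock_hz = 550e6;
    double fillrate = rops * gpu_clock_hz / 1e9;       /* Gpixels/s        */

    printf("Main memory bandwidth: %.1f GB/s\n", bandwidth);  /* 12.8 */
    printf("Peak pixel fill rate:  %.1f Gpix/s\n", fillrate); /* 4.4  */
    return 0;
}
```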



petalpusher said:
Pemalite said:

Rubbish.
The Wii U's CPU architecture is vastly improved over its older brothers.
For starters, it's not just more caches, cores and frequency: it brings with it an out-of-order (OoO) execution engine which vastly improves efficiency, new SIMD instructions, and wider, faster caches and interconnects.
The difference would be like comparing an Intel Atom (in-order CPU) to AMD's Brazos (out-of-order) at the same clocks and core count. Put it this way: Brazos makes Atom look pathetic when software doesn't take advantage of more than two threads.

The Wii CPU was already OoO... that's a bad start for you. It's still a PowerPC (750x family) which no one on earth uses anymore, and which was basically designed in the previous century (1999). The Cell and Xenon, however, were in-order CPUs. OoO is better for poorly optimized CPU code in general, but it's not such a big deal.

 

That's a bad start for you:




It sure does not compare to a 360 Xenon processor... What is your point?



petalpusher said:


The Wii CPU was already OoO... that's a bad start for you. It's still a PowerPC (750x family) which no one on earth uses anymore, and which was basically designed in the previous century (1999). The Cell and Xenon, however, were in-order CPUs. OoO is better for poorly optimized CPU code in general, but it's not such a big deal.



I was comparing it to the Xbox 360 and PlayStation 3; I apologise, I could have been clearer on that point.

As for OoO being better for poorly optimized code, well, that isn't exactly accurate.
The whole idea of OoO execution is that instead of stalling the pipeline whilst the CPU waits on an instruction in the queue, an OoO processor will slot another, independent instruction in so that the CPU is constantly being utilised. Developers generally don't deal with such intricate details unless they're a big first party like Naughty Dog or 343 Industries.
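
A generic illustration of that idea (plain C, nothing Wii U specific): the load below is likely to miss cache, and an out-of-order core can keep executing the independent counter update underneath it, while an in-order core simply waits.

```c
#include <stddef.h>

/* Each iteration does a dependent load (sum += data[idx[i]]) plus some
 * independent integer work (the counter update). On an in-order core the
 * pipeline stalls while the load waits on memory; an out-of-order core
 * can keep issuing the independent instructions around the stall without
 * the programmer scheduling anything by hand.
 * Assumes 'counters' points to at least 8 elements. */
long mixed_work(const long *data, const size_t *idx,
                long *counters, size_t n)
{
    long sum = 0;
    for (size_t i = 0; i < n; ++i) {
        sum += data[idx[i]];          /* likely cache miss: long latency */
        counters[i & 7] += (long)i;   /* independent of the load above   */
    }
    return sum;
}
```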

As for the PowerPC 750x being last century, yeah, I agree to an extent.
Unless you have some low-level information on the changes that have been made to Espresso (and I would like to hear them if you do!), we have no idea what changes Nintendo and IBM have made to the die.
For example, Intel has "evolved" its CPU architecture, which largely stems from the P6 core (with the Pentium 4/D being based on NetBurst instead); even today, my Sandy Bridge-E 6-core/12-thread processor has roots in the P6 core introduced with the Pentium Pro in 1995.
Does that make it last century and thus crap? No. It's still one of the fastest CPUs in existence.

petalpusher said:

There are no new SIMD instructions; it's still 2x32-bit SIMD + 1x32-bit integer, with the same half-hundred instruction set as the original Gekko. They've added L1/L2 cache and tripled the cores, which is no miracle on a 45nm process; the CPU remains a very small piece of silicon. It's a short-pipeline processor, which limits its frequency (less than twice Broadway's). It's a mere ~15 GFLOPS CPU, about ten times less powerful than Cell/Xenon; decent in general purpose, but against even a single Xenon core or the Cell's PPE it's still weaker. When devs said it was horrible, they weren't trolling.


 

You lost the argument when you used gigaflops; they're only a gauge of performance between CPUs of the same type. My CPU can break 100 gigaflops in synthetics, but it's also superior to the Cell in every single way, and here is why: CPUs do more than just floating-point math, and game engines do more than just deal with floating-point numbers.
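
For illustration only, a hypothetical slice of the kind of per-frame gameplay code that dominates many engines, where branch prediction and cache behaviour matter and peak FLOPS tells you nothing:

```c
#include <stddef.h>

typedef struct Entity {
    int state;               /* e.g. 0 = idle, 1 = chasing, 2 = fleeing */
    int health;
    struct Entity *target;
} Entity;

/* Typical gameplay update: branches, pointer chasing and integer
 * comparisons, with no floating-point math in sight. */
void update_entities(Entity *list, size_t count)
{
    for (size_t i = 0; i < count; ++i) {
        Entity *e = &list[i];
        if (e->health <= 0) {
            e->state = 0;                        /* dead -> idle  */
        } else if (e->target && e->target->health > 0) {
            e->state = (e->health > 25) ? 1 : 2; /* chase or flee */
        }
    }
}
```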

So, I assume you have a detailed die shot and understand what everything is, and Nintendo has provided a white paper, and you know every single detail about the processor? You can add new SIMD instructions whilst providing full backwards compatibility with prior variations.
For example, SSE and SSE2.
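
A small x86 sketch of that kind of coexistence (SSE is the example named above, so this is illustrative only and obviously not Espresso code): the newer SSE2 path is used when the compiler targets it, and the plain scalar path still builds and runs on older targets, alongside all existing code.

```c
#include <stddef.h>

#ifdef __SSE2__
#include <emmintrin.h>   /* SSE2: the "new" double-precision SIMD here */
#endif

/* Sum an array of doubles. With __SSE2__ defined, the wider instructions
 * are used; otherwise the scalar fallback does the same job. Adding the
 * extension doesn't break anything that came before it. */
double sum_doubles(const double *v, size_t n)
{
#ifdef __SSE2__
    __m128d acc = _mm_setzero_pd();
    size_t i = 0;
    for (; i + 2 <= n; i += 2)
        acc = _mm_add_pd(acc, _mm_loadu_pd(v + i));
    double pair[2];
    _mm_storeu_pd(pair, acc);
    double sum = pair[0] + pair[1];
    for (; i < n; ++i) sum += v[i];   /* leftover element, if any */
    return sum;
#else
    double sum = 0.0;
    for (size_t i = 0; i < n; ++i) sum += v[i];
    return sum;
#endif
}
```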

If anything, it's a smart decision to retain underlying hardware backwards compatibility; developers have had experience with that from prior generations, so it will make development easier.

 

petalpusher said:

About the GPU: it's still 8 ROPs / 16 TMUs with very limited main memory bandwidth at 12.8 GB/s. You can't expect much from that, nor aim for any kind of serious compute pipeline.

 


You would be surprised how well a desktop GPU with 12.8 GB/s of memory bandwidth can handle games, especially at 720p, and especially modern GPUs with memory-bandwidth-saving technologies like more advanced occlusion culling and AMD's 3Dc texture/normal-map compression. In fact, grab a GPU like the Radeon 6450, which is probably weaker than the Wii U: it can actually handle something like Unigine Heaven rather well at 720p and 30fps with most settings on low, even with some low-factor tessellation. And that's without anything being programmed to the metal!
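
Some hypothetical back-of-the-envelope numbers on why 12.8 GB/s isn't automatically a dead end at 720p (overdraw, depth and texture traffic vary wildly per game, so treat this as order-of-magnitude only):

```c
#include <stdio.h>

int main(void) {
    /* Assumed: 1280x720, 32-bit colour, 60 fps, one full-screen colour
     * write per frame. Real frames add depth, texture reads and overdraw;
     * the point is the order of magnitude. */
    double pixels          = 1280.0 * 720.0;
    double bytes_per_frame = pixels * 4.0;
    double gbps_one_write  = bytes_per_frame * 60.0 / 1e9;

    printf("One 720p colour pass at 60 fps: %.2f GB/s\n",
           gbps_one_write);   /* ~0.22 GB/s of the 12.8 GB/s budget */
    return 0;
}
```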

Even if the GPU is slower in most specs compared to the Xbox 360 and PlayStation 3, I still expect it to do better; it's generally an all-round more efficient architecture.
Besides, you are also excluding the eDRAM from the bandwidth numbers, which, when programmed the right way, can give a real good kick in the pants when needed.

But don't let logic get in the way.




www.youtube.com/@Pemalite

Pemalite said:
petalpusher said:


The Wii CPU was already OoO... that's a bad start for you. It's still a PowerPC (750x family) which no one on earth uses anymore, and which was basically designed in the previous century (1999). The Cell and Xenon, however, were in-order CPUs. OoO is better for poorly optimized CPU code in general, but it's not such a big deal.



I was comparing it to the Xbox 360 and PlayStation 3; I apologise, I could have been clearer on that point.

As for OoO being better for poorly optimized code, well, that isn't exactly accurate.
The whole idea of OoO execution is that instead of stalling the pipeline whilst the CPU waits on an instruction in the queue, an OoO processor will slot another, independent instruction in so that the CPU is constantly being utilised. Developers generally don't deal with such intricate details unless they're a big first party like Naughty Dog or 343 Industries.

As for the PowerPC 750x being last century, yeah, I agree to an extent.
Unless you have some low-level information on the changes that have been made to Espresso (and I would like to hear them if you do!), we have no idea what changes Nintendo and IBM have made to the die.
For example, Intel has "evolved" its CPU architecture, which largely stems from the P6 core (with the Pentium 4/D being based on NetBurst instead); even today, my Sandy Bridge-E 6-core/12-thread processor has roots in the P6 core introduced with the Pentium Pro in 1995.
Does that make it last century and thus crap? No. It's still one of the fastest CPUs in existence.

petalpusher said:

There are no new SIMD instructions; it's still 2x32-bit SIMD + 1x32-bit integer, with the same half-hundred instruction set as the original Gekko. They've added L1/L2 cache and tripled the cores, which is no miracle on a 45nm process; the CPU remains a very small piece of silicon. It's a short-pipeline processor, which limits its frequency (less than twice Broadway's). It's a mere ~15 GFLOPS CPU, about ten times less powerful than Cell/Xenon; decent in general purpose, but against even a single Xenon core or the Cell's PPE it's still weaker. When devs said it was horrible, they weren't trolling.


 

You lost the argument when you used gigaflops; they're only a gauge of performance between CPUs of the same type. My CPU can break 100 gigaflops in synthetics, but it's also superior to the Cell in every single way, and here is why: CPUs do more than just floating-point math, and game engines do more than just deal with floating-point numbers.

So, I assume you have a detailed die shot and understand what everything is, and Nintendo has provided a white paper, and you know every single detail about the processor? You can add new SIMD instructions whilst providing full backwards compatibility with prior variations.
For example, SSE and SSE2.

If anything, it's a smart decision to retain underlying hardware backwards compatibility; developers have had experience with that from prior generations, so it will make development easier.

 

petalpusher said:

About the GPU: it's still 8 ROPs / 16 TMUs with very limited main memory bandwidth at 12.8 GB/s. You can't expect much from that, nor aim for any kind of serious compute pipeline.

 


You would be surprised how well a desktop GPU with 12.8 GB/s of memory bandwidth can handle games, especially at 720p, and especially modern GPUs with memory-bandwidth-saving technologies like more advanced occlusion culling and AMD's 3Dc texture/normal-map compression. In fact, grab a GPU like the Radeon 6450, which is probably weaker than the Wii U: it can actually handle something like Unigine Heaven rather well at 720p and 30fps with most settings on low, even with some low-factor tessellation. And that's without anything being programmed to the metal!

Even if the GPU is slower in most specs compared to the Xbox 360 and PlayStation 3, I still expect it to do better; it's generally an all-round more efficient architecture.
Besides, you are also excluding the eDRAM from the bandwidth numbers, which, when programmed the right way, can give a real good kick in the pants when needed.

But don't let logic get in the way.

PPC families share a lot with each other, but they're still different trees. I don't think you can compare that to Pentium > Sandy Bridge; those are x86, OK, but Espresso remains a pumped-up 750.

About flops: I deliberately said it was decent in general purpose (not in branching, by the way). You're right about CPU GFLOPS being meaningless (to some extent), but you were the one who brought up the SIMD.

eDRAM can be interesting in very specific conditions. Nintendo likes to have some in their consoles (and they have to anyway to remain backward compatible). Ten or even five years ago, I would have said it's a good thing to have some. Nowadays rendering has become so complicated, with so many buffers, that you'll never be able to fit them all in that tiny piece of memory and deal with them efficiently. The Wii U's eDRAM theoretically adds about 70 GB/s of internal bandwidth, which sounds like a lot, but we don't see much of that benefit: first because it's not directly connected to the ROPs (unlike the X360's eDRAM, which was its strongest point), and second because studios don't want to get into this anymore without that direct path to rasterization.
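
A rough illustration of the "too many buffers" point (the render-target layout here is hypothetical; the 32 MB eDRAM capacity is the commonly reported figure):

```c
#include <stdio.h>

int main(void) {
    /* Hypothetical 720p deferred setup: 4 colour G-buffer targets plus a
     * depth buffer, all at 4 bytes per pixel. */
    double pixels  = 1280.0 * 720.0;
    int    targets = 4 + 1;
    double mbytes  = pixels * 4.0 * targets / (1024.0 * 1024.0);

    printf("G-buffer footprint: %.1f MB of a 32 MB eDRAM pool\n", mbytes);
    /* ~17.6 MB before shadow maps, post-process and intermediate buffers,
     * so the working set quickly outgrows the on-chip pool and data ends
     * up shuttling to and from the 12.8 GB/s DDR3 anyway. */
    return 0;
}
```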

Having a real unified memory is a must today: just one big plain pool where you can arrange all your buffers, have huge render targets, and make all these things sing together without moving them back and forth to some other chunk of memory. There can be some benefit, but nothing like adding the two bandwidths together (or even half of that). The DDR3 is still the main thing to consider when it comes to bandwidth (and god, bandwidth is important in a real-world game pipeline).

It's not bad hardware-wise, but the complexity of today's development makes it painful and sometimes very ineffective to deal with.



snyps said:

If Cerny has to say it, then that worries me. The PS4 is only a four-year-old PC. I think its power has been harnessed already, folks :(


Joke post?



"These are the highest quality pixels that anybody has seen"

you don't say