PS5 GDC Reveal and PS5 specs/performance Digital Foundry video analysis: 3.5 GHz 8-core Zen 2 CPU along with 10.3 TF RT-capable RDNA 2 GPU, 16 GB GDDR6 RAM, and a super crazy fast 5.5 GB/second SSD

How do you feel?

My brain became bigger su... 21 (30.00%)
I am wet 6 (8.57%)
What did he talk about??? 5 (7.14%)
I want some more info 9 (12.86%)

Total: 41
the-pi-guy said:
https://www.eurogamer.net/articles/digitalfoundry-2020-playstation-5-the-mark-cerny-tech-deep-dive

https://youtu.be/KfM_nTTxftE

"And at the nuts and bolts level, there are still some lingering question marks. Both Sony and AMD have confirmed that PlayStation 5 uses a custom RDNA 2-based graphics core, but the recent DirectX 12 Ultimate reveal saw AMD confirm features that Sony has not, including variable rate shading."

This part seems to lend credence to the post @Pemalite made, that it is quite possible the PS5 was designed on RDNA 1 with some features of RDNA 2.



duduspace11 "Well, since we are estimating costs, Pokemon Red/Blue did cost Nintendo about $50m to make back in 1996"

http://gamrconnect.vgchartz.com/post.php?id=8808363

Mr Puggsly: "Hehe, I said good profit. You said big profit. Frankly, not losing money is what I meant by good. Don't get hung up on semantics"

http://gamrconnect.vgchartz.com/post.php?id=9008994

CrazyGPU said:
the-pi-guy said:
https://www.eurogamer.net/articles/digitalfoundry-2020-playstation-5-the-mark-cerny-tech-deep-dive

https://youtu.be/KfM_nTTxftE

That video shows that frequency has diminishing returns after a point, and that the greater CU count of the Xbox Series X will make a difference. I think Sony's strategy is pricing.

It's a balancing act.

At some point driving frequency will not net you a linear increase in performance, as bottlenecks in the design come into play (e.g. cache sizes).

And the same goes for just adding more CUs: you eventually reach a point where you won't see a linear increase in performance as bottlenecks come into play (e.g. the number of ROPs).

Digital Foundry was just demonstrating one aspect of this issue... And I wouldn't treat it as gospel for how the Xbox Series X/PlayStation 5 scale in terms of clock rate or CU count, because neither console uses the exact same RDNA hardware as the Radeon demonstrated in that video, so the scaling will be completely different.

Sony and Microsoft have spent years working with AMD on fine-tuning their designs; they made the best decisions to meet their design goals and get the most performance possible. Both consoles are great pieces of kit.
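To put rough numbers on why neither knob scales forever: the paper TFLOPS figure is just shader count times clock rate. A back-of-the-envelope calculation using the public specs (this is throughput on paper, not real-world performance):

```python
# Paper FP32 throughput: CUs * 64 shaders/CU * 2 FLOPs per shader per clock * clock (GHz).
def paper_tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000  # GFLOPS -> TFLOPS

print(f"PS5:           {paper_tflops(36, 2.23):.2f} TF")   # ~10.28 TF: fewer CUs, higher clock
print(f"Xbox Series X: {paper_tflops(52, 1.825):.2f} TF")  # ~12.15 TF: more CUs, lower clock
```

Either route to a bigger number eventually runs into the fixed parts of the design (caches, ROPs, bandwidth), which is why the paper figure scales linearly but real games don't.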

DonFerrari said:
the-pi-guy said:
https://www.eurogamer.net/articles/digitalfoundry-2020-playstation-5-the-mark-cerny-tech-deep-dive

https://youtu.be/KfM_nTTxftE

"And at the nuts and bolts level, there are still some lingering question marks. Both Sony and AMD have confirmed that PlayStation 5 uses a custom RDNA 2-based graphics core, but the recent DirectX 12 Ultimate reveal saw AMD confirm features that Sony has not, including variable rate shading."

This part seems to give credential to the post @Pemalite made that it is quite possible PS5 was designed on RDNA1 with some features of RDNA2.

Pretty much just waiting on more info before I do a full breakdown on that... and on what it could potentially mean for gaming... I do tend to get "information" before most people because of the enthusiast tech circles I engage with.

Last edited by Pemalite - on 02 April 2020

--::{PC Gaming Master Race}::--

Pemalite said:
CrazyGPU said:

That video shows that frequency has diminishing returns after a point, and that the greater CU count of the Xbox Series X will make a difference. I think Sony's strategy is pricing.

It's a balancing act.

At some point driving frequency will not net you a linear increase in performance, as bottlenecks in the design come into play (e.g. cache sizes).

And the same goes for just adding more CUs: you eventually reach a point where you won't see a linear increase in performance as bottlenecks come into play (e.g. the number of ROPs).

Digital Foundry was just demonstrating one aspect of this issue... And I wouldn't treat it as gospel for how the Xbox Series X/PlayStation 5 scale in terms of clock rate or CU count, because neither console uses the exact same RDNA hardware as the Radeon demonstrated in that video, so the scaling will be completely different.

Sony and Microsoft have spent years working with AMD on fine-tuning their designs; they made the best decisions to meet their design goals and get the most performance possible. Both consoles are great pieces of kit.

DonFerrari said:

"And at the nuts and bolts level, there are still some lingering question marks. Both Sony and AMD have confirmed that PlayStation 5 uses a custom RDNA 2-based graphics core, but the recent DirectX 12 Ultimate reveal saw AMD confirm features that Sony has not, including variable rate shading."

This part seems to give credential to the post @Pemalite made that it is quite possible PS5 was designed on RDNA1 with some features of RDNA2.

Pretty much just waiting on more info before I do a full breakdown on that... and on what it could potentially mean for gaming... I do tend to get "information" before most people because of the enthusiast tech circles I engage with.

Will look for your breakdown, and also compare it with whatever the "professionals" do, because apart from a few key folks at DF and Ars Technica I wouldn't trust the other "pros". You and CGI would make a good comparison, from the SW and HW sides, of what to expect.

From "Pros" we got from negilible difference, to PS5 is the most revolutionary console, to XSX advantage is massive. While it mostly is XSX is more powerful but the around 15% power difference won't be noticeable for most gamers since the output on the TV will be 4k30fps or the like.




Cerny should have stayed quiet. Making statements without clear facts was a real mistake. The DF video didn't help him, or Sony for that matter.

Mark Cerny: "....PS5 can sustain GPU and CPU at maximum frequency most of the time...."

This doesn't answer anything, and if anything it just made things worse by evading a clear answer, one that we all know or suspect.
The previous assumption, that the PS5 will not always sustain a 2.23 GHz frequency, is therefore correct.

What bothers me is the "most of the time" comment. Man, please define "sustain" and "most", and then please explain to me what the counterbalancing factors or performance-drop estimates are when the GPU/CPU frequency goes down. Also, what is the base clock? For Christ's sake, just say it: it's just 9.2 TFLOPS, period. I have no problem accepting the PS5 as a 9.2 TF system with a peak performance of 10.2 TF if it ends up cheaper and manages 4K 30/60 very well.

In addition, when the DF guy talked (at 07:06) about the Spider-Man game engine rendering the sky, he clearly says that in a similar situation the PS5 would run at a higher frequency because the GPU is not fully occupied. But that is not the case here, because by design the PS5's boost works by assuming the GPU is occupied for the entire frame (to avoid race to idle). What I understand by this is that the PS5 will offer 2.23 GHz as long as it can stay within the given power budget (a fixed CPU+GPU power consumption, for an optimal cooling solution); once the workload demands more than the total power budget allows, the GPU will have to scale down. So in the end, the frequency will increase to meet demand and scale down when not in use or when the workload exceeds the capacity available.
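As a toy model of how I read that power-budget scheme (a sketch with made-up numbers, not Sony's actual algorithm; the cube law is just the standard dynamic-power approximation, and it matches the "drop frequency 10 per cent, save 27 per cent power" figure Cerny has given):

```python
# Toy model of a fixed-power-budget boost clock (illustrative only, not Sony's algorithm).
# Dynamic power ~ f * V^2 with voltage tracking frequency gives power ~ f^3,
# consistent with "drop frequency 10% -> power drops ~27%" (0.9**3 ~ 0.73).

MAX_CLOCK_GHZ = 2.23   # PS5 GPU frequency cap
BUDGET = 1.0           # normalized power budget

def gpu_clock(power_at_max: float) -> float:
    """power_at_max: power this workload would draw at 2.23 GHz, normalized to the budget."""
    if power_at_max <= BUDGET:
        return MAX_CLOCK_GHZ  # within budget: hold the cap
    return MAX_CLOCK_GHZ * (BUDGET / power_at_max) ** (1 / 3)  # cube-root back-off

for p in (0.8, 1.0, 1.1, 1.3):
    print(f"workload at {p:.1f}x budget -> {gpu_clock(p):.2f} GHz")
```

The cube law is what makes "at or near the top frequency most of the time" plausible: even a workload 30% over budget only costs about 8% of the clock (1.3 ** (-1/3) is roughly 0.92).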

This point was proven too when they talked about the Doom Eternal engine (at 08:51): "...but I digress, the point is the clock is variable, it will be going lower in some scenarios...". This is the absolute proof, and my main concern.

What are the tools or built-in measures given to developers to avoid bottlenecking the system when the CPU or GPU throttles down?
This will be a real problem for them at launch. They will have to sacrifice either rendering quality or max resolution to achieve a constant level of detail across the entire gaming experience. I would like to hear them talk about new APIs and new down-to-the-metal coding techniques that could deliver the performance needed in a given moment or scenario, as these could be a game changer for them. Take Nvidia's DLSS 2.0 feature, for example: it can increase performance by up to 33% without visually degrading the picture.
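For reference, the standard engine-side mechanism for exactly this problem is dynamic resolution scaling. A minimal sketch of the idea (hypothetical numbers and names, not a PS5 SDK API):

```python
# Minimal dynamic-resolution controller (hypothetical sketch, not any console SDK API).
# When a frame runs long (e.g. because the clock dipped), render the next frame at a
# lower internal resolution instead of permanently cutting detail.

TARGET_MS = 16.7                  # 60 fps frame budget
MIN_SCALE, MAX_SCALE = 0.7, 1.0   # never drop below 70% of native resolution

def next_scale(scale: float, last_frame_ms: float) -> float:
    correction = (TARGET_MS / last_frame_ms) ** 0.5  # damped so it doesn't oscillate
    return min(MAX_SCALE, max(MIN_SCALE, scale * correction))

scale = 1.0
for ms in (16.0, 19.2, 17.5, 16.4):
    scale = next_scale(scale, ms)
    print(f"frame took {ms:4.1f} ms -> render next frame at {scale:.2f}x resolution")
```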

I suppose the end result will not be that much different from the Xbox Series X in multi-platform titles, if developers are given the proper tools and experience with the PS5 dev kits.
I don't doubt that most of the exclusive PlayStation developers will extract every bit of performance and put the PS5 on par with the Xbox Series X.

I love Sony and I don't care if it is the weaker system, but the approach of obscuring facts and details makes me hesitant about the PS5.

And where the heck is PS1/PS2/PS3 BC? I don't see Cerny ruling it out. I would like a clear position on this. Yes or no, it's that simple.



alexxonne said:
Cerny should have stayed quiet. Making statements without clear facts was a real mistake. The DF video didn't help him, or Sony for that matter.

Mark Cerny: "....PS5 can sustain GPU and CPU at maximum frequency most of the time...."

This doesn't answer anything, and if anything it just made things worse by evading a clear answer, one that we all know or suspect.
The previous assumption, that the PS5 will not always sustain a 2.23 GHz frequency, is therefore correct.

What bothers me is the "most of the time" comment. Man, please define "sustain" and "most", and then please explain to me what the counterbalancing factors or performance-drop estimates are when the GPU/CPU frequency goes down. Also, what is the base clock? For Christ's sake, just say it: it's just 9.2 TFLOPS, period. I have no problem accepting the PS5 as a 9.2 TF system with a peak performance of 10.2 TF if it ends up cheaper and manages 4K 30/60 very well.

In addition, when the DF guy talked (at 07:06) about the Spider-Man game engine rendering the sky, he clearly says that in a similar situation the PS5 would run at a higher frequency because the GPU is not fully occupied. But that is not the case here, because by design the PS5's boost works by assuming the GPU is occupied for the entire frame (to avoid race to idle). What I understand by this is that the PS5 will offer 2.23 GHz as long as it can stay within the given power budget (a fixed CPU+GPU power consumption, for an optimal cooling solution); once the workload demands more than the total power budget allows, the GPU will have to scale down. So in the end, the frequency will increase to meet demand and scale down when not in use or when the workload exceeds the capacity available.

This point was proven too when they talked about the Doom Eternal engine (at 08:51): "...but I digress, the point is the clock is variable, it will be going lower in some scenarios...". This is the absolute proof, and my main concern.

The fact that the PS5 will often run slower than 2.23 GHz wasn't a question or an assumption. Cerny made it very clear that there would be times when it wouldn't run that fast.

He's spent a good amount of time explaining how the clock rate works.  

It's not a 9.2 TF system.  It's a system that runs as fast as it can, and it can usually push out 10.3 TF.  



alexxonne said:
Cerny should have stayed quiet. Making statements without clear facts was a real mistake. The DF video didn't help him, or Sony for that matter.

Mark Cerny: "....PS5 can sustain GPU and CPU at maximum frequency most of the time...."

This doesn't answer anything, and if anything it just made things worse by evading a clear answer, one that we all know or suspect.
The previous assumption, that the PS5 will not always sustain a 2.23 GHz frequency, is therefore correct.

What bothers me is the "most of the time" comment. Man, please define "sustain" and "most", and then please explain to me what the counterbalancing factors or performance-drop estimates are when the GPU/CPU frequency goes down. Also, what is the base clock? For Christ's sake, just say it: it's just 9.2 TFLOPS, period. I have no problem accepting the PS5 as a 9.2 TF system with a peak performance of 10.2 TF if it ends up cheaper and manages 4K 30/60 very well.

In addition, when the DF guy talked (at 07:06) about the Spider-Man game engine rendering the sky, he clearly says that in a similar situation the PS5 would run at a higher frequency because the GPU is not fully occupied. But that is not the case here, because by design the PS5's boost works by assuming the GPU is occupied for the entire frame (to avoid race to idle). What I understand by this is that the PS5 will offer 2.23 GHz as long as it can stay within the given power budget (a fixed CPU+GPU power consumption, for an optimal cooling solution); once the workload demands more than the total power budget allows, the GPU will have to scale down. So in the end, the frequency will increase to meet demand and scale down when not in use or when the workload exceeds the capacity available.

This point was proven too when they talked about the Doom Eternal engine (at 08:51): "...but I digress, the point is the clock is variable, it will be going lower in some scenarios...". This is the absolute proof, and my main concern.

What are the tools or built-in measures given to developers to avoid bottlenecking the system when the CPU or GPU throttles down?
This will be a real problem for them at launch. They will have to sacrifice either rendering quality or max resolution to achieve a constant level of detail across the entire gaming experience. I would like to hear them talk about new APIs and new down-to-the-metal coding techniques that could deliver the performance needed in a given moment or scenario, as these could be a game changer for them. Take Nvidia's DLSS 2.0 feature, for example: it can increase performance by up to 33% without visually degrading the picture.

I suppose the end result will not be that much different from the Xbox Series X in multi-platform titles, if developers are given the proper tools and experience with the PS5 dev kits.
I don't doubt that most of the exclusive PlayStation developers will extract every bit of performance and put the PS5 on par with the Xbox Series X.

I love Sony and I don't care if it is the weaker system, but the approach of obscuring facts and details makes me hesitant about the PS5.

And where the heck is PS1/PS2/PS3 BC? I don't see Cerny ruling it out. I would like a clear position on this. Yes or no, it's that simple.

Seems like you ignored what was asked and answered and just decided to vent.

He is quite clear that there are situations where the GPU would be idle, so there is no point in raising the frequency; he also points out that the devkit gives feedback on the utilization of the GPU/CPU and the frame, so developers can optimize and do the same with fewer resources (or keep the resources and fill the frame with more work).

Last edited by DonFerrari - on 02 April 2020


He wasn't clear. You are being clear when you talk with hard facts, not when you're sugar-coating them with technobabble.

Mark Cerny (35:28, The Road to PS5): "We went with a variable frequency strategy for PS5, which is to say we continuously run the GPU in boost mode."

If we take this literally, then it is not true. The PS5 will not run continuously in boost mode; it will try. It will go up or down.

Why choose words like "continuously", "boost", and "variable" if the frequency will not be sustained? This confuses people; just look at all the articles written because of this. His words were carefully chosen so the PS5 doesn't end up trivialized as the weaker system, and I understand that. But I would prefer a more honest approach. Something like "we chose to design the PS5 with a target price of 399 in mind and great 4K 30/60 fps experiences, while maintaining the accessibility that the PlayStation family has always provided to traditional console gamers and newcomers..." would have been much more easily digested and understood.

The PS5 is a 9.2 TF base system. Whether you want to believe it or not is your choice. But early leaks on frequency (2 GHz) and GPU-codename benchmarks pointed to a 9.2 TF system, and the same system given a 2.23 GHz clock results in 10.2 TF. Hell... Digital Foundry talked about this in 3 separate videos.
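For what it's worth, the 9.2 and 10.2 figures are just the same 36-CU GPU at two different clocks:

36 CUs x 64 shaders x 2 FLOPs per clock x 2.00 GHz = 9.2 TF
36 CUs x 64 shaders x 2 FLOPs per clock x 2.23 GHz = 10.28 TF

So the whole dispute comes down to which clock the GPU actually sits at under load.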

DonFerrari said:

Seems like you ignored what was asked and answered and just decided to vent.

He is quite clear that there are situations where the GPU would be idle, so there is no point in raising the frequency; he also points out that the devkit gives feedback on the utilization of the GPU/CPU and the frame, so developers can optimize and do the same with fewer resources (or keep the resources and fill the frame with more work).

I did not. That is why he talked about game engines traditionally taking GPU race-to-idle factors into account; with the PS5 it is not the same, because the system will trigger a boost for the entire frame (30 or 60), even if the workload doesn't reach max power. I gave the references. Watch them.

Also, giving feedback is not enough. You need new tools to better handle this different approach. You can't expect a developer to design for the Xbox Series X and the PS5 the same way if the two reach their max peak power very differently. That is why it is so important for the PS5 SDK to support new tools, features, and APIs to better balance the PS5 system. And yet they haven't talked about this. And it will be the secret sauce, if any.

Last edited by alexxonne - on 02 April 2020

alexxonne said:

He wasn't clear. You are being clear when you talk with hard facts, not when you're sugar-coating them with technobabble.

Mark Cerny (35:28, The Road to PS5): "We went with a variable frequency strategy for PS5, which is to say we continuously run the GPU in boost mode."

If we take this literally, then it is not true. The PS5 will not run continuously in boost mode; it will try. It will go up or down.

Why choose words like "continuously", "boost", and "variable" if the frequency will not be sustained? This confuses people; just look at all the articles written because of this. His words were carefully chosen so the PS5 doesn't end up trivialized as the weaker system, and I understand that. But I would prefer a more honest approach. Something like "we chose to design the PS5 with a target price of 399 and 4K 30/60 fps in mind, while maintaining the accessibility that the PlayStation family has always provided to traditional console gamers and newcomers..." would have been much more easily digested and understood.

>Why choose words like "continuously", "boost", and "variable" if the frequency will not be sustained?

Because none of those words have anything to do with a specific frequency being sustained.  

Continuously = always

Boost = increase 

Variable = changeable

Boost frequency is always a peak frequency. All of those words describe exactly what Cerny was talking about: how the system works.

The system is always (continuously) running as fast (boost) as it can, depending on what the workload is (variable).

This was a technical conference for game developers, people who should be able to understand "technobabble".

alexxonne said:

The PS5 is a 9.2 TF base system. Whether you want to believe it or not is your choice. But early leaks on frequency (2 GHz) and GPU-codename benchmarks pointed to a 9.2 TF system, and the same system given a 2.23 GHz clock results in 10.2 TF. Hell... Digital Foundry talked about this in 3 separate videos.

1.) We don't know what the base clock is. We don't even know if there is a base clock, because the paradigm for how the system chooses a frequency is the complete opposite of the norm.

2.) From Cerny's comments, we know it spends the majority of its time near the max frequencies. More than likely it'll be closer to a 10 TF machine for games.



alexxonne said:

DonFerrari said:

Seems like you ignored what was asked and answered and just decided to vent.

He is quite clear that there are situations where the GPU would be idle so there is no point in raising frequency, he also point out that the devkit gives feedback on the utilization of GPU/CPU and frame so they can optimize and do the same with less resources (or keep resources and fill with more work).

I did not. That is why he talked about game engines traditionally taking GPU race-to-idle factors into account; with the PS5 it is not the same, because the system will trigger a boost for the entire frame (30 or 60), even if the workload doesn't reach max power. I gave the references. Watch them.

Also, giving feedback is not enough. You need new tools to better handle this different approach. You can't expect a developer to design for the Xbox Series X and the PS5 the same way if the two reach their max peak power very differently. That is why it is so important for the PS5 SDK to support new tools, features, and APIs to better balance the PS5 system. And yet they haven't talked about this. And it will be the secret sauce, if any.

"There's another phenomenon here, which is called 'race to idle'. Let's imagine we are running at 30Hz, and we're using 28 milliseconds out of our 33 millisecond budget, so the GPU is idle for five milliseconds. The power control logic will detect that low power is being consumed - after all, the GPU is not doing much for that five milliseconds - and conclude that the frequency should be increased. But that's a pointless bump in frequency," explains Mark Cerny.

At this point, the clocks may be faster, but the GPU has no work to do. Any frequency bump is totally pointless. "The net result is that the GPU doesn't do any more work, instead it processes its assigned work more quickly and then is idle for longer, just waiting for v-sync or the like. We use 'race to idle' to describe this pointless increase in a GPU's frequency," explains Cerny. "If you construct a variable frequency system, what you're going to see based on this phenomenon (and there's an equivalent on the CPU side) is that the frequencies are usually just pegged at the maximum! That's not meaningful, though; in order to make a meaningful statement about the GPU frequency, we need to find a location in the game where the GPU is fully utilised for 33.3 milliseconds out of a 33.3 millisecond frame.

"So, when I made the statement that the GPU will spend most of its time at or near its top frequency, that is with 'race to idle' taken out of the equation - we were looking at PlayStation 5 games in situations where the whole frame was being used productively. The same is true for the CPU, based on examination of situations where it has high utilisation throughout the frame, we have concluded that the CPU will spend most of its time at its peak frequency."

Put simply, with race to idle out of the equation and both CPU and GPU fully used, the boost clock system should still see both components running near to or at peak frequency most of the time. Cerny also stresses that power consumption and clock speeds don't have a linear relationship. Dropping frequency by 10 per cent reduces power consumption by around 27 per cent. "In general, a 10 per cent power reduction is just a few per cent reduction in frequency," Cerny emphasises.
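Those two figures are consistent with the standard cube-law approximation for dynamic power (P scales with f times V squared, and voltage tracks frequency, so P scales roughly with f cubed). A quick check:

0.90^3 = 0.729, i.e. dropping frequency 10 per cent drops power about 27 per cent;
0.90^(1/3) = 0.965, i.e. shedding 10 per cent of the power costs only about 3.5 per cent of the frequency.

That asymmetry is why a fixed power budget can sit at or near the frequency cap most of the time.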



alexxonne said:

He wasn't clear. You are being clear when you talk with hard facts, not when you're sugar-coating them with technobabble.

Mark Cerny (35:28, The Road to PS5): "We went with a variable frequency strategy for PS5, which is to say we continuously run the GPU in boost mode."

If we take this literally, then it is not true. The PS5 will not run continuously in boost mode; it will try. It will go up or down.

Why choose words like "continuously", "boost", and "variable" if the frequency will not be sustained? This confuses people; just look at all the articles written because of this. His words were carefully chosen so the PS5 doesn't end up trivialized as the weaker system, and I understand that. But I would prefer a more honest approach. Something like "we chose to design the PS5 with a target price of 399 in mind and great 4K 30/60 fps experiences, while maintaining the accessibility that the PlayStation family has always provided to traditional console gamers and newcomers..." would have been much more easily digested and understood.

The PS5 is a 9.2 TF base system. Whether you want to believe it or not is your choice. But early leaks on frequency (2 GHz) and GPU-codename benchmarks pointed to a 9.2 TF system, and the same system given a 2.23 GHz clock results in 10.2 TF. Hell... Digital Foundry talked about this in 3 separate videos.

DonFerrari said:

Seems like you ignored what was asked and answered and just decided to vent.

He is quite clear that there are situations where the GPU would be idle, so there is no point in raising the frequency; he also points out that the devkit gives feedback on the utilization of the GPU/CPU and the frame, so developers can optimize and do the same with fewer resources (or keep the resources and fill the frame with more work).

I did not. That is why he talked about game engines traditionally taking GPU race-to-idle factors into account; with the PS5 it is not the same, because the system will trigger a boost for the entire frame (30 or 60), even if the workload doesn't reach max power. I gave the references. Watch them.

Also, giving feedback is not enough. You need new tools to better handle this different approach. You can't expect a developer to design for the Xbox Series X and the PS5 the same way if the two reach their max peak power very differently. That is why it is so important for the PS5 SDK to support new tools, features, and APIs to better balance the PS5 system. And yet they haven't talked about this. And it will be the secret sauce, if any.

It will be continuous mode. It isn't 9.2 or 10.28 TFLOPS, or 2 GHz to 2.23 GHz. The frequency will vary continuously as needed for both CPU and GPU. And most of the time it will be at the maximum, his words; if you don't want to believe that, then there would be no point in him giving any explanation, as none would satisfy you.

The price isn't defined, so he wouldn't talk about that in any way.

As for the boost, the base clock, or the leaks, that is just nonsense; this wasn't developed in a week or a month to beat Xbox. It is the philosophy they have gone with for this console for a long time.

I have never seen any GDC talk, or any other, that gives details on the development tools. He said in the GDC video that the time for a team that developed for PS4 to get going on PS5 is the shortest ever, like one month. He also said that most of the features can be ignored by devs or will work invisibly to them. And he also said that the tools will allow optimization of the best way the package can be made to fully utilize the system, and devs will be able to continuously monitor it. That seems like enough detail to show the devkit was improved to help with this. He also talks about devs not wanting uncertainty, be it from one PS5 to another working differently due to changes in frequency and thermals, so you can rest easy without worrying.

The PS4 Pro was a lot weaker than the X1X and Sony didn't have any need to hide it; the PS4 was a lot stronger than the X1 and Sony also didn't make much advertising of it. So I'm not sure why you think they are trying to hide the console's weakness.

You just want more information than you got, and are flustered because they are not giving you what you want when you want it.


