
Forums - Sony Discussion - PS5 GDC Reveal and PS5 specs/performance Digital Foundry Video analysis : 3.5 Ghz 8 core Zen 2 CPU along with 10.3 TF RDNA 2 RT capable and 16GB GDDR6 RAM and also super crazy fast 5.5 GB/Second S

 

Poll: How do you feel
- My brain become bigger su... — 21 (30.00%)
- I am wet — 6 (8.57%)
- What did he talked about??? — 5 (7.14%)
- I want some more info — 9 (12.86%)
Total: 41

The CPU and GPU don't always run at max frequency, and always trying to run at max frequency means they unnecessarily ramp up with little to no workload (race to idle). I think that's what Sony has been trying to address, all the way back to when they announced the PS5 would be a greener console (which was met with instant backlash).

Take a car analogy: imagine you apply constant throttle, yet to keep your speed constant on a hilly road you disengage the clutch (shift to neutral) for a moment 30 times per second (waiting for vsync). The GPU is the engine: with a constant fuel supply and no load, it revs up, then drops again when the load returns. That produces unnecessary heat and burns extra fuel (power). So Sony is essentially implementing cruise control for a console, adjusting the throttle based on the predicted load.
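To make that cruise-control idea concrete, here's a toy sketch in Python. Everything in it is my own illustration, not Sony's actual algorithm: the power model (P = base + k·load·f³) and every constant are invented; only the 2.23 GHz cap comes from the announced specs.

```python
# Toy "cruise control" for a chip: instead of racing to max clock
# regardless of load, pick the highest clock whose *predicted* power
# draw still fits inside a fixed budget.

POWER_BUDGET_W = 180.0   # invented total SoC budget
BASE_POWER_W = 30.0      # invented static (leakage) power

def clock_for_load(predicted_load, f_max=2.23, k=14.0):
    """Highest clock (GHz) that keeps estimated power within budget.

    Rough model: P = BASE + k * load * f**3, since dynamic power grows
    roughly with f**3 when voltage scales with frequency.
    """
    if predicted_load <= 0:
        return f_max  # nothing to do; the cap is trivially affordable
    f = ((POWER_BUDGET_W - BASE_POWER_W) / (k * predicted_load)) ** (1 / 3)
    return min(f_max, f)

# A light menu workload can sit at the frequency cap inside the budget;
# a fully loaded scene backs the clock off slightly to stay inside it.
print(clock_for_load(0.2))   # light load: sits at the 2.23 cap
print(clock_for_load(1.0))   # heavy load: a bit below the cap
```

The point is the shape, not the numbers: power is held roughly constant, and frequency, not temperature, is what gives.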

With a constant power budget you avoid unnecessary heat, and you avoid drawing extra power for a CPU and/or GPU doing very little work at max frequency. I see this daily in GT Sport: while in the menus the fan starts whining like crazy, because the game is rendering the menu at a high refresh rate (discarding most frames), and since the lobby menu is very cheap to render, it taxes the CPU and GPU much harder than the actual race does. I've gotten so used to it that I wait until I hear the fan settle down before getting up to get ready to race, usually when auto-drive kicks in, right in time for the start countdown. I see the same thing in many PC games: go into a menu and wham, the frame rate climbs to 1000 fps!
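That 1000 fps menu problem is exactly what a frame limiter fixes. A minimal sketch (illustrative only; real games cap frames in the engine or driver, and here a CPU sleep stands in for the whole render pipeline):

```python
import time

def render_menu_frame():
    pass  # stand-in for a trivially cheap menu draw

def run_uncapped(seconds=0.25):
    """Redraw the menu as fast as possible: thousands of identical
    frames, most discarded, all of them burning power."""
    frames, end = 0, time.monotonic() + seconds
    while time.monotonic() < end:
        render_menu_frame()
        frames += 1
    return frames

def run_capped(seconds=0.25, cap_fps=60):
    """Same menu with a frame limiter: render, then sleep (idle)
    until the next frame is due."""
    frame_time = 1.0 / cap_fps
    frames, end = 0, time.monotonic() + seconds
    while time.monotonic() < end:
        start = time.monotonic()
        render_menu_frame()
        frames += 1
        remaining = frame_time - (time.monotonic() - start)
        if remaining > 0:
            time.sleep(remaining)  # hardware idles instead of spinning
    return frames

print(f"uncapped: {run_uncapped()} frames, capped: {run_capped()} frames")
```

The uncapped loop renders orders of magnitude more identical frames in the same quarter second, which is why the fan spins up in a "cheap" menu.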

But conserving power is a dirty word for power-hungry gamers, lol. This system will stop simple screens from drawing more power and generating more heat than necessary, while still letting taxing screens run as fast as possible. It sounds like a paradox, more difficult work requiring less power, but it's just like a car: flooring it without a load will destroy your engine; flooring it with a load, not so much.

That's my 2 cents.

Btw, I have disabled CPU boost on my gaming laptop since it kept running into the thermal cap in FH4, throttling down the GPU. Without CPU boost (maximum processor state set to 99%) I got better frame rates in FH4. However, other CPU-bound games work better with boost on and don't run into thermal problems. It would be useful if my laptop had SmartShift and adjusted max frequencies based on a power budget.



DonFerrari said:

For someone claiming others have reading comprehension problems you seem to be the one with it.

He said that under the previous paradigm of GPU development 2 GHz was unreachable (not that the PS5 couldn't reach it, but that GPUs didn't get that far; just look at the Xbox for examples, or the current gen, or all other GPUs). But they were able to work around that paradigm and get over 2.23 GHz (it can go further, but then it would be unstable, so you also got your answer about it being able to run at 2.23 GHz most of the time, and they can't say if it is 50, 80 or 99% of the time because that will be a game by game depending on the developer not the HW).

You seem to have lost the plot. Be happy believing what you want. Just because you don't agree with me doesn't mean you are right. You disagree by choice, pitting your opinion against an established FACT, and I will respect that.

But I use facts and references before I even argue. I do not let my loyalty to Sony interfere. Being loyal is one thing; being a fanatic is another. That kind of fanaticism can have devastating effects, such as the PS3 being a disaster at the beginning. Constantly denying the facts, the articles written by experts, and Mark Cerny's own words can be defined as FANATIC behavior. Twisting reality doesn't make a thing more real, my friend.

"Previous paradigm" obviously means the way it has been until now, just like the PS4, PS4 Pro, Xbox One, Xbox One X, and the new Xbox Series X: a fixed clock frequency with a fixed performance capability. How much tutoring do you need to understand something so simple?

Still, you wanted to argue with me just for the sake of arguing, yet you say the same thing as me in your last sentence: "...they can't say if it is 50, 80 or 99% of the time because that will be a game by game depending on the developer not the HW..."

Let me be clear: I support Sony, and I want them to succeed. I hope they end up with a powerful and balanced system for the PS5. The changes and new features coming to the PS5 will without doubt be game changers; I'm not debating that. I didn't support Microsoft this generation, even though they ended up with the most powerful console. There is a kind of through-line between brand, hardware, and game content that adds up like a recipe for a successful console launch, and that is what I'm properly criticizing.

Mark Cerny is talented, but he has already made mistakes. The PlayStation Vita was a total failure because of him, and the Pro system didn't exactly hit the nail on the head either. He didn't want to listen back then. I especially don't trust him when he avoids simple language and uses unusual terms instead, like that stupid word "paradigm" just to avoid saying "like before". I can assure you some people here will put their life on the line thinking the PS5 has some paradigmatic ability just because Cerny said something about it, and that it will beat the crap out of the Xbox Series X. Please...

I love and enjoy new technology, but there is a limit. If you live with a technophilic kind of fetish, it becomes harder to accept other critics even when their claims are true. Focusing on non-essential features is a waste of resources when 90% of potential buyers don't care about them. And I'll put myself in this example, because I may well be part of the other 10% that I criticize so harshly: I wanted full BC for the PS5, but if it's going to cost too much and risk another PS3-style meltdown of Sony's cash, I really don't want it; BC for PS4 titles alone will be enough. For the PS5 to be a success it will need to be affordable, and I had to accept that whether I like it or not, even if I could pay for a $10,000 version of it. If you really love Sony, you should know what to ask of them and what NOT to.

There is a north star with core values established since the PS One launch: power, capacity, and affordability. The PS2 followed the recipe and was the greatest success in gaming history. The PS3 tried to change that core recipe and failed. The PS4 retook the original course and found major success again. I believe Cerny genuinely wants the PS5 to succeed, but in my opinion he is not following the previously established road. Some details of his PS5 vision are a risk, a risk I don't want Sony to take, at least not now.

And finally... if using hard facts, references, and the truth makes me a newbie, an ignorant, and uneducated... then I will gladly be one. Have a nice day.

SvennoJ said:

The CPU and GPU don't always run at max frequency, and always trying to run at max frequency means they unnecessarily ramp up with little to no workload (race to idle). I think that's what Sony has been trying to address, all the way back to when they announced the PS5 would be a greener console (which was met with instant backlash).

Take a car analogy: imagine you apply constant throttle, yet to keep your speed constant on a hilly road you disengage the clutch (shift to neutral) for a moment 30 times per second (waiting for vsync). The GPU is the engine: with a constant fuel supply and no load, it revs up, then drops again when the load returns. That produces unnecessary heat and burns extra fuel (power). So Sony is essentially implementing cruise control for a console, adjusting the throttle based on the predicted load.

Excellent analogy. I suppose there is a special API or module built in somewhere for that task... maybe at the cache level of the I/O complex, so it retains all the data in memory to maintain bandwidth without a performance drop whenever the PS5 needs to use the cruise-control setting you described.

On an ordinary PC, constantly changing frequencies at the software level produces a stuttering effect whenever the GPU clock is too high. Flashing/hacking the vBIOS is better if you know what you are doing. If the PS5 drops CPU voltage to maintain the GPU, it will need to sustain the change in output power somewhere for a brief moment, and that requires more transistors or a better process to build them... which is probably the reason the PS5's CUs are bigger than the previous generation's, and even more surprising considering the PS4's node process was 14 nm while the PS5's is 7 nm. If you want to tolerate more voltage for a higher frequency, you need better capacitors and heatsinks across the whole board, not just the main die's transistors.

Last edited by alexxonne - on 03 April 2020

SvennoJ said:


Btw, I have disabled CPU boost on my gaming laptop since it kept running into the thermal cap in FH4, throttling down the GPU. Without CPU boost (maximum processor state set to 99%) I got better frame rates in FH4. However, other CPU-bound games work better with boost on and don't run into thermal problems. It would be useful if my laptop had SmartShift and adjusted max frequencies based on a power budget.

I have a Ryzen 2700U notebook which I actually used for some light gaming while I was training interstate... Well. Used to. I don't use it anymore thanks to the Switch... I probably should.

Throttling the CPU so that the GPU can "boost" is not an alien concept to me... The TDP of the APU is 15 W, which never changes, so if I reduce the CPU's portion of the TDP by limiting its clock rate to 60%, the GPU's portion of the TDP can increase. It actually allows Overwatch to hit 720p@60fps; otherwise the resolution takes a bit of a hit.
It's essentially what Sony is adopting with SmartShift, which takes the idea a step further.
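As a back-of-envelope version of that trade-off: assume (my numbers, not measured) the CPU would draw about 11 W of the 15 W envelope at full clocks, and that its power scales roughly linearly with the clock cap. Real scaling is steeper, which only makes the effect stronger.

```python
# Fixed shared envelope: whatever the CPU doesn't draw, the GPU can use.
APU_TDP_W = 15.0  # the Ryzen 2700U's envelope, fixed

def split_budget(cpu_fraction_of_max, cpu_full_power_w=11.0):
    """Estimate the CPU/GPU power split for a given CPU clock cap.
    cpu_full_power_w is an invented figure, for illustration only."""
    cpu_w = cpu_full_power_w * cpu_fraction_of_max
    gpu_w = APU_TDP_W - cpu_w
    return cpu_w, gpu_w

for frac in (1.0, 0.6):
    cpu_w, gpu_w = split_budget(frac)
    print(f"CPU capped at {frac:.0%}: CPU ~{cpu_w:.1f} W, GPU gets ~{gpu_w:.1f} W")
```

With these made-up numbers, capping the CPU at 60% roughly doubles the GPU's share (4 W to 8.4 W), which is the kind of headroom that turns a resolution drop into 720p@60.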

I don't know what I am reading in this thread from other users... But I honestly can't be bothered.



--::{PC Gaming Master Race}::--

Pemalite said:

I have a Ryzen 2700U notebook which I actually used for some light gaming while I was training interstate... Well. Used to. I don't use it anymore thanks to the Switch... I probably should.

Throttling the CPU so that the GPU can "boost" is not an alien concept to me... The TDP of the APU is 15 W, which never changes, so if I reduce the CPU's portion of the TDP by limiting its clock rate to 60%, the GPU's portion of the TDP can increase. It actually allows Overwatch to hit 720p@60fps; otherwise the resolution takes a bit of a hit.
It's essentially what Sony is adopting with SmartShift, which takes the idea a step further.

I don't know what I am reading in this thread from other users... But I honestly can't be bothered.

Hi there

I suppose that is because the Ryzen APU is monitoring the TDP all the time; if it goes too high you will blow the capacitors, depending on the chip, in this case the main APU's capacitors. By reducing the CPU portion (when possible), you are cheating the APU's TDP monitor, giving yourself more tolerance for a higher GPU frequency. It can be a nice boost, given that it would be better sustained and not throttled back when heat is an issue, and with no apparent damage to capacitor life. Obviously this will help in some games but not in others that don't depend on frequency (for example, a game that depends on shader unit count rather than clocks, or a game that depends more on the CPU than the GPU).

I have a question: if you scaled the CPU clock down to 60%, what percentage increase did you achieve with the GPU, 40%? If I'm not mistaken, starting from 1300 MHz, a 40% clock increase would be 1820 MHz... I suppose you didn't end up that high, but if you had, the capacitors would have endured while the VRAM would die prematurely if not well ventilated. Bad VRAM is a pain in the ass; it can show up anytime without warning and is not immediately obvious to diagnose. Lots of used cards on eBay are like this: they crash randomly, whether you're playing a game or using Word.

It's very similar to what AMD/Nvidia/Intel do, but in reverse: whenever a chip design or manufacturing process is flawed, they deactivate some internals, tweak the clocks, and sell it as a lower-tier product.

Last edited by alexxonne - on 03 April 2020

the-pi-guy said:

You're only half reading my posts.  

Point taken.



alexxonne said:
Pemalite said:

I have a Ryzen 2700U notebook which I actually used for some light gaming while I was training interstate... Well. Used to. I don't use it anymore thanks to the Switch... I probably should.

Throttling the CPU so that the GPU can "boost" is not an alien concept to me... The TDP of the APU is 15 W, which never changes, so if I reduce the CPU's portion of the TDP by limiting its clock rate to 60%, the GPU's portion of the TDP can increase. It actually allows Overwatch to hit 720p@60fps; otherwise the resolution takes a bit of a hit.
It's essentially what Sony is adopting with SmartShift, which takes the idea a step further.

I don't know what I am reading in this thread from other users... But I honestly can't be bothered.

Hi there

I suppose that is because the Ryzen APU is monitoring the TDP all the time; if it goes too high you will blow the capacitors, depending on the chip, in this case the main APU's capacitors.

Uh. No.
You cannot exceed the TDP; it is a hard limit in the firmware.

However... me being an enthusiast with no fucks to give, I have employed various tools to tweak the firmware and push the TDP limit to 25 W.

alexxonne said:

By reducing the CPU portion (when possible), you are cheating the APU's TDP monitor, giving yourself more tolerance for a higher GPU frequency. It can be a nice boost, given that it would be better sustained and not throttled back when heat is an issue, and with no apparent damage to capacitor life.

Uh. No.

The chip shares its TDP between the CPU and GPU... A lower load on either will allow the other to ramp up in clock rate.

alexxonne said:

Obviously this will help in some games but not in others that don't depend on frequency (for example, a game that depends on shader unit count rather than clocks, or a game that depends more on the CPU than the GPU).

It will assist in all games that are GPU-bound... and considering we are talking about integrated graphics running at 1080p... that is pretty much every game.

alexxonne said:

I have a question: if you scaled the CPU clock down to 60%, what percentage increase did you achieve with the GPU, 40%? If I'm not mistaken, starting from 1300 MHz, a 40% clock increase would be 1820 MHz... I suppose you didn't end up that high, but if you had, the capacitors would have endured while the VRAM would die prematurely if not well ventilated. Bad VRAM is a pain in the ass; it can show up anytime without warning and is not immediately obvious to diagnose. Lots of used cards on eBay are like this: they crash randomly, whether you're playing a game or using Word.

It's very similar to what AMD/Nvidia/Intel do, but in reverse: whenever a chip design or manufacturing process is flawed, they deactivate some internals, tweak the clocks, and sell it as a lower-tier product.

1,300 MHz is the upper limit of the GPU boost. However, its average clock rate is lower when the CPU is under 100% load; it cannot exceed that hard limit.
The CPU will go "up to 3.8 GHz on all 4 CPU cores."
The GPU will go "up to 1.3 GHz on all 10 CUs."

Limiting one gives the other headroom to achieve higher average clock rates... This is the same principle the PS5 will work on, and it is entirely under developer control.
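That headroom effect can be shown with a toy shared-budget model (every number here is invented except the 1.3 GHz GPU ceiling quoted above):

```python
# One power budget shared by CPU and GPU: lowering CPU load raises the
# clock the GPU can sustain, up to its hard ceiling.

BUDGET_W = 10.0        # invented shared power budget
CPU_MAX_W = 4.0        # invented CPU draw at 100% load
GPU_CEILING_GHZ = 1.3  # the hard GPU boost limit
GPU_W_PER_GHZ = 6.0    # invented GPU power cost per GHz

def sustained_gpu_clock(cpu_load):
    """GPU clock that fits in whatever budget the CPU leaves over."""
    leftover = BUDGET_W - CPU_MAX_W * cpu_load
    return min(GPU_CEILING_GHZ, max(0.0, leftover / GPU_W_PER_GHZ))

print(f"busy CPU (100% load):  GPU ~{sustained_gpu_clock(1.0):.2f} GHz")
print(f"capped CPU (40% load): GPU ~{sustained_gpu_clock(0.4):.2f} GHz")
```

Same budget, same hardware limits: the only thing that changed is how much of the budget the CPU was allowed to claim.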

Capacitors? What are you even on about?

VRAM? You do realize this is an APU, thus integrated graphics, and thus it doesn't have VRAM?

the-pi-guy said:

1.) We don't know what the base clock is. We don't even know if there even is a base clock, because the paradigm for how the system chooses a frequency is completely opposite of the norm.

2.) From Cerny's comments, we know it spends the majority of its time near the max frequencies. More than likely it'll be closer to a 10 TF machine for games.

The base clock is irrelevant, as it depends on what the developer prioritizes... so it will vary from game to game, even from one game's scene to the next.



--::{PC Gaming Master Race}::--

Pemalite said:

1) Uh. No. You cannot exceed the TDP; it is a hard limit in the firmware. However... me being an enthusiast with no fucks to give, I have employed various tools to tweak the firmware and push the TDP limit to 25 W.

2) Uh. No. The chip shares its TDP between the CPU and GPU... A lower load on either will allow the other to ramp up in clock rate.

3) Capacitors? What are you even on about?

4) VRAM? You do realize this is an APU, thus integrated graphics, and thus it doesn't have VRAM?

1. Yes, that is why I mentioned it; there are ways to go beyond it. Usually board makers are required to build with 20-30% higher tolerance than what the system is designed for.

2. You mean the chip/firmware/BIOS does this automatically? I thought you were messing with the settings or at the BIOS level. I don't know if such settings are available in your setup, but a looong time ago I liked to hack the vBIOS (video card BIOS) and motherboard BIOS when the settings I wanted to change weren't available. Overclocking wasn't very rewarding back then if you ended up frying your mobo; tolerances weren't great, especially during the capacitor plague of the early 2000s. But essentially you're still playing with the headroom the TDP allows you. If that's the case, then yes, your PS5 analogy is perfect.

3. Any electronic device requires capacitors to regulate voltage (capacitance = µF). They work by smoothing voltage or storing charge, just like a battery. Usually these are embedded in the motherboard or the video card's board. When people overclock and damage their boards, these are the parts that get blown out.

4. I don't know much about AMD APUs, but if I'm not mistaken some of them (mini-PC boxes) have a small dedicated memory pool, while most others share main system memory and use it as video memory. Either way, what I meant is that the memory used for it will get fucked up at those rates (1820 MHz). It would need heatsinks or a faster fan configuration, and for a portable device that is either expensive or impractical.



alexxonne said:

1. Yes, that is why I mentioned it; there are ways to go beyond it. Usually board makers are required to build with 20-30% higher tolerance than what the system is designed for.

No they aren't.

alexxonne said:

2. You mean the chip/firmware/BIOS does this automatically? I thought you were messing with the settings or at the BIOS level. I don't know if such settings are available in your setup, but a looong time ago I liked to hack the vBIOS (video card BIOS) and motherboard BIOS when the settings I wanted to change weren't available. Overclocking wasn't very rewarding back then if you ended up frying your mobo; tolerances weren't great, especially during the capacitor plague of the early 2000s. But essentially you're still playing with the headroom the TDP allows you. If that's the case, then yes, your PS5 analogy is perfect.

The chip does it automatically. You can play with Windows power states and limit CPU load.

No BIOS/firmware settings or overclocking required.

alexxonne said:

3. Any electronic device requires capacitors to regulate voltage (capacitance = µF). They work by smoothing voltage or storing charge, just like a battery. Usually these are embedded in the motherboard or the video card's board. When people overclock and damage their boards, these are the parts that get blown out.

I am aware.
However, the framing of your statements makes me question your fundamental understanding of consumer electronics, especially as you are framing the context around capacitors, which aren't the issue in this regard... and anyone with a basic understanding of TDP and overclocking knows this.

Blown caps just don't happen anymore... maybe in the era of cheap-shit Chinese liquid caps, but not today with quality solid Japanese caps... making your statements entirely redundant.

alexxonne said:

4. I don't know much about AMD APUs, but if I'm not mistaken some of them (mini-PC boxes) have a small dedicated memory pool, while most others share main system memory and use it as video memory. Either way, what I meant is that the memory used for it will get fucked up at those rates (1820 MHz). It would need heatsinks or a faster fan configuration, and for a portable device that is either expensive or impractical.

Clearly.

No. They generally do not include a small dedicated memory pool... This is integrated graphics, with the GPU on the same die as the CPU, not a separate chip or card.

Some older AMD integrated graphics did include a small amount of RAM on the motherboard, known as "side port" memory, which was usually just a small pool that could be dedicated to graphics tasks.
I actually had the ASRock A785GXH at one point, which included 128 MB of DRAM on the motherboard for the integrated graphics. But that is still regular, plain-jane DDR3 commodity RAM on a 64-bit interface, not special-sauce GDDR.

And yes... I overclocked that integrated Radeon 4200 graphics chip to 1 GHz and it ran reliably for years. No issues.

But that also doesn't employ sharing of the TDP between the CPU and GPU.



--::{PC Gaming Master Race}::--

Hardware specs must be one of the most-discussed topics among people who really don't know much about the subject. You can get a vague grasp of how things work, and that's fine, but arguing your point like it's gospel while relying on that flawed knowledge to back it up is a bit unfruitful.



Pemalite said:

1) No, they aren't.

2) The chip does it automatically. You can play with Windows power states and limit CPU load.

No BIOS/firmware settings or overclocking required.

3) I am aware.
However, the framing of your statements makes me question your fundamental understanding of consumer electronics, especially as you are framing the context around capacitors, which aren't the issue in this regard... and anyone with a basic understanding of TDP and overclocking knows this.

Blown caps just don't happen anymore... maybe in the era of cheap-shit Chinese liquid caps, but not today with quality solid Japanese caps... making your statements entirely redundant.

4) Clearly.

No. They generally do not include a small dedicated memory pool... This is integrated graphics, with the GPU on the same die as the CPU, not a separate chip or card.

Some older AMD integrated graphics did include a small amount of RAM on the motherboard, known as "side port" memory, which was usually just a small pool that could be dedicated to graphics tasks.
I actually had the ASRock A785GXH at one point, which included 128 MB of DRAM on the motherboard for the integrated graphics. But that is still regular, plain-jane DDR3 commodity RAM on a 64-bit interface, not special-sauce GDDR.

And yes... I overclocked that integrated Radeon 4200 graphics chip to 1 GHz and it ran reliably for years. No issues.

But that also doesn't employ sharing of the TDP between the CPU and GPU.

1) Yes, they are. By design it is impossible for the board components to have the same tolerance level as the maximum power draw of the main chips. 15-20 years ago that tolerance was only about 10% beyond max capacity; nowadays it can easily be 20-30%. Most manufacturers started using military-grade components after the capacitor plague, especially for products aimed at the overclocking crowd; now every manufacturer uses them, from cheap Acer products to luxury-class devices from Apple. If you don't know this, perhaps it is because you're too young.

You need to understand basic electrical engineering, what a capacitor is, what a resistor is, and how the transistors constantly load and unload these two when you use your PC, to comprehend what really happens when you mess with the original clock settings. Just a change of 0.1 mV can alter the internal components, bringing stability or havoc to the entire system. If you don't take the tolerance level into account, the system will shut down or crash; that is why BIOSes usually disable some options or put limits on them. With your approach, then yes, you are controlling the power state from Windows, but Windows is making a predicted change to the BIOS settings on the fly; you are not doing anything beyond what the chip was designed for, except for what you said earlier about pushing the TDP up. And that is why I talked about tolerance levels: they are a requirement for proper functioning, not a choice, especially when lowering voltage for higher frequencies, which decreases amperage and stresses the capacitance; hence why capacitors are the first to blow (their tolerance level was exceeded).

2) Power states are just ACPI profiles. Just because you can play with Windows settings doesn't mean you understand what you are doing. But you are not an idiot, I'll give you that. While you can be clever sometimes, the tone you use in your writing... puff, it just blows your intended purpose.

3) Clearly the framing of my comments includes my experience blowing stuff up all the time for testing purposes. After repairing lots of shit like this every day, I know very well what I'm talking about. Computer enthusiasts, like lots of people here, enjoy the thrill of squeezing out performance... until something goes wrong and burns their systems. But clearly you don't give a shit about that, so you don't know; some other people really do. If by "basic understanding" you mean my PhD, then yes, clearly it's an issue you haven't run into yet, because you are not as deep into this as I wrongly assumed.

4) I was being humble with you, because I don't like to use AMD APUs. You cannot judge everything in life based only on your own experiences; that makes you a naive person, something I highly doubt you are. You understood right away what I meant when I said VRAM; whether it is dedicated or shared RAM, the context of my argument still applies. Yet you decided to twist the argument to impose yourself with skewed criteria.

APUs are just processors embedded within the same die, and the concept doesn't apply solely to AMD; Intel and Qualcomm do this too. But to answer you directly: AMD APUs can have dedicated video memory if the target product requires it, just like anything else. Sharing memory is just cheaper and, by design, the default option for APU boards, but sometimes an alternative is needed. There are brands like Intel, Acer, Zotac, Asus, etc. that offer a range of APU-style products with dedicated video memory modules for specific consumer needs.

And...

Overclocking is usually limited in the BIOS, YES, including on boards and CPUs with unlocked potential. No matter how much you increase frequencies, the BIOS will not let you exceed certain parameters and voltages; it can even reset those values to defaults without telling you if it finds instability. Whenever your computer restarts, crashes, or shuts down, that is designed behavior from the BIOS. If the BIOS lets you tweak something, a tolerance level was already calculated for it. This is why you sometimes have to update your BIOS: to get better support and stability for new CPU or GPU units, etc. Most of the time, processors of the same model will have multiple configuration sets based on the manufacturing code, so behavior and performance will vary even between units of the same model. If your system was reliable for years, it was because the tweak was only marginal, one already calculated and compensated for in the BIOS's internal settings. But those changes will bite you back... sooner or later, depending on the quality of your build.

The only exception is RAM overclocking, as RAM modules do not carry an ID tag for custom configuration sets based on the stick's model and brand; instead they use timings for compatibility. But if you mess with those, unexpected behavior can occur, maybe a random crash or a fully corrupted hard drive. A very high overclock can crack a RAM stick; I have seen it when using liquid nitrogen as a cooling solution, popping out of the motherboard like popcorn.


User was warned for this post. - Pemalite.

Last edited by Pemalite - on 04 April 2020