twintail said:
Can someone simplify the CPU/GPU thermal design?

My understanding is that because they have a set thermal limit, games will be able to nearly max out the CPU/GPU for extended periods of time without any concern.
Whereas in a more traditional design, the hardware cooling and thermals need to guess theoretical max loads because of fluctuations?

I don't know.

Basically, you set your aircon to 23 degrees because you know that regardless of the sunlight or activities in the room, you'll remain cool. You can party hard during the day because the aircon has been set to accept all that play.
As opposed to setting it to 25 and having to drop the temp because the sunlight and in-room activities are warming up the room, thus making the aircon work harder?

Like I said, I'm trying to understand how it works on a more basic level.

Thanks.

You are pretty much spot on. It's something like this... Two key things to note:

  1. The first thing to note is that core load, not just frequency, is what determines power draw. A 36 CU GPU running at 1.5 GHz at 100% load can be drawing more power than a 36 CU GPU running at 1.8 GHz at 70% load (there's a rough sketch of this right after the list).
  2. The second thing to note is that it's nearly impossible to get any chip running at 100% load all the time. That would require impossibly good optimization.
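
To make point 1 concrete, here's a rough sketch in Python using the usual dynamic-power rule of thumb (power scales roughly with capacitance x voltage² x frequency x activity). The voltages and the model itself are made up for illustration, not anything from Sony or AMD:

```python
# Toy dynamic-power model (all values invented): power ~ C * V^2 * f * activity,
# where "activity" stands in for how busy the chip actually is (the load).

def dynamic_power(freq_ghz, voltage_v, activity, capacitance=1.0):
    """Power in arbitrary units, proportional to C * V^2 * f * load."""
    return capacitance * voltage_v ** 2 * freq_ghz * activity

# Hypothetical voltages for the two operating points from point 1 above.
p_slow_full = dynamic_power(freq_ghz=1.5, voltage_v=0.90, activity=1.0)
p_fast_part = dynamic_power(freq_ghz=1.8, voltage_v=0.95, activity=0.7)

print(f"1.5 GHz @ 100% load: {p_slow_full:.2f} (arbitrary units)")  # ~1.22
print(f"1.8 GHz @  70% load: {p_fast_part:.2f} (arbitrary units)")  # ~1.14
# The slower but fully loaded GPU ends up drawing more in this toy model.
```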

Now into the details.

Traditional console (aka fixed clocks)
Clock frequency is fixed, but power draw is variable: power draw at 70% load is very different from power draw at 100% load (1). So designers kinda have to guess how capable their cooling needs to be, based on an estimated load/power consumption (TDP), and also on how long the chip is expected to run in that state. This is very hard to do, and if you just go for the biggest cooler you can fit, you aren't really in the traditional console form factor anymore, e.g. the XSX.
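
If you want to see that guessing game in numbers, here's a toy sketch (the wattages are invented, not any real console's figures):

```python
# Fixed-clock sketch: the clock never moves, so power swings purely with load,
# and the cooler has to be sized for a worst case the designers can only estimate.

FIXED_CLOCK_GHZ = 1.8
WATTS_PER_PERCENT_LOAD = 1.8   # hypothetical: ~180 W at 100% load

def power_at_load(load_pct):
    return WATTS_PER_PERCENT_LOAD * load_pct

typical_load_pct = 70     # what most games actually sustain (assumed)
worst_case_load_pct = 95  # the spike the designers guess might happen

print(f"Typical draw:    {power_at_load(typical_load_pct):.0f} W")     # 126 W
print(f"Worst-case draw: {power_at_load(worst_case_load_pct):.0f} W")  # 171 W
# The cooler and power supply must be built for the worst case even though the
# console spends most of its life well below it.
```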

PCs (variable clocks & variable power)
With PCs everything is variable. They have a minimum clock, a base clock, and a boost clock, and thermals control the system. The GPU sits at its base clock most of the time and can boost its clock depending on how much load it has. It typically alternates between the boost clock and the base clock for as long as thermals allow. If thermals are good, it can sit at its boost clock longer; if they aren't, it throttles and clocks down.
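
A crude sketch of that boost/throttle behaviour (the threshold and clocks are invented, not any specific GPU's):

```python
# PC-style boost loop (invented numbers): clocks move between base and boost
# depending on temperature, so sustained performance depends on the cooling.

BASE_CLOCK_MHZ, BOOST_CLOCK_MHZ = 1600, 1900
THROTTLE_TEMP_C = 83  # hypothetical throttle threshold

def next_clock(current_temp_c):
    """Boost when thermals allow it, fall back to base clock when they don't."""
    if current_temp_c < THROTTLE_TEMP_C:
        return BOOST_CLOCK_MHZ
    return BASE_CLOCK_MHZ

for temp_c in (65, 74, 83, 90):
    print(f"{temp_c} C -> {next_clock(temp_c)} MHz")
```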

PS5 (fixed power, variable frequency)
The power draw is fixed: the APU can never draw more than a set amount of power. This allows Sony to build a cooling system for that specific TDP. Let's call this 200W. Sony could then have a CPU budget of 50W and a GPU budget of 150W at an estimated load of 90% (2). That power keeps the CPU and GPU at their peak clocks of 3.5GHz and 2.2GHz respectively.
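
Sketching that idea in code, using the made-up 150W GPU budget from above (this is not Sony's actual algorithm or power model, just the principle of holding power fixed and letting the clock give way):

```python
# Fixed-power sketch (invented numbers and a toy power model): the power budget
# never moves; the clock is what gives way when load pushes past the estimate.

GPU_BUDGET_W = 150.0        # hypothetical GPU share of the budget
PEAK_GPU_CLOCK_MHZ = 2200   # hypothetical peak clock

def estimated_gpu_power_w(clock_mhz, load):
    """Toy model: power grows linearly with load and roughly with the cube of
    frequency (because voltage has to rise along with the clock)."""
    return GPU_BUDGET_W * (load / 0.90) * (clock_mhz / PEAK_GPU_CLOCK_MHZ) ** 3

def clock_for_budget(load):
    """Step the clock down until the estimated power fits inside the budget."""
    clock = PEAK_GPU_CLOCK_MHZ
    while estimated_gpu_power_w(clock, load) > GPU_BUDGET_W and clock > 0:
        clock -= 10
    return clock

for load in (0.85, 0.90, 0.95, 1.00):
    clock = clock_for_budget(load)
    drop_pct = 100 * (PEAK_GPU_CLOCK_MHZ - clock) / PEAK_GPU_CLOCK_MHZ
    print(f"load {load:.0%}: {clock} MHz ({drop_pct:.1f}% below peak)")
```

Even at 100% load, the clock in this toy model only has to come down a few percent to get back under the cap.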

However, when that game comes along with that one scene or system that pushes the GPU/CPU past that 90% load towards 100% (e.g. pulling up the map in Horizon Zero Dawn), the GPU would likely exceed its 150W power budget. So the PS5 can, in that case, drop GPU clocks by around 2% - 3% (44MHz to 66MHz), and that little percentage drop can save as much as 10W in power consumption.

That's it... there is a reason why a 3% drop in clock speed can save as much as 10W of power, and why something as simple as a map screen can make the GPU go crazy. But that's not what you asked lol.
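
For the curious, here's roughly where a number like that can come from, if you assume voltage is lowered along with frequency so power falls roughly with the cube of clock speed (that cube relationship is my assumption for the sketch, not an official figure):

```python
# Back-of-envelope: if voltage scales down with frequency, dynamic power falls
# roughly with the cube of clock speed. Budget figure is the hypothetical 150 W.

gpu_budget_w = 150.0
for drop_pct in (2, 3):
    remaining_fraction = (1 - drop_pct / 100) ** 3
    saved_w = gpu_budget_w * (1 - remaining_fraction)
    print(f"{drop_pct}% clock drop -> roughly {saved_w:.0f} W saved")
# Prints ~9 W and ~13 W: a couple of percent off the clock is worth on the
# order of 10 W in this toy model.
```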

And the people who say things like "just how big is that clock drop going to be?" don't understand this fundamental relationship between clocks and power draw. E.g. it can take 20W to get a chip from 1500MHz to 1800MHz, but another 20W just to get it from 1800MHz to 1830MHz.
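
To put numbers on that: power scales roughly with voltage squared times frequency, and near the top of a chip's voltage/frequency curve each extra MHz needs a disproportionately large voltage bump. The voltage points below are invented purely so the arithmetic lands near those 20W figures; real V/f curves differ per chip:

```python
# "The last few MHz are expensive" (all values invented for illustration).

K = 0.05  # hypothetical scaling constant, W per (V^2 * MHz)

freq_to_voltage_v = {1500: 0.97, 1800: 1.00, 1830: 1.10}  # invented V/f points

def power_w(freq_mhz):
    return K * freq_to_voltage_v[freq_mhz] ** 2 * freq_mhz

print(f"1500 -> 1800 MHz costs ~{power_w(1800) - power_w(1500):.0f} W extra")  # ~19 W
print(f"1800 -> 1830 MHz costs ~{power_w(1830) - power_w(1800):.0f} W extra")  # ~21 W
# Roughly the same ~20 W buys 300 MHz in one region of the curve and only
# ~30 MHz in another, which is why a tiny clock drop near the peak frees up
# a lot of power.
```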

Last edited by Intrinsic - on 27 March 2020