TalonMan said:
Azzanation said:

Whether they opt for the lowest common denominator or not, you still get greater benefits playing old games on the Series X. Much like how a good PC will run games well compared to an average PC running them averagely.

Gears 5 running at PC settings is a perfect example. Imagine if all 1st-party Xbox games get next-gen boosts, like Halo 5 running with ray tracing at a constant 4K/60? That game already looks amazing on the X1X.

They only get the "boost" if the developer codes for it - that's the tricky part, which is why I hate the concept of dividing a userbase with different hardware configurations. 

Again - I could be totally wrong about this. I'm certainly not claiming to be the expert as to what's going on with the upcoming generation. I'm basing my opinion solely on assumptions built upon how things have been done in the past. 

...but putting all this aside, I think @NobleTeam360 hits the nail on the head - I honestly don't like ANY of these prices, and I'm not sure I'm willing to shell out the money required to get into this next generation until those prices come down.

Laptops have this technology too.

Apparently, if you lower clock rates by ~2-3%, you can sometimes save upwards of 10% in power.
So if you have a game where the bottleneck is the GPU, you can lower the speed of the CPU, while keeping the GPU running at its highest speed.
And vice versa.

This can be done on the fly by the hardware itself; it intelligently manages the power budget of both.

So this isn't something they need to code for.

Mark Cerny clarified that the results are repeatable (i.e. always the same).
So developers don't need to take any notice of how this technology clocks things up and down, or worry that the PS5 will do this differently from unit to unit.

Its a "cheap" way to maximise the power budget, that the PSU and the Cooler, can handle.


You have two options:

a) You set a power-consumption/heat limit and have your CPU and GPU scale up and down intelligently, right up to that "cap".
This lets you run right at the limit of what your PSU/cooler can handle.

b) You make your PSU + cooler oversized and run your CPU + GPU clocks well below the "cap", because depending on usage their heat/power consumption will jump up and down, and you don't know if it'll go beyond the breaking point. So you're forced to leave yourself a "good margin" of error.


Sony went with option a).

It's a cost-effective way to get more performance out of a chip on a limited power/cooling budget.

Microsoft went with option b).

It's safe (the norm in consoles so far), but not as cost-effective (you leave performance on the table by giving yourself a margin of error).
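A toy numeric sketch of the trade-off between the two options, reusing the same assumptions as the earlier snippet (made-up wattages, power roughly proportional to f^3, not real console specs): for the same cooler, option a) runs at full clocks in typical scenes and only dips a few percent in rare worst-case scenes, while option b) has to lock in the worst-case clock permanently (or pay for a bigger PSU/cooler).

```python
# Toy comparison of the two options above. All numbers are invented,
# purely to show where the "margin of error" goes.

COOLER_LIMIT_W = 200.0   # what a given PSU/cooler can handle (assumed)
TYPICAL_LOAD_W = 180.0   # typical power draw at full clocks (assumed)
WORST_CASE_W = 230.0     # rare worst-case draw at full clocks (assumed)


def sustained_clock(power_cap_w: float, full_clock_power_w: float) -> float:
    """Clock fraction sustainable under a power cap, assuming power ~ f^3."""
    return min(1.0, (power_cap_w / full_clock_power_w) ** (1.0 / 3.0))


# Option a): variable clocks -- always run right at the cooler's limit.
# Typical scenes get full clocks; rare worst-case scenes dip a few percent.
clock_a_typical = sustained_clock(COOLER_LIMIT_W, TYPICAL_LOAD_W)  # 1.00
clock_a_worst = sustained_clock(COOLER_LIMIT_W, WORST_CASE_W)      # ~0.95

# Option b): fixed clocks -- pick one clock low enough that even the rare
# worst case stays under the same limit, and hold it there all the time.
clock_b_always = sustained_clock(COOLER_LIMIT_W, WORST_CASE_W)     # ~0.95

print(clock_a_typical, clock_a_worst, clock_b_always)
```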


Basically Sony did everything they could to squeeze extra performance out of a small chip.

This is why I believe there will be a $100 price difference between the two.
It's a smaller chip that uses a lot of "tricks" to squeeze more performance out of it.



Why does Sony have variable clocks?
Extra performance without any added cost (why leave performance on the table?).

Last edited by JRPGfan - on 23 March 2020