shikamaru317 said:
HollyGamer said:

PS5 will run with a variable clock speed "based on workload," which means it can sustain the maximum clock indefinitely if it has to. But games don't present the same scene, the same events, or the same density of polygons and geometry across a level, so the variable clock is there to use power efficiently.

Second, it will be easier for developers to program against a consistently high clock speed, and the GPU also contains other units, like the command processor, that perform better at higher clocks.

We even have an example in the desktop RTX 2060 (fewer shader units but a higher clock) vs. the laptop RTX 2080 Max-Q (more shader units but a lower clock). Both deliver roughly equal performance.

The RAM on the Xbox is split into two speeds: 6 GB for the OS/system running at 336 GB/s, and 10 GB running at 560 GB/s. Remember that in the end "the high speed" pool has to coexist with "the lower speed" pool, because both the GPU and CPU need access to both, so the effective bandwidth lands not far from the PS5's 448 GB/s. The PS5's RAM is fully unified: system memory and VRAM run at the same speed. In the end they will be about the same.
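One naive way to put a single number on the Series X's split pools (my own illustration; real effective bandwidth depends heavily on access patterns and contention, not a simple average) is a capacity-weighted mean:

```python
# Capacity-weighted average bandwidth of the Series X's split memory pools.
# This is a naive illustration, not how the memory controller actually behaves.
pools = [(10, 560), (6, 336)]  # (capacity in GB, bandwidth in GB/s)

total_gb = sum(gb for gb, _ in pools)
blended = sum(gb * bw for gb, bw in pools) / total_gb
print(f"{blended:.0f} GB/s")  # -> 476 GB/s
```

That comes out to 476 GB/s, in the same ballpark as the PS5's unified 448 GB/s, which is the point being made above.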

Also, RAM will not be the biggest factor when it comes to streaming high-quality assets; SSD speed will be. The exception is when assets are not streamed but pre-loaded during a loading screen (by dumping everything into RAM), but that requires a lot of RAM and can only really be done on PCs with more system memory.

Here's where that guy's statement about texture streaming falls apart for me. He claims that games struggle to utilize lots of shaders concurrently, but I'm not seeing any evidence of that in PC benchmarks. XSX has 3328 enabled shader cores running at 1825 MHz. Compare that to Nvidia's GeForce 2080 Ti, which has a whopping 4352 shaders, more than 1000 more than XSX, running at a 1350 MHz base clock and a 1545 MHz boost clock. That comes out to 12.1 TFLOPS for XSX, and 11.75 TFLOPS base / 13.4 TFLOPS boost for the 2080 Ti. Even with 1000 more shaders than XSX, games don't seem to have any trouble keeping the 2080 Ti's shaders busy.

Here we have a benchmark showing the 2080 Ti with an 11 fps advantage over the 2080 Super, which, much like PS5, has fewer shaders running at a higher clock speed: 3072 shaders at 1650 MHz base / 1815 MHz boost, which comes out to 10.1 TFLOPS base / 11.1 TFLOPS boost.
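All the TFLOPS figures quoted above follow from the standard formula for FMA-capable shader cores: 2 FLOPs per core per cycle times shader count times clock. A quick sketch (function name is my own) that reproduces the numbers:

```python
# Peak FP32 throughput: 2 FLOPs per shader core per cycle (one fused
# multiply-add), times core count, times clock speed.
def tflops(shaders: int, clock_mhz: float) -> float:
    return 2 * shaders * clock_mhz / 1_000_000

print(f"XSX:             {tflops(3328, 1825):.2f}")  # ~12.1
print(f"2080 Ti base:    {tflops(4352, 1350):.2f}")  # ~11.75
print(f"2080 Ti boost:   {tflops(4352, 1545):.2f}")  # ~13.4
print(f"2080 Super base: {tflops(3072, 1650):.2f}")  # ~10.1
print(f"2080 Super boost:{tflops(3072, 1815):.2f}")  # ~11.1
```

Note this is peak theoretical throughput; whether games actually sustain it is exactly what the benchmarks below are probing.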

Here we have another benchmark for a different game, showing a 12 fps gap in minimum frame rates between the 2080 Super and the 2080 Ti:

It seems to me that games don't struggle with shader utilization like he claims, considering that the 2080 Ti makes good use of 1000 more shaders than XSX has.

It's as if, when you change only one part of a system and keep everything else the same, the result reflects only the part you switched. Who would have thought? Now I'm wondering what would happen with a different combination of parts in a system, like proprietary parts meant to work in unison in a closed system.


