Chrkeller said:
Leynos said:

 leak shows S2 at 120GB

You are talking hard drive storage.  I'm talking memory bandwidth.  The S2 will be 112 GB/s max.

Edit

For those who care, memory bandwidth is how much data a GPU can move on and off the VRAM.  We don't play full-motion video, but rather a rapid series of still images.  The more still images per second (i.e. fps), the smoother a game looks and the more responsive the controls feel.

Using easy math, say each still shot is 10 GB.

30 fps x 10 GB each is 300 GB/s

60 fps x 10 GB each is 600 GB/s

120 fps x 10 GB each is 1200 GB/s

The S2 will require downgrades of new games because it will be 112 GB/s docked.  The PS5 is struggling at 60 fps on the newest games because it is 448 GB/s.  An RTX 4090 is 1008 GB/s, and thus can push most (not all) games close to 120 fps.
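
For anyone who wants to poke at that model, here's a quick Python sketch of it; the 10 GB-per-frame figure is just the illustrative number above, and the peak bandwidth figures are the ones quoted in this post.

```python
# Naive bandwidth model from above: required GB/s = fps x GB per frame.
# The 10 GB-per-frame figure is the post's illustrative number, not a
# real frame size.

GB_PER_FRAME = 10  # illustrative figure from the post

def required_bandwidth(fps, gb_per_frame=GB_PER_FRAME):
    """Required memory bandwidth in GB/s under the naive model."""
    return fps * gb_per_frame

# Peak hardware bandwidths quoted in the post (GB/s).
hardware = {"Switch 2 (docked)": 112, "PS5": 448, "RTX 4090": 1008}

for fps in (30, 60, 120):
    need = required_bandwidth(fps)
    fits = [name for name, bw in hardware.items() if bw >= need]
    print(f"{fps:>3} fps -> {need} GB/s needed; enough peak bandwidth: {fits or 'none'}")
```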

The best way to reduce memory bandwidth demand is to reduce image quality via resolution, textures, lighting, shadows, volumetrics, etc.

It's not as simple as that, I'm afraid.

Some rendering techniques require more bandwidth than others... And there is an efficiency curve of bandwidth vs resolution as well; each GPU architecture has an optimal resolution for the resources it has.

100-150 GB/s is definitely optimal for 720p and a little above, with around 200-250 GB/s being ideal for 1080p. That's not to say you won't get more performance with more memory bandwidth, but that's certainly enough "fillrate" to get the job done competently at those targets.
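
To give a sense of scale, here's a rough back-of-the-envelope sketch in Python; the overdraw factor and buffer formats are assumptions, and real frames also move textures, geometry, and compute data, so treat these as floor values. It does show, though, that raw render-target traffic at these resolutions is single-digit GB/s, nowhere near 10 GB per frame.

```python
# Rough estimate of raw render-target traffic per second, assuming a 4-byte
# (RGBA8) colour target plus a 4-byte depth buffer and an assumed overdraw
# factor. Textures, geometry, and compute traffic come on top of this.

BYTES_PER_PIXEL = 4 + 4   # colour + depth
OVERDRAW = 3.0            # assumed average shades per pixel

def rt_traffic_gbs(width, height, fps, overdraw=OVERDRAW):
    bytes_per_frame = width * height * BYTES_PER_PIXEL * overdraw
    return bytes_per_frame * fps / 1e9

for name, (w, h) in {"720p": (1280, 720), "1080p": (1920, 1080)}.items():
    print(f"{name} @ 60 fps: ~{rt_traffic_gbs(w, h, 60):.1f} GB/s of render-target traffic")
```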

The other thing is that modern GPUs now break a scene down into tiles, then look at the differences between neighbouring pixels, i.e. the delta between those pixels, and compare those deltas against a pattern library to encode them. Because compressed render targets can then be read back in this form, it significantly reduces bandwidth requirements, L2 cache usage, and texture mapping unit usage.
And the more modern the GPU architecture, the larger the pattern library it can compare against, which means more significant gains.
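
Here's a toy Python sketch of that idea; real delta colour compression schemes, tile sizes, and pattern libraries are proprietary, so the 8x8 tile, anchor-pixel deltas, and bit-width test below are simplified assumptions.

```python
# Toy sketch of tile-based delta colour compression: store one anchor pixel
# per tile plus small per-pixel deltas when the tile's values are similar,
# falling back to raw storage when they aren't.

def tile_compressed_bits(tile):
    """Bits needed to store one tile as anchor pixel + per-pixel deltas."""
    anchor = tile[0]
    deltas = [p - anchor for p in tile]
    spread = max(deltas) - min(deltas)
    # If all deltas fit in a few bits, store anchor (8 bits) + small deltas.
    delta_bits = max(1, spread.bit_length())
    compressed = 8 + delta_bits * (len(tile) - 1)
    raw = 8 * len(tile)
    return min(compressed, raw)  # fall back to raw if compression loses

# A smooth-gradient tile compresses well; a noisy tile doesn't.
smooth = [100 + i // 8 for i in range(64)]   # 8x8 tile, gentle ramp
noisy = [(i * 97) % 256 for i in range(64)]  # pseudo-random values

for name, tile in (("smooth", smooth), ("noisy", noisy)):
    print(f"{name}: {tile_compressed_bits(tile)} bits vs {8 * len(tile)} raw")
```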

More modern architectures are also able to more effectively cull polygons and textures that aren't visible, rather than rendering them.
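
A minimal sketch of one such visibility test, back-face culling: a triangle facing away from the camera can be rejected with a single dot product before it costs any shading or memory traffic. (Real GPUs layer occlusion and small-primitive culling on top of this.)

```python
# Back-face culling: compute the triangle's normal from its winding order
# and skip it entirely if the normal points away from the camera.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def is_front_facing(v0, v1, v2, view_dir=(0, 0, -1)):
    """True if the triangle's normal faces the camera (counter-clockwise winding)."""
    normal = cross(sub(v1, v0), sub(v2, v0))
    return dot(normal, view_dir) < 0

tri = ((0, 0, 0), (1, 0, 0), (0, 1, 0))         # CCW as seen from +z
print(is_front_facing(*tri))                    # True: keep and shade
print(is_front_facing(tri[0], tri[2], tri[1]))  # False: cull, zero shading cost
```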

Some techniques like alpha effects also demand more memory bandwidth... So on consoles with oodles of memory bandwidth, developers can go silly with alpha effects, like the Xbox 360 with its eDRAM setup versus the PlayStation 3... But on consoles with less memory bandwidth you can strip some of that away and maintain higher resolutions.

Which is why Switch games tend to avoid using lots of alpha effects.
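
To see why alpha effects are so bandwidth-hungry: each transparent layer does a read-modify-write of the destination pixel ("over" blending), so stacked layers multiply render-target traffic. A Python sketch with assumed layer counts and an assumed RGBA8 target:

```python
# Each transparent layer reads the destination pixel, blends, and writes it
# back, so N overlapping layers roughly multiply render-target traffic by N.
# Layer counts and formats below are assumptions for illustration.

def over(src_rgba, dst_rgb):
    """Standard 'over' blend: src on top of dst (values in 0..1)."""
    sr, sg, sb, sa = src_rgba
    return tuple(sa * s + (1 - sa) * d for s, d in zip((sr, sg, sb), dst_rgb))

def alpha_traffic_gbs(width, height, fps, layers, bytes_per_pixel=4):
    # Each layer reads and writes the destination: 2x traffic per layer.
    per_frame = width * height * bytes_per_pixel * 2 * layers
    return per_frame * fps / 1e9

print(over((0.5, 0.5, 0.5, 0.25), (0.1, 0.2, 0.8)))  # one smoke puff over sky
for layers in (1, 8, 32):  # heavy particle systems stack many layers per pixel
    print(f"{layers:>2} layers @ 1080p60: ~{alpha_traffic_gbs(1920, 1080, 60, layers):.1f} GB/s")
```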

As for the comparisons to the PlayStation 5 and Series X... Remember, developers today are pushing ray tracing, and AMD's implementation of ray tracing is, to put it bluntly... absolute garbage.

The PlayStation 5 and Series X lack dedicated hardware for BVH traversal, handling it on the shader cores instead due to their reliance on AMD technology, which means the hypothetical nVidia-powered Switch 2.0 would be more efficient and faster at ray tracing. This will be a significant advantage that can't really be overstated.
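
For a sense of what that dedicated hardware actually accelerates, here's a minimal software version of a single ray/triangle test (the standard Möller-Trumbore algorithm); without fixed-function units, shader cores grind through millions of these, plus the BVH traversal that decides which triangles to test, every frame.

```python
# Minimal Möller-Trumbore ray/triangle intersection test: the per-ray
# arithmetic that dedicated RT hardware executes in fixed function.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Return the distance t to the hit point, or None on a miss."""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(direction, e2)
    det = dot(e1, pvec)
    if abs(det) < eps:                 # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    tvec = sub(origin, v0)
    u = dot(tvec, pvec) * inv_det
    if u < 0 or u > 1:                 # outside the triangle (barycentric u)
        return None
    qvec = cross(tvec, e1)
    v = dot(direction, qvec) * inv_det
    if v < 0 or u + v > 1:             # outside the triangle (barycentric v)
        return None
    t = dot(e2, qvec) * inv_det
    return t if t > eps else None      # hit only in front of the origin

tri = ((0, 0, 0), (1, 0, 0), (0, 1, 0))
print(ray_triangle((0.25, 0.25, 1.0), (0, 0, -1), *tri))  # 1.0 -> hit
print(ray_triangle((2.0, 2.0, 1.0), (0, 0, -1), *tri))    # None -> miss
```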

Sorry to say, but arbitrary numbers like "30 fps at 10 GB each requires 300 GB/s" are just not accurate.

I cannot overstate how far ahead nVidia is compared to AMD in regards to efficiency; AMD is generations behind.
Is the Switch going to be a beast? For a handheld, absolutely; for a fixed console, it's going to be competent... And that's all that matters.



--::{PC Gaming Master Race}::--