sc94597 said:

I am aware that unified memory needs to address the demands of both the GPU and CPU. 

My point was though if you have a VRAM capacity bottleneck (because you can only ever dedicate a fixed 4GB of relatively high bandwidth memory to graphics and swapping assets between the system ram and video ram slows down/interrupts the pipeline and has associated copy-function overhead anyway), having a large share of unified memory that can be apportioned flexibly is an advantage (assuming there is memory free to allocate.) 

Considering how low-performing an RTX 2050 is, it would likely be a waste to give it more than 4GB of memory; it's not pushing high-end 1440p/4K visuals, not with only a paltry 112GB/s of memory bandwidth anyway...

Which is literally a third of the memory bandwidth of a desktop RTX 2060, a part that already struggles at 1440p in most modern titles.
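To put rough numbers on that, here's a quick back-of-the-envelope sketch. The bus widths and data rates are the commonly published specs for these parts (64-bit vs. 192-bit GDDR6 at 14 Gbps); treat it as an illustration, not a benchmark.

```python
# Rough peak-bandwidth comparison from published specs, not measured numbers.
# Peak bandwidth (GB/s) = bus width (bits) / 8 * data rate (Gbps).

def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Theoretical peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gbps

rtx_2050_laptop = peak_bandwidth_gbs(64, 14.0)    # 64-bit GDDR6 @ 14 Gbps -> 112 GB/s
rtx_2060_desktop = peak_bandwidth_gbs(192, 14.0)  # 192-bit GDDR6 @ 14 Gbps -> 336 GB/s

print(f"RTX 2050 laptop : {rtx_2050_laptop:.0f} GB/s")
print(f"RTX 2060 desktop: {rtx_2060_desktop:.0f} GB/s")
print(f"Ratio           : {rtx_2050_laptop / rtx_2060_desktop:.2f}")  # ~0.33, i.e. one third
```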

sc94597 said:

Given the rumors, we can expect the Switch 2 to be able to allocate something like 6 - 8 GB of its unified memory to graphics use-cases. That's an advantage over the 2050, given that the relative throughput of the video memory is roughly comparable (112 GB/s vs. 102 GB/s) and a large share of the laptop's system ram is eaten up by windows bloat. 

That is blatantly false.

The Switch 2 will not allocate RAM for any specific use other than the OS.
How the rest gets used is 100% up to the developer; it always has been, regardless of console platform.

Not only that, there has literally been ZERO official communication on what hardware the Switch 2 is using, let alone how it will be used and/or allocated, meaning your entire argument is a fabrication from the very start.

Keep in mind that Console vs PC OS bloat is pretty much equal these days.

The Xbox One/PlayStation 4/Xbox Series X/PlayStation 5 ALL reserve at least 2.5GB of RAM OR MORE for the OS, which is actually a little more than Windows. - Obviously you want more than just 2.5GB of RAM in total to make any of these devices usable, so you can run more than just the OS.
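As a rough illustration of why the OS-reservation difference is basically a wash, here's a tiny sketch with ballpark figures. The 2.5GB console reservation is the floor mentioned above; the Windows idle footprint and the 12GB Switch 2 total are assumptions pulled from rumours and typical setups, not confirmed specs.

```python
# Illustrative only: how much memory is actually left for the game once the
# OS takes its slice. Reservations are ballpark assumptions, not official specs.

def game_available(total_gb: float, os_reserved_gb: float) -> float:
    return total_gb - os_reserved_gb

systems = {
    # name: (total memory in GB, assumed OS reservation in GB)
    "Current-gen console (typical)": (16.0, 2.5),   # "at least 2.5GB OR MORE"
    "Windows gaming laptop":         (16.0, 3.0),   # assumed idle Windows footprint
    "Rumoured 12GB Switch 2":        (12.0, 2.5),   # both figures are rumours/assumptions
}

for name, (total, reserved) in systems.items():
    print(f"{name}: {game_available(total, reserved):.1f} GB left for games")
```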

It's not 2001 anymore.

sc94597 said:

So you have a situation where the 2050 has 4GB of VRAM @112 GB/s and 16GB - (Windows bloat) of system ram @ something like 42 GB/s (for DDR5 laptop.) 

vs 

Switch 2 with 12 to 16GB  - (Switch OS bloat << Windows bloat) of unified memory at 102 GB/s.

Doesn't work like that.

sc94597 said:

I probably should've been more precise in what I was saying when I said "make up for lower clocks/CUDA cores." What I intended to say is that if you have a situation where the most common bottleneck in the graphics pipeline is VRAM capacity,  the extra core/clock throughput is wasted anyway. And if that is the bottleneck, then yes having more effective graphics-purposed memory will lead to better performance. In those instances where it isn't the bottleneck, then no it "doesn't make up for it." I'd expect that if they loosened up the clock-rate limits in Cyberpunk for example, they probably wouldn't get that much better performance because it is likely the VRAM that is the bottleneck, not the compute-capacity. 

Look. Developers work within the confines of the hardware.

There are GPU tasks which are not very VRAM-heavy, and those will simply be emphasized more in a VRAM-limited environment.

Plus the whole stream-from-SSD approach, with compression/decompression, has made big improvements in terms of efficient memory utilization.
PCs also tend to stream data from disk into system memory and/or video memory (sometimes both!), which is dynamically optimized by the system drivers for maximum utilization with minimal developer interference.
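To make the streaming idea concrete, here's a minimal conceptual sketch: assets live on disk, get staged in system RAM, and only the working set the GPU needs right now stays resident inside a fixed VRAM budget, with the least-recently-used assets evicted to make room. Real engines and drivers are far more sophisticated; every name below is made up for illustration, not a real graphics API.

```python
# Conceptual sketch of VRAM-budgeted asset streaming (not a real graphics API).
from collections import OrderedDict

class VramStreamer:
    def __init__(self, vram_budget_mb: int):
        self.vram_budget_mb = vram_budget_mb
        self.resident = OrderedDict()  # asset name -> size in MB, in LRU order

    def request(self, asset: str, size_mb: int) -> None:
        """Make sure an asset is resident in VRAM before the GPU samples it."""
        if asset in self.resident:
            self.resident.move_to_end(asset)       # already resident: mark as recently used
            return
        # Evict least-recently-used assets until the new one fits in the budget.
        while self.resident and sum(self.resident.values()) + size_mb > self.vram_budget_mb:
            evicted, _ = self.resident.popitem(last=False)
            print(f"evict {evicted} back to system RAM / disk")
        # "Upload" = copy from the staging copy in system RAM into VRAM.
        self.resident[asset] = size_mb
        print(f"upload {asset} ({size_mb} MB), "
              f"resident: {sum(self.resident.values())} / {self.vram_budget_mb} MB")

streamer = VramStreamer(vram_budget_mb=4096)       # e.g. a 4GB card
for frame_asset in ["city_block_A", "hero_textures", "city_block_B", "hero_textures"]:
    streamer.request(frame_asset, size_mb=1500)
```

The point of the sketch is simply that a 4GB budget doesn't cap the total amount of asset data a game can touch per scene; it caps what has to be resident at any one moment.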

People need to stop over-hyping the Switch 2 to be something it's not.

It's not high-end hardware, never was, never will be... And simply cannot be in a mobile device.



--::{PC Gaming Master Race}::--