Pemalite said:

PC memory works differently than consoles.

Yes, the RTX 2050 only has 4GB of memory. - But that 4GB is dedicated -only- to graphics and isn't shared with anything else... Compared to consoles, where that pool of RAM needs to be split for A.I., scripting, sound, networking and a heap more.

The PC also has system memory which will augment the GPU memory, streaming data on an as-needed basis (or, depending on the data, accessing it from system memory directly). - And that system had 16GB of system memory, for a total of 20GB of RAM across the entire machine.

Then you have oodles of CPU and GPU caching to hide bandwidth and latency deficits.

It's always a bizarre misconception when someone claims any console has a memory advantage over the PC, that has never been true and never will be.

More memory doesn't make up for lower clocks/cuda cores either.

A Geforce RTX 2050 is a really really good baseline to aim Nintendo's next handheld for, Nintendo doesn't chase high-end, high performing, power hungry components anymore... And with Nintendo only making handhelds these days, they need to be smarter with their choices of components... And they are.
Tegra is a good choice, despite the fact there are faster mobile chips on the market.

I am aware that unified memory needs to address the demands of both the GPU and CPU. 

My point, though, was that if you have a VRAM-capacity bottleneck (because you can only ever dedicate a fixed 4GB of relatively high-bandwidth memory to graphics, and swapping assets between system RAM and video RAM stalls the pipeline and carries copy overhead anyway), having a large pool of unified memory that can be apportioned flexibly is an advantage (assuming there is memory free to allocate.)
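A quick back-of-the-envelope sketch of that copy overhead, with assumed numbers (the RTX 2050 hangs off a PCIe 4.0 x8 link, so roughly 16 GB/s of copy bandwidth; the 1GB asset batch is made up for illustration):

```python
# All figures are assumptions for illustration, not measured values.
PCIE_BW_GBPS = 16.0              # assumed PCIe 4.0 x8 link to the RTX 2050
FRAME_BUDGET_MS = 1000.0 / 60.0  # 60 fps frame budget

asset_gb = 1.0  # hypothetical batch of textures pulled in from system RAM
copy_ms = asset_gb / PCIE_BW_GBPS * 1000.0
frames_lost = copy_ms / FRAME_BUDGET_MS

print(f"{copy_ms:.1f} ms to copy, ~{frames_lost:.2f} frame budgets at 60 fps")
# → 62.5 ms to copy, ~3.75 frame budgets at 60 fps
```

Even under those generous assumptions, a single gigabyte of swapped assets eats nearly four whole frames of budget, which is why streaming around a VRAM shortfall hurts.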

Given the rumors, we can expect the Switch 2 to be able to allocate something like 6 - 8 GB of its unified memory to graphics use-cases. That's an advantage over the 2050, given that the throughput of the two video-memory pools is roughly comparable (112 GB/s vs. 102 GB/s) and a large share of the laptop's system RAM is eaten up by Windows bloat.

So you have a situation where the 2050 has 4GB of VRAM @ 112 GB/s and 16GB - (Windows bloat) of system RAM @ something like 42 GB/s (for a DDR5 laptop.)

vs 

Switch 2 with 12 to 16GB - (Switch OS bloat << Windows bloat) of unified memory at 102 GB/s.
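Putting rough numbers on that comparison (every overhead figure here is an assumption for illustration, not a measurement):

```python
# Hypothetical effective memory pools; OS overheads are guesses for illustration.
laptop_vram_gb = 4.0        # RTX 2050 dedicated pool @ ~112 GB/s
laptop_sys_gb = 16.0 - 4.0  # 16GB minus an assumed ~4GB Windows footprint, @ ~42 GB/s
switch2_gb = 12.0 - 2.5     # rumored low-end 12GB minus an assumed ~2.5GB OS reservation, all @ ~102 GB/s

print(f"Laptop: {laptop_vram_gb} GB fast + {laptop_sys_gb} GB slow")
print(f"Switch 2: {switch2_gb} GB unified")
```

Under those assumptions the laptop only has 4 GB at full graphics bandwidth, while the Switch 2 could flexibly put more than twice that behind a single comparable-speed pool.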

I probably should've been more precise when I said "make up for lower clocks/CUDA cores." What I intended to say is that if the most common bottleneck in the graphics pipeline is VRAM capacity, the extra core/clock throughput is wasted anyway. And if that is the bottleneck, then yes, having more effective graphics-purposed memory will lead to better performance. In those instances where it isn't the bottleneck, then no, it "doesn't make up for it." I'd expect that if they loosened up the clock-rate limits in the Cyberpunk benchmark, for example, they probably wouldn't get much better performance, because it is likely the VRAM that is the bottleneck, not the compute capacity.
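One way to see the "extra compute is wasted" point is a toy max-of-bottlenecks frame-time model. All the numbers below (bus bandwidth, compute time, per-frame streaming churn) are assumptions for illustration:

```python
def frame_time_ms(compute_ms: float, streamed_gb_per_frame: float,
                  link_gbps: float = 16.0) -> float:
    """Toy model: the frame waits on whichever is slower, GPU compute
    or streaming the VRAM overflow back in over the bus."""
    stream_ms = streamed_gb_per_frame / link_gbps * 1000.0
    return max(compute_ms, stream_ms)

# Working set fits in VRAM: compute-bound, so faster clocks/cores help.
print(frame_time_ms(10.0, 0.0))  # → 10.0
# 0.5 GB/frame must stream in: streaming-bound at 31.25 ms, and halving
# the compute time changes nothing.
print(frame_time_ms(10.0, 0.5))  # → 31.25
print(frame_time_ms(5.0, 0.5))   # → 31.25
```

In the streaming-bound case, doubling compute throughput leaves the frame time untouched, which is the sense in which more usable graphics memory can matter more than clocks.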

Last edited by sc94597 - on 15 November 2023