Pemalite said:

Correct. 1440P doesn't need to be the target for a VRAM bottleneck.

That wasn't my argument.

As for the 3050 4GB vs 6GB.

Keep in mind the 4GB model actually has a 128-bit memory bus (4x 32-bit) while the 6GB model has a 96-bit memory bus (3x 32-bit), so in very bandwidth-demanding scenarios the 4GB card can actually be faster, depending on the demands of the software.

https://technical.city/en/video/GeForce-RTX-3050-4GB-mobile-vs-GeForce-RTX-3050-6GB-mobile
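(A rough sanity check on the bus-width point: peak GDDR6 bandwidth scales linearly with bus width at a given per-pin data rate. The 14 Gbps rate below is an illustrative assumption, not a confirmed spec for either card.)

```python
# Peak memory bandwidth (GB/s) = (bus width / 8 bits per byte) * per-pin data rate.
# The 14 Gbps data rate is assumed here purely for illustration.
def peak_bandwidth(bus_width_bits: int, data_rate_gbps: float) -> float:
    return (bus_width_bits / 8) * data_rate_gbps

for name, bus in [("3050 4GB (128-bit)", 128), ("3050 6GB (96-bit)", 96)]:
    print(f"{name}: {peak_bandwidth(bus, 14.0):.0f} GB/s")
# 3050 4GB (128-bit): 224 GB/s
# 3050 6GB (96-bit):  168 GB/s  -> the 4GB card has ~33% more bandwidth
```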

However, if everything is not kept equal...
3050 6GB - 2560 CUDA cores @ 1237 MHz
3050 4GB - 2048 CUDA cores @ 1237 MHz

Then the 6GB part is the obvious winner by sheer count of functional processing units.
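(Putting rough numbers on that: a back-of-the-envelope FP32 throughput estimate at the listed clock, using the standard 2 FLOPs per CUDA core per cycle for a fused multiply-add.)

```python
# Peak FP32 TFLOPS = cores * clock (Hz) * 2 FLOPs per cycle (FMA) / 1e12.
def peak_tflops(cuda_cores: int, clock_mhz: float) -> float:
    return cuda_cores * clock_mhz * 1e6 * 2 / 1e12

t6gb = peak_tflops(2560, 1237)  # ~6.33 TFLOPS
t4gb = peak_tflops(2048, 1237)  # ~5.07 TFLOPS
print(f"6GB vs 4GB shader throughput: {t6gb / t4gb:.2f}x")  # 1.25x at the same clock
```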

The 3050 Ti will often show a sizable advantage over the 6GB variant due to its higher levels of bandwidth, especially when a ton of alpha effects are being used.

A high probability that 1080P is not the target.

When the RTX 3050 Ti does outperform the RTX 3050 6GB, we're looking at 5-10% gains in average framerates (but still often worse 1% lows). When the RTX 3050 6GB outperforms the RTX 3050 Ti 4GB, such as in Assassin's Creed Valhalla or Forza Horizon 5, we're looking at 30-70% higher average framerates (and much better 1% lows).

The 3050 Ti and 3050 6GB have the same number of CUDA cores (which is why I didn't bring up the 3050 4GB).

Notice the 1940 MHz core clock for the 3050 Ti versus 1785 MHz for the 3050 6GB. The 13450HX is a moderately better CPU than the 5600H, but neither seems to be over-utilized (although I suppose we would have to see each core to tell for certain).

Yes, it is 95W vs. 75W, BUT the GPU clocks are comparable and the 3050 is running at roughly 72W: 1965 MHz for the 3050 Ti and 1942 MHz for the 3050 6GB. And the framerate difference is +74%. That's not explained by a 20-watt difference, especially when that difference isn't affecting max clock rates.
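(Quick arithmetic on how little of that gap the clocks can explain:)

```python
# Observed max clocks in the comparison (MHz).
clock_3050ti, clock_3050_6gb = 1965, 1942
clock_advantage = clock_3050ti / clock_3050_6gb - 1
print(f"3050 Ti clock advantage: {clock_advantage:.1%}")  # ~1.2%
# Even if framerate scaled perfectly linearly with clock, ~1.2% cannot
# account for a 74% gap; the bottleneck must be elsewhere (VRAM capacity).
```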

A high probability that 1080P is not the target.

I don't think it will be the target for every game, and especially not for the most demanding games. I think most games will render natively at 720-900p, upscaled to 1080p (or maybe a bit higher when 900p is the internal resolution). But for many games 1080p is viable. BOTW isn't the most demanding game, but it is still impressive that the Switch 2 is able to run it at 4K (upscaled, likely from 1080p) 60fps, given that enhanced 360 games (e.g. Mirror's Edge) on the Series S tend to target 1440p 60fps.

But yeah, given that even the PS5 has some games that fail to reach 1080p natively in performance mode, it is unrealistic to expect the most demanding Switch 2 games to reach that mark. 
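(For context, the pixel counts behind those resolution targets; rendering cost scales roughly with pixel count:)

```python
# Pixel counts for the internal/output resolutions discussed above.
res = {"720p": (1280, 720), "900p": (1600, 900),
       "1080p": (1920, 1080), "4K": (3840, 2160)}
pixels = {name: w * h for name, (w, h) in res.items()}
for name, p in pixels.items():
    print(f"{name}: {p / 1e6:.2f} MP ({p / pixels['1080p']:.2f}x 1080p)")
# 720p is 0.44x and 900p is 0.69x the pixels of native 1080p, while 4K is
# 4.00x, which is why rendering at 1080p and upscaling to 4K is so attractive.
```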

The original Switch came with 4GB of RAM.
1GB of RAM was used for the OS/background tasks, leaving 3GB for games.

Yet... Hogwarts Legacy is running in that 3GB RAM pool.

I'll let you draw the conclusion for that one.

Sure, and we've seen what had to be done to Hogwarts Legacy to get it to run on Switch. Given the state of the Switch version, it's not surprising that it can run in a 3GB pool. My point, though, is that I don't anticipate Nintendo's OS footprint increasing significantly from 1GB to >2.5GB unless they add more features (media apps, achievements, better streaming overlays, browsers, etc.). I suppose it's possible if they go hardcore on a RAM-hungry security system to prevent piracy, but that is the only scenario where I can see it.

The 3050 6GB vs 3050Ti.

It also showcases that the 3050 Ti 4GB, thanks to its higher levels of bandwidth, can outperform the 6GB card.

Which brings me back to my original point about developers building their software for the hardware environment... And also brings me back to my original point that more than 4GB of VRAM is not super important in the grand scheme of things on these low-end GPUs.

Never disputed that having higher memory bandwidth can be beneficial. The actual evidence has shown us, though, that at the resolutions these low-end GPUs target, memory capacity bottlenecks still happen often enough that having more memory can be useful. From a cost perspective, it isn't clear that providing enough unified memory for 6GB of effective VRAM when these bottlenecks occur on other platforms is more expensive than, say, doubling (or even increasing by 50%) the throughput of the modules that allow for an effective 4GB (I suppose they could go the Series S path and use lower-bandwidth RAM for the OS to make up for it). And again, what we see is that when the 3050 6GB outperforms the 3050 Ti, it is on the order of 30-50% improvements, whereas when the 3050 Ti outperforms the 3050 6GB, it is along the lines of 5-10% improvement.

The fact is... A 2050 4GB is going to be roughly as powerful as the Tegra in the Switch 2.0.

You aren't going to get much more than that as it's not financially responsible.

This entire comment thread was in response to a part of my original post arguing that a 2050 4GB-level Switch 2 (even when neutered, as Digital Foundry had done) would be roughly at ROG Ally performance-mode (15W) level. I do think the Switch 2 will be "roughly as powerful as a 2050 4GB," possibly slightly better. Whether or not it is roughly as powerful as a neutered 2050 4GB was what I was getting at when I said I think they're underestimating it.

For three reasons: 

1. I suspect that the Switch 2 (either because it is a closed platform with targeted optimizations or because of the flexibility of unified memory) won't have graphics memory capacity bottlenecks to the same degree as the 2050. Remember, both platforms (again, assuming Switch 2 is what we all think it is) have roughly similar memory throughput (see the rough numbers after this list), so it is the capacity that is the distinction. You suggested that swapping from system memory means this isn't much of an issue for the 2050, but we've seen with the 3050 6GB vs. 3050 Ti 4GB comparison that that isn't always the case (e.g., Assassin's Creed Valhalla, Forza Horizon 5). Of course the 2050 is weaker than the 3050 and 3050 Ti, but not so much weaker that we should expect the bottleneck scenario to be much different, just relatively less pronounced.

2. There is a good chance that the Switch 2 will be on a 5/4nm TSMC node (especially if it releases late 2024/early 2025 and gets a refresh around 2028-2029), meaning that underclocking to the base clock of the RTX 2050 might be too aggressive a simulation. Of course, if the Switch 2 is running at Switch TGP levels, maybe not.

3. While closed platform optimization doesn't mean as much as it used to, there is still an advantage. As you said, "developers build their software for the hardware environment", but that is much more difficult with open platforms than closed platforms. 
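(A rough sketch of the throughput comparison behind point 1. The RTX 2050's 64-bit GDDR6 at 14 Gbps is a published spec; the Switch 2 figure assumes the commonly rumored 128-bit LPDDR5 at 6400 MT/s, which is speculation, not a confirmed spec.)

```python
# Peak bandwidth (GB/s) = (bus width / 8 bits per byte) * per-pin rate (Gbps).
rtx_2050 = (64 / 8) * 14.0         # published spec: ~112 GB/s
switch2_rumored = (128 / 8) * 6.4  # rumored config, an assumption: ~102 GB/s
print(f"RTX 2050: {rtx_2050:.0f} GB/s vs rumored Switch 2: {switch2_rumored:.0f} GB/s")
# Roughly similar throughput, so capacity (a 4GB VRAM pool vs unified
# memory) is the more meaningful distinction between the two.
```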

Anyway, I have argued for the last few months that the Switch 2 would likely land roughly in between the 2050 and the 3050 (30-35W) in terms of performance. Closer to the former than the latter, but probably still a bit better than the former. You and a few others thought that even the 2050 was a stretch. Digital Foundry made a video suggesting that that is likely a good estimate for what we should expect. The whole point of that original post was to argue that the Switch 2 and ROG Ally should be roughly comparable, even if we take Digital Foundry's estimate as the most likely outcome.

Last edited by sc94597 - on 16 November 2023