
How will Switch 2 be, performance-wise?

 

Poll: Your expectations

Performance ridiculously ... : 0 votes (0%)
Really below current gen, ... : 2 votes (100%)
Slightly below current ge... : 0 votes (0%)
On par with current gen, ... : 0 votes (0%)

Total: 2
Pemalite said:


And yet... It still beats it.
Either way, the whole 3050 4GB vs 6GB argument is irrelevant.

Because like I alluded to before, video game developers build games within the confines of the hardware walls, not out of it.

The 4GB 2050 is also not a 3050 4GB or 6GB.
It's actually worse, with a fraction of the bandwidth, which means it's even more useless at managing large datasets.

112 GB/s vs. 176 GB/s or 192 GB/s is a big difference... Bandwidth is what is holding back the 2050 the most, not memory capacity.

It seems you've lost the context of the discussion here if you think it is irrelevant.

  1. I argued that 6GB being available for graphics could alleviate VRAM capacity bottlenecks.
  2. You argued that VRAM capacity bottlenecks are far less of an issue compared to VRAM bandwidth bottlenecks as the Switch 2 isn't targeting 1440p resolutions.
  3. I showed a counterexample where two nearly identical low-end Ampere GPUs have the exact same specs, except one has more VRAM capacity and the other more VRAM bandwidth. Their target resolution is native 1080p. The chip with more capacity was not only more stable, even in those instances where it "loses" by having a slightly lower average framerate, but also didn't experience hard bottlenecks that nearly halved its framerate in the large and growing number of games that struggle at 1080p on 4GB cards.
  4. The analogous situation, where memory bandwidth is a huge issue, doesn't show up. In other words, the bottlenecks for the RTX 3050 6GB are less severe than the bottlenecks for the RTX 3050 Ti 4GB when targeting 1080p, which is why most people recommend the former over the latter. 

Not sure where you're getting 176GB/s for the 3050 6GB (or are you referring to 3050 4GB?)

The 3050 6GB's effective double-data-rate speed is 14 Gbps (in Afterburner we see 7001 MHz × 2 for DDR).

14 Gbps/pin × 96 pins ÷ 8 bits/byte = 168 GB/s.
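If it helps, here's that arithmetic as a quick Python sanity check (a minimal sketch; the bus widths and data rates are the published specs for these cards, matching the figures above):

```python
# Peak memory bandwidth = per-pin data rate (Gbps) * bus width (bits) / 8 bits per byte
def peak_bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    return data_rate_gbps * bus_width_bits / 8

print(peak_bandwidth_gb_s(14, 96))   # RTX 3050 6GB: 96-bit @ 14 Gbps  -> 168.0 GB/s
print(peak_bandwidth_gb_s(14, 64))   # RTX 2050:     64-bit @ 14 Gbps  -> 112.0 GB/s
print(peak_bandwidth_gb_s(11, 128))  # laptop 3050:  128-bit @ 11 Gbps -> 176.0 GB/s
print(peak_bandwidth_gb_s(12, 128))  # laptop 3050:  128-bit @ 12 Gbps -> 192.0 GB/s
```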

Anyway,

Let's test your hypothesis (against the empirical evidence) on the 2050 vs. 3050. 

Jarrod's Tech ran benchmarks comparing the RTX 2050 and RTX 3050 4GB with the same CPU. 

Here is the average difference @1080p (see Jarrod's Tech chart).

Yeah... it's not bandwidth, at least not at the lower TGPs. 

Pemalite said:

sc94597 said:

Yes, it is 95W vs. 75W, BUT the GPU clocks are comparable and the 3050 is running at roughly 72W: 1965 MHz for the 3050 Ti and 1942 MHz for the 3050 6GB. And the difference is +74%. That's not just because of a 20-watt difference, especially when that difference isn't affecting max clock rates.

TDP has a massive influence over mobile hardware... To the point where a lower-end part with a higher TDP will outperform a higher-end part.

E.g. the RTX 3070 outperforming the RTX 3080, despite the 3080 having twice the VRAM.

You know this, but it's the ability to reach higher boost clocks that produces the performance gains as you increase TGP. A certain voltage is necessary to achieve a given boost clock, and power usage is proportional to voltage squared (and linearly proportional to frequency), so you have to raise the power budget to hold certain frequencies stable. But in the comparison I brought up, the ostensibly "95W" RTX 3050 6GB was pulling about 70W most of the time, probably because dynamic boosting was apportioning power to the CPU, and the GPU core clocks were the same between the two GPUs, with the 3050 Ti actually having a very slightly higher core clock.

If one pegs a GPU to 1950 MHz and then undervolts it to cut power consumption by 20W, it's not going to lose performance as long as it is stable at that voltage.

Most voltage-frequency curves flatten out around a horizontal asymptote at the top end. It seems that's what happened here.
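To make the undervolting point concrete, here's a toy model using the standard dynamic-power approximation P = k · V² · f (a sketch; the voltages and the constant k are made-up illustrative values, not measurements from these cards):

```python
# Dynamic power approximation: P = k * V^2 * f
# k bundles capacitance and activity factor; chosen here only to land near ~70W.
def dynamic_power_w(voltage_v: float, freq_mhz: float, k: float = 0.04) -> float:
    return k * voltage_v**2 * freq_mhz

freq = 1950  # MHz, pegged in both cases
stock = dynamic_power_w(0.95, freq)      # stock voltage (illustrative)
undervolt = dynamic_power_w(0.85, freq)  # lower, still-stable voltage (illustrative)

print(f"stock:     {stock:.1f} W")       # ~70.4 W
print(f"undervolt: {undervolt:.1f} W")   # ~56.4 W
print(f"saved:     {stock - undervolt:.1f} W at the same clock, i.e. same performance")
```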

Anyway, the ~70% performance difference isn't because of the TDP label here; it's that Forza bottlenecks as the VRAM fills up. Everything else that matters (clock rate, core count, etc.) is kept equal except that.

Keep in mind that Breath of the Wild also runs on a:

* Triple-core CPU @ about 1.25 GHz
* 1GB of RAM
* Radeon 5550-class GPU

It doesn't have hardware demands that would be regarded as "intensive", 4K or not.

Yes, and what hardware did Mirror's Edge run on originally?

Nintendo's OS memory footprint has generally increased every console generation.

Not only that, but one of the Switch's biggest issues is the extremely slow and laggy eShop; more memory dedicated to that task would clean it up a ton.
...And if they implement features like you alluded to, such as voice chat natively on the console itself, that would also require more RAM.

The Wii U had 1GB dedicated to the OS, like the Switch. The OSes of the Wii and Wii U were different enough in feature set that it makes sense there would be an increase; likewise with the GameCube and Wii.

I don't see Nintendo adopting more features. In fact, the Switch was a downgrade in terms of non-gaming apps compared to the Wii U (i.e., no consumer-facing browser, no media apps, etc.)

It's very much unlikely to be on a 5/4nm TSMC node, as it's expensive.

Thratkor explains here why 5/4nm is, in fact, very much likely from a cost-minimization perspective. Basically, going with 12 SMs doesn't make sense on 8N unless the power profile is much, much higher than we would expect for a handheld. Going with 6 SMs at higher minimum clock speeds would be cheaper and give better performance, but we know from the leak that there are 12 SMs on the T239.

"The short answer is that a 12 SM GPU is far too large for Samsung 8nm, and likely too large for any intermediate process like TSMC's 6nm or Samsung's 5nm/4nm processes. There's a popular conception that Nintendo will go with a "cheap" process like 8nm and clock down to oblivion in portable mode, but that ignores both the economic and physical realities of microprocessor design.

I recommend reading the whole post, but this snippet addresses the expense question specifically. 

"But what about cost, isn't 4nm really expensive?

Actually, no. TSMC's 4N wafers are expensive, but they're also much higher density, which means you fit many more chips on a wafer. This SemiAnalysis article from September claimed that Nvidia pays 2.2x as much for a TSMC 4N wafer as they do for a Samsung 8nm wafer. However, Nvidia is achieving 2.7x higher transistor density on 4N, which means that a chip with the same transistor count would actually be cheaper if manufactured on 4N than 8nm (even more so when you factor yields into account)."
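The arithmetic in that quote is easy to verify. Here's the same calculation spelled out (a sketch using only the 2.2x wafer-cost and 2.7x density figures from the SemiAnalysis claim, and ignoring yields, which would favor 4N even further):

```python
# Cost of the *same* chip (same transistor count) on TSMC 4N relative to Samsung 8nm.
# Per-chip cost scales with wafer cost divided by transistor density.
wafer_cost_ratio = 2.2  # 4N wafer costs 2.2x an 8nm wafer
density_ratio = 2.7     # 4N fits 2.7x the transistors per unit area

relative_cost = wafer_cost_ratio / density_ratio
print(f"Same chip on 4N costs ~{relative_cost:.0%} of its 8nm cost")  # ~81%
```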

A Radeon RX 570 is running the Matrix Awakens demo.

It is a 4GB card.

I mean, if you go into the config file and set every memory-intensive variable to 0, then yes, it will work!

This is what they did to get it working. 

Meanwhile, the 3050 Ti doesn't do much better than Digital Foundry's attempt with the 2050 when running the actual demo, despite having nearly double the memory bandwidth.

"Video memory has been exhausted (2220.551 MB over budget) Expect extremely poor performance"

The Switch 2 version is ostensibly running with ray tracing implemented.

Last edited by sc94597 - on 17 November 2023


Potential graphics aside, I hope Nintendo gets serious about surround sound. The Switch has subpar surround; frankly, it isn't even as good as the PS3's, much less the PS4's and PS5's.



sc94597 said:
Conina said:

Please use Wh (watt-hours) instead of Ah (ampere-hours) for battery capacity comparisons.

Ampere-hours depend on the voltage of the device.

The Switch and the Steam Deck have different voltages, and the Switch 2 will probably have different voltages as well.

Watt-hours are a unit of energy: the capacity to deliver power (in watts) over a period of one hour.

In contrast, amp-hours measure charge: current (in amps) over a period of time.

Oh, I know. I wasn't meaning to compare total energy capacity between the two; charge capacity is more directly correlated with the surface area of the battery than energy is, and we don't know the voltages of the Switch 2 to compare on an energy basis anyway. We don't even know the size and target TGP of the Switch 2 (only rumors on those). So charge capacity (which correlates with the size of mobile devices) is all we can compare on right now.

Edit: NatetheHate has suggested the Switch 2 will have an 8-inch screen, which is why I was comparing to the Steam Deck. It's probably going to be bigger than the Switch OLED (although probably not as big as the Steam Deck).
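For anyone following along, the conversion Conina describes is just Wh = Ah × V. A minimal sketch (the Switch and Steam Deck numbers below are their published battery specs; Switch 2 figures are unknown, which is the point):

```python
# Energy (Wh) = charge (Ah) * nominal voltage (V).
# Comparing raw Ah across devices with different pack voltages is apples to oranges.
def watt_hours(amp_hours: float, nominal_voltage: float) -> float:
    return amp_hours * nominal_voltage

switch = watt_hours(4.310, 3.7)  # Switch: 4310 mAh @ 3.7 V     -> ~15.9 Wh
deck = watt_hours(5.313, 7.7)    # Steam Deck: 5313 mAh @ 7.7 V -> ~40.9 Wh

print(f"Switch:     {switch:.1f} Wh")
print(f"Steam Deck: {deck:.1f} Wh")
# The Deck has only ~1.2x the Switch's charge in Ah, but ~2.6x the energy.
```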

I guess we're hoping for an S2 big enough that it can fit a large enough battery and adequate cooling to drive the chipset at decent frequencies, but not so big that it can accommodate the GPU at 8nm, haha.

I guess a larger Switch would potentially also render all current Joy-Cons useless; not sure how that'd go down with fans...



I remember Digital Foundry did a test before The Witcher 3 was announced for Switch. They made a mock PC with the lowest specs that were somewhat close to a Switch. I don't remember the exact details, but I seem to recall The Witcher 3 just wasn't working well. Yet the port the Switch got, while blurry, works. I guess I'm just saying that using some mock DF PC with speculative specs to judge what can run on it seems silly. Devs will scale and optimize for whatever it is. I also feel a lot of people are setting themselves up for disappointment. A lot of people kept thinking the Switch would have an X2 and 6GB of RAM. I like beefy specs as well, but I'm not expecting top of the line here, even for a handheld.



Bite my shiny metal cockpit!

I expect PS4-like performance in handheld mode and PS4 Pro-like performance in docked mode, albeit with a few more modern features. So I guess the end result will look slightly better than the PS4 and PS4 Pro. And that's perfectly fine; Nintendo games will look great on the system.

As for other titles, since the Series S is a thing, I expect most major games to run on the Switch 2. If a game can run on the Series S, the devs should be able to make it run on the Switch 2 with some sacrifices (1080p@30fps vs. 1440p@60fps, turning down graphical fidelity, or whatever). That's fine. The only games that will struggle to run on the platform are the absolutely massive (western) AAA games, and there's only a handful of those every year.



Biggerboat1 said:

I guess a larger Switch would potentially also render all current Joy-Cons useless; not sure how that'd go down with fans...

I'd be shocked if the Joy-Cons of the Switch 2 stay as unergonomic as the current ones. They can do so much better!



Conina said:
Biggerboat1 said:

I guess a larger Switch would potentially also render all current Joy-Cons useless; not sure how that'd go down with fans...

I'd be shocked if the Joy-Cons of the Switch 2 stay as unergonomic as the current ones. They can do so much better!

Do we think Nintendo will actually give us the choice to buy a left Joy-Con with an actual D-pad included this time?



Switch Friend Code : 3905-6122-2909 

On paper, it will be better than a PS4 Pro, especially the CPU.
But battery life and thermal limitations are going to drag it down to a level below the PS4 Pro.



Biggerboat1 said:
sc94597 said:

Oh, I know. I wasn't meaning to compare total energy capacity between the two; charge capacity is more directly correlated with the surface area of the battery than energy is, and we don't know the voltages of the Switch 2 to compare on an energy basis anyway. We don't even know the size and target TGP of the Switch 2 (only rumors on those). So charge capacity (which correlates with the size of mobile devices) is all we can compare on right now.

Edit: NatetheHate has suggested the Switch 2 will have an 8-inch screen, which is why I was comparing to the Steam Deck. It's probably going to be bigger than the Switch OLED (although probably not as big as the Steam Deck).

I guess we're hoping for an S2 big enough that it can fit a large enough battery and adequate cooling to drive the chipset at decent frequencies, but not so big that it can accommodate the GPU at 8nm, haha.

I guess a larger Switch would potentially also render all current Joy-Cons useless; not sure how that'd go down with fans...

Well, given that at 550 MHz (peak efficiency for Orin on 8nm with 12 SMs) the GPU would consume something like 7.5W (double what the original Switch did in handheld mode), the battery would have to be about double the capacity to get a similar lifespan, especially considering the cooling has to run faster too, if it were on 8nm. 5nm just makes more sense.
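A rough back-of-the-envelope version of that claim (a sketch; the 7.5W GPU draw is the 8nm estimate above, and the rest-of-system draw is an assumed figure for illustration only):

```python
# Handheld battery life ~= battery energy (Wh) / total system draw (W).
battery_wh = 16.0       # original Switch pack, for reference
rest_of_system_w = 4.0  # assumed CPU + RAM + screen + fan draw (illustrative)

for label, gpu_w in [("12 SM @ 550 MHz on 8nm", 7.5),
                     ("original Switch GPU", 3.75)]:
    hours = battery_wh / (gpu_w + rest_of_system_w)
    print(f"{label}: ~{hours:.1f} h on a 16 Wh battery")
# ~1.4 h vs. ~2.1 h: halving GPU draw (or roughly doubling the battery) closes the gap.
```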

Never thought Joy-Con forward compatibility (beyond using them in docked mode or for Switch games) was something people expected. Nintendo always changes its primary controllers with each new platform. But there isn't a reason they couldn't use some sort of adapter or have the Joy-Cons fit a larger Switch. I'd expect it to be more an increase in width than thickness; that fits the modded Joy-Cons people tend to use.

Last edited by sc94597 - on 19 November 2023

Leynos said:

I remember Digital Foundry did a test before The Witcher 3 was announced for Switch. They made a mock PC with the lowest specs that were somewhat close to a Switch. I don't remember the exact details, but I seem to recall The Witcher 3 just wasn't working well. Yet the port the Switch got, while blurry, works. I guess I'm just saying that using some mock DF PC with speculative specs to judge what can run on it seems silly. Devs will scale and optimize for whatever it is. I also feel a lot of people are setting themselves up for disappointment. A lot of people kept thinking the Switch would have an X2 and 6GB of RAM. I like beefy specs as well, but I'm not expecting top of the line here, even for a handheld.

One of the things to consider, though, is that Matrix Awakens is a demo, not a full game. Spending time optimizing a demo makes very little sense from a financial perspective, unless you're trying to impress an audience with the capabilities of the system by showing off its feature set (i.e., ray tracing). And from the sounds of it, the demo looks good, at least as the journalists describing it perceived it. I'm not sure that if somebody saw The Witcher 3 in the form it ran on the Switch, they'd say the same.

Furthermore, by the time the Switch 2 releases at the end of 2024 or beginning of 2025, an Orin chip die-shrunk to 5nm/4nm isn't really "beefy" or "top of the line." Most current hardware will be on 3nm by that time (Blackwell for Nvidia; Apple is already on 3nm now; AMD plans to use Samsung 4nm and TSMC 3nm; etc.). Nvidia's Lovelace (5nm) will be aging technology, and it makes sense for all of their legacy chips to be on that node. The Switch 2, with those specs, probably won't even be the most powerful gaming handheld on the market. AMD's Zen 5 (Nirvana) should be out by then, and mobile APUs on Zen 5 will be much more powerful than the Switch 2, especially if they target a higher power profile. Mobile RAM chips have been rapidly decreasing in price over this year; most high-end tablets/phones should have 16-20GB by then (right now they range from 12-16GB), with mid-range phones/tablets having 12GB. In 2017, the Pixel 2 (a relatively high-end smartphone that cost $649-$949) had 4GB of LPDDR4X RAM, in comparison to the Switch's 4GB of LPDDR4.

Like with any console hardware, one should expect mid-range specs for the particular form factor. The PS5 and XBS X were mid-range with respect to PC hardware when they released at $400-$450 in late 2020, and the Switch 2 should be about mid-range for tablet hardware when it releases at a similar price in 2024/2025. A 5nm die-shrunk, 12 SM Orin chip with 12GB of RAM will essentially be mid-range for tablets, and low-end compared to gaming platforms in general, but still probably more capable than the DF test, by at least a little bit.

Last edited by sc94597 - on 19 November 2023