
How Will Switch 2 Be, Performance-Wise?

 

Switch 2 is out! How do you classify it?

Terribly outdated!    3   5.26%
Outdated              1   1.75%
Slightly outdated    14  24.56%
On point             31  54.39%
High tech!            7  12.28%
A mixed bag           1   1.75%
Total: 57
bonzobanana said:

I honestly don't think TSMC is that expensive compared to Samsung, and in fact it seems like TSMC is on a different scale of orders to Samsung's fabrication business.

This is my bread and butter. What 'we' think is irrelevant. This isn't something that can be based on "thoughts" or "opinions".
Fab costs are often 50% higher with TSMC versus Samsung.

For the 2nm node... Samsung is charging $20,000 per wafer. TSMC? $30,000.
Evidence:
https://www.techpowerup.com/341465/samsung-cuts-2-nm-node-pricing-by-33-in-tsmc-competition-push

https://smart.dhgate.com/samsung-8nm-vs-tsmc-7nm-why-did-nvidia-choose-samsung-and-did-it-backfire/

https://semiwiki.com/semiconductor-manufacturers/samsung-foundry/7926-samsung-vs-tsmc-7nm-update/
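For a sense of scale, here's a rough per-die cost sketch from those wafer prices. The die size and yield below are illustrative assumptions on my part, not figures from the links:

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> float:
    """Gross dies per wafer with the standard edge-loss correction."""
    return (math.pi * (wafer_diameter_mm / 2) ** 2 / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# Hypothetical mid-size mobile SoC on a 300 mm wafer; 80% yield is assumed.
DIE_AREA_MM2 = 100.0
YIELD = 0.80
good_dies = dies_per_wafer(300, DIE_AREA_MM2) * YIELD

for fab, wafer_price_usd in (("Samsung 2nm", 20_000), ("TSMC 2nm", 30_000)):
    print(f"{fab}: ~${wafer_price_usd / good_dies:.0f} per good die")
```

Under those assumptions, that's roughly $39 versus $59 per good die; multiply by tens of millions of consoles and the fab choice is real money.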

So now we can move on from this... The facts are in: Samsung is cheaper, TSMC is more expensive... And that is why nVidia/Nintendo went with Samsung; it comes down purely to cost.

bonzobanana said:

Even the cheapest, nastiest tablets and smartphones have chips fabricated by TSMC, which admittedly would be an older fabrication process, but you only have to step up a bit to see fabrications better than Switch 2 on far cheaper devices. It is Nvidia that sells the T239 chip to Nintendo, so obviously it is Nvidia that deals with the fabricator and places orders, and of course in recent times they have switched from Samsung to TSMC for their later chips. But the T239 was designed around Samsung's 8nm (really mainly 10nm) process, which is what Nvidia was using at the time for their PC chipsets, and Nintendo might already have been in possession of hundreds of thousands of pre-fabricated T239 chips anyway from the aborted Switch Pro model. Also, the Switch 2's T239 is mid-level in size; it is neither small nor large in die area, and there are more complicated chips out there.

Phone/tablet chips have different design goals to the Tegra.
Tegra Orin was designed for industrial, signage, vehicle, automation and other applications with higher TDPs and larger silicon die areas; it was not designed with phones/tablets as its primary market.

Remember... Nintendo is a console manufacturer trying to sell a budget device to the mainstream; the dollar value is probably the most important factor in determining bang-for-buck with your hardware... And it's hard to pass over Samsung currently, when they are the cheapest fabricator in the business.

bonzobanana said:

Gflops is still widely used. Yes, everyone knows it's a general indication only, but it's widely used: Geekerwan used it in their comparison, Techpowerup still lists it for all graphics chipsets, etc. No one is saying it is perfect, but gflops is often criticised in forums because a person wants to blur everything and create a defence for weaker graphics cards. We all know results will vary depending on how game engines use that graphics card, whether it's used on a fixed platform, and what other resources the GPU has, but it is still a widely used metric for the industry despite its weaknesses. We all know there are other huge factors like memory bandwidth, upscaling technologies, ray tracing, etc. There is absolutely no point constantly saying gflops isn't perfect, as we all know that, but it still is the only generalised part of the spec that can be used.

You can't compare a Switch 2 to a Steam Deck visually, directly, because the Switch 2 version is custom designed from the ground up to work with the hardware, whereas the Steam Deck just gets a generalised PC version. Lots of things are scaled back for the weaker hardware. The whole industry still uses gflops, despite its huge flaws, to give a GENERAL indication of potential power, typically independent of the CPU, and it will be in the right ballpark area. Each GPU architecture has strengths and weaknesses, which of course are not incorporated in a simple gflops figure, and even the gflops figure itself is split into fp16, fp32 and fp64. Those figures vary between cards, so a card good at fp32 may be considerably weaker at fp16 than the card it is being compared to, for example, but generally it is the fp32 figure that gets compared, as it is the one used most often in game engines.

Just because it's "widely used" doesn't mean it actually holds any legitimate relevance.

Remember, console manufacturers and the industry as a whole used to use "bits" as a determiner of performance for decades... And there was a period where a 32-bit CPU could outperform a 64-bit one.
Shit. There was a 32-bit console that was several multiples faster than a 64-bit one.
And that was a blatant misleading of consumers during that period... It's marketing. Marketing isn't always about truth... Marketing is about trying to make your product appeal to the masses to accrue sales... And Gflops/Tflops is no different.


Again... Gflops/Tflops represents single-precision floating-point performance; it's an estimated mathematical upper limit, not a measurement.
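For context on what the headline number even is: peak FP32 FLOPS is just shader cores × 2 (a fused multiply-add counts as two operations) × clock speed. A minimal sketch, using commonly reported (not officially confirmed) figures for the two handhelds:

```python
def fp32_tflops(shader_cores: int, clock_ghz: float) -> float:
    """Theoretical peak: each core retires one FMA (2 FLOPs) per cycle."""
    return shader_cores * 2 * clock_ghz / 1000.0

# Figures below are commonly reported, treated here as assumptions:
print(fp32_tflops(1536, 1.0))  # T239-class Ampere, docked clock -> ~3.1 TFLOPS
print(fp32_tflops(512, 1.6))   # Steam Deck RDNA2 iGPU           -> ~1.6 TFLOPS
```

That's the entire derivation. Everything below is what that one multiplication leaves out.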

It doesn't tell us real-world performance, which is often lower due to inefficiencies in core designs and the difficulty of keeping pipelines fed with work, even with smarter compilers and larger caches.

It doesn't tell us geometry throughput.
It doesn't tell us pixel fillrate throughput.
It doesn't tell us texel fillrate throughput.
It doesn't tell us integer performance.
It doesn't tell us A.I. inference performance.
It doesn't tell us Ray Tracing performance.
It doesn't tell us if the hardware is capable of Rapid Packed Math for double the performance at half the precision.
It doesn't tell us how much bandwidth the chip has.

And that means that a GPU with fewer teraflops can (and has!) outperform a GPU with more teraflops.
Or... an identical GPU with the same Gflops/Tflops rating can lose 50% of its performance just from a change of DRAM type.

Tflops/Gflops is a bullshit metric; it always was.
It's a marketing selling point for the most part.

For example... GeForce GT 1030, DDR4 vs GDDR5.
Same Gflops; almost twice the performance from the GDDR5 variant.
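The bandwidth arithmetic explains the gap. A quick sketch, using the memory specs commonly listed for the two GT 1030 variants:

```python
def peak_bandwidth_gb_s(bus_width_bits: int, rate_gtps: float) -> float:
    """Peak DRAM bandwidth: bus width in bytes x effective transfer rate."""
    return bus_width_bits / 8 * rate_gtps

# Roughly the same ~1.1 TFLOPS FP32 on both cards; only the memory differs.
print(peak_bandwidth_gb_s(64, 2.1))  # DDR4 variant  -> ~16.8 GB/s
print(peak_bandwidth_gb_s(64, 6.0))  # GDDR5 variant -> ~48.0 GB/s
```

Same chip, same Gflops, almost three times the bandwidth; the Tflops number captures none of it.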


Or take the Radeon HD 5870 vs HD 7850. The 5870 has almost 1 Teraflop more... But it will always lose to the Radeon HD 7850.
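Plugging the publicly listed shader counts and clocks into the same peak-FP32 formula (cores × 2 × clock) bears this out; treat the outputs as approximations:

```python
def fp32_tflops(shader_cores: int, clock_ghz: float) -> float:
    """Theoretical peak FP32 throughput (1 FMA = 2 FLOPs per core per cycle)."""
    return shader_cores * 2 * clock_ghz / 1000.0

print(fp32_tflops(1600, 0.850))  # Radeon HD 5870 (VLIW5) -> ~2.72 TFLOPS
print(fp32_tflops(1024, 0.860))  # Radeon HD 7850 (GCN)   -> ~1.76 TFLOPS
# The VLIW5 part rarely sustains its peak; GCN wins in real workloads.
```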

Efficiency is more important. Architecture is more important.

Now that I have proven that Gflops/Tflops is bullshit... We can move on to the next point of contention.

bonzobanana said:

You can't compare a Switch 2 to a Steam Deck visually, directly, because the Switch 2 version is custom designed from the ground up to work with the hardware, whereas the Steam Deck just gets a generalised PC version. Lots of things are scaled back for the weaker hardware. The whole industry still uses gflops, despite its huge flaws, to give a GENERAL indication of potential power, typically independent of the CPU, and it will be in the right ballpark area. Each GPU architecture has strengths and weaknesses, which of course are not incorporated in a simple gflops figure, and even the gflops figure itself is split into fp16, fp32 and fp64. Those figures vary between cards, so a card good at fp32 may be considerably weaker at fp16 than the card it is being compared to, for example, but generally it is the fp32 figure that gets compared, as it is the one used most often in game engines.

The actual, literal reason why the Steam Deck cannot be compared to the Switch 2 is that the Tegra chip in the Switch 2 is a far more modern and efficient architecture than the competing AMD part.
Ampere is a generation or two ahead of the RDNA2 in the Steam Deck.

RDNA2 doesn't support FP8 or BFloat16 for FSR4 upscaling to compete with DLSS; it needs to use the less efficient INT8 format (which isn't represented in the Tflops number).

And like I alluded to prior with evidence... Similar Gflops doesn't give similar real world performance.

bonzobanana said:

On the Switch 2, I've seen enough developer reports on how they optimise for Switch 2: they simplify code and remove some minor features with high GPU or CPU requirements to create a game engine with good fps on weaker hardware, which has always been the case for fixed platforms. They drive the hardware better with more direct access to the chipsets. They create a conversion with not much of importance missing, so a good game experience. It's not that the Switch 2 has amazing magical hardware that isn't shown in the gflops figures; it's just fixed-platform optimisation. The same was true of the PS4, Switch 1 and basically all fixed platforms. The Switch 2 chipset was designed back in 2019/2020 and its release was stalled for many years. It's an old architecture now, which has been improved upon a few times in later chipsets.

Console developers don't build games to the metal anymore (i.e., hand-written assembly); there is literally zero point with how good compilers are these days.

They don't "Simplify code".

What actually gives consoles an edge over PC releases is the low-level graphics API with tight driver integration, which allows draw calls to be issued far more efficiently and with significantly reduced overhead.
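A toy cost model shows why that overhead matters; the per-call figures below are illustrative assumptions, not measurements:

```python
# Toy model: CPU submission cost per frame = draw calls x per-call overhead.
DRAW_CALLS_PER_FRAME = 5_000

# Per-call overheads here are illustrative assumptions, not measured values.
for path, overhead_us in (("generic PC driver path", 10.0),
                          ("console low-level API", 1.0)):
    cpu_ms = DRAW_CALLS_PER_FRAME * overhead_us / 1000
    print(f"{path}: {cpu_ms:.0f} ms of CPU time per frame just on submission")
```

Under those assumptions, one path burns 50 ms per frame on submission alone (impossible at 60 fps, where the whole budget is 16.7 ms) while the other spends 5 ms, with no change to the GPU at all.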

In fact, many of the higher settings and assets from the PC version of a game still exist on the console disc release, and sometimes a developer will release an "enhancement patch" for the Pro or successor console; we saw this often with Xbox Series and PlayStation 5 cross-releases.

Switch 2 is based on nVidia's Tegra Ampere... The Ampere architecture was released in 2020.
But GPU designs take many, many years to build and ratify, which is why AMD and nVidia have multiple GPU development teams that leap-frog releases... So the design of the GPU in Tegra Orin potentially predates the original Switch release.

bonzobanana said:

Comparing my old RX 580 to my gaming laptop with an RTX 2050 mobile: the gflops figure is about the same, but the RX 580 is much, much older. It is, however, a desktop GPU with more memory bandwidth, though it lacks some of the modern features of the RTX 2050 mobile. The RTX 2050 is far superior to the T239 on just about every level, but still performs lower than the RX 580. This is often the case with mobile chips, which are more power-constrained.

https://technical.city/en/video/Radeon-RX-580-vs-GeForce-RTX-2050-mobile

It's like sometimes you read posts where the person is pretending that just because the architecture is newer it is magically superior on every level, which certainly isn't true. I absolutely love my RTX 2050 laptop, it was £350 well spent, but it doesn't quite perform as well as my dusty old desktop PC. I was playing Cyberpunk on that card what seems like 5 years ago now, at 40-60fps with decent detail settings.


I don't get it. You literally just went on a tirade about how Gflops/Tflops is 100% relevant... but then literally proved my point that at the same Gflops/Tflops level there is different real-world performance?


But thanks for EVENTUALLY agreeing with me in the end, I guess. What a waste of my time this became.





--::{PC Gaming Master Race}::--


These guys say the new 1.06 patch of Star Wars Outlaws increases sharpness (perceived resolution) a lot. Maybe a better DLSS profile was used. Ray tracing noise (rain drops) has been reduced greatly as well (maybe some ray reconstruction is going on).



Yeah, I am just finishing up the last few missions of Outlaws now, and it looks terrific; compared to the launch version, image quality is substantially cleaner, and the hair, which had a lot of aliasing before, seems to have gotten some kind of filter to smooth it over a bit.

The game was already impressive at launch, but the work Ubisoft RedLynx and co have done post-launch to make it even better makes it the gold standard for Switch 2 ports to date.



Yeah, those "rain drops" or "light splotching" (as folks at DF tend to call it) was probably its biggest issue. If SWO weren't treated as a "surprise miracle port", it would have gotten a lot more flak for that, so it's great that they've managed to solve it, since it was borderline a technical bug.

It seems sharper overall as well, though her hair still looks like it can't quite decide in which plane of the multiverse it wants to exist.



[Video: Star Wars Outlaws high-end PC vs Switch 2, 14:50]




Oh, there's a demo for SWO?! Great, might as well check it out both on PC and SW2 over the weekend.



HoloDust said:

Oh, there's a demo for SWO?! Great, might as well check it out both on PC and SW2 over the weekend.

It's worth mentioning that apparently the demo uses the launch code, so it doesn't have the three major graphics/perf updates since release, and as such looks worse than the full game.



curl-6 said:
HoloDust said:

Oh, there's a demo for SWO?! Great, might as well check it out both on PC and SW2 over the weekend.

It's worth mentioning that apparently the demo uses the launch code, so it doesn't have the three major graphics/perf updates since release, and as such looks worse than the full game.

Thanks for the heads-up. As I said a while back, I find even something like the XSS version to be not something I'd want on a big TV, but I'm interested to see what it looks like in handheld mode. Not that I would really play such a game that way (at least not on anything less than a 10" screen), but I'm curious to see it with my own eyes.



Well, after trying SWO both on PC and Switch 2 over weekend, my takeaways:

- It looks fairly bad and blurry on a TV in docked mode, but, depending on personal taste, that might not be much of an issue.
- In handheld mode it looks much more acceptable. As I suspected, I don't find such games to be a good fit for such small screens (I know first-hand there are folks who don't have a problem with that), but if I did, I'd have no problem playing it that way, from a visuals standpoint.
- I'm not sure what they were thinking when designing her hair; it looks like some fuzzy object hovering around her face. That said, it looks like crap on PC as well, even with everything turned up to Ultra (+ Ray Reconstruction + RTXDI) and DLAA on.
- I didn't get around to trying the game on the 9060 XT. I'm eager to see what it looks like there, since AMD GPUs don't have access to nVidia's Ray Reconstruction, which solves that light splotching completely (albeit with a hefty performance hit).



RDR on Switch 2 will use DLSS at "high resolution" and 60 fps. It will show the capabilities of DLSS on Switch 2, because high res and 60 fps together was not something we had before.

Edit: it's RD1.

Last edited by numberwang - on 13 November 2025