| bonzobanana said: Well, I've got an RTX 2050 mobile laptop, which is basically the Switch 2's GPU architecture but over twice the power, and compared to the RX 580 (Xbox One X) it is still weaker in real-world tests, as you would expect. The Switch 2 is under half that performance when docked. https://pc-builds.com/compare/gpu/0Yg1aB/radeon-rx-580/geforce-rtx-2050 Let's not forget Nintendo does not give out full specs, and development hardware is not retail hardware. The Geekerwan analysis is the best we have at the current time. Let's also not forget the Switch 2 is based on a dated, mainly 10nm fabrication process which is very power-hungry, and it only comes with a 19Wh battery, well below other mobile gaming devices. The idea that Nintendo have somehow clocked the system to high levels doesn't hold up: they have always clocked low previously, and the hardware simply doesn't have that level of power. Even the Geekerwan analysis only gave the theoretical peak figures for GPU performance when portable; in real terms it is likely below 1 teraflop a lot of the time. This is nothing new: Nintendo always clocks lower, they always have. It means greater reliability, longer battery life and fewer returns for them; a cooler-running system is a longer-life unit. It's what Nintendo does to maximise profits. Why do we keep getting people pretending Nintendo have clocked to the maximum clocks? Development hardware is always clocked higher than retail hardware, especially where Nintendo is concerned. I don't get why we have to pretend the Switch 2 is more powerful. The Geekerwan analysis was brilliant: it gave us a decent and REALISTIC viewpoint of the hardware without fanboy nonsense. Nothing we are seeing in how the Switch 2 performs contradicts it. Ultimately, we are again in a situation where people just want to believe what they want to believe, which is pure nonsense.
The Switch 2 is a decent portable handheld, but it's based on a fabrication process from 2020 and has a tiny battery, and yet people keep pretending it's clocked to high limits. The reason the Switch 2 can perform well is the fantastic DLSS upscaling, and of course it's a fixed platform, so games can be fully optimised to work around its weaknesses. Some games do this and some do not; quick and dirty ports to Switch 2 show a very low performance level. |
A few things:
1. The GPU in the PS4 Pro isn't an RX 580 or equivalent to one. I'm not sure why you're bringing it up, other than maybe you're conflating a Switch 2 vs. PS4 Pro comparison with a comparison against the more performant Xbox One X. A much closer analogue to the PS4 Pro's GPU is a slightly power-limited R9 290 (say, an R9 290 set to 85% of its max clock rate).
2. An SDK isn't Nintendo releasing specifications to the public; its target audience is video game developers. Nintendo is obviously going to include documentation on what the SW2 is capable of in a development kit package used by game developers. Digital Foundry has already verified the SDK leak as accurate.
Nintendo Switch 2: today we can finally confirm full specifications, CPU/GPU clock speeds and how much in the way of system resources are available to game developers: https://t.co/sMc5vZqjth pic.twitter.com/HXOe5gqvVs
— Digital Foundry (@digitalfoundry) May 14, 2025
And again, the clock rates used to calculate the TFLOPs here aren't the theoretical maximum (1.4 GHz) but the maximum available to developers for their games (1.007 GHz), so your point about dev kits having more resources is moot, given that this is specifically a target for games running on the retail system.
Make no mistake though, this is a full Ampere GPU, rated for 3.072 TFLOPs when running in docked mode according to Nintendo itself, so by extension that'll drop down to 1.71 TFLOPs when running in mobile mode. TFLOPs comparisons against other devices are irrelevant at this point.
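For reference, those TFLOP figures fall out of the standard peak-throughput formula: 2 ops (one fused multiply-add) per CUDA core per clock. A minimal sketch, assuming the 1536 CUDA cores and the docked/handheld clocks (1.007 GHz / 561 MHz) reported from the leaked SDK documentation -- these are leaked figures, not official public Nintendo numbers:

```python
# Theoretical peak FP32 throughput: 2 ops (fused multiply-add) per CUDA core
# per clock. Core count and clocks below are assumptions from the leaked SDK
# figures discussed in this thread, not official public specs.

def peak_tflops(cuda_cores: int, clock_ghz: float) -> float:
    """Peak FP32 TFLOPs for a given core count and clock (GHz)."""
    return 2 * cuda_cores * clock_ghz / 1000.0

CORES = 1536  # assumed Switch 2 CUDA core count

print(peak_tflops(CORES, 1.007))  # docked:   ~3.09 TFLOPs
print(peak_tflops(CORES, 0.561))  # handheld: ~1.72 TFLOPs
print(peak_tflops(CORES, 1.400))  # silicon max (1.4 GHz), not exposed to games
```

(The small gap between ~3.09 here and the 3.072 figure quoted above just reflects rounding the docked clock to an even 1.0 GHz; either way, handheld lands at roughly 56% of docked.)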
I could see the SW2 being hacked in a few months or years, with the max clock rates reported as exactly these values while games are running, and you'll still be in denial somehow.
3. The Geekerwan simulation is fine. Your extrapolation beyond it isn't. He compared a simulated SW2 (not even the SW2 itself) to other hardware on a synthetic benchmark. He never provided a 2.2 TFLOP number. You erroneously pulled that out of the fact that their simulation gave a similar score to a GTX 1050 Ti, but I showed in the post you just quoted that GPUs with much higher TFLOP ratings get similar scores -- like the R9 380. Extrapolating from a single data point will lead you to silly conclusions like this.
It's not "wanting to believe what they want to believe"; it is knowing the facts as they exist, based on the evidence we have. #2 above, especially, is something you're wilfully ignoring. Digital Foundry has verified the SW2's clock rates; that is a fact, and they treat it as such, like everyone else who isn't stuck in the year 2024.
Finally, you still haven't answered the core question. If the Switch 2 is 55% as performant as the PS4 Pro (which your motivated reasoning implies), why is it rendering a purely rasterized load (no DLSS) in FO4 at the same resolution, and with a better frame rate, than the PS4 Pro? A minimal difference in LODs (and that's assuming there even is one) isn't going to suddenly let hardware that is nearly half as performant pull this off. What explanation do you have for this?
Last edited by sc94597 - on 09 March 2026






