
How will the Switch 2 be, performance-wise?

 

Which console will it be closest to in performance?

XB1 - 13 votes (8.78%)
PS4 - 49 votes (33.11%)
PS4 Pro - 47 votes (31.76%)
XB1X - 8 votes (5.41%)
XBox Series S - 24 votes (16.22%)
PS5 - 3 votes (2.03%)
XBox Series X - 4 votes (2.70%)

Total: 148 votes

A Switch 2 (docked mode) performing "perceptually equidistant" between the PS4 and PS5 isn't too unrealistic.

It can be achieved via two routes: 

  1. Nvidia goes with TSMC 5nm/4nm, allowing them to get roughly GTX 1060 Max-Q 60W (released mid-2017) / RTX 2050 30W (released end-2021) level performance in an 18W total power profile. This fits roughly with a halving of power requirements every 36-42 months (June 2017: 1060 Max-Q -> Dec 2021: RTX 2050 -> November 2024: hypothetical Switch 2 release; see the sketch after this list).
  2. Nintendo puts some hefty active cooling in the dock and scales the power level to something like 30W when docked on 8nm Ampere, also allowing for GTX 1060 Max-Q/RTX 2050 30W level performance. Think of this as sort of a ROG Ally solution, but better implemented.
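For the curious, a quick back-of-envelope sketch of route #1 in Python. This is a toy model that assumes power requirements for a fixed performance level fall along a smooth exponential; the dates and wattages are the ones quoted above.

```python
# Toy model of route #1: assume the power needed for a fixed performance
# level halves smoothly every `halving_months` months (a simplification).

def projected_watts(base_watts, months_elapsed, halving_months):
    return base_watts * 0.5 ** (months_elapsed / halving_months)

# RTX 2050 at 30W (Dec 2021) projected to a hypothetical Nov 2024 release: ~35 months.
for halving in (36, 42):
    print(f"halving every {halving} mo: {projected_watts(30, 35, halving):.1f} W")
# halving every 36 mo: 15.3 W
# halving every 42 mo: 16.8 W  -> both in the ballpark of an 18W power profile
```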

The closer the Switch 2's release is to 2025, the more #1 becomes not just likely but economical.

Specs like that would put it perceptually equidistant between the PS4 base and PS5, roughly on par with a PS4 Pro (but without the Jaguar CPU and HDD bottlenecks that it had). It would also allow the Switch 2 to target PS4/Steam Deck level performance in handheld mode.
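To put a number on "perceptually equidistant": if perceived performance scales roughly with the logarithm of raw compute (my assumption, nothing official), the midpoint between PS4 and PS5 is the geometric mean of their rated TFLOPS, which lands right in PS4 Pro territory:

```python
import math

# Geometric mean as a stand-in for "perceptually equidistant", assuming
# perceived performance scales ~logarithmically with raw FP32 compute.
ps4, ps4_pro, ps5 = 1.84, 4.2, 10.28  # rated FP32 TFLOPS

midpoint = math.sqrt(ps4 * ps5)
print(f"geometric mean of PS4 and PS5: {midpoint:.2f} TFLOPS (PS4 Pro: {ps4_pro})")
# -> ~4.35 TFLOPS, right around the PS4 Pro's 4.2
```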

Last edited by sc94597 - on 14 November 2023

Leynos said:

It won't be as powerful as a ROG Ally, but it'll be $300 cheaper as well.

If the Digital Foundry thought experiment is a reliable guess, it would put it at roughly ROG Ally level performance, with the Ally in Performance mode (15W).

For example, 

Cyberpunk 2077 runs at 30-40 FPS at 1080p with DLSS Quality and Medium settings.

A ROG Ally gets about 35-40 FPS at 1080p with FSR 2.0 and Medium settings at the Turbo power profile. At 15W the ROG Ally drops to about 30-35 FPS.

Personally, I think the Digital Foundry guess is an underestimate. The Switch 2 will have a VRAM advantage over the RTX 2050 that could make up for having lower clock-speeds/fewer CUDA cores in video memory bottlenecked titles (like quite a few that released this year). Plus there is the wild-card that is a die-shrink to 5nm/4nm. And then of course closed-platform optimizations will help, as alluded to in the Digital Foundry video. 

Edit: 

As for cost, this is what an RTX 2050 laptop costs right now, and there is a lot of redundancy here (e.g., iGPU + dGPU, larger screen, etc.).

Last edited by sc94597 - on 16 November 2023

$400 handheld with PS5 visual output is very reasonable. Nothing crazy here.



i7-13700k

Vengeance 32 gb

RTX 4090 Ventus 3x E OC

Switch OLED

Chrkeller said:

$400 handheld with PS5 visual output is very reasonable. Nothing crazy here.

Only a handful of people voted PS5/Series X in the poll.

As far as I can see all of the posts stating PS5/Series X level are sarcastic in this thread. 

The upper limit for the Switch 2 is performance on par with lower-mid-end platforms from 7-8 years before its release, and low-end ones from 3 years before release.



Ran the Cyberpunk 2077 benchmark on my ROG Ally at each power mode. Graphics were set to Medium, 1080p, FSR Quality mode.

[Benchmark screenshots for each mode: Silent (12W), Performance (15W), Turbo (30W)]

The Digital Foundry Switch 2-like system at DLSS Quality averaged 33 FPS, so about 97% of the ROG Ally's Performance mode (15W) and 75% of its Turbo mode (30W).
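(The percentages are just FPS ratios. For transparency, here's the math, with the per-mode averages the quoted percentages imply:)

```python
# Ratio check for the numbers above. The per-mode averages are the ones
# implied by the quoted percentages (the exact values are in my screenshots).
df_fps = 33           # Digital Foundry's RTX 2050 rig, DLSS Quality average
performance_fps = 34  # ROG Ally, Performance mode (15W)
turbo_fps = 44        # ROG Ally, Turbo mode (30W)

print(f"vs Performance (15W): {df_fps / performance_fps:.0%}")  # 97%
print(f"vs Turbo (30W):       {df_fps / turbo_fps:.0%}")        # 75%
```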

For comparison, on my ROG Ally:

  • God of War (original settings, no FSR) runs about 35 FPS in Performance mode (15W) and 38 FPS in Turbo mode (30W) at 1080p, basically putting it in between the PS4 (30 FPS with some drops) and PS4 Pro (48 FPS average). With FSR Quality it averages 42 FPS (Turbo) and 39 FPS (Performance). With FSR Balanced it goes to about 45 FPS (Turbo) and 40 FPS (Performance).

  • Horizon: Zero Dawn (original settings, no FSR) benchmarks at 41 FPS in Turbo mode and 37 FPS in Performance mode. With FSR Ultra Quality (better than native) I get 52 FPS in Turbo mode and 35 FPS in Performance mode. We don't know what the unlocked performance for the PS4 Pro would've been, but I am guessing it was low because of the poor CPU, and they decided to sell the framerate mode as a "more stable" 30 FPS instead of having it fluctuate between 30-40 FPS.

Consider that the ROG Ally is a semi-open platform running Windows (semi-open because it does get some dedicated optimizations), while the Switch 2-like laptop is an open platform. Closed platforms like the Switch 2, PS4, and PS4 Pro get the advantage of fine-tuned optimization, at the very least.

Last edited by sc94597 - on 15 November 2023


The poll categories are meaningless because that's not how hardware functions any longer. Hardware architecture is a massive factor these days in terms of what games a system can play. XBox Series S also dramatically lowers the floor for all next-gen games, as basically every game has to have a version for that.

What is a Steam Deck? 1.6 teraflops? OK, so only a PS4? But it runs almost every PS5 exclusive. Sure, the settings are turned down a bit, but it does run almost every PS5 game, and many of them at an acceptable 30-40 FPS. So how is that "only a PS4"? Is a Super NES still a Super NES if it can run Mario 64 and GoldenEye 007 and Final Fantasy VII and Metal Gear Solid, even with a few compromises? No, that's not a Super NES at all in that case. And the Steam Deck ports are not even specifically optimized for Steam Deck hardware, as in ports built only for that hardware like the Switch 2 will get; if they were, the performance of those games would probably go up a notch or two. You have to look at that Steam Deck or ROG Ally performance and understand that a dedicated hardware system will get better performance than that; what is Low settings/30 FPS on Steam Deck could easily be Medium settings/40 FPS on a system properly optimized.

The Digital Foundry experiment they set up with a 2050 GPU was running Cyberpunk 2077 at basically the PS5 settings because of DLSS. It was also running A Plague Tale: Requiem, a PS5/XSX-exclusive title, at an acceptable resolution and frame rate; that is one of the best-looking games on the PS5, it looks better than Final Fantasy 7 Rebirth even.

Of course it will not run as well as the PS5/XBS, but no one is expecting literally the same settings; we all know some compromises will be made, but for the most part it's still the same freaking game that you can now take with you on the go. Then again, the PS5/XS don't run at the same settings as high-end PCs either, yet people still play and enjoy those games just fine.

The Switch 2 will have more PS5/XSX ports than the Switch 1 had PS4/XB1 ports. There are several factors that help it out: DLSS; the XBox Series S setting a lower target spec for the home consoles, whereas last generation there was no "XBox One Series S" or "PS4 1/2"; diminishing returns in game development; and developers knowing from day 1 that Switch is a hit brand. My prediction is it will have so many PS5/XBS ports that it won't even be a big deal any longer (Monster Hunter 6, Call of Duty, Madden NFL, etc.).

Last edited by Soundwave - on 15 November 2023

Power is measured in watts, right? Lower than any option.

Considering operations per second, also lower than any option when handheld. Maybe only above the Xbox One when docked.

But considering visuals/performance, I think it will be better than the PS4, potentially on par with the XSS for some games when docked.



sc94597 said:

Personally, I think the Digital Foundry guess is an underestimate. The Switch 2 will have a VRAM advantage over the RTX 2050 that could make up for having lower clock-speeds/fewer CUDA cores in video memory bottlenecked titles (like quite a few that released this year). Plus there is the wild-card that is a die-shrink to 5nm/4nm. And then of course closed-platform optimizations will help, as alluded to in the Digital Foundry video. 

PC memory works differently than consoles.

Yes, the RTX 2050 only has 4GB of memory. - But that 4GB is dedicated -only- to graphics and isn't shared with anything else... Compared to consoles, where that pool of RAM needs to be split between AI, scripting, sound, networking, and a heap more.

The PC also has system memory which will augment the GPU memory, streaming data on an as-needed basis (or, depending on the data, accessing it from system memory directly). - And that laptop had 16GB of system memory, for a total of 20GB of RAM in the entire system.

Then you have oodles of CPU and GPU caching to hide bandwidth and latency deficits.

It's always a bizarre misconception when someone claims any console has a memory advantage over the PC; that has never been true and never will be.

More memory doesn't make up for lower clocks/CUDA cores either.

A GeForce RTX 2050 is a really, really good baseline for Nintendo's next handheld to aim for. Nintendo doesn't chase high-end, high-performing, power-hungry components anymore... And with Nintendo only making handhelds these days, they need to be smarter with their choices of components... And they are.
Tegra is a good choice, despite the fact there are faster mobile chips on the market.



--::{PC Gaming Master Race}::--

Pemalite said:

PC memory works differently than consoles.

Yes, the RTX 2050 only has 4GB of memory. - But that 4GB is dedicated -only- to graphics and isn't shared with anything else... Compared to consoles, where that pool of RAM needs to be split between AI, scripting, sound, networking, and a heap more.

The PC also has system memory which will augment the GPU memory, streaming data on an as-needed basis (or, depending on the data, accessing it from system memory directly). - And that laptop had 16GB of system memory, for a total of 20GB of RAM in the entire system.

Then you have oodles of CPU and GPU caching to hide bandwidth and latency deficits.

It's always a bizarre misconception when someone claims any console has a memory advantage over the PC; that has never been true and never will be.

More memory doesn't make up for lower clocks/CUDA cores either.

A GeForce RTX 2050 is a really, really good baseline for Nintendo's next handheld to aim for. Nintendo doesn't chase high-end, high-performing, power-hungry components anymore... And with Nintendo only making handhelds these days, they need to be smarter with their choices of components... And they are.
Tegra is a good choice, despite the fact there are faster mobile chips on the market.

I am aware that unified memory needs to address the demands of both the GPU and CPU. 

My point, though, was that if you have a VRAM capacity bottleneck (because you can only ever dedicate a fixed 4GB of relatively high-bandwidth memory to graphics, and swapping assets between system RAM and video RAM slows down/interrupts the pipeline and has associated copy overhead anyway), having a large share of unified memory that can be apportioned flexibly is an advantage (assuming there is memory free to allocate).

Given the rumors, we can expect the Switch 2 to be able to allocate something like 6-8 GB of its unified memory to graphics use-cases. That's an advantage over the 2050, given that the relative throughput of the video memory is roughly comparable (112 GB/s vs. 102 GB/s) and a large share of the laptop's system RAM is eaten up by Windows bloat.

So you have a situation where the 2050 has 4GB of VRAM @ 112 GB/s and 16GB - (Windows bloat) of system RAM @ something like 42 GB/s (for a DDR5 laptop)

vs 

Switch 2 with 12-16GB - (Switch OS bloat << Windows bloat) of unified memory at 102 GB/s.
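To make explicit the kind of budget math I'm doing here (every reservation figure below is a guess on my part, not a known spec; 12GB is the low end of the rumored range):

```python
# Rough memory-budget comparison (all reservation figures are guesses).

# RTX 2050 laptop: a fixed 4GB of VRAM @ 112 GB/s, plus DDR5 system RAM
# @ ~42 GB/s that the GPU can only use as slower spillover.
laptop_vram = 4.0
laptop_spillover = 16.0 - 4.0      # guess: ~4GB eaten by Windows + background apps

# Hypothetical Switch 2: one unified pool @ ~102 GB/s (rumored figure).
switch2_pool = 12.0
switch2_os = 2.0                   # guess: lightweight console OS reservation
switch2_graphics = switch2_pool - switch2_os - 4.0  # leave ~4GB for CPU-side game data

print(f"2050 laptop: {laptop_vram:.0f}GB fast VRAM + {laptop_spillover:.0f}GB slow spillover")
print(f"Switch 2:    up to ~{switch2_graphics:.0f}GB of fast unified memory for graphics")
# The point: a unified pool can flex toward graphics in VRAM-bound titles,
# while the 2050's fast pool is hard-capped at 4GB.
```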

I probably should've been more precise in what I was saying when I said "make up for lower clocks/CUDA cores." What I intended to say is that if you have a situation where the most common bottleneck in the graphics pipeline is VRAM capacity, the extra core/clock throughput is wasted anyway. And if that is the bottleneck, then yes, having more effective graphics-purposed memory will lead to better performance. In those instances where it isn't the bottleneck, then no, it "doesn't make up for it." I'd expect that if they loosened up the clock-rate limits in the Cyberpunk benchmark, for example, they probably wouldn't get much better performance, because it is likely the VRAM that is the bottleneck, not the compute capacity.
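A toy model of the bottleneck argument, with purely illustrative numbers (nothing measured):

```python
# Toy frame-time model: if compute and VRAM<->system-RAM swapping overlap,
# the slower of the two sets the pace of the frame.

def frame_ms(compute_ms, swap_ms):
    return max(compute_ms, swap_ms)

baseline   = frame_ms(compute_ms=20.0, swap_ms=30.0)        # VRAM-bound frame
faster_gpu = frame_ms(compute_ms=20.0 / 1.3, swap_ms=30.0)  # +30% clocks/cores
more_vram  = frame_ms(compute_ms=20.0, swap_ms=15.0)        # half the swap traffic

print(baseline, faster_gpu, more_vram)  # 30.0 30.0 20.0
# Extra compute changes nothing while swapping dominates the frame;
# more usable graphics memory does.
```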

Last edited by sc94597 - on 15 November 2023

sc94597 said:

I am aware that unified memory needs to address the demands of both the GPU and CPU. 

My point, though, was that if you have a VRAM capacity bottleneck (because you can only ever dedicate a fixed 4GB of relatively high-bandwidth memory to graphics, and swapping assets between system RAM and video RAM slows down/interrupts the pipeline and has associated copy overhead anyway), having a large share of unified memory that can be apportioned flexibly is an advantage (assuming there is memory free to allocate).

Considering how low-performing an RTX 2050 is, it would likely be a waste to have more than 4GB of memory; it's not pushing high-end 1440p/4K visuals, not with only a paltry 112GB/s of memory bandwidth anyway...

Which is literally a third the bandwidth of an RTX 2060 desktop part, which already struggles at 1440p in most modern titles.

sc94597 said:

Given the rumors, we can expect the Switch 2 to be able to allocate something like 6-8 GB of its unified memory to graphics use-cases. That's an advantage over the 2050, given that the relative throughput of the video memory is roughly comparable (112 GB/s vs. 102 GB/s) and a large share of the laptop's system RAM is eaten up by Windows bloat.

That is blatantly false.

The Switch 2 will not allocate RAM for any specific uses other than OS.
The rest is 100% up to the developer, always has been, regardless of console platform.

Not only that, there has literally been ZERO communication on what hardware the Switch 2 is using, let alone how it will be used and/or allocated, meaning your entire argument is a fabrication from the very start.

Keep in mind that Console vs PC OS bloat is pretty much equal these days.

The Xbox One/Playstation 4/Xbox Series X/Playstation 5 ALL allocate at least 2.5GB of RAM OR MORE for the OS, which is actually a little more than Windows. - Obviously you want more than just 2.5GB of RAM total to make any of these devices usable, so you can run more than just the OS.

It's not 2001 anymore.

sc94597 said:

So you have a situation where the 2050 has 4GB of VRAM @ 112 GB/s and 16GB - (Windows bloat) of system RAM @ something like 42 GB/s (for a DDR5 laptop)

vs 

Switch 2 with 12-16GB - (Switch OS bloat << Windows bloat) of unified memory at 102 GB/s.

Doesn't work like that.

sc94597 said:

I probably should've been more precise in what I was saying when I said "make up for lower clocks/CUDA cores." What I intended to say is that if you have a situation where the most common bottleneck in the graphics pipeline is VRAM capacity, the extra core/clock throughput is wasted anyway. And if that is the bottleneck, then yes, having more effective graphics-purposed memory will lead to better performance. In those instances where it isn't the bottleneck, then no, it "doesn't make up for it." I'd expect that if they loosened up the clock-rate limits in the Cyberpunk benchmark, for example, they probably wouldn't get much better performance, because it is likely the VRAM that is the bottleneck, not the compute capacity.

Look. Developers work within the confines of the hardware.

There are GPU tasks which are not very VRAM heavy, which will simply be emphasized more in a VRAM limited environment.

Plus the whole stream-from-SSD thing with compression/decompression has made big improvements in terms of efficient memory utilization.
PC also tends to stream data from disk to system memory and/or video memory (sometimes both!), which is dynamically optimized by the system drivers for maximum utilization with minimal developer interference.

People need to stop over-hyping the Switch 2 to be something it's not.

It's not high-end hardware, never was, never will be... And simply cannot be in a mobile device.



--::{PC Gaming Master Race}::--