
Forums - Nintendo - How Will Switch 2 Be, Performance-Wise?

 

Switch 2 is out! How do you classify it?

Terribly outdated! - 3 votes (5.26%)
Outdated - 1 vote (1.75%)
Slightly outdated - 14 votes (24.56%)
On point - 31 votes (54.39%)
High tech! - 7 votes (12.28%)
A mixed bag - 1 vote (1.75%)

Total: 57 votes

@sc94597

DOOM: The Dark Ages is one of those games that scale very little with graphics settings - in its case only a ~20% drop in performance when going from Low all the way to Ultra Nightmare - so my guess is that most of the performance gains would have to come from lower resolution, since that is where it scales as expected, and probably from additional cutbacks, as with DOOM on Switch 1.

EDIT: I've just looked up results for low settings @1080p DLSS Quality (so 720p native) on a 3050 8GB, which is a GPU at the very bottom of the benchmarks for this game, yet it's still some 3x SW2 docked - so, around 55-60fps. So yeah, even going way down in resolution, I'm guessing some assets will have to be cut. No doubt we will see a SW2 port at some point.
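For anyone wanting to sanity-check the resolution side of that claim, here's a quick sketch of the internal render resolutions the common DLSS presets imply. The scale factors are the commonly cited per-axis ratios and individual games can override them, so treat the outputs as approximate.

```python
# Internal render resolution implied by common DLSS presets (per-axis scale).
# Ratios are the commonly cited defaults; individual games can override them.
DLSS_SCALE = {
    "Quality": 2 / 3,          # ~66.7%
    "Balanced": 0.58,
    "Performance": 0.50,
    "Ultra Performance": 1 / 3,
}

def internal_resolution(out_w, out_h, preset):
    s = DLSS_SCALE[preset]
    return round(out_w * s), round(out_h * s)

print(internal_resolution(1920, 1080, "Quality"))      # (1280, 720)
print(internal_resolution(1920, 1080, "Balanced"))     # (1114, 626)
print(internal_resolution(3840, 2160, "Performance"))  # (1920, 1080)
```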

Last edited by HoloDust - on 28 May 2025


@HoloDust

Yeah I was able to download a performance mod that further tweaked the config to get a solid 40fps at 1080p Balanced.

GPU utilization was around 90-95% at the 65% down-clock, after I tweaked some Windows settings to save on CPU utilization. Memory consumption was still 5.5 GB VRAM and ~3 GB system RAM, so roughly akin to what I'd imagine the available resources would be for Switch 2 docked mode.
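Some rough napkin math on the memory side of that comparison. The 12 GB total and ~3 GB OS reservation below are commonly reported figures rather than confirmed specs, so treat them as assumptions; also note the Switch 2 pool is unified, so VRAM and system RAM come out of the same budget.

```python
# Napkin math: does the measured PC footprint fit the reported Switch 2 budget?
TOTAL_RAM_GB = 12.0       # reported LPDDR5X capacity (assumption)
OS_RESERVED_GB = 3.0      # reported OS reservation (assumption)
game_budget_gb = TOTAL_RAM_GB - OS_RESERVED_GB

measured_vram_gb = 5.5    # from the test above
measured_sysram_gb = 3.0
combined_gb = measured_vram_gb + measured_sysram_gb

print(f"Unified game budget: {game_budget_gb} GB")
print(f"Measured PC footprint: {combined_gb} GB -> fits: {combined_gb <= game_budget_gb}")
```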

Switch 2 handheld mode is what I wonder about the most. It will probably end up like the Doom 2016 on Switch situation: very blurry, unless the Switch 2 does indeed have a bespoke DLSS model that can clean up the low-resolution input well.



sc94597 said:

@HoloDust

Yeah I was able to download a performance mod that further tweaked the config to get a solid 40fps at 1080p Balanced.

GPU utilization was around 90-95% at the 65% down-clock, after I tweaked some Windows settings to save on CPU utilization. Memory consumption was still 5.5 GB VRAM and ~3 GB system RAM, so roughly akin to what I'd imagine the available resources would be for Switch 2 docked mode.

Switch 2 handheld mode is what I wonder about the most. It will probably end up like the Doom 2016 on Switch situation: very blurry, unless the Switch 2 does indeed have a bespoke DLSS model that can clean up the low-resolution input well.

After seeing that Hogwarts Legacy scene, the one that DF later talked about at length, I was very surprised at how badly DLSS is doing on Switch 2 (and it is DLSS, according to the official press release) compared to its PC counterparts when it comes to disocclusion artifacts. Honestly, it looked like its temporal data is scaled down a lot, and that's what's producing so many disocclusion artifacts. So, as someone who plays a lot of games with DLSS upscaling, I'm quite curious to see what exactly the SW2 solution is doing and what it can pull off.

Last edited by HoloDust - on 29 May 2025

HoloDust said:
sc94597 said:

@HoloDust

Yeah I was able to download a performance mod that further tweaked the config to get a solid 40fps at 1080p Balanced.

GPU utilization was around 90-95% at the 65% down-clock, after I tweaked some Windows settings to save on CPU utilization. Memory consumption was still 5.5 GB VRAM and ~3 GB system RAM, so roughly akin to what I'd imagine the available resources would be for Switch 2 docked mode.

Switch 2 handheld mode is what I wonder about the most. It will probably end up like the Doom 2016 on Switch situation: very blurry, unless the Switch 2 does indeed have a bespoke DLSS model that can clean up the low-resolution input well.

After seeing that Hogwarts Legacy scene, the one that DF later talked about at length, I was very surprised at how badly DLSS is doing on Switch 2 (and it is DLSS, according to the official press release) compared to its PC counterparts when it comes to disocclusion artifacts. Honestly, it looked like its temporal data is scaled down a lot, and that's what's producing so many disocclusion artifacts. So, as someone who plays a lot of games with DLSS upscaling, I'm quite curious to see what exactly the SW2 solution is doing and what it can pull off.

The Switch 2 isn't a powerful system, so it is upscaling from a much lower resolution in order to create its 1080p portable visuals, but it's also only a 7.9" screen, so hopefully most artifacts won't be as obvious. Many PC games are using DLSS at a much higher native resolution, so it's a much easier upscale. I have to say, though, DLSS on Switch 2 looks amazing compared to FSR 3.2 on AMD graphics hardware; if you think DLSS has artifacts, you can't have seen FSR 3.2. Switch 2 seems to be doing an amazing job overall in upscaling, but ultimately it's guesswork upscaling which will go wrong on occasion. It's never going to be as clean an image as a purely natively rendered image at the same resolution. I'm sure that where possible, i.e. puzzle games and more classic arcade games, the Switch 2 will just natively render at the higher resolution.
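The screen-size point can be put into numbers: pixel density for a 1080p 7.9" panel versus the original 6.2" 720p Switch screen. This is simple geometry, nothing assumed beyond the panel sizes.

```python
# Pixel density: diagonal pixel count divided by diagonal size in inches.
import math

def ppi(width_px, height_px, diagonal_in):
    return math.hypot(width_px, height_px) / diagonal_in

print(round(ppi(1920, 1080, 7.9)))   # ~279 PPI, Switch 2 panel
print(round(ppi(1280, 720, 6.2)))    # ~237 PPI, original Switch panel
```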



bonzobanana said:
HoloDust said:

After seeing that Hogwarts Legacy scene, the one that DF later talked about at length, I was very surprised at how badly DLSS is doing on Switch 2 (and it is DLSS, according to the official press release) compared to its PC counterparts when it comes to disocclusion artifacts. Honestly, it looked like its temporal data is scaled down a lot, and that's what's producing so many disocclusion artifacts. So, as someone who plays a lot of games with DLSS upscaling, I'm quite curious to see what exactly the SW2 solution is doing and what it can pull off.

The Switch 2 isn't a powerful system, so it is upscaling from a much lower resolution in order to create its 1080p portable visuals, but it's also only a 7.9" screen, so hopefully most artifacts won't be as obvious. Many PC games are using DLSS at a much higher native resolution, so it's a much easier upscale. I have to say, though, DLSS on Switch 2 looks amazing compared to FSR 3.2 on AMD graphics hardware; if you think DLSS has artifacts, you can't have seen FSR 3.2. Switch 2 seems to be doing an amazing job overall in upscaling, but ultimately it's guesswork upscaling which will go wrong on occasion. It's never going to be as clean an image as a purely natively rendered image at the same resolution. I'm sure that where possible, i.e. puzzle games and more classic arcade games, the Switch 2 will just natively render at the higher resolution.

Not sure if you watched the video, but it was very noticeable, and some days ago DF did an episode on Hogwarts where they discussed that very scene - standard DLSS 3/4 just doesn't look that bad at 1440p on any setting (and that is the resolution Hogwarts is supposedly outputting in docked mode), so it is very weird that it does on SW2 - as if it has reduced precision or a shorter accumulation window for temporal data, which is why the disocclusion artifacts are so heavily pronounced.

So maybe there's an actual SW2 Ultra Lite DLSS after all, one that cuts back on some stuff to be much lighter to run but in return can't handle some problems as well as standard DLSS. (Again, do watch the DF video, starting at around 13:15, preferably in full screen.)
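One toy way to think about that guess: a generic temporal upscaler averages jittered samples across frames, and averaging N roughly independent samples cuts noise by about 1/sqrt(N). This is generic TAA-style reasoning, not actual DLSS internals.

```python
# Toy model: relative noise left after accumulating N jittered low-res samples.
# A disocclusion throws history away (N falls back toward 1), and a tighter cap
# on N (a shorter accumulation window) lowers the steady-state quality too.
import math

base_noise = 1.0  # arbitrary units for a single raw sample

for n in (1, 2, 4, 8, 16, 32):
    print(f"N={n:2d}  residual noise ~ {base_noise / math.sqrt(n):.2f}")
```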



HoloDust said:
bonzobanana said:

The Switch 2 isn't a powerful system, so it is upscaling from a much lower resolution in order to create its 1080p portable visuals, but it's also only a 7.9" screen, so hopefully most artifacts won't be as obvious. Many PC games are using DLSS at a much higher native resolution, so it's a much easier upscale. I have to say, though, DLSS on Switch 2 looks amazing compared to FSR 3.2 on AMD graphics hardware; if you think DLSS has artifacts, you can't have seen FSR 3.2. Switch 2 seems to be doing an amazing job overall in upscaling, but ultimately it's guesswork upscaling which will go wrong on occasion. It's never going to be as clean an image as a purely natively rendered image at the same resolution. I'm sure that where possible, i.e. puzzle games and more classic arcade games, the Switch 2 will just natively render at the higher resolution.

Not sure if you watched the video, but it was very noticeable, and some days ago DF did an episode on Hogwarts where they discussed that very scene - standard DLSS 3/4 just doesn't look that bad at 1440p on any setting (and that is the resolution Hogwarts is supposedly outputting in docked mode), so it is very weird that it does on SW2 - as if it has reduced precision or a shorter accumulation window for temporal data, which is why the disocclusion artifacts are so heavily pronounced.

So maybe there's an actual SW2 Ultra Lite DLSS after all, one that cuts back on some stuff to be much lighter to run but in return can't handle some problems as well as standard DLSS. (Again, do watch the DF video, starting at around 13:15, preferably in full screen.)

I personally don't think the Switch 2 has the CPU resources that many are claiming. An 8-core A78C CPU at 1GHz gets a PassMark score of about 1900, although from what was claimed the Switch 2 doesn't have the 8MB of cache a full implementation would have, only 4MB, so that would reduce performance a bit. The PS4 has a PassMark score of around 1300 for its eight Jaguar cores. However, the Switch 2 spends more of its CPU performance on features like GameChat; in fact, 2 of its cores are used. I'm assuming the figures being quoted are the short-lived bursts to 1.7GHz that the Switch 2 is capable of, but that isn't sustained performance, whereas the Jaguar cores run at 1.6GHz each - that is their base clock speed. The PS4 uses one of its cores for the operating system etc., which admittedly came later with a revision to the operating system; originally it was 2, like the Switch 2. The Switch 2, being a portable system, will always be trying to reduce power and lower clocks, whereas when docked this isn't such an issue; it's more about thermal management.

I see claims that the Switch 2 is super powerful in CPU terms, but I really can't see it myself. The Arm Cortex-A78C is an old CPU of the same era as the graphics architecture, and the 'C' is mainly enhanced security features, which obviously Nintendo would want. How those enhanced security features affect performance, I don't know.

There is a CPU element to upscaling as well as a GPU one, and it's likely some games will reduce the level of upscaling quality to reduce CPU load, which may be what Hogwarts is doing.

Development kits for the Switch 2 have, incredibly, been with developers since 2019/2020. When Nintendo delayed launching their updated Switch because the Switch 1 was still selling incredibly well, I guess development would have been abandoned for a time, but they have been perfecting their work on the T239 chipset for a long time. So I don't think the Switch 2 is necessarily going to achieve much greater optimisation over the years. To developers, who have had it a very long time, it is old technology.

I guess Nintendo were morally obliged to stick with the T239 to a degree, based on all the development work that had already been done on it.

Just for comparison, on an overclocked, modded Mariko Switch 1 with the 4 cores operating at 2GHz you can get a PassMark score of about 1200, whereas a stock Switch 1 is around 600.

So they are not that far apart from each other, really, except at the standard CPU frequencies of the original Switch.

I used a much later ARM A78 chip for comparison, as I couldn't find PassMark information for the older A78C, but they should be comparable, being on the same architecture. I can't imagine the later chip being any more than 5% faster, if that. Obviously you need to adjust the performance to the much lower clock of the Switch 2, which is about 1GHz.

https://www.cpubenchmark.net/cpu.php?cpu=ARM+Cortex-A78AE+8+Core+1984+MHz&id=6298

The AMD Jaguar 1.6GHz 8-core CPU used in the PlayStation 4 and Xbox One has a PassMark rating around 1200-1400. It's considered a low-power, efficient processor, but not as powerful as modern CPUs like those in the Ryzen series or the i7-4790K. 
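For what it's worth, here's that clock-scaling estimate made explicit. Linear scaling with clock is rough at best (memory latency doesn't scale with the core clock), and the reference score below is just a placeholder standing in for whatever the linked PassMark page reports, so the output is only a ballpark.

```python
# Scale a benchmarked score linearly from the reference clock to a target clock.
ref_score = 3800          # placeholder for the linked A78AE multi-thread score
ref_clock_mhz = 1984      # clock of the benchmarked A78AE entry
target_clock_mhz = 1000   # roughly the reported Switch 2 sustained CPU clock

estimate = ref_score * target_clock_mhz / ref_clock_mhz
print(round(estimate))    # ~1900, the kind of figure quoted above
```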



bonzobanana said:

I see claims that the Switch 2 is super powerful in CPU terms, but I really can't see it myself. The Arm Cortex-A78C is an old CPU of the same era as the graphics architecture, and the 'C' is mainly enhanced security features, which obviously Nintendo would want. How those enhanced security features affect performance, I don't know.

....

I used a much later ARM A78 chip for comparison, as I couldn't find PassMark information for the older A78C, but they should be comparable, being on the same architecture. I can't imagine the later chip being any more than 5% faster, if that. Obviously you need to adjust the performance to the much lower clock of the Switch 2, which is about 1GHz.

https://www.cpubenchmark.net/cpu.php?cpu=ARM+Cortex-A78AE+8+Core+1984+MHz&id=6298

I think you're mixing up the A78AE with the A78C here. The A78C is the core variant that aims to maximize performance on consumer devices (2-in-1 ARM Windows laptops, handheld gaming PCs, and tablets, mostly). The main purpose isn't to be "more secure"; the main purpose is to be a core-dense consumer design for chips that have a higher power envelope than most smartphones.

Here are the ARM suffix meanings: 

A - Application/General Purpose: basic general-purpose cores that are efficiency-minded

X - Application/General Purpose: aimed at higher-performing, especially single-threaded, workloads. Exclusive to certain CXC partners.

C - Application/General Purpose Compute/Core-dense: customized for relatively high-performing (compared to smartphones), battery-efficient tablets and laptops

AE - Automotive Enhanced: customized for automotive edge compute

R - Real-time: aimed at deterministic edge compute

M - Microcontroller: aimed at low-powered embedded devices

The major difference between the A78AE and the A78C is that an 8-core A78AE SoC has two clusters of 4 cores, whereas the A78C has one cluster for all 8 cores. This means the A78C should have moderately better multi-threaded performance than the A78AE for a given frequency/power profile, because the AE needs an interconnect between the two clusters for the cores to access each other's cache.
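To illustrate the shape of that argument, here's a toy comparison of average core-to-core communication cost for one 8-core cluster versus two 4-core clusters. The latency figures are invented placeholders, not measurements of any real chip.

```python
# Toy comparison of core-to-core latency for different cluster layouts.
INTRA_CLUSTER_NS = 30   # assumed: two cores sharing one cluster/L3
CROSS_CLUSTER_NS = 80   # assumed: traffic crossing the inter-cluster interconnect

def avg_core_to_core_latency(clusters):
    cores = [(c, i) for c, n in enumerate(clusters) for i in range(n)]
    total = pairs = 0
    for a in range(len(cores)):
        for b in range(a + 1, len(cores)):
            same_cluster = cores[a][0] == cores[b][0]
            total += INTRA_CLUSTER_NS if same_cluster else CROSS_CLUSTER_NS
            pairs += 1
    return total / pairs

print(round(avg_core_to_core_latency([8])))     # A78C-style: one 8-core cluster -> 30
print(round(avg_core_to_core_latency([4, 4])))  # A78AE-style: 2 x 4-core clusters -> ~59
```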

As for real-world CPU performance, it should have a healthy advantage over the 8th-generation consoles' CPUs (which were abysmally weak even at release), while being limited compared to what exists in the 9th-generation consoles and even the Steam Deck. How that actually manifests in games remains to be seen. A lot of CPU compute can be offloaded to the GPU these days, as GPGPU APIs (especially Nvidia's) are much more mature than in, say, 2013.

Sources:

https://www.tomshardware.com/news/arm-looks-to-laptops-cortex-a78c-processor-for-pcs-announced

The Arm Cortex-A78C is built on the foundation of the Cortex-A78 for smartphones and tablets, but is customized to offer the performance required for workloads that are run on notebooks and other types of personal computers. Arm says that Cortex-A78C-powered laptops will offer all-day battery life, but will also be capable of running demanding applications, such as professional productivity suites as well as games.

Last edited by sc94597 - on 31 May 2025

sc94597 said:
bonzobanana said:

I see claims that the Switch 2 is super powerful in CPU terms, but I really can't see it myself. The Arm Cortex-A78C is an old CPU of the same era as the graphics architecture, and the 'C' is mainly enhanced security features, which obviously Nintendo would want. How those enhanced security features affect performance, I don't know.

....

I used a much later ARM A78 chip for comparison, as I couldn't find PassMark information for the older A78C, but they should be comparable, being on the same architecture. I can't imagine the later chip being any more than 5% faster, if that. Obviously you need to adjust the performance to the much lower clock of the Switch 2, which is about 1GHz.

https://www.cpubenchmark.net/cpu.php?cpu=ARM+Cortex-A78AE+8+Core+1984+MHz&id=6298

I think you're mixing up the A78AE with the A78C here. The A78C is the core variant that aims to maximize performance on consumer devices (2-in-1 ARM Windows laptops, handheld gaming PCs, and tablets, mostly). The main purpose isn't to be "more secure"; the main purpose is to be a core-dense consumer design for chips that have a higher power envelope than most smartphones.

Here are the ARM suffix meanings: 

A - Application/General Purpose: basic general-purpose cores that are efficiency-minded

X - Application/General Purpose: aimed at higher-performing, especially single-threaded, workloads. Exclusive to certain CXC partners.

C - Application/General Purpose Compute/Core-dense: customized for relatively high-performing (compared to smartphones), battery-efficient tablets and laptops

AE - Automotive Enhanced: customized for automotive edge compute

R - Real-time: aimed at deterministic edge compute

M - Microcontroller: aimed at low-powered embedded devices

The major difference between the A78AE and the A78C is that an 8-core A78AE SoC has two clusters of 4 cores, whereas the A78C has one cluster for all 8 cores. This means the A78C should have moderately better multi-threaded performance than the A78AE for a given frequency/power profile, because the AE needs an interconnect between the two clusters for the cores to access each other's cache.

As for real-world CPU performance, it should have a healthy advantage over the 8th-generation consoles' CPUs (which were abysmally weak even at release), while being limited compared to what exists in the 9th-generation consoles and even the Steam Deck. How that actually manifests in games remains to be seen. A lot of CPU compute can be offloaded to the GPU these days, as GPGPU APIs (especially Nvidia's) are much more mature than in, say, 2013.

Sources:

https://www.tomshardware.com/news/arm-looks-to-laptops-cortex-a78c-processor-for-pcs-announced

The Arm Cortex-A78C is built on the foundation of the Cortex-A78 for smartphones and tablets, but is customized to offer the performance required for workloads that are run on notebooks and other types of personal computers. Arm says that Cortex-A78C-powered laptops will offer all-day battery life, but will also be capable of running demanding applications, such as professional productivity suites as well as games.

I'm not confusing them as such. I'm looking for similar chips in the series that do have benchmark results, which should give similar results per MHz for the same number of cores on the same CPU architecture. Let's not forget the reduced cache memory and the fact that the Switch 2 has an incredibly low amount of power available when portable. It allows a maximum average draw of around 10W given the minimum 2-hour runtime, the whole SoC has only 4-6W to work with, and it is on a dated, power-hungry Samsung 10/8nm fabrication process. I don't see the point of comparing the Switch 2 with other devices that have much larger power budgets. It runs at almost the same speed in portable mode as docked. I've not seen anything that says the A78C performs better, and it's clear the implementation on the Switch 2 is cut down, with reduced cache, and is a much older implementation, going back to 2020/2021, than some of the later A78-based chips.
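Making the power-budget arithmetic explicit: battery capacity divided by the quoted worst-case runtime gives the maximum average system draw. The cell capacity below is the commonly reported figure, so treat it as an assumption.

```python
# Maximum average system power draw implied by battery size and worst-case runtime.
battery_wh = 5220 * 3.7 / 1000   # ~19.3 Wh from the reported 5220 mAh cell (assumption)
min_runtime_h = 2.0              # Nintendo's quoted worst-case battery life

max_avg_draw_w = battery_wh / min_runtime_h
print(f"{max_avg_draw_w:.1f} W for the whole system")  # ~9.7 W

# Subtract the screen, RAM, storage, Wi-Fi and speakers, and the SoC itself is
# plausibly left with only a handful of watts in handheld mode.
```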



bonzobanana said:
sc94597 said:

I think you're mixing up the A78AE with the A78C here. The A78C is the core variant that aims to maximize performance on consumer devices (2-in-1 ARM Windows laptops, handheld gaming PCs, and tablets, mostly). The main purpose isn't to be "more secure"; the main purpose is to be a core-dense consumer design for chips that have a higher power envelope than most smartphones.

Here are the ARM suffix meanings: 

A - Application/General Purpose: basic general-purpose cores that are efficiency-minded

X - Application/General Purpose: aimed at higher-performing, especially single-threaded, workloads. Exclusive to certain CXC partners.

C - Application/General Purpose Compute/Core-dense: customized for relatively high-performing (compared to smartphones), battery-efficient tablets and laptops

AE - Automotive Enhanced: customized for automotive edge compute

R - Real-time: aimed at deterministic edge compute

M - Microcontroller: aimed at low-powered embedded devices

The major difference between the A78AE and the A78C is that an 8-core A78AE SoC has two clusters of 4 cores, whereas the A78C has one cluster for all 8 cores. This means the A78C should have moderately better multi-threaded performance than the A78AE for a given frequency/power profile, because the AE needs an interconnect between the two clusters for the cores to access each other's cache.

As for real-world CPU performance, it should have a healthy advantage over the 8th-generation consoles' CPUs (which were abysmally weak even at release), while being limited compared to what exists in the 9th-generation consoles and even the Steam Deck. How that actually manifests in games remains to be seen. A lot of CPU compute can be offloaded to the GPU these days, as GPGPU APIs (especially Nvidia's) are much more mature than in, say, 2013.

Sources:

https://www.tomshardware.com/news/arm-looks-to-laptops-cortex-a78c-processor-for-pcs-announced

The Arm Cortex-A78C is built on the foundation of the Cortex-A78 for smartphones and tablets, but is customized to offer the performance required for workloads that are run on notebooks and other types of personal computers. Arm says that Cortex-A78C-powered laptops will offer all-day battery life, but will also be capable of running demanding applications, such as professional productivity suites as well as games.

I'm not confusing them as such. I'm looking for similar chips in the series that do have benchmark results, which should give similar results per MHz for the same number of cores on the same CPU architecture. Let's not forget the reduced cache memory and the fact that the Switch 2 has an incredibly low amount of power available when portable. It allows a maximum average draw of around 10W given the minimum 2-hour runtime, the whole SoC has only 4-6W to work with, and it is on a dated, power-hungry Samsung 10/8nm fabrication process. I don't see the point of comparing the Switch 2 with other devices that have much larger power budgets. It runs at almost the same speed in portable mode as docked. I've not seen anything that says the A78C performs better, and it's clear the implementation on the Switch 2 is cut down, with reduced cache, and is a much older implementation, going back to 2020/2021, than some of the later A78-based chips.

Comparing an A78C cluster to a 2-cluster A78AE is like comparing an i7-4790 to an i5-4690. Both are "8-core" chips in a similar sense to how, in the latter comparison, both are "4-core" chips. But there is a significant difference between having 2 x 4-core clusters and having 1 x 8-core cluster, just like there is a difference between a 4-core chip with hyper-threading and one without.

The comparison you are doing is only valid for guessing relative single-core performance, which should be similar for an A78AE and an A78C.

Your power argument doesn't make sense. These are cores designed for low-power systems in the same consumption range as a Switch 2. (The ThinkPads with A78C cores have TDPs of about 7W, and peak at 20W when plugged in.) Now, the Switch 2's CPU is going to be under-clocked compared to them, because it has a heftier GPU than they do also competing for the power budget.

Yes, the Switch 2 has reduced L3 cache compared to the maximum the A78C allows, but that isn't reduced compared to an A78AE, which also only has a max of 4MB.

The A78C IS the "later" A78 core. It was announced and released a few months after the others (September vs. November of the same year).

ARM's November 2020 announcement of the core, where they emphasize how the homogeneous octa-core layout improves multi-core performance, can be found in the link below.

https://community.arm.com/arm-community-blogs/b/architectures-and-processors-blog/posts/arm-cortex-a78c

"The newest member, Cortex-A78C, builds on the success of these designs with the latest architecture updates for enhanced compute performance, scalability, and security.

...

Cortex-A78C enables more homogeneous multi big core computing, with support for up to 8 big CPU core clusters. The octacore (up to 8 big CPU cores) configurations lead to more scalable multi-threaded performance improvements when compared to Cortex-A78."

Last edited by sc94597 - on 31 May 2025

bonzobanana said:
HoloDust said:

Not sure if you watched the video, but it was very noticeable, and some days ago DF did an episode on Hogwarts where they discussed that very scene - standard DLSS 3/4 just doesn't look that bad at 1440p on any setting (and that is the resolution Hogwarts is supposedly outputting in docked mode), so it is very weird that it does on SW2 - as if it has reduced precision or a shorter accumulation window for temporal data, which is why the disocclusion artifacts are so heavily pronounced.

So maybe there's an actual SW2 Ultra Lite DLSS after all, one that cuts back on some stuff to be much lighter to run but in return can't handle some problems as well as standard DLSS. (Again, do watch the DF video, starting at around 13:15, preferably in full screen.)

I personally don't think the Switch 2 has the CPU resources that many are claiming. An 8-core A78C CPU at 1GHz gets a PassMark score of about 1900, although from what was claimed the Switch 2 doesn't have the 8MB of cache a full implementation would have, only 4MB, so that would reduce performance a bit. The PS4 has a PassMark score of around 1300 for its eight Jaguar cores. However, the Switch 2 spends more of its CPU performance on features like GameChat; in fact, 2 of its cores are used. I'm assuming the figures being quoted are the short-lived bursts to 1.7GHz that the Switch 2 is capable of, but that isn't sustained performance, whereas the Jaguar cores run at 1.6GHz each - that is their base clock speed. The PS4 uses one of its cores for the operating system etc., which admittedly came later with a revision to the operating system; originally it was 2, like the Switch 2. The Switch 2, being a portable system, will always be trying to reduce power and lower clocks, whereas when docked this isn't such an issue; it's more about thermal management.

I see claims that the Switch 2 is super powerful in CPU terms, but I really can't see it myself. The Arm Cortex-A78C is an old CPU of the same era as the graphics architecture, and the 'C' is mainly enhanced security features, which obviously Nintendo would want. How those enhanced security features affect performance, I don't know.

There is a CPU element to upscaling as well as a GPU one, and it's likely some games will reduce the level of upscaling quality to reduce CPU load, which may be what Hogwarts is doing.

Development kits for the Switch 2 have, incredibly, been with developers since 2019/2020. When Nintendo delayed launching their updated Switch because the Switch 1 was still selling incredibly well, I guess development would have been abandoned for a time, but they have been perfecting their work on the T239 chipset for a long time. So I don't think the Switch 2 is necessarily going to achieve much greater optimisation over the years. To developers, who have had it a very long time, it is old technology.

I guess Nintendo were morally obliged to stick with the T239 to a degree, based on all the development work that had already been done on it.

Just for comparison, on an overclocked, modded Mariko Switch 1 with the 4 cores operating at 2GHz you can get a PassMark score of about 1200, whereas a stock Switch 1 is around 600.

So they are not that far apart from each other, really, except at the standard CPU frequencies of the original Switch.

I used a much later ARM A78 chip for comparison, as I couldn't find PassMark information for the older A78C, but they should be comparable, being on the same architecture. I can't imagine the later chip being any more than 5% faster, if that. Obviously you need to adjust the performance to the much lower clock of the Switch 2, which is about 1GHz.

https://www.cpubenchmark.net/cpu.php?cpu=ARM+Cortex-A78AE+8+Core+1984+MHz&id=6298

The AMD Jaguar 1.6GHz 8-core CPU used in the PlayStation 4 and Xbox One has a PassMark rating around 1200-1400. It's considered a low-power, efficient processor, but not as powerful as modern CPUs like those in the Ryzen series or the i7-4790K. 

A78AEx8 at Switch 2 CPU clocks gives 493/2735 single/multi-core in Geekbench 6. PS4 does 197/990.
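Taking those figures at face value, the ratios work out as below; this is just the division on the numbers quoted.

```python
# Ratios implied by the Geekbench 6 figures quoted above.
sw2_single, sw2_multi = 493, 2735
ps4_single, ps4_multi = 197, 990

print(f"single-core: {sw2_single / ps4_single:.1f}x")  # ~2.5x
print(f"multi-core:  {sw2_multi / ps4_multi:.1f}x")    # ~2.8x
```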

Switch 2 CPU is not that great, compared to what PC/Android handhelds pull off, let alone PS5, but it's quite solid and much, much better than PS4.

Not that it matters much: DLSS is GPU-based, not CPU-based - the CPU certainly influences overall performance, just not the DLSS part of it.