HoloDust said:
bonzobanana said:

I personally don't think the Switch 2 has the CPU resources that many are claiming. An 8-core A78C CPU at 1 GHz scores roughly 1900 in PassMark, although from what has been claimed the Switch 2 doesn't have the 8MB of cache a full implementation would have, only 4MB, so that would reduce performance a bit. The PS4's eight Jaguar cores score around 1300. However, the Switch 2 spends more of its CPU on system features like GameChat; in fact two of its cores are reserved. I'm assuming the figures being quoted are the short-lived bursts to 1.7 GHz that the Switch 2 is capable of, but that isn't sustained performance, whereas the Jaguar cores run at 1.6 GHz each as their base clock. The PS4 uses one of its cores for the operating system, which admittedly came later with an OS revision; originally it was two, like the Switch 2. Being a portable system, the Switch 2 will always be trying to reduce power and lower clocks, whereas when docked that isn't such an issue and it's more about thermal management.

I see claims that the Switch 2 is super powerful in CPU terms, but I really can't see it myself. The Arm Cortex-A78C is an old CPU from the same era as the graphics architecture, and the 'C' mainly means enhanced security features, which Nintendo would obviously want. How those security features affect performance I don't know.

There is a CPU element to upscaling as well as a GPU one, and it's likely some games will reduce the upscaling quality level to cut CPU load, which may be what Hogwarts is doing.

Development kits for the Switch 2 have incredibly been with developers since 2019/2020. When Nintendo delayed launching its updated Switch because the Switch 1 was still selling so well, I guess active projects would have been shelved, but developers have been perfecting their work on the T239 chipset for a long time. So I don't think the Switch 2 is necessarily going to achieve much greater optimisation over the years; developers have had it for a very long time, and to them it is already old technology.

I guess Nintendo were morally obliged to stick with T239 to a degree based on all the development work that would already have been done on it.

Just for comparison, an overclocked, modded Mariko Switch 1 with its 4 cores running at 2 GHz gets a PassMark score of about 1200, whereas a stock Switch 1 is around 600.

So at comparable clocks they are not that far apart from each other; it's really only at the original Switch's stock CPU frequencies that the gap opens up.

I used a much later ARM A78-family chip for comparison as I couldn't find PassMark information for the older A78C, but they should be comparable, being on the same architecture. I can't imagine the later chip being any more than 5% faster, if that. Obviously you need to adjust the performance down to the much lower clock of the Switch 2, which is about 1 GHz.

https://www.cpubenchmark.net/cpu.php?cpu=ARM+Cortex-A78AE+8+Core+1984+MHz&id=6298
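
As a rough sketch of that adjustment (assuming PassMark multi-core scores scale roughly linearly with clock on the same core design, which is only an approximation; the reference score below is inferred from the ~1900-at-1-GHz figure above rather than quoted from the linked page):

// Linear clock scaling as a back-of-the-envelope estimate. The reference
// score is an assumed figure (~3800 for the linked A78AE x8 @ 1984 MHz would
// reproduce the ~1900-at-1-GHz number above); check the linked page for the
// actual value. Real scaling is rarely perfectly linear (memory, cache, etc.).
#include <cstdio>

int main() {
    const double ref_score = 3800.0;  // assumed PassMark multi-core, A78AE x8 @ 1984 MHz
    const double ref_mhz   = 1984.0;
    const double sw2_mhz   = 1000.0;  // ~1 GHz sustained clock used in the comparison above

    const double estimate = ref_score * (sw2_mhz / ref_mhz);
    std::printf("Estimated 8-core score at ~1 GHz: %.0f\n", estimate);  // ~1915

    // The same pattern fits the Mariko numbers above:
    // ~600 at stock clocks roughly doubling to ~1200 at 2 GHz.
    return 0;
}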

The AMD Jaguar 1.6GHz 8-core CPU used in the PlayStation 4 and Xbox One has a PassMark rating around 1200-1400. It's considered a low-power, efficient processor, but not as powerful as modern CPUs like those in the Ryzen series or the i7-4790K. 

A78AEx8 at Switch 2 CPU clocks gives 493/2735 single/multi-core in Geekbench 6. PS4 does 197/990.

Switch 2 CPU is not that great, compared to what PC/Android handhelds pull off, let alone PS5, but it's quite solid and much, much better than PS4.

Not that it matters much: DLSS is GPU-based, not CPU-based. The CPU certainly influences overall performance, just not the DLSS part of it.

I didn't want to use Geekbench 6 as that also factors in some GPU functionality; I just wanted to compare CPU performance in isolation from the GPU to get a fair perspective on the actual CPUs. As for DLSS, while the work is primarily done by the GPU, a fair amount is done by the CPU, so the CPU performance, especially the low CPU performance of the Switch 2, will surely have an impact. It also seems the older RTX 2050 needs more CPU resources for this than the RTX 4050, so it's better optimised in later cards, and the Switch 2 is of the same generation as the RTX 2050, just a much lower-power version.

You can also see in PC benchmarks that CPU power has an effect on DLSS frame rates; DLSS performance scales with CPU performance. It doesn't take the burden of upscaling off the CPU. If it did, you could pair a much inferior CPU with 360p-to-1080p upscaling and still get the same frame rates. In reality it takes the burden off the GPU, because the GPU can now upscale to 4K with decent frame rates and image quality it could never produce natively; it enables the GPU to punch well above its normal frame rates for that output resolution, while the burden on the CPU is greater. Many PCs have more than enough CPU power so it isn't an issue, but the Switch 2 doesn't, so a lot of the time it is surely going to be a matter of optimising CPU usage.
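
A rough way to picture this, with entirely hypothetical numbers just to illustrate the CPU-bound argument: in a pipelined renderer the frame time is roughly the larger of the CPU time and the GPU time per frame. Upscaling shrinks the GPU's shading cost and adds a small fixed upscale cost, but it leaves the CPU's per-frame work untouched, so a slow CPU ends up setting the ceiling:

// Toy frame-time model with made-up numbers; not measured data.
#include <algorithm>
#include <cstdio>

int main() {
    const double cpu_ms          = 14.0;  // hypothetical CPU time per frame
    const double gpu_native_ms   = 28.0;  // hypothetical GPU time at native resolution
    const double pixel_ratio     = 0.25;  // e.g. rendering at half width and half height
    const double upscale_cost_ms = 2.5;   // hypothetical fixed cost of the upscale pass

    const double gpu_upscaled_ms = gpu_native_ms * pixel_ratio + upscale_cost_ms;  // 9.5 ms

    const double native_frame   = std::max(cpu_ms, gpu_native_ms);    // 28 ms (~36 fps)
    const double upscaled_frame = std::max(cpu_ms, gpu_upscaled_ms);  // 14 ms (~71 fps)

    std::printf("Native:   %.1f ms (%.0f fps)\n", native_frame,   1000.0 / native_frame);
    std::printf("Upscaled: %.1f ms (%.0f fps)\n", upscaled_frame, 1000.0 / upscaled_frame);
    // The GPU alone could now manage ~105 fps, but the 14 ms of CPU work caps
    // the result at ~71 fps - the kind of wall a weak CPU creates.
    return 0;
}

With more CPU headroom the upscaled case keeps scaling; with less, the GPU savings go to waste, which is the concern for a low-clocked mobile CPU.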

There was a comment about Hogwarts having poor DLSS upscaling compared to some other titles, and this could be related to more limited CPU resources in that game forcing a lower DLSS quality. We will get a much more accurate picture in a few days when people start analysing games on retail hardware. The fact that Nintendo has released information about optimising the operating system and trying to get background tasks onto one CPU core rather than two makes me think people are likely to be disappointed in the performance initially, and that Nintendo is trying to limit the damage by saying that, like the PS4, it will eventually be better optimised, freeing up more performance for games. We shall see. If I remember rightly, FSR 3 and the like on AMD chipsets take far fewer GPU and CPU resources to upscale, but the results are much inferior, while XeSS takes a lot of CPU resources and gives much better results. Surely the fact that XeSS runs on both AMD and Nvidia chipsets also suggests it is more CPU-bound and that graphics architecture is less of an issue for that upscaling technology.

Yes, while DLSS (Deep Learning Super Sampling) primarily relies on the GPU's Tensor Cores for its AI-powered upscaling, there's a CPU element involved as well, especially with newer features like DLSS Frame Generation.
Elaboration:

Tensor Cores (GPU):
- DLSS's core functionality, the AI upscaling and frame generation, is handled by the GPU's Tensor Cores, which are found on the RTX 20, RTX 30, RTX 40, RTX 50 and Quadro RTX series.

CPU's role (rendering, pre-processing):
- The CPU drives the rendering of the game world and prepares the data DLSS needs.
- For DLSS Frame Generation, the CPU needs to keep the rendering of the base frames flowing efficiently, as the generated frames rely on them.
- A powerful CPU helps keep the GPU fed, potentially leading to better performance, especially when DLSS is enabled.

DLSS and Frame Generation:
- DLSS Frame Generation, available on RTX 40 and RTX 50 series GPUs, boosts frame rates by using AI to generate new frames.
- This requires the CPU to handle the base frame workload efficiently and provide the necessary inputs for the AI.
- A CPU bottleneck can limit the benefits of Frame Generation, since it depends on how quickly those base frames arrive.

Impact of the CPU on DLSS:
- In some cases a weak CPU can bottleneck DLSS performance, especially in CPU-intensive games or at higher output settings.
- While the GPU handles the AI-powered upscaling and frame generation, the CPU's performance can still affect the overall experience.
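
To make the CPU's role concrete, here is a minimal, hypothetical sketch of where it sits in an engine feeding a temporal upscaler like DLSS. None of the names below come from the real SDK; UpscalerFrameInputs, dispatch_temporal_upscale and the stubbed helpers are stand-ins for whatever the actual integration exposes. The CPU side picks the per-frame camera jitter, submits the low-resolution draws and flags history resets, while the upscale pass itself runs on the GPU:

#include <cstdint>
#include <cstdio>

// Hypothetical stand-ins for engine/SDK pieces, stubbed so the sketch compiles.
struct UpscalerFrameInputs {
    uint32_t render_w, render_h;   // low internal resolution
    uint32_t output_w, output_h;   // target output resolution
    float    jitter_x, jitter_y;   // sub-pixel camera jitter for this frame
    bool     reset_history;        // true on camera cuts / teleports
    // a real SDK also takes GPU resources: colour, depth, motion vectors
};

// Halton low-discrepancy sequence, commonly used to generate TAA/DLSS jitter.
static float halton(uint32_t i, uint32_t base) {
    float f = 1.0f, r = 0.0f;
    while (i > 0) { f /= static_cast<float>(base); r += f * static_cast<float>(i % base); i /= base; }
    return r;
}

static void apply_jitter_to_projection(float, float) {}               // stub (CPU: tweak projection matrix)
static void record_scene_draws() {}                                   // stub (CPU: game logic, visibility, draw calls)
static bool camera_cut_this_frame() { return false; }                 // stub
static void dispatch_temporal_upscale(const UpscalerFrameInputs&) {}  // stub (GPU: the AI upscale pass)
static void present() {}                                              // stub

static void render_frame(uint32_t frame_index) {
    UpscalerFrameInputs in{};
    in.render_w = 1280; in.render_h = 720;    // e.g. 720p internal
    in.output_w = 1920; in.output_h = 1080;   // upscaled output

    // CPU work every frame: choose the jitter sample and bake it into the
    // projection matrix before any draws are recorded.
    in.jitter_x = halton(frame_index + 1, 2) - 0.5f;
    in.jitter_y = halton(frame_index + 1, 3) - 0.5f;
    apply_jitter_to_projection(in.jitter_x, in.jitter_y);

    record_scene_draws();                     // CPU-bound part of the frame
    in.reset_history = camera_cut_this_frame();

    dispatch_temporal_upscale(in);            // GPU-bound: the upscaling itself
    present();
}

int main() {
    for (uint32_t f = 0; f < 3; ++f) render_frame(f);
    std::printf("sketch complete\n");
    return 0;
}

The per-frame CPU cost of the upscaler bookkeeping itself is small; the real pressure is that the whole frame still has to be simulated, culled and submitted by the CPU at the higher frame rates the upscaler makes possible, which is where a weaker CPU shows.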