bonzobanana said:
I didn't want to use Geekbench 6, as that also factors in some GPU functionality; I just wanted to compare CPU performance in isolation from the GPU to get a fair perspective on the actual CPUs.

As for DLSS, while the work is primarily done by the GPU, a fair amount of work is done by the CPU. So CPU performance, especially the low CPU performance of the Switch 2, will surely have an impact, particularly as the older RTX 2050 seems to need more CPU resources in this regard than the RTX 4050, so this appears better optimised in later cards, and the Switch 2 is of the same generation as the RTX 2050, just a much lower-power version. You can also see in PC benchmarks that CPU power has an effect on DLSS frame rates: DLSS performance scales with CPU performance. It doesn't take the burden of upscaling off the CPU; if it did, you could upscale 360p to 1080p with a much inferior CPU and still get the same frame rates. In reality it takes the burden off the GPU, because the GPU can now upscale to 4K with decent frame rates and image quality it could never produce natively. It enables that GPU to punch well above its normal frame rates for that output resolution, but the burden on the CPU is greater. Of course, many PCs have more than enough CPU power, so it's not an issue, but the Switch 2 doesn't; a lot of the time it's going to be a matter of optimising CPU performance.

There was a comment about Hogwarts having poor DLSS upscaling compared to some other titles, and this could be related to more limited CPU resources for that game reducing the quality of DLSS. We will get a much more accurate picture in a few days when people start analysing games on retail hardware. The fact that Nintendo has released information about optimising the operating system and trying to get background tasks onto one CPU core rather than two makes me think people are likely to be disappointed in the performance initially, and they are trying to limit the damage by stating that, like the PS4, it will eventually be better optimised, freeing up more performance for games; but we shall see.

I believe, if I have remembered rightly, that FSR 3 etc. on AMD chipsets takes far fewer GPU and CPU resources to upscale, but then the results are much inferior. XeSS takes a lot of CPU resources but gives much better results. Surely the fact that XeSS operates on both AMD and Nvidia chipsets also shows it is more CPU-bound, as graphics architecture is less of an issue for that upscaling technology.

Yes, while DLSS (Deep Learning Super Sampling) primarily relies on the GPU's Tensor Cores for its AI-powered upscaling, there's a CPU element involved as well, especially with newer features like DLSS Frame Generation.
Those Geekbench results are CPU only.
The CPU can affect overall GPU performance and induce bottlenecks, but as I said, it has almost nothing to do with DLSS itself, which is GPU-dependent.
The Switch 2, it seems, will have a custom DLSS solution which, from what has been seen so far, uses lower precision and a shorter accumulation window for temporal data. Maybe not always, and not in all titles, but from Hogwarts it is obvious that its DLSS implementation, at least in some cases, is not as good as standard DLSS (it doesn't look great in CP2077 either, just not as pronounced everywhere).
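To make the "accumulation window" point concrete, here's a minimal, purely conceptual sketch (in Python, illustrative only, not NVIDIA's actual DLSS code): temporal upscalers blend each new jittered sample into a per-pixel history, and the blend weight controls how many past frames effectively contribute. A shorter window (larger blend weight) reacts faster to motion but averages fewer samples, so the reconstruction is noisier and less detailed; a lower-precision, shorter-window variant would plausibly show exactly the kind of artefacts seen in Hogwarts.

```python
# Conceptual sketch of temporal accumulation (not NVIDIA's actual DLSS code).
# A per-pixel "history" value is blended with each new jittered sample using an
# exponential moving average. The blend factor alpha sets the effective window:
# small alpha = long window (more samples, more stable), large alpha = short
# window (fewer samples, noisier, but less ghosting on movement).

def accumulate(history, new_sample, alpha):
    """Exponentially blend the new sample into the accumulated history."""
    return (1.0 - alpha) * history + alpha * new_sample

def effective_window(alpha):
    """Rough number of past frames that still meaningfully contribute."""
    return 1.0 / alpha

if __name__ == "__main__":
    import random
    random.seed(0)
    true_value = 0.5  # the value this pixel should converge to
    samples = [true_value + random.uniform(-0.2, 0.2) for _ in range(60)]

    for alpha in (0.05, 0.3):  # long window vs. short window
        history = samples[0]
        for s in samples[1:]:
            history = accumulate(history, s, alpha)
        err = abs(history - true_value)
        print(f"alpha={alpha:.2f}  ~{effective_window(alpha):.0f}-frame window  "
              f"error after 60 frames: {err:.4f}")
```

Running it, the longer window ends up much closer to the true value than the short one, which is the trade-off a trimmed-down accumulation window makes. All of this runs on the GPU in a real upscaler; the CPU's only job is recording the dispatch, which is why the quality difference isn't a CPU issue.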