Pemalite said:
Using those Tensor cores comes at a cost of power.
Nearly all the evidence we've seen suggests that DLSS doesn't fully utilize the tensor cores 100% of the time. Instead, you see a spiking utilization pattern: most of the time utilization is low, and when needed it spikes into the double digits (high single digits on a 4090, though obviously the Switch 2 isn't comparable there). That's consistent with what we see when running comparable DL models in other workloads.
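To illustrate the point about spiky utilization: brief spikes barely move the average, which is why tensor-core power cost stays modest overall. The trace values below are made-up numbers for illustration, not measured data.

```python
# Hypothetical utilization trace (percent per sample): mostly near-idle
# with occasional spikes, like the DLSS pattern described above.
trace = [2, 1, 3, 2, 45, 2, 1, 2, 38, 1]  # illustrative values only

avg = sum(trace) / len(trace)
peak = max(trace)

# Average utilization stays low even though peaks reach double digits.
print(f"peak = {peak}%, average = {avg:.1f}%")  # peak = 45%, average = 9.7%
```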
The GPU might still be fully utilized in this case, since the resources saved by not rendering at the higher resolution can be re-allocated to improve other parts of the graphics pipeline. Alternatively, the developer could simply under-clock the GPU for better battery life, as they do in lighter-weight games, or as we see on PC handhelds when the user selects a lower power limit.
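A rough sketch of why under-clocking helps battery life: dynamic power scales roughly with frequency times voltage squared, and lower clocks usually allow lower voltage, so power falls faster than the clock does. The clock/voltage pairs below are made-up illustrative values, not real Switch 2 figures.

```python
# Simplified dynamic-power model: P ~ k * f * V^2.
# Only the relative comparison matters, so k is arbitrary.
def dynamic_power(freq_mhz, voltage):
    k = 1.0
    return k * freq_mhz * voltage ** 2

full = dynamic_power(1000, 1.0)   # hypothetical full-clock operating point
under = dynamic_power(600, 0.8)   # hypothetical under-clocked point

ratio = under / full
print(f"relative power at reduced clocks: {ratio:.3f}")  # 0.384
```

A 40% clock reduction paired with a 20% voltage drop cuts dynamic power by roughly 60% in this toy model, which is why handheld power limits trade modest performance for large battery gains.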
Last edited by sc94597 - on 22 April 2025