Pemalite said:

I think you are missing the point.
The entire graphics engine on the Switch 2 will be at 100% utilization... It's a walled garden with fixed hardware... That's texture mapping units, pixel shaders, vertex shaders, PolyMorph engines and more. That will all be pegged fully.

Tensor operations done on the tensor cores are an added "extra" load on top that adds to power consumption, doesn't reduce it.

FSR is obviously different again on the Steam Deck, as it's using the pixel shaders, not separate tensor cores.

And again... The fact that Switch 2 has lower battery life than the Switch 1 literally proves there is no power reductions anyway.

No, you're missing the point of my original post, which is evident from the fact that you're bringing up the Switch 1, something we were never comparing against.

Let's break down what the original point was.

You have a goal and two ways to achieve it. Let's say the goal is to render a game at 1080p:

  1. You render the game at 1080p natively, fully utilizing the CUDA cores to do so. 
  2. You render the game at 540p and upscale it to 1080p using DLSS; the result actually has better image quality than the native 1080p version. 
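To put numbers on why option #2 frees up so much shading work, here's the raw pixel arithmetic (a rough sketch; real savings vary, since geometry and some post-processing don't scale linearly with pixel count):

```python
# Pixels shaded per frame at native 1080p vs. a 540p internal render
# that DLSS then upscales. Halving each axis quarters the pixel count.
native_1080p = 1920 * 1080   # 2,073,600 pixels
internal_540p = 960 * 540    # 518,400 pixels

print(native_1080p / internal_540p)  # 4.0 -- the GPU shades 1/4 of the pixels
```

That 4x reduction in shaded pixels is the headroom the rest of the argument is about.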

Now you've saved a significant amount of CUDA-core utilization. You can then do one of two things, or a combination of both: 

  1. Reduce the max clock rates of the CUDA cores to save on power consumption. 
  2. Re-allocate the now freed up resources to other workloads in the pipeline. 

Are you seriously arguing that developers will always and only choose #2, and never #1 or a combination of the two? And if they do choose #1, do you think the extra load on the tensor cores from the very spiky DLSS workload will fully eat into the power savings obtained by reducing the clock rate of the CUDA cores? 
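For a sense of why option #1 can save real power, dynamic power in CMOS logic scales roughly with C·V²·f, and lowering the clock usually lets the voltage drop too. The specific clock/voltage figures below are illustrative assumptions, not measured Switch 2 numbers:

```python
# Classic CMOS dynamic-power model: P ~ C * V^2 * f.
# With capacitance fixed, the relative power of a scaled operating
# point is just freq_scale * volt_scale^2.
def relative_dynamic_power(freq_scale: float, volt_scale: float) -> float:
    """Power of the new operating point relative to the original (1.0)."""
    return freq_scale * volt_scale ** 2

# Hypothetical DVFS step: drop the CUDA clock 30% and voltage ~15%.
remaining = relative_dynamic_power(0.70, 0.85)
print(f"~{1 - remaining:.0%} dynamic-power saving")  # ~49%
```

The point of the superlinear V² term is that even a modest clock reduction can buy a disproportionate power saving, which is exactly the headroom a spiky tensor-core workload would have to fully consume for the original claim to hold.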

And for the argument that was originally made, it doesn't matter that the Steam Deck's FSR runs on the pixel shaders. Using FSR, the user can cap the clock rates and get a qualitatively similar experience to the higher-clocked, more power-hungry configuration they would have chosen if battery life weren't a constraint they cared about. Sure, there is a difference: pixel shaders are much more general-purpose than matrix/tensor cores, so the trade-off is more direct. But for the question of whether you can reduce the max clock rate and save power, upscaling does give developers (and users) more options than they would otherwise have had. 

As for your comparison to the Switch 1: what makes you think the Switch 2 wouldn't come out even worse without DLSS as an option? Maybe in that alternative reality, more games would sit at the minimum of the battery-life range than at the maximum. 

Last edited by sc94597 - on 22 April 2025