Forums - Nintendo Discussion - DLSS 2.1 Ultra Performance Mode (Switch 2 Discussion)

DLSS 2.1 adds a new Ultra Performance mode that reconstructs the image from a ninth of the pixels (a 9x factor). The 1440p-to-8K stuff was not so impressive, but the 360p-to-1080p conversion is honestly insane.

Starts at 16:50 

I can't help but be excited by how this technology continues to bode well for a future Switch 




Resolution will become a thing of the past, similar to the megapixel race in digital cameras.

Nvidia/Nintendo are ahead of the game.



I was expecting Soundwave to be the creator of this thread.



duduspace11 "Well, since we are estimating costs, Pokemon Red/Blue did cost Nintendo about $50m to make back in 1996"

http://gamrconnect.vgchartz.com/post.php?id=8808363

Mr Puggsly: "Hehe, I said good profit. You said big profit. Frankly, not losing money is what I meant by good. Don't get hung up on semantics"

http://gamrconnect.vgchartz.com/post.php?id=9008994

DonFerrari said:
I was expecting Soundwave to be the creator of this thread.

Heh, you do still post since the "fallout".



 "I think people should define the word crap" - Kirby007

Join the Prediction League http://www.vgchartz.com/predictions

Instead of seeking to convince others, we can be open to changing our own minds, and seek out information that contradicts our own steadfast point of view. Maybe it’ll turn out that those who disagree with you actually have a solid grasp of the facts. There’s a slight possibility that, after all, you’re the one who’s wrong.

Can someone explain to me what the difference is between DLSS and normal resolution rendering? And am I correct in assuming DLSS is a more efficient way to get better visuals, which means weaker hardware can have better visuals, or is that wrong?



Dulfite said:
Can someone explain to me what the difference is between DLSS and normal resolution rendering? And am I correct in assuming DLSS is a more efficient way to get better visuals, which means weaker hardware can have better visuals, or is that wrong?

DLSS is based on deep learning. Traditional super sampling makes images sharper by rendering them at a higher resolution and then using that more precise information to draw a sharper image at the target resolution. With DLSS, an AI learns how a game is supposed to look at a higher resolution, and the resulting algorithm is then applied to the picture. That way the algorithm can seemingly create information out of nowhere, thanks to its past training. The process is far less demanding on hardware than calculating the whole picture at a high resolution. It effectively inverts regular super sampling, which downscales a high resolution to a lower one: DLSS instead starts from a lower resolution and upscales it using the additional information from the deep learning algorithm.

That way you can get a better-looking picture while using far less compute power. A technique like that is great for consoles, and especially for Nintendo, since they like to underpower their hardware. If a game is built with DLSS in mind, it could be an amazing innovation, delivering high frame rates, high resolutions, and highly detailed textures on comparably weak machines.
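To make the "opposite directions" point above concrete, here is a toy sketch of what regular super sampling does: render at double resolution and average each 2x2 block of dots down to one output dot. All names here are made up for illustration; real super sampling operates on full colour images with far better filters.

```python
# Toy "regular super sampling": the image was rendered at 2x the target
# resolution, and each 2x2 block of pixels is averaged down to one pixel.
# DLSS goes the other way: it starts low and upscales.

def downsample_2x(image):
    """Average each 2x2 block of a grid of brightness values."""
    h, w = len(image), len(image[0])
    return [[(image[y][x] + image[y][x + 1] +
              image[y + 1][x] + image[y + 1][x + 1]) / 4
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

# A 4x4 render becomes a 2x2 output; four samples feed every final dot.
rendered = [[1, 1, 2, 2],
            [1, 1, 2, 2],
            [3, 3, 4, 4],
            [3, 3, 4, 4]]
final = downsample_2x(rendered)
# final == [[1.0, 2.0], [3.0, 4.0]]
```

The cost is obvious from the shape of the loop: four rendered dots are paid for per output dot, which is exactly the expense DLSS avoids.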



If you demand respect or gratitude for your volunteer work, you're doing volunteering wrong.

For a next-gen Switch and/or Switch Pro, DLSS will change the game. Ninty might not need to sacrifice battery life or pricing to keep up with next gen.



Just a guy who doesn't want to be bored. Also

Dulfite said:
Can someone explain to me what the difference is between DLSS and normal resolution rendering? And am I correct in assuming DLSS is a more efficient way to get better visuals, which means weaker hardware can have better visuals, or is that wrong?

You are correct in assuming DLSS is a more "efficient" way to have better visuals while using the same processing power. 

But I want to go much deeper than that. I'm going to simplify and overlook a lot of complexities and nuances for the sake of making this digestible for people who aren't into tech. I'll also explain a bit about how game processing works beyond just DLSS, so that the other stuff the likes of CGI and Pemalite explain around here makes more sense in the future for many. This post isn't just for you, so I'm sorry if you already understand this stuff beyond the question you asked. I really don't mean to talk down to you if it comes off that way. I just want to make it more general for anyone interested.

Let's break how a game processes a frame down into two parts:

  1. Logic - What's happening in the game: where objects are located, what they're doing, the implications of what objects are doing for other objects (i.e. physics), and where objects are going. The result of logic processing is a few wireframes (skeletons) of the game world at a specific point in time. These wireframes represent different types of information about the game world, like the depth of objects in relation to one another and "motion vectors", which represent the directions objects are moving in. This typically happens 30 or 60 times a second for console games, and it typically takes place on the CPU.
  2. Image - Taking all the wireframe information from a particular camera view and painting it in with colour, textures, shadows, and effects. To really simplify it, you can think of it as a painting made up of dots. The more dots (pixels) you paint an image with, the more work it is, and each dot needs a bunch of information about light, textures, shadows, objects, and depth to figure out what exactly needs to be painted there. The more effects you use, the longer each dot takes to paint. This process typically takes place on the GPU.
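The two-part split above can be sketched as a minimal frame loop: a logic step that advances objects (recording their motion vectors along the way), and an image step that paints a grid of dots from the resulting state. Everything here (the object fields, the grid of 0s and 1s) is invented purely for illustration; real engines are incomparably more complex.

```python
# Hypothetical minimal frame loop illustrating the logic/image split.

def logic_step(objects, dt):
    """Advance each object and record the motion vector for this frame."""
    for obj in objects:
        obj["motion"] = (obj["vx"] * dt, obj["vy"] * dt)
        obj["x"] += obj["motion"][0]
        obj["y"] += obj["motion"][1]
    return objects

def image_step(objects, width, height):
    """Paint a dot (1) for every object that lands inside the frame."""
    frame = [[0] * width for _ in range(height)]
    for obj in objects:
        x, y = int(obj["x"]), int(obj["y"])
        if 0 <= x < width and 0 <= y < height:
            frame[y][x] = 1
    return frame
```

The motion vectors produced by the logic step are exactly the kind of information the upscaling techniques below feed on.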

The general aim of upscaling techniques is to paint with fewer dots and then use math to fill in the missing dots (the missing information) so the result fits the wanted output resolution. The math that is used is what separates the different forms of upscaling, and DLSS is a very advanced way of filling in that extra information.

For example, the easiest way to upscale an image is to just duplicate information that is already there. Going from 1080p to 4K is just doubling each pixel both horizontally and vertically. So the game engine renders a 1080p image and then, before outputting it to the TV, makes each dot/pixel take up 4 dots instead, in a square fashion.
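That simplest form of upscaling (nearest-neighbour duplication) is short enough to write out in full. This is a toy on small grids of brightness values, not anything a real engine would ship, but the pixel-duplication logic is exactly as described above.

```python
# Nearest-neighbour upscaling: each source pixel becomes a factor x factor
# square of identical pixels in the output (1080p -> 4K uses factor=2).

def nearest_neighbour_upscale(image, factor=2):
    """Duplicate each pixel `factor` times horizontally and vertically.

    `image` is a list of rows, each row a list of pixel values.
    """
    upscaled = []
    for row in image:
        # Duplicate each pixel horizontally...
        wide_row = [pixel for pixel in row for _ in range(factor)]
        # ...then duplicate the whole widened row vertically.
        upscaled.extend([list(wide_row) for _ in range(factor)])
    return upscaled

# A 2x2 "image" becomes 4x4: every dot now covers a square of 4 dots.
small = [[1, 2],
         [3, 4]]
big = nearest_neighbour_upscale(small)
# big == [[1, 1, 2, 2],
#         [1, 1, 2, 2],
#         [3, 3, 4, 4],
#         [3, 3, 4, 4]]
```

No new information is created at all, which is why this looks blocky; the fancier techniques below exist to fill the gaps with something better than copies.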

Checker-boarding is much more advanced than that. It renders the initial image at a lower resolution and then uses information from previous frames and motion vectors (how the objects are moving), along with other information, to fill in the gaps of this lower-resolution image before outputting it to the TV. It also applies additional image filters, like sharpening, to smooth edges in the image (where colours change drastically between neighbouring pixels - also known as high-frequency areas in image processing). But that leads to a different, though related, discussion of anti-aliasing.

DLSS takes this process of filling the image in with more information one step further. With checker-boarding and simple upscaling, the same math model is applied to all images and all pixels in pretty much the same way. But now imagine you could learn "best case scenarios" and know how to treat different situations with different methods of upscaling. That is what DLSS does. Roughly, the process is:

  1. An AI (created by Nvidia) is shown how a game should look at a very high resolution.
  2. Then the AI is fed lower resolution images/videos of similar scenes and it works out how to apply math to the image to make it look like the high resolution one.
  3. This process is iterated on a few times to improve the math. The machine is also given different scenes from the game to help improve the math.
  4. The machine spits out a bunch of math that can be used to take a lower resolution image and make it look higher resolution. The math involves existing image data, data from past images, and more, and is even more context-aware based on what might be happening in the game. The process from steps 1 to 4 is known as machine learning, or "deep learning", which is where the name comes from: DLSS = Deep Learning Super Sampling.
  5. The math is included in the game files or the driver/code used to operate the GPU. 
  6. Now, when the game is running, the initial image is rendered at a lower resolution and then upscaled using the fancy math produced by the AI, giving an image much closer to (and in some cases even better than) one that would require much more processing power to render natively at the higher resolution.
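The "learn some math from high/low pairs, then apply it later" shape of steps 1-6 can be shown with a deliberately tiny stand-in: instead of a deep network, the learned "math" here is a single brightness gain fitted by least squares to (low-res, high-res) example pairs. Every function name is invented; this illustrates the train-then-apply workflow, not anything resembling DLSS's actual network.

```python
# Toy stand-in for the DLSS workflow: "training" finds math from example
# pairs (steps 1-4), and that math is later applied to new low-quality
# input (steps 5-6). The learned math is a single scalar gain g that
# minimises sum((g * low - high)^2) over all example pixels.

def train_gain(lowres_examples, highres_examples):
    """Fit one brightness gain g by least squares over paired images."""
    num = sum(l * h
              for low, high in zip(lowres_examples, highres_examples)
              for l, h in zip(low, high))
    den = sum(l * l for low in lowres_examples for l in low)
    return num / den

def apply_gain(lowres, g):
    """The 'runtime' step: apply the learned math to a fresh image row."""
    return [g * l for l in lowres]
```

For example, training on a dim row `[1, 2]` whose high-quality reference is `[2, 4]` learns `g = 2.0`, and applying that to an unseen row `[3, 5]` yields `[6.0, 10.0]`. A real DLSS model learns millions of parameters over whole neighbourhoods of pixels plus motion vectors, but the two-phase structure is the same.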

For those who were interested, I hope this layman's explanation helped.

EDIT: And now I see I was beaten to the punch with a much more succinct but still good explanation. Haha! Back to work I go.

Last edited by trasharmdsister12 - on 09 October 2020

Adding to what the previous posters said: upscaling 1440p to 8K with DLSS requires about 3,000 MB more VRAM, while native 8K requires about 12,000 MB more, for the game Control!




If this technology could upscale all Switch games to "only" 1080p, I'd be more than happy, especially in games like The Witcher or Doom.

Last edited by Kristof81 - on 09 October 2020