
Forums - Nintendo Discussion - Could Next-Gen Switch Use Nvidia DLSS AI Upscaling?


https://www.youtube.com/watch?v=coOzBPGl-O8

Nvidia's DLSS upscaler is starting to become seriously good - and AI upscaling has been added to the latest Shield Android TV. So what happens if we apply *both* of these technologies to Switch titles? Could AI inferencing be a good match for a next-gen Switch? We've got some practical tests here that deliver some fascinating results.

---

End result: magic does not exist.
But it could work on some titles.



@Twitter | Switch | Steam

You say tomato, I say tomato 

"¡Viva la Ñ!"


Yes it could, because the Switch 2 will be using a newer GPU that supports AI scaling.



If the Switch 2 has Tensor cores, then yes, it can have DLSS.
If it doesn't, then no, it cannot have DLSS.

Basically, the technology already exists today for a hypothetical Switch 2 to have DLSS... if it used Tegra Xavier or Tegra Orin.



--::{PC Gaming Master Race}::--

Overall I think DLSS kinda sucks; there are other methods just as good or better.

But he makes a good point... a more powerful Switch using DLSS might be able to match a base PS4 in terms of image quality.
Maybe the Switch 2 ends up around where a base PS4 is today, via scaling tricks.



At uni 13 years ago I played with neural networks for my project on image and video enhancement. The videos used to take hours to process lol.

To think that this can now be done in a video game, in real time, is insane. I'm sure people will come up with their own methods for this in the future, and Nintendo should definitely look into it. It might be the thing their next-gen Switch needs to compensate for the lack of raw power.



 

 


As far as I understand DLSS, the games that use it need to be processed intensively by a server farm to get to the point where the AI rendering result is as good as a higher native resolution while also being faster in performance. I don't think we've heard how long it really takes to get there (we just know the earliest DLSS-enabled titles didn't really beat simply using a slightly lower native resolution).

Obviously server farms will get beefier and this whole process should become better and faster in the future, but I'm still quite sure it won't get to the point where everyone can use it for their games any time soon.

Last edited by Lafiel - on 06 February 2020

Lafiel said:

As far as I understand DLSS, the games that use it need to be processed intensively by a server farm to get to the point where the AI rendering result is as good as a higher native resolution while also being faster in performance. I don't think we've heard how long it really takes to get there (we just know the earliest DLSS-enabled titles didn't really beat simply using a slightly lower native resolution).

Obviously server farms will get beefier and this whole process should become better and faster in the future, but I'm still quite sure it won't get to the point where everyone can use it for their games any time soon.

It's like checkerboarding... it's a technique to use less to show more.

Honestly, most games that come with a resolution scaling option (like 1080p at 70%) + a sharpening filter
could be near equal to or better than DLSS.

At least it was like that early on. DLSS isn't magic; it's a lot of work for little gain.
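For illustration only, here's roughly what that "render at ~70%, upscale, then sharpen" recipe boils down to. This is a minimal Python/Pillow sketch with made-up file names and filter settings, not how any actual engine implements it:

```python
# Minimal sketch of "resolution scaling + sharpening": render low, upscale, sharpen.
# File names, the 70% scale and the unsharp-mask settings are invented placeholders.
from PIL import Image, ImageFilter

RENDER_SCALE = 0.70            # render internally at 70% of the output resolution
TARGET = (1920, 1080)          # output resolution

def fake_render(width, height):
    # Stand-in for the game's renderer: just shrink a captured native frame.
    return Image.open("frame_native.png").resize((width, height), Image.BILINEAR)

# 1. Render at the reduced internal resolution (cheaper than native).
low_w, low_h = int(TARGET[0] * RENDER_SCALE), int(TARGET[1] * RENDER_SCALE)
low_res = fake_render(low_w, low_h)

# 2. Upscale back to the output resolution with a cheap filter.
upscaled = low_res.resize(TARGET, Image.BICUBIC)

# 3. Sharpening pass (unsharp mask) to recover some apparent detail.
sharpened = upscaled.filter(ImageFilter.UnsharpMask(radius=2, percent=120, threshold=2))
sharpened.save("frame_scaled_sharpened.png")
```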


https://www.youtube.com/watch?v=dADD1I1ihSQ

https://www.youtube.com/watch?v=3DOGA2_GETQ

DLSS is overrated by far.
Resolution scaling + sharpening filters beat it in a lot of games
(without games needing to ship additional data packages to use DLSS, or server farms making these optimisations).

People need to understand that DLSS has downsides too.
The "deep learning" of the machine determines how things turn out.
Sometimes this manifests as a slight blur + oil-painting effect when you look at things.

Last edited by JRPGfan - on 07 February 2020

JRPGfan said:

It's like checkerboarding... it's a technique to use less to show more.

Honestly, most games that come with a resolution scaling option (like 1080p at 70%) + a sharpening filter
could be near equal to or better than DLSS.

At least it was like that early on. DLSS isn't magic; it's a lot of work for little gain.

I agree that the early implementations weren't worth it, but the most recent DLSS implementations in Wolfenstein: Youngblood and Deliver Us the Moon show that the technique can do what it promised to do: deliver visual results on par with a given native resolution while offering clearly better performance (and clearly looking better than res scaling + sharpening at an equivalent performance level).

We unfortunately don't know whether these results are easily reproducible with the majority of titles, or whether Ws:YB/DutM are simply well suited to DLSS. Neither do we know how long / how much processing time it takes Nvidia to train the AI enough to get these results, which would determine how feasible (from a cost/time/energy point of view) it is to implement it in 100+ very demanding games per year.



Yes.
The problem with the first test was that he used an AI trained on generic movies.
The key point is to train the net on content specific to the game.
The developers provide tons of 4K, anti-aliased, etc. images to the net, along with the corresponding 360p/720p/1080p native images, train for a week or two, and load the model along with the game. The result would be far, far better. Training it to avoid jaggies while adding detail would be a piece of cake, especially if it only has to resolve the types of images you know it will encounter in the game.
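As a rough sketch of the kind of offline, per-game training described above (paired low-res/high-res frames in, a small model out, shipped with the game), something like the following PyTorch loop. The architecture, file names and hyperparameters are all invented for illustration and have nothing to do with Nvidia's actual DLSS network:

```python
# Illustrative sketch: train a toy upscaling net on paired low-res / high-res
# frames captured from one specific game. Everything here is a placeholder.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

class TinyUpscaler(nn.Module):
    """Toy 2x super-resolution net (nothing like the real DLSS model)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3 * 4, 3, padding=1),
            nn.PixelShuffle(2),   # rearrange channels into an image twice as large
        )

    def forward(self, x):
        return self.body(x)

def train(paired_frames, epochs=100):
    """paired_frames: Dataset yielding (low_res, high_res) tensor pairs,
    e.g. 720p captures paired with 4K anti-aliased captures of the same scene."""
    model = TinyUpscaler()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.L1Loss()
    loader = DataLoader(paired_frames, batch_size=16, shuffle=True)

    for _ in range(epochs):
        for low, high in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(low), high)   # penalise deviation from the 4K target
            loss.backward()
            optimizer.step()

    # Ship the trained weights with the game; the console only runs inference.
    torch.save(model.state_dict(), "game_upscaler.pt")
```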

I don't know how much processing power forwarding an image through a net would require; I first supposed it was a lot, so it could be incorporated into the dock (since it is applied after rendering), meaning there could be a higher-priced dock that upscales to 4K (as an optional accessory for high-end users).
However, the video reminded me that NVIDIA's new mobile chips also have the Tensor cores embedded in the SoC, so it could be in the Switch system itself, and far cheaper than including them in a dock made specially for it...
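(To picture what the dock or SoC would actually do per frame, the runtime side is just a forward pass of the shipped model. This is a toy continuation of the training sketch above, again with invented names:)

```python
# Toy runtime sketch: per-frame inference with the model shipped alongside the game.
# TinyUpscaler is the toy net from the training sketch above; names are invented.
import torch

model = TinyUpscaler()
model.load_state_dict(torch.load("game_upscaler.pt"))
model.eval()

@torch.no_grad()
def upscale_frame(low_res_frame: torch.Tensor) -> torch.Tensor:
    """low_res_frame: (1, 3, H, W) tensor in, (1, 3, 2H, 2W) tensor out."""
    return model(low_res_frame)
```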

Anyway, I had already considered this, even before the RTX cards with DLSS:
http://gamrconnect.vgchartz.com/thread.php?id=236112&page=1



JRPGfan said:

It's like checkerboarding... it's a technique to use less to show more.

Sort of.
The "A.I." does sample multiple frames of information to infer what the image would look like at a higher resolution.
But it's a completely different approach to the same issue.
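(A crude way to picture the "sample multiple frames" part, ignoring motion vectors and the neural network entirely: accumulate previous frames into a history buffer, as in this toy NumPy sketch with an invented blend weight.)

```python
# Crude sketch of temporal accumulation: blend each new frame into a running
# history buffer. Real DLSS also uses motion vectors, jittered sample offsets
# and a neural network; this only illustrates the accumulation idea.
import numpy as np

BLEND = 0.1        # weight given to the newest frame (made-up value)
history = None     # running accumulation buffer

def accumulate(frame: np.ndarray) -> np.ndarray:
    """Blend the new frame into the history buffer and return the result."""
    global history
    if history is None:
        history = frame.astype(np.float32)
    else:
        # Exponential moving average: detail from older frames persists,
        # new information fades in over several frames.
        history = (1.0 - BLEND) * history + BLEND * frame.astype(np.float32)
    return history
```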

JRPGfan said:

Honestly, most games that come with a resolution scaling option (like 1080p at 70%) + a sharpening filter
could be near equal to or better than DLSS.

At least it was like that early on. DLSS isn't magic; it's a lot of work for little gain.

When DLSS works the way it was intended, the game will look like it's running at a higher resolution.

You are right, it isn't magic; it's leveraging the vast computational capabilities of nVidia's server farms.

JRPGfan said:

DLSS is overrated by far.
Resolution scaling + sharpening filters beat it in a lot of games
(without games needing to ship additional data packages to use DLSS, or server farms making these optimisations).

People need to understand that DLSS has downsides too.
The "deep learning" of the machine determines how things turn out.
Sometimes this manifests as a slight blur + oil-painting effect when you look at things.

Resolution scaling is taking a game and dropping the resolution dynamically based on load, in order to retain a fixed framerate on hardware that isn't capable enough to run it natively, like consoles or low-end PC GPUs.
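(In engine terms the "dynamic" part is just a small feedback loop on frame time. A toy version, with invented target and step sizes, might look like this:)

```python
# Toy dynamic-resolution controller: nudge the render scale up or down based on
# how long the last frame took, to hold a target frame rate. All numbers are
# invented for illustration.
TARGET_FRAME_MS = 16.7            # roughly 60 fps
MIN_SCALE, MAX_SCALE = 0.5, 1.0

render_scale = 1.0

def update_render_scale(last_frame_ms: float) -> float:
    """Called once per frame with the previous frame's GPU time in milliseconds."""
    global render_scale
    if last_frame_ms > TARGET_FRAME_MS * 1.05:     # running too slow: drop resolution
        render_scale = max(MIN_SCALE, render_scale - 0.05)
    elif last_frame_ms < TARGET_FRAME_MS * 0.90:   # plenty of headroom: raise it again
        render_scale = min(MAX_SCALE, render_scale + 0.02)
    return render_scale
```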

DLSS takes a different approach.

It does actually perform a few passes, like an anti-aliasing pass and a sharpening pass, which is why it often gets derided in various circles.

Let's not kid ourselves though, it isn't magic... And it is more impactful the lower in resolution you go. If you are already gaming at 1440p-2160p, it's probably a fairly useless feature unless you really need the extra performance, but at, say, 480p and 720p? That is where it starts to get interesting.

JRPGfan said:

People need to understand that DLSS has downsides too.
The "deep learning" of the machine determines how things turn out.
Sometimes this manifests as a slight blur + oil-painting effect when you look at things.

Every rendering technique has a downside... The blur issue isn't a new one; console games often employ lower-quality motion blur and anti-aliasing, which often results in a pretty soft image... It's more of an issue on PC, because we've typically avoided that problem entirely thanks to having the power to run better motion blur and anti-aliasing... So we are more prone to having a whinge if things don't go our way visually.

For something like a Switch device, where a game's blur is compounded by the lower rendering resolution... I see it as a potential for a good uplift in visuals.

But then you have the other issue: those visual improvements will be tied to the servers, so if they ever get shut down or put behind a pay-wall, your games will get downgraded visually in the future.

Last edited by Pemalite - on 07 February 2020

--::{PC Gaming Master Race}::--