
Forums - Gaming - Nvidia reveals DLSS 5, essentially applies AI filter to games in real time.

HoloDust said:

Now, if developers can train it on Ground Truth of their game in the future (I can envision something like Path Traced version of the game with 1024 or so rays cast per pixel, which GPUs can't run in real time, and then that being fed into DLSS5 as training data)

I think the long term goal is to push more and more into efficient on-line learners rather than pre-trained models. Both Nvidia and AMD have had "Neural Radiance Caching" for RTGI and path-tracing in their SDKs for about a year now, and Nvidia's solution has been implemented in Portal RTX and Quake III iirc.

The point of DLSS 5 seems to be to use a pre-trained model to blend the lighting and materials more cleanly, but it seems to me that, just like with path-tracing, an on-line learner could do this better than a pre-trained model (and it would be very game-specific/art-style-specific, because it learns from the game's data as you play).
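To make the on-line learner idea concrete, here's a toy sketch: a tiny linear model that starts blank and adapts to one specific "game" by taking SGD steps on samples streamed at runtime. Everything here (feature sizes, the linear model, the update rule) is a made-up stand-in for illustration, not Nvidia's actual Neural Radiance Caching.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-pixel features (position, normal, view dir, ...) -> RGB.
n_features, n_outputs = 8, 3
W = np.zeros((n_features, n_outputs))  # model starts blank, learns in-game

def online_update(W, feats, target_radiance, lr=0.05):
    """One SGD step on a single streamed sample (features -> radiance)."""
    pred = feats @ W
    grad = np.outer(feats, pred - target_radiance)  # gradient of squared error
    return W - lr * grad

# Simulate a stream of samples from one specific game: a fixed but unknown
# lighting response that the learner adapts to as it "plays".
true_W = rng.normal(size=(n_features, n_outputs))
for _ in range(2000):
    feats = rng.normal(size=n_features)
    W = online_update(W, feats, feats @ true_W)

err = np.mean((W - true_W) ** 2)
print(f"mean squared weight error after streaming: {err:.6f}")
```

The point of the sketch is just the shape of the loop: no offline dataset, no pre-training; the model specializes to whatever data the running game feeds it.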



sc94597 said:
curl-6 said:

It's actually quite rational to dislike LLMs as their contributions to society and the world are overwhelmingly negative.

DLSS, including DLSS 5, doesn't use VLMs/LLMs. The conflation of LLMs with AI/DL in general is where the irrationality arises.

As far as AI technologies go, DLSS is pretty ethical: 

1. It is trained on data provided consensually by developers. Because these models need to run inference in real time, they can't be as large as a VLM (vision-language model). Instead, the requisite training data is the buffer data available in a game engine, most importantly motion vector data.

2. They are relatively small; small enough that, if you had the data, you could train them on a workstation costing a few tens of thousands of dollars, meaning the environmental impact is also quite small. Labor is probably the bulk of the cost of training DLSS models.

3. Their usage is purely optional. DLSS has always been a toggleable feature.

The main ethical issue that arises with them is that they can displace some digital artists by reducing available jobs. But that is also true of other rendering technologies, like ray tracing, or even middleware like game engines.

I use DLSS myself from time to time, especially in more demanding titles. My main gripe with the technology is that it shifts focus away from optimization in favor of software bypasses, which reduces the competence required to utilize tools and engines for development. UE5, with all its baked-in functionality, led to similar issues; developers have enough "auto-fill" options that they don't need proper skill and understanding of the various individual tweaks and adjustments. All in all, this leads to poorly optimized games that function poorly on hardware that can't use the latest generation of super-sampling and/or frame-gen, or even games that run poorly with DLSS enabled, taking ages to fix (if it happens at all) by developers who lack skill and insight.



Otter said:

Beyond A.I. skepticism, though, there is just genuine criticism over what it's representing. Even 4 years ago this bottom image would look fake or photoshopped.

DLSS <4 is different because it maintains the vision of a game with only the slightest misrepresentations (occasional texture/particle FX, etc.). This is orders of magnitude more disruptive.


I think this is the meat of your point, and I'd like to address it. This argument can be applied to many rendering technologies. For example, older games were designed by their artists with certain limitations in mind. When their textures are upgraded, or, let's say, path-tracing is applied with something like RTX Remix, the atmosphere and style can (and almost certainly does) change drastically, and it's not always in line with, say, the original concept art (not that the original game is exactly reflective of concept art, either). This "misrepresents" the original vision in the same way DLSS 5 seems to. And yes, people who are very purist about intentionality in their media consumption do boycott these sorts of remasters. They represent a small part of a large spectrum. On the other extreme of the spectrum are people who love to mod their games, and this, I think, is a larger segment of PC gamers than the former, although I am mostly basing that on intuition.

For new titles released after the technology itself, I don't think this same dichotomy exists. Nobody (or rather, very few people) says that Cyberpunk 2077 is misrepresented by path-tracing, even though the game looks drastically different with it on. That's because the game was developed with ray-tracing in mind, and it was a natural step to transition to path-tracing from there. It was a technology that was anticipated. This isn't the case with DLSS 5, and artists and developers will need time to catch up and experiment.

I also think people are extrapolating too much from what they know (or think they know) about pre-trained image and video AI models and applying it here, and are sometimes confirming their biases when analyzing an image without considering all possibilities. The whole point of using a model trained on motion vectors towards deterministic targets is that the artists have more control over the results. On the other hand, vision and image models fail to achieve their goal when they are made too deterministic, and that is a very real trade-off that results in very obvious "hallucinations." A model like DLSS 5 is neither trained like a diffusion image/video model, nor is it trained like an autoregressive next-token predictor. Its main goal is to match a ground-truth target using a lossy input of meaningful features. By its nature it is meant to be as deterministic as possible. There is no "creativity" element like with image and video generators. Yet I've been seeing people looking for "creative" aspects that I am not convinced are actually that. There are other explanations (like the model not accurately predicting the ground truth; for example, mistaking a depth shadow for makeup), and those explanations make it likely that the ability to accurately match a target ground truth will improve over time.
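To illustrate what "match a ground-truth target from a lossy input, deterministically" means in practice, here's a toy sketch. A linear least-squares fit stands in for the real network, and a simple downsample stands in for the lossy input; none of this is DLSS's actual architecture or feature set.

```python
import numpy as np

rng = np.random.default_rng(42)

def downsample(x):
    """Lossy input: average adjacent pairs (like rendering at half res)."""
    return x.reshape(-1, 2).mean(axis=1)

def upscale(low, W):
    """Linear 'upscaler': predict the full-res signal from the lossy one."""
    return low @ W

# Build a training set of (lossy input, ground-truth target) pairs.
n, full = 500, 16
truth = rng.normal(size=(n, full))
lossy = np.stack([downsample(t) for t in truth])

# Least-squares fit = regressing toward the ground truth (MSE objective).
W, *_ = np.linalg.lstsq(lossy, truth, rcond=None)

# Determinism: the same lossy frame always reconstructs identically.
# There is no noise seed to sample, unlike a diffusion-style generator.
frame = downsample(rng.normal(size=full))
out1, out2 = upscale(frame, W), upscale(frame, W)
print("identical outputs:", np.allclose(out1, out2))
```

The takeaway is the objective, not the model: a regressor trained against fixed targets has nothing to "imagine"; its errors are prediction errors, not creative choices.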

Last edited by sc94597 - 13 hours ago

Mummelmann said:

I use DLSS myself from time to time, especially in more demanding titles. My main gripe with the technology is that it shifts focus away from optimization in favor of software bypasses, which reduces the competence required to utilize tools and engines for development. UE5, with all its baked-in functionality, led to similar issues; developers have enough "auto-fill" options that they don't need proper skill and understanding of the various individual tweaks and adjustments. All in all, this leads to poorly optimized games that function poorly on hardware that can't use the latest generation of super-sampling and/or frame-gen, or even games that run poorly with DLSS enabled, taking ages to fix (if it happens at all) by developers who lack skill and insight.

I know this is a common sentiment amongst some gaming enthusiasts, but I am not convinced that there is actual evidence for it. (See my conversation with Curl-6.) 

I think there might have been a very slight drop-off in game optimization compared to 2013-2017, but before then AAA games weren't very well optimized at all, in my opinion. On consoles, games usually struggled to hit a 30fps target, and on PC even people with the highest-end GPUs would struggle to set all settings to maximum. Games would ship with max settings that could only be reached on a new generation of GPUs released after the game itself.

I also don't think there has ever been a time period in the history of gaming where somebody could play modern games on hardware as old, or as low-budget, as we can now. Crimson Desert has a minimum requirement of a GTX 1060. That's a ten-year-old mid-range GPU. iGPUs are viable options for playing games. Sure, it is in part because of Moore's Law slowing, but if these new features like DLSS, ray tracing, etc. have not made games literally unplayable on ten-year-old GPUs that don't support them well, how exactly are they crutches? In the 2000s you could have four-year-old hardware that struggled with, or just outright didn't support, new games.



dane007 said:

Look at the VA for her with makeup. It's very close to what dlss5 does

Are you referencing Grace from RE9? Grace's voice actor and face model are two different women that do not look the same.

Julia Pratt is Grace's face model

Angela Sant'Albano is her voice actress




I'm down with this, like every technological advancement. And, like every technological advancement, it takes time for us to really see what it'll do in the real world. A few hand picked examples really aren't a good way to judge it. I just know that, on average, technology moves forward and things get better. I have no reason to doubt that will be true here as well.



dane007 said:

Look at the VA for her with makeup. It's very close to what dlss5 does

The main issue I see is that the mo-cap model and the DLSS 5 off in-game Grace Ashcroft (right) both have normal, round cheeks. The AI lighting on DLSS 5 (middle) is causing shading under the cheek bone that gives the appearance of the buccal fat removal cosmetic procedure that has become popular among women these days.

Last edited by shikamaru317 - 11 hours ago

curl-6 said:
Norion said:

The thing is that it's way more than just a few. Progress seemed like it might've been slowing some for a stretch last year, but in the past few months it's picked up a lot, to where recently it's gotten good enough at coding that the field is in the process of being revolutionized, with tons of professional coders adopting it lately. In general, people are increasingly using LLMs to help them do important stuff like research things, look over legal contracts, and so on. It's also getting capable at cybersecurity; for example, just earlier this year Mozilla partnered with Anthropic, which resulted in February having twice as many Firefox bugs found compared to a usual month.

The legal stuff and other work done by AI are plagued with errors, though, because LLMs hallucinate and can't differentiate true from false data.

Given the flipside is deepfake revenge/child porn, rampant misinformation, a pretense for suits to lay off workers, slop infesting every corner of the internet, scams, environmental destruction, prices for RAM and stuff going through the roof, a bubble that threatens to crash the economy and more, I'd say the bad still far outweighs any good.

As haxxiy mentioned, there has been significant improvement with that in the past year. Someone still shouldn't blindly trust an initial output, but that's no different from the internet, which can be incredibly useful even though someone shouldn't blindly trust the first result on Google. The main thing is that you recognize the tech is powerful enough to do a lot of harm when used poorly, but you have been downplaying how that power can be used for a lot of good as well when used right.

I imagine you've seen Genie 3, since it's been talked about on here some. Even though that tech is still primitive, this article shows how it's already being used to help train self-driving cars, so it's already helping save lives and prevent injuries through just that one use case, and that's before it starts getting used for things like exposure therapy, training robots, and helping out scientific research. Basically, thinking it's more negative than positive right now is fair enough, but I do think your view could be more balanced. For the future, I'm optimistic about when things get advanced enough that robots are doing all the labour required to keep society running, so that people at large no longer have to work to have a comfortable life.

Last edited by Norion - 11 hours ago

shikamaru317 said:
dane007 said:

Look at the VA for her with makeup. It's very close to what dlss5 does

The main issue I see is that the mo-cap model and the DLSS 5 off in-game Grace Ashcroft (right) both have normal, round cheeks. The AI lighting on DLSS 5 (middle) is causing shading under the cheek bone that gives the appearance of the buccal fat removal cosmetic procedure that has become popular among women these days.

Yep, it seems obvious DLSS 5 is angling for a contoured-and-highlighted aesthetic (typical in modern beauty standards) beyond just trying to depict the scene more naturally.

Also, I wasn't sure whether it was just a frame misalignment, but when DF pause their video at 1:40 (just as shown here in your pic), the DLSS 5 image looks like her mouth is slightly apart whereas in the OG it seems to be closed. Genuinely curious what's going on there, but again it mirrors the typical fashion/beauty trend of women models leaving the mouth slightly agape.

Last edited by Otter - 11 hours ago

sc94597 said:

I think this is the meat of your point, and I'd like to address it. This argument can be applied to many rendering technologies. For example, older games were designed by their artists with certain limitations in mind. When their textures are upgraded, or, let's say, path-tracing is applied with something like RTX Remix, the atmosphere and style can (and almost certainly does) change drastically, and it's not always in line with, say, the original concept art (not that the original game is exactly reflective of concept art, either). This "misrepresents" the original vision in the same way DLSS 5 seems to. And yes, people who are very purist about intentionality in their media consumption do boycott these sorts of remasters. They represent a small part of a large spectrum. On the other extreme of the spectrum are people who love to mod their games, and this, I think, is a larger segment of PC gamers than the former, although I am mostly basing that on intuition.

For new titles released after the technology itself, I don't think this same dichotomy exists. Nobody (or rather, very few people) says that Cyberpunk 2077 is misrepresented by path-tracing, even though the game looks drastically different with it on. That's because the game was developed with ray-tracing in mind, and it was a natural step to transition to path-tracing from there. It was a technology that was anticipated. This isn't the case with DLSS 5, and artists and developers will need time to catch up and experiment.

I also think people are extrapolating too much from what they know (or think they know) about pre-trained image and video AI models and applying it here, and are sometimes confirming their biases when analyzing an image without considering all possibilities. The whole point of using a model trained on motion vectors towards deterministic targets is that the artists have more control over the results. On the other hand, vision and image models fail to achieve their goal when they are made too deterministic, and that is a very real trade-off that results in very obvious "hallucinations." A model like DLSS 5 is neither trained like a diffusion image/video model, nor is it trained like an autoregressive next-token predictor. Its main goal is to match a ground-truth target using a lossy input of meaningful features. By its nature it is meant to be as deterministic as possible. There is no "creativity" element like with image and video generators. Yet I've been seeing people looking for "creative" aspects that I am not convinced are actually that. There are other explanations (like the model not accurately predicting the ground truth; for example, mistaking a depth shadow for makeup), and those explanations make it likely that the ability to accurately match a target ground truth will improve over time.

I hear you on all of that, but as I said at the end of my post, this shouldn't have been the way the technology was first shown off if the current results look so polarising and obviously unnatural (too removed from the existing character aesthetic). The Grace image is literally the thumbnail/poster child for the technology, so Nvidia are clearly very happy with the results. I have no doubt things can be improved, but what's on display sets a bad precedent and is deservedly getting backlash.

I'm sure I'm not the only one who literally thought this was a mistaken April fools video uploaded too early when the DF video dropped.

Last edited by Otter - 10 hours ago