
Forums - Gaming - Nvidia reveals DLSS 5, essentially applies AI filter to games in real time.

I mean, if recent history is any precedent, in two years people will be complaining about how AMD and Intel are lagging behind on neural-rendering features, or about how they're not supporting older-generation GPUs (see the FSR4 on RDNA 2 and 3 situation). We saw the same reaction with frame generation ("the ghosting is horrible, and the latency!"), Ray Reconstruction ("shimmering! flickering!"), and DLSS 1 ("the jitter!"). All of these, in their mature forms, are excellent products that most people choose to enable. And the best part is that it's always optional and up to user preference. 

Heck, at least in those instances the artifacts were objectively identifiable and concrete. In this case it really is a matter of subjective preference. I'm very much a believer in a modicum of The Death of the Author, especially for mixed-media works that are the product of collective effort, like video games. Games have always had their inconsistencies, partly because of the difficulty of synthesizing the diverse components that make them up, but also because the various hands touching them carry different, inconsistent visions of their own. 

 



curl-6 said:
Norion said:



Unfortunately though a lot of people act irrationally over anything AI related currently so there's a major lack of nuance in reactions to this.

It's actually quite rational to dislike LLMs as their contributions to society and the world are overwhelmingly negative.

DLSS, including DLSS 5, doesn't use VLMs/LLMs. The equivocation of LLMs with AI/DL generally is where the irrationality arises.

As far as AI technologies go, DLSS is pretty ethical: 

1. It is trained on data provided consensually by developers. Because these models need to run inference in real time, they can't be as large as a VLM (Video-Language Model). So instead the requisite training data is the buffer data available in a game engine, most importantly motion vector data.  

2. They are relatively small: small enough that, if you had the data, you could train them on a workstation costing a few tens of thousands of dollars. That means the environmental impact is also quite small; labor costs are probably the bulk of the cost of training DLSS models.

3. Their usage is purely optional. DLSS has always been a toggle feature. 

The main ethical issue that arises is that they can displace some digital artists by reducing the jobs available. But that is also true of other rendering technologies, like ray tracing, or even middleware like game engines.  
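To make point 1 concrete: the "buffer data" a game engine hands to a temporal upscaler includes per-pixel motion vectors, which let the upscaler reuse pixels from the previous frame instead of hallucinating them. Here's a toy numpy sketch of that reprojection step (the function name and grayscale setup are illustrative assumptions, not Nvidia's actual implementation, which is a proprietary neural network):

```python
import numpy as np

def reproject_previous_frame(prev_frame, motion):
    """Warp the previous frame toward the current one using per-pixel
    motion vectors (dy, dx in pixel units), the kind of engine buffer
    a temporal upscaler consumes. Toy version: grayscale, nearest
    neighbor, clamped at the borders."""
    h, w = prev_frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Each current pixel looks up where it came from in the previous frame.
    src_y = np.clip(ys - motion[..., 0], 0, h - 1).astype(int)
    src_x = np.clip(xs - motion[..., 1], 0, w - 1).astype(int)
    return prev_frame[src_y, src_x]
```

Real upscalers feed this reprojected history, plus depth and the current low-resolution frame, into a small network that decides per pixel how much history to trust, but the motion-vector lookup above is the part that depends on engine-provided training data rather than scraped images.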



sc94597 said:
curl-6 said:

It's actually quite rational to dislike LLMs as their contributions to society and the world are overwhelmingly negative.

DLSS, including DLSS 5, doesn't use VLMs/LLMs. The equivocation of LLMs with AI/DL generally is where the irrationality arises.

As far as AI technologies go, DLSS is pretty ethical: 

1. It is trained on data provided consensually by developers. Because these models need to run inference in real time, they can't be as large as a VLM (Video-Language Model). So instead the requisite training data is the buffer data available in a game engine. 

2. They are relatively small: small enough that, if you had the data, you could train them on a workstation costing a few tens of thousands of dollars. That means the environmental impact is also quite small; labor costs are probably the bulk of the cost of training DLSS models.

3. Their usage is purely optional. DLSS has always been a toggle feature. 

The main ethical issue that arises is that they can displace some digital artists by reducing the jobs available. But that is also true of other rendering technologies, like ray tracing, or even middleware like game engines.  

Upscaling is one of very few legitimately positive uses for AI technology, and pretty much nobody takes issue with such use; what we're seeing here though is just a hideous slop filter that makes everything look ugly and uncanny, like those horrendous deepfake ads on social media.



curl-6 said:

Upscaling is one of very few legitimately positive uses for AI technology, and pretty much nobody takes issue with such use; what we're seeing here though is just a hideous slop filter that makes everything look ugly and uncanny, like those horrendous deepfake ads on social media.

Outside of your personal opinion - which I disagree with (at the very least its intensity) - what is the social harm of the existence of an optional toggle feature like this? You keep suggesting it is being forced on you, but DLSS even today is optional, and that likely won't change with DLSS 5. 

I also don't really see the difference between inferring pixels (DLSS 1+), frames (DLSS 3+), denoised rays (DLSS 3+), and inferring lighting, contact shadows, and (possibly) PBR materials in DLSS 5. DLSS 5 is a lot closer to upscaling in this sense than it is to an auto-regressive VLM, in that it works best when kept deterministic, small, and targeted at specific inference cases. A VLM can't achieve this in real time (well, it can if you set the temperature hyper-parameter to 0 and have the requisite hardware, but then you would get an even more boring output).  
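The temperature point is easy to see in code. This is a generic illustrative sketch of token sampling (not any actual DLSS or VLM code): at temperature 0 the sampler degenerates into a plain argmax, which is fully deterministic, while any positive temperature draws from the softmax distribution.

```python
import numpy as np

def sample_token(logits, temperature, rng=None):
    """Pick the next token index from raw scores.
    temperature == 0 -> greedy argmax, fully deterministic ("boring");
    temperature > 0  -> sample from the softmax distribution."""
    logits = np.asarray(logits, dtype=float)
    if temperature == 0:
        return int(np.argmax(logits))
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # subtract max for stability
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))
```

Run it with temperature 0 and you get the same token every single time; raise the temperature and the output varies run to run, which is exactly the kind of nondeterminism a per-pixel, real-time renderer can't afford.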

Last edited by sc94597 - 8 hours ago

sc94597 said:
curl-6 said:

Upscaling is one of very few legitimately positive uses for AI technology, and pretty much nobody takes issue with such use; what we're seeing here though is just a hideous slop filter that makes everything look ugly and uncanny, like those horrendous deepfake ads on social media.

Outside of your personal opinion - which I disagree with (at the very least its intensity) - what is the social harm of the existence of an optional toggle feature like this? You keep suggesting it is being forced on you, but DLSS even today is optional, and that likely won't change with DLSS 5. 

I also don't really see the difference between inferring pixels (DLSS 1+), frames (DLSS 3+), denoised rays (DLSS 3+), and inferring lighting, contact shadows, and (possibly) PBR materials in DLSS 5. DLSS 5 is a lot closer to upscaling in this sense than it is to an auto-regressive VLM, in that it works best when kept deterministic, small, and targeted at specific inference cases. A VLM can't achieve this in real time (well, it can if you set the temperature hyper-parameter to 0, but then you would get an even more boring output).  

Normalizing slop harms gaming as a whole by enabling progressive enshittification.

AI upscaling and frame gen are technically optional too, and not inherently bad in and of themselves, but were nonetheless quickly abused to cut corners and make gaming worse.

This stuff looks absolutely hideous and people are allowed to not want low quality slop to become the norm.

Last edited by curl-6 - 8 hours ago

curl-6 said:
Norion said:


Unfortunately though a lot of people act irrationally over anything AI related currently so there's a major lack of nuance in reactions to this.

It's actually quite rational to dislike AI/LLMs as their contributions to society and the world are overwhelmingly negative.

I mean, I was referring to the lack of nuance, not how much someone likes or dislikes it. There are indeed significant concerns, and there will probably be a difficult transition period, but "overwhelmingly negative" isn't right. Further implementation in the medical field will save tons of lives every year, and that alone will easily make up for any current issues.



curl-6 said:

Normalizing slop harms gaming as a whole by enabling progressive enshittification.

AI upscaling and frame gen are technically optional too, and not inherently bad in and of themselves, but were nonetheless quickly abused to cut corners and make gaming worse.

This stuff looks absolutely hideous and people are allowed to not want a future where this is the norm.

I don't agree at all that gaming is worse because of upscaling and frame generation. As an example, the Switch 2 would be a relatively poorer product without DLSS. Frame generation has allowed PC gamers with low-spec machines to improve their experiences, and even to reduce e-waste by using low-powered secondary GPUs as Lossless Scaling cards.  

Your perspective basically amounts to "I personally find this hideous therefore the option shouldn't exist." 



Norion said:
curl-6 said:

It's actually quite rational to dislike AI/LLMs as their contributions to society and the world are overwhelmingly negative.

I mean, I was referring to the lack of nuance, not how much someone likes or dislikes it. There are indeed significant concerns, and there will probably be a difficult transition period, but "overwhelmingly negative" isn't right. Further implementation in the medical field will save tons of lives every year, and that alone will easily make up for any current issues.

Honestly, the internet as a whole kinda killed off nuance long ago.

There are a few instances where AI can be useful, like upscaling, and potentially medical imaging and the like, though with the latter there are risks when AI models hallucinate and get things wrong, so you still very much need human oversight when patients' lives could be on the line.



sc94597 said:
curl-6 said:

Normalizing slop harms gaming as a whole by enabling progressive enshittification.

AI upscaling and frame gen are technically optional too, and not inherently bad in and of themselves, but were nonetheless quickly abused to cut corners and make gaming worse.

This stuff looks absolutely hideous and people are allowed to not want a future where this is the norm.

I don't agree at all that gaming is worse because of upscaling and frame generation. As an example, the Switch 2 would be a relatively poorer product without DLSS. Frame generation has allowed PC gamers with low-spec machines to improve their experiences, and even to reduce e-waste by using low-powered secondary GPUs as Lossless Scaling cards.  

Your perspective basically amounts to "I personally find this hideous therefore the option shouldn't exist." 

The problem occurs when this kind of stuff is used to cut corners. For instance, some devs now build their games on the assumption that you'll be using frame gen or reconstruction. So instead of actually optimizing their games to perform at a decent level on a range of hardware, they'll get them running at a shitty performance level even on expensive kit, effectively forcing the user to enable frame gen or reconstruction just to get a playable experience.

A player's experience can thereby be negatively affected even if they choose not to use it.



curl-6 said:

The problem occurs when this kind of stuff is used to cut corners. For instance, some devs now build their games on the assumption that you'll be using frame gen or reconstruction. So instead of actually optimizing their games to perform at a decent level on a range of hardware, they'll get them running at a shitty performance level even on expensive kit, effectively forcing the user to enable frame gen or reconstruction just to get a playable experience.

A player's experience can thereby be negatively affected even if they choose not to use it.

People say this, but I can't think of a single game where you "need frame gen or reconstruction" to play it at, say, console-level settings on modern hardware. High-end features might require these to play at enthusiast settings (like path tracing), but even horribly optimized games still play okay without DLSS or frame gen. Usually the actual culprit of poor optimization isn't developer intent but invasive technologies like Denuvo, financial/labor constraints, hardware limitations (lack of mesh shaders or decent HW RT acceleration), or a poorly suited game engine (thinking of Monster Hunter Wilds, and the RE Engine for open-world games in general, as an example). 

Which concrete example are you thinking of, and why do you think the dependence on upscaling is a developer intention and not an actual technical or organizational limitation?