
Forums - Gaming - Nvidia reveals DLSS 5, essentially applies AI filter to games in real time.

Soundwave said:

I think what this will eventually lead to is a workflow where a game's graphics are actually very basic, maybe even PS2 or PS3 quality (if even that), and the bulk of the processing budget goes to having the generative AI quickly manipulate the image up to look like whatever photo/video reference it has. What DLSS 5 is showing is that it's effectively possible to do this in real time.

I have no idea where a company like Nintendo will sit on this in the long term, but theoretically you can imagine a setup where, in terms of raw compute, a Switch 3 for example isn't really much more powerful than a Switch 2, maybe even less so, and the "new" aspect of the hardware is a ton of added Tensor cores that handle the "image manipulation" (the AI filter, basically). The silicon would be mostly Tensor cores dedicated to making the AI filter fast enough to extrapolate from the photo reference data it's been trained on, while the "real rendering" side of the chip gets gimped because it won't be needed.

Because really, why even bother with heavy native rendering when you can just point the hardware at becoming an AI filter machine? AI image manipulation doesn't care whether you ask it for photoreal, hyper-cartoony, hyper-detailed, or not detailed at all; it's just manipulating pixels on a 2D image and outputting the result. It looks like it basically just needs to know, from vector data, which way things in the scene are moving, and it can create a new image that "looks better" from that alone.

But what that will do is basically ice artists out of the gaming process. Why pay an artist when the final frame is generated by an AI? It will eventually devalue graphics too, because who cares about graphics when it's just an AI algorithm spitting out a filtered 2D image? Does anyone get really excited about ever more photorealistic AI filters going forward? I doubt it. It was neat maybe the first few times you saw it; now it's just a "whatever", or even annoying.
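The "vector data" the post leans on is what renderers call per-pixel motion vectors. Here is a minimal numpy sketch of how a temporal technique can use them, reprojecting the previous frame along those vectors; this is toy code illustrating the general mechanism, not Nvidia's actual pipeline:

```python
import numpy as np

def reproject(prev_frame, motion):
    """Warp the previous frame using per-pixel motion vectors.

    prev_frame: (H, W, 3) float array, last rendered frame
    motion:     (H, W, 2) float array, per-pixel (dy, dx) screen-space motion
    Returns the previous frame re-sampled at where each pixel "came from".
    """
    H, W = motion.shape[:2]
    ys, xs = np.indices((H, W))
    src_y = np.clip((ys - motion[..., 0]).round().astype(int), 0, H - 1)
    src_x = np.clip((xs - motion[..., 1]).round().astype(int), 0, W - 1)
    return prev_frame[src_y, src_x]

# Toy example: a 4x4 frame in which everything shifted one pixel right.
prev = np.zeros((4, 4, 3))
prev[:, 0] = 1.0                      # bright column at x=0
motion = np.zeros((4, 4, 2))
motion[..., 1] = 1.0                  # everything moved +1 px in x
warped = reproject(prev, motion)
# The bright column now appears at x=1 in the warped history frame.
```

Real temporal upscalers do bilinear sampling and validity checks instead of nearest-neighbor clamping, but the core idea (history reprojected by motion vectors, then blended or fed to a network) is the same.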

That sounds bad to me because how can you control your art design and direction? Are you just gonna let an AI bot determine it for you? How will you ensure consistency?



Just a guy who doesn't want to be bored. Also

Soundwave said:
Chrkeller said:

Same way people still had jobs after the industrial revolution.  Same way people had jobs after robotics.  Same way people had jobs after the internet.  The world adjusts.  

Says who? There has never been a technology like AI, one that effectively replaces human intelligence and the need for human labour in virtually every industry.

There's no magic rule that says "the world will adjust just fine". Just because you had a good day on Monday doesn't mean you're guaranteed a great day on Friday. You could get hit by a bus on Friday. The world doesn't give a shit about maintaining some kind of feel-good equilibrium for everyone.

Same thing was said about the industrial revolution, robotics and the internet.  You aren't the first to say "OMG this is the biggest change ever."



“Consoles are great… if you like paying extra for features PCs had in 2005.”
Soundwave said:
Chrkeller said:

Same way people still had jobs after the industrial revolution.  Same way people had jobs after robotics.  Same way people had jobs after the internet.  The world adjusts.  

Says who? There has never been a technology like AI, one that effectively replaces human intelligence and the need for human labour in virtually every industry.

There's no magic rule that says "the world will adjust just fine". Just because you had a good day on Monday doesn't mean you're guaranteed a great day on Friday. You could get hit by a bus on Friday. The world doesn't give a shit about maintaining some kind of feel-good equilibrium for everyone.

I'm less worried about their intelligence and more about the slopification of everything, the financialization of everything, and the forcing of the minimal number of workers to do the maximum amount of work for the least pay they can get away with. That has been a problem of capitalism since its beginning, and this just accelerates it.

People forget that the Luddites were right about some things, and they were rallying for something I think was actually beneficial to humans. We don't need change for the sake of it. Why do we need AI? Why is it forced on us?

And it's multiple different techs smooshed together, so when we criticize all the slop stuff, defenders go and point to advances in chemistry and medicine, which is different tech and far more restricted to certain fields.




I think it's an insult to the industrial revolution and the internet to compare AI to them. I think AI is the mask, and the danger behind the mask is over-financialization, slopification and enshittification.




Soundwave said:
sc94597 said:

If the measure of an "AI filter" is that it works on 2D image data plus buffer data, then every DLSS component except Ray Reconstruction is an "AI filter."

Personally I think having access to buffer data is a very important difference between this and the "AI filters."

I also think the final release version will probably end up having G-buffer data in its training set and at inference time, to address some of the criticisms we've been seeing, if it doesn't already (that is still ambiguous).

I think Nvidia's idea was that neural shaders would be responsible for pre-processing of materials and lighting, and DLSS 5 would be a final touch-up, but they really should just merge the technologies. If DLSS RR can be a DLSS (brand-wise) despite not being purely post-processed, then so can neural shaders.

Or maybe it is time for DLSS to be abandoned as a brand and just go back to describing their super-sampler? 
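To make the "buffer data" distinction above concrete, here is a toy sketch of what a buffer-aware model's input might look like. The channel layout is purely illustrative, not Nvidia's actual specification:

```python
import numpy as np

# Hypothetical input layout for a buffer-aware model. Channel names and
# counts are illustrative; a tiny 4x8 "frame" is used for brevity.
H, W = 4, 8
color   = np.zeros((H, W, 3))   # the lit frame itself
albedo  = np.zeros((H, W, 3))   # base material color, before lighting
normals = np.zeros((H, W, 3))   # surface orientation
depth   = np.zeros((H, W, 1))   # scene depth
motion  = np.zeros((H, W, 2))   # screen-space motion vectors

# A video-style "AI filter" only ever sees the 3 color channels; a
# buffer-aware pass can stack everything the engine already knows:
model_input = np.concatenate([color, albedo, normals, depth, motion], axis=-1)
# model_input.shape == (4, 8, 12)
```

The point of the sketch: the extra nine channels come straight from the renderer, which is exactly the information a post-hoc filter over a finished video can never have.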

The significant difference being that the final image is no longer really something an artist who worked on that game can call their own work.

It's a different image. Nvidia is being clever here: knowing this will be controversial, they're keeping the image manipulation fairly close to the original image for now and trying BS like calling it "just a lighting filter" (when it's not). But there's likely no real limit to how far this could be pushed; it just comes down to which image dataset you feed the AI algorithm. If you wanted the main character to look just like a photoreal Megan Fox, for example, and gave the algorithm enough image data, that's likely possible. The generative AI doesn't care; it just takes its 2D input (the image) and creates, from its dataset, something it deems "looks better".

At some point, then, why even hire a full art staff? What art staffs will become is a small handful of people who are there just to create reference images so the generative AI understands roughly what the art style/look of the game should be, and then it takes over. And eventually you probably won't even hire reference artists, because generative AI will understand basically every kind of art style there is.

If artists have access to the tool during development, it shouldn't be an issue at all. Assets are tested in different environments as is. Also, "it's a different image" is true of any post-process effect; that isn't a very meaningful distinction here.

This isn't like Stable Diffusion, Sora, etc., because the training domain and target domain are entirely different, the training objective is different, and the hyperparameters are likely set to be deterministic (which isn't the case for generative video models). Also, this model has access to buffer data, while video/image models only have access to color buffers and maybe exposure.
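The determinism point can be shown with a toy contrast: a fixed-weight transform returns the exact same output for the same frame every time, while a diffusion-style sampler draws fresh noise on each call, so repeated runs on the same frame disagree. Illustrative numpy only; neither function is either system's real architecture:

```python
import numpy as np

def deterministic_filter(x, w):
    # Fixed-weight transform: same input always yields the same output,
    # the property attributed above to a DLSS-style pass.
    return np.tanh(x @ w)

def generative_sample(x, w, rng):
    # Sampler in the style of diffusion video/image models: fresh noise
    # is injected on every call, so outputs vary run to run.
    return np.tanh((x + rng.standard_normal(x.shape)) @ w)

rng = np.random.default_rng()
x = np.ones((2, 3))                 # stand-in for an input frame
w = np.full((3, 3), 0.1)            # stand-in for learned weights

a = deterministic_filter(x, w)
b = deterministic_filter(x, w)      # identical to a, bit for bit
c = generative_sample(x, w, rng)
d = generative_sample(x, w, rng)    # almost surely differs from c
```

For a real-time graphics pass, run-to-run identity matters: a nondeterministic sampler would make the same scene shimmer differently every frame.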



Eagle367 said:

I think it's an insult to the industrial revolution and the internet to compare AI to them. I think AI is the mask, and the danger behind the mask is over-financialization, slopification and enshittification.

Or it can be used to increase supply, outpacing demand, which lowers consumer prices. It can be used to advance medical technology, so people live longer with their loved ones. I mean, we don't have to boil AI down to such nonsense.



Eagle367 said:

I think it's an insult to the industrial revolution and the internet to compare AI to them. I think AI is the mask, and the danger behind the mask is over-financialization, slopification and enshittification.

I mean, "over-financialization", "slopification", and "enshittification" (not necessarily in that language) were among the criticisms of the industrial revolution too. Think of textiles before and after industrialization: one of the big arguments was that it commercialized, and reduced the quality of, what had been unique craft outputs.



Eagle367 said:

That sounds bad to me because how can you control your art design and direction? Are you just gonna let an AI bot determine it for you? How will you ensure consistency?

The AI basically decides based on whatever prompt criteria you've outlined (i.e. "make this look moar real"). I suppose you can make it more specific by saying "here's image data of an actor/actress, make the main model look more like that", that type of thing. But yes, essentially you are ceding control of your final image to an AI algorithm, letting it "decide" what the image should look like.



Soundwave said:
Eagle367 said:

That sounds bad to me because how can you control your art design and direction? Are you just gonna let an AI bot determine it for you? How will you ensure consistency?

The AI basically decides based on whatever prompt criteria you've outlined (i.e. "make this look moar real"). I suppose you can make it more specific by saying "here's image data of an actor/actress, make the main model look more like that", that type of thing. But yes, essentially you are ceding control of your final image to an AI algorithm, letting it "decide" what the image should look like.

This model is almost certainly not trained on text-to-image. A future model might be, but not DLSS 5. 



Chrkeller said:
CaptainExplosion said:

Then how do they expect us to have income when all the jobs are being taken by AI?

Same way people still had jobs after the industrial revolution.  Same way people had jobs after robotics.  Same way people had jobs after the internet.  The world adjusts.  

I'm a little skeptical that this is a change that can be adjusted to properly. Not to mention, the adjustment to the industrial revolution was far from painless: the loss of tens of millions of jobs worldwide during that period led to mass migrations and famines, some families simply wasted away before new liveable-wage employment could be found, and others were forced to send their children into dangerous industrial jobs just to make ends meet.

We have seen estimates that huge percentages of the workforce will be severely impacted by AI and automation in the coming decades: estimates as high as 30% of jobs over the next 15 years or so being significantly impacted by AI automation (25% of work hours, and therefore paid hours, removed by AI, plus 6-7% of the workforce displaced outright), culminating in estimates as high as 70% of jobs heavily impacted 50 years from now. How does a planet recover from a couple billion people losing their jobs, and many more losing a good chunk of their paid work hours to AI, with inflation already the way it is?

There are only so many new jobs that AI can create in fields like software design, computer hardware fabrication, programming, mechanical maintenance (for automated factory lines and such), electrical grid expansion and maintenance, and new fuel jobs to power that expanded grid capacity; nowhere near enough to counter billions worldwide unable to find work. Worse, many of the jobs that can be replaced, and are already seeing some replacement, are menial labor jobs, often worked by lower-IQ individuals or those with learning disabilities. You can't just train people like that to be efficiently capable of higher-learning jobs, no matter how much money you funnel into continuing-education programs for AI-displaced workers; some of them just aren't capable of much more than they are doing now.
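For scale, the percentages quoted above can be checked with rough arithmetic. The ~3.5 billion global labor force figure is an outside assumption (roughly the ILO's estimate); the percentages are the ones the post cites:

```python
# Rough arithmetic behind the estimates quoted in the post. The global
# labor force size is an assumption (~3.5 billion, roughly the ILO
# figure); the percentages come from the post itself.
labor_force = 3.5e9

near_term_impacted  = 0.30  * labor_force  # jobs significantly impacted, ~15 yrs
near_term_displaced = 0.065 * labor_force  # 6-7% of workforce displaced outright
long_term_impacted  = 0.70  * labor_force  # jobs heavily impacted, ~50 yrs

print(f"{near_term_impacted / 1e9:.2f}B jobs impacted near term")
print(f"{near_term_displaced / 1e6:.0f}M workers displaced near term")
print(f"{long_term_impacted / 1e9:.2f}B jobs impacted long term")
```

The long-term figure works out to roughly 2.45 billion, which is where the post's "a couple billion people" comes from.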

Last edited by shikamaru317 - 2 days ago