
Forums - Gaming - Nvidia reveals DLSS 5, essentially applies AI filter to games in real time.

2036

Be me.
AI Slop is in all new games except Nintendo's, which are $120 each.
Co-Pilot won't leave you alone on Xbox or PC.
GPUs are $10,000.
PS6 is $1200.
Xbox is $2000.
Gen Alpha calls you a boomer for owning "LoCompute". "Streammm yu gam yu dinoooo¡¡¡".
Trump is in his "tOtally constitutional" fourth term.
Get home from 12 hour shift burying corpses.
Boot up PS5.
Put in disc.
No AI slop.
No Internet needed.
No pop-up ads mid-game.
Feels good.

Last edited by Cerebralbore101 - 1 day ago

Cerebralbore101 said:

2036

Be me.
AI Slop is in all new games except Nintendo's, which are $120 each.
Co-Pilot won't leave you alone on Xbox or PC.
GPUs are $10,000.
PS6 is $1200.
Xbox is $2000.
Gen Alpha calls you a boomer for owning "LoCompute". "Streammm yu gam yu dinoooo¡¡¡".
Trump is in his "tOtally constitutional" fourth term.
Get home from 12 hour shift burying corpses.
Boot up PS5.
Put in disc.
No AI slop.
No Internet needed.
No pop-up ads mid-game.
Feels good.

If GPUs are $10k then the PS6 can't be $1200.



“Consoles are great… if you like paying extra for features PCs had in 2005.”
curl-6 said:
Norion said:

People also did those things before the internet, but that doesn't mean its invention didn't bring massive benefits. The same goes for the argument that the internet wasn't needed and society would've kept functioning fine without it; those aren't arguments against something by themselves. As for the last part, it depends on how someone uses it. People can absolutely be lazy and over-rely on it, but as this post shows, people can get a lot of value out of it when it's used right.

Also, I think a big part of this is that the aspects people dislike are often very in-your-face, while the positives sit more in the background and are generally less known. Things like the Genie 3 and Firefox examples I brought up before are largely only known to people who follow AI, while something like skyrocketing RAM prices will be known to way, way more people. As a result, many people seem to have the false impression that the RAM price increase is happening just to create more advanced AI images and videos, rather than for things that are far more worthwhile.

Whether those applications outweigh the downsides is a matter of debate though. To take the internet as an example: I actually made a thread about whether it was a net positive or negative for humanity, and a lot of people felt it had largely been detrimental, especially in allowing misinformation and the resulting authoritarian ideologies to thrive. One could argue the current state of the US was largely enabled by the internet's ability to radicalise people en masse.

Similarly, innovations such as DDT, lead and asbestos all had useful and positive applications, yet in retrospect we would have been better off without them.

Are there useful applications for AI? Yes, there are. The question is though whether those advances are worth the consequences.

Everyone knows the benefits of the internet existing, so I won't go into them, but without it I never would've met my boyfriend and a lot of my close friends, so the perspective that it's been largely detrimental is wild to me. I think anyone who feels that way is taking its benefits for granted and overly focusing on the negatives. And yes, it is a matter of debate; that's why I've been bringing up various use cases, to show how the technology is already having significant benefits, since the positives often go unnoticed compared to the negatives.

In general, a lot of this is that a ton of the benefits are a future thing. The technology is already doing big things like helping medical research, scientific research in general, and letting self-driving cars be developed, so huge future benefits are already baked in at this point, and I'm taking that into account in my perspective of it. While just how impactful AI will become is up in the air, it's already clear it's going to help save a lot of lives and prevent a lot of suffering in the future, and that's easily worth all the current negatives. Since a lot of the current positives are in the background and aren't directly impacting most people yet, one way to think of this is that it's like going through some discomfort for a period of time to reap a ton of rewards later on.

Last edited by Norion - 1 day ago

curl-6 said:
ConciousMan said:

Sorry, I don't think you see that the problem is not AI, but the current state of the world and political system. Our societal systems are not ready for the majority of people to quit their jobs and do what they want to be masterful at. The problem lies in how people build the tool and use it, not the tool itself. Imagine a world where humans couldn't create a machine that generates electricity. Would we have video games or consoles?

Pushback against advancement in technology will limit growth and cause greater problems. Right now more humans are dying of things other than AI, and those matters should be addressed first IMO.

The tool itself enables the misuse though, so it's still a problem facilitated by said tool. Yes, people are dying of other things, but people were dying of other things back when we as a society adopted CFCs or lead or Agent Orange, yet those things still should have been resisted and not embraced despite them representing technical innovations.

But you need to look at the larger picture too. Is the human population more advanced and more populous than before? How do you envision us traveling to other planets without the help of advanced machinery, including some form of AI? The current problem with the whole AI development framework is, in my opinion, a lack of transparency and huge investments from big tech.

Because of that, lots of people will lose jobs short term, as those companies lay off a substantial amount of their workforce to justify spending on AI development. I strongly believe we can solve this through open-source movements created by sane and moral groups of people who share every detail of ongoing development publicly and focus on security, ethics, and applicability first. It's normal for a lot of people to be against something like AI when they don't see its benefits or lose their income because of those LLMs. The net positives are things like an automated society that can enjoy even greater security and faster advancement than before AI adoption.

Right now we are far away from real AI systems; what we can currently use is semi-intelligence made by big tech and for-profit organizations.



Cerebralbore101 said:

AI Slop is in all new games except Nintendo's, which are $120 each.

Why "except Nintendo"? They're fully aligned with Nvidia. SW3 will almost certainly have an Nvidia chip and neural rendering features. It'll be interesting to see the Nintendo-primary users in this thread navigate this. I'm sure there will be some arbitrary distinction as to why it isn't gen-AI.



sc94597 said:
Cerebralbore101 said:

AI Slop is in all new games except Nintendo's, which are $120 each.

Why "except Nintendo"? They're fully aligned with Nvidia. SW3 will almost certainly have an Nvidia chip and neural rendering features. It'll be interesting to see the Nintendo-primary users in this thread navigate this. I'm sure there will be some arbitrary distinction as to why it isn't gen-AI.

Nintendo doesn't have much control over this stuff anyway; they will eventually just have to go whichever way the broader industry goes.

My guess is they will make a Switch 2 Pro at some point this generation, provided RAM/silicon prices normalize, to give the Switch 2 greater momentum as perhaps their last "traditional" CPU/GPU-driven hardware. Switch 3 (or whatever it is) will be a more radical leap towards a neural rendering pipeline (we're talking 2032 or 2033 or something), as that's likely where Nvidia is going: the rise of NPU (tensor core) driven silicon and less of the actual GPU.

Eventually I think the NPUs will be able to infer and create visuals from even lower-end inputs, but even the current Switch 2 can output PS5-range visuals as is; that is likely more than enough reference data to give a bunch of generative-AI-tuned NPU cores a base to "upscale" from. Like I said, a "Switch 3" may just be a recycled, die-shrunk Switch 2 GPU that is dirt cheap by 2033, with the real upgrade being a bunch of NPU/tensor cores that handle the real-time "filter effects" to bring the visuals up, plus maybe an improvement on the CPU side.

They call this DLSS 5, but we know that's just marketing; this is really more like Neural Rendering 1. Like the original DLSS, the second or third iteration of this technique will likely see large gains too: the first version of DLSS kind of sucked, and then it took off with DLSS 2 and 3. Eventually, as the years go on, I think you can have a setup that takes the DK model from the left side here and creates a model not far off from the official CGI render on the right; it's just a matter of pumping up your on-chip NPU compute.

Last edited by Soundwave - 1 day ago

Soundwave said:
sc94597 said:

Why "except Nintendo"? They're fully aligned with Nvidia. SW3 will almost certainly have an Nvidia chip and neural rendering features. It'll be interesting to see the Nintendo-primary users in this thread navigate this. I'm sure there will be some arbitrary distinction as to why it isn't gen-AI.

Nintendo doesn't have much control over this stuff anyway; they will eventually just have to go whichever way the broader industry goes.

My guess is they will make a Switch 2 Pro at some point this generation, provided RAM/silicon prices normalize, to give the Switch 2 greater momentum as perhaps their last "traditional" CPU/GPU-driven hardware. Switch 3 (or whatever it is) will be a more radical leap towards a neural rendering pipeline (we're talking 2032 or 2033 or something), as that's likely where Nvidia is going: the rise of NPU (tensor core) driven silicon and less of the actual GPU.

Eventually I think the NPUs will be able to infer and create visuals from even lower-end inputs, but even the current Switch 2 can output PS5-range visuals as is; that is likely more than enough reference data to give a bunch of generative-AI-tuned NPU cores a base to "upscale" from. Like I said, a "Switch 3" may just be a recycled, die-shrunk Switch 2 GPU that is dirt cheap by 2033, with the real upgrade being a bunch of NPU/tensor cores that handle the real-time "filter effects" to bring the visuals up, plus maybe an improvement on the CPU side.

They call this DLSS 5, but we know that's just marketing; this is really more like Neural Rendering 1. Like the original DLSS, the second or third iteration of this technique will likely see large gains too: the first version of DLSS kind of sucked, and then it took off with DLSS 2 and 3.

So the neat thing is that the tensor cores are technically already an NPU in all but the fact that they are integrated on the GPU die rather than separate. You're probably right that future chips will be tensor-core heavy.

UE5 already has its own neural shaders within the engine, and Nvidia's working on a fork of UE5 to bring in their neural shaders, with them likely merging down the line. 

I think the future is going to be online training of thousands of small neural shaders. Right now neural shaders are pre-trained small MLPs (very small: a few hundred to a few thousand parameters), so as the hardware becomes more capable they should probably just be trained online, with some sort of abstracted API available to the developer to set the objectives of each shader.

Then the final pass would just be a post-processing clean-up. Nvidia already has an online learner in the form of Neural Radiance Caching.

DLSS 5 seems like a stop-gap solution to me until the developer tools and developer experience mature enough for direct neural rendering. I can actually see game companies hiring machine learning engineers to help the 3D rendering engineers implement the artists' visions. The development and testing environments just need to mature.

Edit: And AMD has direct equivalents as well. 
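The "very small MLP" scale described in the post above is easy to make concrete. The sketch below is purely illustrative, not Nvidia's or Epic's actual shader code: a hypothetical ~160-parameter NumPy network that maps a 6-D feature (3-D position plus view direction) to an RGB colour, with one SGD step of the kind of online, per-frame update the post speculates about. All names and layer sizes are assumptions chosen to match the "few hundred parameters" claim.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny hypothetical "neural shader": 6-D input -> 16 hidden units -> RGB.
W1 = rng.normal(0.0, 0.5, (6, 16))
b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 3))
b2 = np.zeros(3)

# Total parameter count: 6*16 + 16 + 16*3 + 3 = 163, i.e. "a few hundred".
n_params = W1.size + b1.size + W2.size + b2.size

def shade(x):
    """Forward pass: one ReLU hidden layer, sigmoid RGB output in (0, 1)."""
    h = np.maximum(0.0, x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def online_step(x, target, lr=0.1):
    """One online SGD step toward a reference colour (output layer only),
    sketching the per-frame training idea from the post."""
    global W2, b2
    h = np.maximum(0.0, x @ W1 + b1)
    y = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    grad_logits = (y - target) * y * (1.0 - y)  # gradient of 0.5*MSE wrt logits
    W2 -= lr * np.outer(h, grad_logits)
    b2 -= lr * grad_logits
    return y
```

In a real engine the forward pass would run in tensor-core inference inside the shading pipeline, and the objective (`target` here) would come from a higher-quality reference signal rather than a fixed colour; this toy just shows how small the networks in question are.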



The other implication I can see from the DLSS 5 DF video is that DLSS 5 can basically give you a lighting engine "for low cost". In some of the shots it's clearly creating lighting where none exists, so from that I infer it could probably provide a developer with a lighting engine for a game with little to no work.

Now, will that look the same as a path-traced scene? Probably not. But the implication is that a developer who doesn't want to put in the effort of any advanced lighting, and doesn't want to spend a ton of resources even on baked lighting, can probably just let an algorithm like DLSS 5 (and its inevitable successors, which will be even better) take over the lighting almost entirely.

In other words, if you just need a quick and cheap lighting engine that "pops" on screen and is eye-pleasing to Joe Average Gamer, it looks to me like this can give you that even from not much reference material. Like I said, there are shots where lighting is added to scenes from basically nothing; the algorithm they have is apparently good enough to add that in real time, and it will likely get even better with future iterations.

This is something that's different from both baked lighting and path/ray-traced lighting; it could become an attractive "third option" for a lot of developers.

Again, I'm not saying any of this is good or great; it's more of a sobering understanding of what is likely coming.

Last edited by Soundwave - 1 day ago

Soundwave said:
sc94597 said:

Why "except Nintendo"? They're fully aligned with Nvidia. SW3 will almost certainly have an Nvidia chip and neural rendering features. It'll be interesting to see the Nintendo-primary users in this thread navigate this. I'm sure there will be some arbitrary distinction as to why it isn't gen-AI.

Nintendo doesn't have much control over this stuff anyway; they will eventually just have to go whichever way the broader industry goes.

My guess is they will make a Switch 2 Pro at some point this generation, provided RAM/silicon prices normalize, to give the Switch 2 greater momentum as perhaps their last "traditional" CPU/GPU-driven hardware. Switch 3 (or whatever it is) will be a more radical leap towards a neural rendering pipeline (we're talking 2032 or 2033 or something), as that's likely where Nvidia is going: the rise of NPU (tensor core) driven silicon and less of the actual GPU.

Eventually I think the NPUs will be able to infer and create visuals from even lower-end inputs, but even the current Switch 2 can output PS5-range visuals as is; that is likely more than enough reference data to give a bunch of generative-AI-tuned NPU cores a base to "upscale" from. Like I said, a "Switch 3" may just be a recycled, die-shrunk Switch 2 GPU that is dirt cheap by 2033, with the real upgrade being a bunch of NPU/tensor cores that handle the real-time "filter effects" to bring the visuals up, plus maybe an improvement on the CPU side.

They call this DLSS 5, but we know that's just marketing; this is really more like Neural Rendering 1. Like the original DLSS, the second or third iteration of this technique will likely see large gains too: the first version of DLSS kind of sucked, and then it took off with DLSS 2 and 3. Eventually, as the years go on, I think you can have a setup that takes the DK model from the left side here and creates a model not far off from the official CGI render on the right; it's just a matter of pumping up your on-chip NPU compute.

But that means laying off character artists and modelers. -_-



CaptainExplosion said:
Soundwave said:

Nintendo doesn't have much control over this stuff anyway; they will eventually just have to go whichever way the broader industry goes.

My guess is they will make a Switch 2 Pro at some point this generation, provided RAM/silicon prices normalize, to give the Switch 2 greater momentum as perhaps their last "traditional" CPU/GPU-driven hardware. Switch 3 (or whatever it is) will be a more radical leap towards a neural rendering pipeline (we're talking 2032 or 2033 or something), as that's likely where Nvidia is going: the rise of NPU (tensor core) driven silicon and less of the actual GPU.

Eventually I think the NPUs will be able to infer and create visuals from even lower-end inputs, but even the current Switch 2 can output PS5-range visuals as is; that is likely more than enough reference data to give a bunch of generative-AI-tuned NPU cores a base to "upscale" from. Like I said, a "Switch 3" may just be a recycled, die-shrunk Switch 2 GPU that is dirt cheap by 2033, with the real upgrade being a bunch of NPU/tensor cores that handle the real-time "filter effects" to bring the visuals up, plus maybe an improvement on the CPU side.

They call this DLSS 5, but we know that's just marketing; this is really more like Neural Rendering 1. Like the original DLSS, the second or third iteration of this technique will likely see large gains too: the first version of DLSS kind of sucked, and then it took off with DLSS 2 and 3. Eventually, as the years go on, I think you can have a setup that takes the DK model from the left side here and creates a model not far off from the official CGI render on the right; it's just a matter of pumping up your on-chip NPU compute.

But that means laying off character artists and modelers. -_-

I hate to say it, but I think in the long run Nintendo will cave. 

At heart, Nintendo loves being a company where smaller teams can make games quickly and affordably (this is the heart of the Famicom and Super Famicom days). I think they long for those days; I don't think they like what game development is today, they just tolerate it because they have to.

Generative AI also kind of solves the "graphics problem" for them: it will inevitably let you take a lower-end base image and have the AI produce a higher-end visual result, and in a way it makes the graphics side of the game less important. Anyone and everyone will just have graphics that punch way above their weight class.

Unfortunately, character artists and modelers will be pared down, but the ugly truth is that this saves Nintendo money.

I do think they will feel bad about certain elements of AI, but ultimately cave. It's not like Nintendo is a GPU supplier anyway; the writing is on the wall here, and future "Nvidia GPUs" will likely become more like NPU-driven devices anyway. Nintendo can't be some lone bastion keeping traditional rendering going forever when none of the other major players in the industry are.

I do think they will put up a token fight at first and resist for a while, but ultimately they will cave as other big companies standardize this kind of workflow.