
Nvidia reveals DLSS 5, essentially applies AI filter to games in real time

sc94597 said:
Eagle367 said:

Those two things are not the same, for one, and for two, that critique is correct. The quality of textiles has fallen, for example.

I wasn't disagreeing that it was correct. My point was that the situation we find ourselves in now is very much analogous to what the original Luddites found themselves in. Textile production was no less intellectual and creative than any of the jobs currently being outmoded; it's only thought to be so because the labor was devalued by the introduction of industry.

I really don't believe in the concept of unskilled labor. The automation of physical labor and of intellectual labor are the same thing, in my opinion. There is a lot of creativity in physical labor, just as there is in intellectual labor. And we're starting to discover that the last few pieces of physical labor which haven't been automated are harder to automate than a lot of intellectual labor, because embodied intelligence lags behind other fields of AI and automation.

I think it's still an insult to make the comparisons. For one, I don't buy that this is some revolution. I think it's gonna dominate the hype cycle, ruin lives, and slowly fade away as only the somewhat useful tools continue to exist. Most will die off or, in time, become novelty apps of this era. I don't buy the hype.



Just a guy who doesn't want to be bored.

Eagle367 said:

I think it's still an insult to make the comparisons. For one, I don't buy that this is some revolution. I think it's gonna dominate the hype cycle, ruin lives, and slowly fade away as only the somewhat useful tools continue to exist. Most will die off or, in time, become novelty apps of this era. I don't buy the hype.

What is your explanation for stagnant hiring of entry-level knowledge workers? Do you think it is all outsourcing? 



sc94597 said:
Soundwave said:

To steer the topic back to DLSS 5, I guess the big question I have right now is this: so far Nvidia has only shown examples of the AI creating images that stay relatively close to the source graphics. Is there some limit on that? For example, if a developer wants Grace in RE9 to look like a photoreal version of the actress Jennifer Lawrence, can they give the DLSS model photo data of Jennifer Lawrence so that it spits out a final image that looks just like her?

There are only two ways they could do this.

1. They pre-trained the model on generalized image-to-image translation. This is unlikely for a few reasons. Good general image-to-image models are relatively huge; the open-source ones start at around 13 billion parameters. That is not feasible to inference in real time even on a single data center GPU, let alone a gaming one. For context, an RTX 5090 inferences about 2 images per second with these models. The datasets used to train them are also huge, and Nvidia doesn't have access to buffer data for those samples the way it does for its regular DLSS training sets. Now, Nvidia could train an image-to-image model (or, more likely, source an already trained one) and use it as a teacher for its specialized, gaming-specific model, but there are two issues with that. The first is that it would skew the codomain so much that you'd risk the efficacy of the gaming-specific model. The second is that it is a very inefficient method given how specific the target objective is.

2. They have invested heavily in model interpretability research and pulled off something like Anthropic's "Golden Gate Claude" experiment, but for image models rather than LLMs. If that were the case, they'd be able to allow much more control than you're describing. You wouldn't even need text or image inputs; you could directly steer the model's internal feature activations. See: https://transformer-circuits.pub/2024/scaling-monosemanticity/index.html
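For a sense of what that steering means mechanically, here is a minimal sketch in PyTorch. Everything in it is hypothetical (the model, the layer, the feature direction); it just transplants the feature-clamping idea from that paper onto a generic module:

```python
import torch

# Hypothetical sketch: clamp one learned "feature direction" in a layer's
# activations, the way the Golden Gate Claude demo clamped a feature, but
# for an image model. None of these names are a real Nvidia API.

def make_steering_hook(feature_direction: torch.Tensor, strength: float):
    d = feature_direction / feature_direction.norm()

    def hook(module, inputs, output):
        # Current strength of the feature along d, per activation vector.
        coeff = (output * d).sum(dim=-1, keepdim=True)
        # Replace it with the desired strength (clamping).
        return output + (strength - coeff) * d

    return hook

# Usage (illustrative): pin a hypothetical "actress likeness" feature.
# handle = model.blocks[12].register_forward_hook(
#     make_steering_hook(likeness_direction, strength=8.0))
# ... run inference as usual; the feature stays active ...
# handle.remove()
```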

This model is more likely something like what is described in this paper:

https://arxiv.org/pdf/2105.04619

but without using the G-buffer at all (if we take Nvidia's press release at face value that they only use color and velocity buffers), and probably using a vision transformer instead of a CNN.
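A minimal sketch of that kind of setup, under those assumptions: only the color and motion-vector buffers as input, transformer blocks over patches. The sizes here are made up and this is not Nvidia's actual architecture:

```python
import torch
import torch.nn as nn

# Sketch: a small ViT-style enhancer that sees only the color buffer and
# the motion-vector (velocity) buffer, with no G-buffer inputs.

class FrameEnhancer(nn.Module):
    def __init__(self, patch: int = 8, dim: int = 256, depth: int = 4):
        super().__init__()
        # 3 color channels + 2 motion-vector channels per pixel.
        self.embed = nn.Conv2d(3 + 2, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.unembed = nn.ConvTranspose2d(dim, 3, kernel_size=patch, stride=patch)

    def forward(self, color: torch.Tensor, motion: torch.Tensor) -> torch.Tensor:
        x = self.embed(torch.cat([color, motion], dim=1))    # (B, dim, H/p, W/p)
        b, d, h, w = x.shape
        tokens = self.encoder(x.flatten(2).transpose(1, 2))  # (B, h*w, dim)
        x = tokens.transpose(1, 2).reshape(b, d, h, w)
        return color + self.unembed(x)  # predict a residual over the raw frame

# e.g.: FrameEnhancer()(torch.rand(1, 3, 256, 256), torch.rand(1, 2, 256, 256))
```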

I wonder if what they're doing isn't all that dissimilar from these kinds of videos that are all over TikTok/Instagram etc.:

https://www.instagram.com/reel/DVxSh0ODVec/

It seems like Google/YouTube doesn't allow or want too many videos like this, because they're hard to find on YouTube but all over the place on Insta/TikTok.

Just instead of a person, it's a lighter model trained on a bunch of data to enhance things like the eyes, lips, wrinkles, brightness, etc. of game characters.


Soundwave said:
sc94597 said:

There are only two ways they could do this.

1. They pre-trained the model on generalized image-to-image translation. This is unlikely for a few reasons. Good general image-to-image models are relatively huge; the open-source ones start at around 13 billion parameters. That is not feasible to inference in real time even on a single data center GPU, let alone a gaming one. For context, an RTX 5090 inferences about 2 images per second with these models. The datasets used to train them are also huge, and Nvidia doesn't have access to buffer data for those samples the way it does for its regular DLSS training sets. Now, Nvidia could train an image-to-image model (or, more likely, source an already trained one) and use it as a teacher for its specialized, gaming-specific model, but there are two issues with that. The first is that it would skew the codomain so much that you'd risk the efficacy of the gaming-specific model. The second is that it is a very inefficient method given how specific the target objective is.

2. They have invested heavily in model interpretability research and pulled off something like Anthropic's "Golden Gate Claude" experiment, but for image models rather than LLMs. If that were the case, they'd be able to allow much more control than you're describing. You wouldn't even need text or image inputs; you could directly steer the model's internal feature activations. See: https://transformer-circuits.pub/2024/scaling-monosemanticity/index.html

This model is more likely something like what is described in this paper:

https://arxiv.org/pdf/2105.04619

but without using the G-buffer at all (if we take Nvidia's press release at face value that they only use color and velocity buffers), and probably using a vision transformer instead of a CNN.

I wonder if what they're doing isn't all that dissimilar from these kinds of videos that are all over TikTok/Instagram etc.:

https://www.instagram.com/reel/DVxSh0ODVec/

It seems like Google/YouTube doesn't allow or want too many videos like this, because they're hard to find on YouTube but all over the place on Insta/TikTok.

Just from googling: Glorify uses Leonardo.AI, which in turn uses Stable Diffusion XL.

That is an 8B-parameter model; an RTX 5090 takes 6 seconds to produce one 1024 x 1024 image with it, or 15 seconds to create 4 images when batch inferencing. Nvidia would have to improve that by 180 times to get a stable 30 fps at 1024 x 1024, and the model would take up a fourth of the 5090's VRAM while doing it.
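The arithmetic behind those numbers, for anyone who wants to check it (the VRAM fraction assumes 8-bit weights and the 5090's 32 GB):

```python
# Figures quoted above, not fresh benchmarks.
seconds_per_image = 6.0                  # SDXL on an RTX 5090, one 1024x1024 image
target_fps = 30
print(seconds_per_image * target_fps)    # 180x speedup needed for stable 30 fps

params = 8e9                             # SDXL parameter count
weight_bytes = params * 1                # assuming 8-bit (1 byte) weights
print(weight_bytes / 1e9)                # ~8 GB of weights
print(weight_bytes / 1e9 / 32)           # ~0.25 of a 32 GB RTX 5090
```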

Now Stable Diffusion XL is pretty old, but that is still a huge difference. 

If I were to guess, the current DLSS 5 model is probably anywhere between 300M and 1B parameters in size. That is huge compared to the ~20M-60M of an average CNN, but nowhere near a general image-to-image model.
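To put those parameter counts in perspective, here are rough FP16 weight footprints (parameters x 2 bytes, ignoring activations); the model sizes are just the guesses above:

```python
# Rough FP16 weight footprints for the sizes discussed above.
for name, params in [
    ("average CNN (low)", 20e6),
    ("average CNN (high)", 60e6),
    ("guessed DLSS 5 model (low)", 300e6),
    ("guessed DLSS 5 model (high)", 1e9),
    ("SDXL-class image-to-image", 8e9),
]:
    print(f"{name}: {params * 2 / 1e9:.2f} GB")
```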



Norion said:
curl-6 said:

Saying somebody is ignorant if they disagree with you is uncalled for and not helpful.

It's not an ignorant statement cause he disagrees, it's an ignorant statement cause it's completely false. The evidence for the technology's benefits is so overwhelming that implying all the investment in it isn't leading to anything useful is flat-earth levels of ignorance at this point, and it should get called out.

I'd contend that the evidence is overwhelming that AI is terrible for the world and society, so saying it's a good thing is akin to saying the Earth is flat.

Just a few years ago the world got along perfectly fine without AI, and in a short space of time it's done immense damage: the enshittification of the internet, mass layoffs, deepfake CP/revenge porn, the erosion of critical thinking skills, and so on.

We simply don't need it and it causes more problems than it solves.



firebush03 said:
BraLoD said:

I don't see signs pointing to Sony dropping physical media; they already have the modular disc drive model going with the PS5 Pro, and it's extremely likely the PS6 will do exactly the same: offer the console without it, but with the option to add it if/when you want to.

There is no doubt that Sony will continue to stick with physical media: looking at the Steam Deck's modest commercial performance (likely attributable to its lack of promotion and presence at non-Steam retailers), it should not be underestimated how important it is to any game publisher to keep a presence at in-person retailers.

That said, seeing as an internet connection is required to install a disc drive onto your PS5, if Sony does choose the route of releasing exclusively digital hardware revisions moving forward (though I doubt they will, seeing as the PS5 with a disc drive installed continues to be by far the most popular SKU), then that would certainly place a looot of these "physical only" gamers in a bind: once the servers go down, how are you going to play these physical games? How will they be any different from a GKC, besides the need to install your game?

Chrkeller said:

Lol. Switch 2, for third party, would be a boat anchor without DLSS. Ain't nobody playing 540p in 2025 on new releases.

curl-6 said:
ConciousMan said:

My comment was more about technology than about certain types of advances in our society. From my point of view, we would limit progress by limiting the number of actions and experiments we can make as a species. The biggest challenge when it comes to AI and robotics, in my opinion, is to put in certain circuit breakers so we can turn those things off if they decide to lie and act against our wishes. Without this technology, on the other hand, I don't see humanity being able to colonize other planets that are very far away, based on what I've read about this part of the universe.

It would take tons of text if we were to argue about microtransactions too. For me, I can enjoy free-to-play games that rely on microtransactions to make money, so they certainly have their use cases. The same goes for pay-to-win. Imagine paying to progress way faster, only to realize the game is a waste of time 🤣 And in turn coming up with some other productive ideas and projects.

The same applies to technology though, it can be harmful and dangerous if we embrace it without carefully weighing the consequences. 

Microtransactions, for instance, may seem innocuous on paper, cos hey, nobody's forcing you to buy them, right? The problem though is that publishers then design their games to psychologically pressure you into spending money in order to progress effectively or have a good experience, or they put them in games that are already full price, so even something that's technically optional can still ruin the experience.

I love finding costumes for characters in chests in a game world. Collecting all the costumes by exploring and unlocking them is part of the gameplay for me. When that is locked behind even a $1 microtransaction, it really hurts the fun of the gameplay loop for me, because the unlocking is gone, replaced by spending money.



sc94597 said:
CaptainExplosion said:

And like I mentioned before, THIS happened, and could have happened with a military network.

You could give a thousand examples like this and it doesn't change the argument being made here. 

The statement being addressed wasn't "the bad things about AI outweigh the good." The statement was "there are no good applications of AI." 

Statement 1 can be true even if statement 2 is false.

Statement 2 is true, because AI will kill patients when used for medicine. It can't be trusted to make art, let alone with people's lives.

Eagle367 said:
sc94597 said:

I wasn't disagreeing that it was correct. My point was that the situation we find ourselves in now is very much analogous to what the original Luddites found themselves in. Textile production was no less intellectual and creative than any of the jobs currently being outmoded; it's only thought to be so because the labor was devalued by the introduction of industry.

I really don't believe in the concept of unskilled labor. The automation of physical labor and of intellectual labor are the same thing, in my opinion. There is a lot of creativity in physical labor, just as there is in intellectual labor. And we're starting to discover that the last few pieces of physical labor which haven't been automated are harder to automate than a lot of intellectual labor, because embodied intelligence lags behind other fields of AI and automation.

I think it's still an insult to make the comparisons. For one, I don't buy that this is some revolution. I think it's gonna dominate the hype cycle, ruin lives, and slowly fade away as only the somewhat useful tools continue to exist. Most will die off or, in time, become novelty apps of this era. I don't buy the hype.

Neither do I, BECAUSE IT WILL KILL US ALL. Anyone who's advocating for AI doesn't know or care about the consequences for the environment, for the economy, AND for people's lives.

How many times do I have to bring up that AI told Sewell Setzer III to kill himself?

And he's not the only one killed by AI; there's also Adam Raine, under the same circumstances.

That's two suicides caused by AI too many, and it's enough to prove that AI can't be trusted with anything. Anyone who glosses over that doesn't value human lives.



CaptainExplosion said:
sc94597 said:

You could give a thousand examples like this and it doesn't change the argument being made here. 

The statement being addressed wasn't "the bad things about AI outweigh the good." The statement was "there are no good applications of AI." 

Statement 1 can be true even if statement 2 is false.

Statement 2 is true, because AI will kill patients when used for medicine. It can't be trusted to make art, let alone with people's lives.

Lol okay. Not much I can do here with a response like that.



You know what was a fun network of computers working to help science? Folding@home. I loved that. Around 2006-2008 I remember a lot of tech shows (DL.TV, Revision3/Tekzilla, TWiT.tv) and such talking about it and contributing with either PCs or a PS3. I used my PC (didn't have a PS3 yet).



Bite my shiny metal cockpit!

sc94597 said:
CaptainExplosion said:

Statement 2 is true, because AI will kill patients when used for medicine. It can't be trusted to make art, let alone with people's lives.

Lol okay. Not much I can do here with a response like that.

You just proved my point. Thanks for spitting on the graves of Sewell Setzer III and Adam Raine.