DLSS 5 Off vs On

Bite my shiny metal cockpit!

We won lads, we can finally stick it to the devs who insist on making every character an ugly monster 😌
Ride The Chariot | ‘26 Completion
This looks so bad, like a bad AI phone filter. It also ruins the creative vision that the character artists on these games had for these characters. And worst of all, AI slop like this is part of why 32 GB of DDR5 RAM costs $370 (up from about $90 in 2024) and a 1 TB SSD costs about $150 (up from about $70 in 2024), part of why most GPU models are selling between $50 and $500 over MSRP, part of why the PS6 will be $800+, and part of why Project Helix will be $1000+. The AI datacenters that power this kind of slop are taking all the RAM and NAND memory, causing supply and demand issues, because we can't build new fabrication facilities fast enough to accommodate the increased demand from AI.



I think the artistic intent argument falls flat when the actual creators of the games are interested in using the technology and take part in its implementation. They're going to use the technology to effectuate their intent to the best of their abilities.
|
“At CAPCOM, we strive to create experiences that feel cinematic, compelling and deeply believable — where every shadow, texture and ray of light is crafted with intention to enhance atmosphere and emotional impact,” said Jun Takeuchi, executive producer and executive corporate officer at CAPCOM. “DLSS 5 represents another important step in pushing visual fidelity forward, helping players become even more immersed in the world of Resident Evil.” |
As for the specific technology behind this, it's almost certainly trained on game data that Nvidia gets from its developer partners. I wouldn't even be surprised if there is some online post-training going on, as with the Neural Radiance Cache for path tracing. You need motion vectors and cooperative vectors, and publicly accessible video data doesn't provide those. Hence Nvidia says,
|
Video AI models have rapidly learned to generate photoreal pixels, but they run offline, are difficult to precisely control and often lack predictability, with every new prompt generating bespoke content. For games, pixels must be deterministic, delivered in real time and tightly grounded in the game developer’s 3D world and artistic intent. |
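That quote can be made concrete with a sketch of what per-frame engine inputs might look like. Everything here is illustrative (the field names and shapes are my assumptions, not Nvidia's actual API); the point is just that motion vectors and depth come out of the engine's G-buffer and have no equivalent in video scraped from the web:

```python
from dataclasses import dataclass

# Hypothetical per-frame inputs a DLSS-style network could consume.
# Web video only gives you `color`; the rest exists only inside the engine.
@dataclass
class FrameInputs:
    color: list[tuple[float, float, float]]        # rendered RGB, this frame
    motion_vectors: list[tuple[float, float]]      # per-pixel screen-space motion
    depth: list[float]                             # per-pixel depth for reprojection
    prev_output: list[tuple[float, float, float]]  # last frame's network output

    def can_reproject(self) -> bool:
        # Temporal accumulation needs motion vectors plus history to reuse
        # shading from previous frames deterministically.
        return bool(self.motion_vectors) and bool(self.prev_output)

# A 1-pixel "frame" just to show the shape of the data.
frame = FrameInputs(
    color=[(0.5, 0.5, 0.5)],
    motion_vectors=[(0.01, -0.02)],
    depth=[3.2],
    prev_output=[(0.5, 0.5, 0.5)],
)
print(frame.can_reproject())
```

A model trained only on web video never sees `motion_vectors` or `depth`, which is why grounding in the developer's 3D world requires engine-supplied data.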
I for one think this looks incredible.
It makes the scene look so insanely rich & vibrant. The 'Off' version looks plain flat in comparison.
To me this is a waaaay more noticeable difference than raytracing vs baked lighting or even recent generational jumps.
Def way more noticeable than PS4 to PS5. Honestly, it feels more in line with the jump from Xbox to 360.
Do I like Nvidia & AI : no
Does this look like the biggest jump in visual fidelity in 20 years : yes
It'll be interesting to see what they can get it to run on, and whether AMD has an answer. If not, we could be in a position in 6 or 7 years where the Switch 3 looks as good as the PS6, which would be pretty funny.
| shikamaru317 said: This looks so bad, like a bad AI phone filter. It also ruins the creative vision that the character artists on these games had for these characters. And worst of all, AI slop like this is part of why 32 GB of DDR5 RAM costs $370 (up from about $90 in 2024) and a 1 TB SSD costs about $150 (up from about $70 in 2024), part of why the PS6 will be $800+ and Project Helix will be $1000+, because the AI datacenters that power this kind of slop are taking all the RAM and NAND memory, causing supply and demand issues because we can't build new fabrication facilities fast enough to accommodate the increased demand from AI. |
So for these models to run inference in real time, they need to be a lot smaller than video-generation or image-generation models, about two orders of magnitude smaller (think 20-30 million parameters vs. 12+ billion).
The datasets are also a lot smaller, and synthetically produced (provided by game developers, with motion vectors available as inputs.)
In terms of actual compute costs, training DLSS features is a drop in a very large bucket, and Nvidia has even experimented with online training (continuously training tiny special-purpose models as you play the game on your own hardware.)
The synthetic data production probably is more computationally expensive than the actual pre-training.
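The size gap above is easy to put in numbers. A quick back-of-the-envelope (the 25M and 12B parameter counts are the ballpark figures from the post, and the 60 fps budget is just an example target):

```python
# Back-of-the-envelope: why real-time models must be tiny.
# Assumed sizes: ~25M params (DLSS-style) vs ~12B params (offline video model).

def frame_budget_ms(fps: float) -> float:
    """Milliseconds available per frame at a target frame rate."""
    return 1000.0 / fps

def size_ratio(small_params: float, large_params: float) -> float:
    """How many times larger the offline model is."""
    return large_params / small_params

budget = frame_budget_ms(60)    # ~16.7 ms for the WHOLE frame, not just the model
ratio = size_ratio(25e6, 12e9)  # 480x, i.e. roughly two orders of magnitude

print(f"Frame budget at 60 fps: {budget:.1f} ms")
print(f"12B vs 25M parameters: {ratio:.0f}x larger")
```

And that ~16.7 ms has to cover rendering, game logic, and the network together, which is why a 12B-parameter model is a non-starter for per-frame inference.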
While I do think the demos look pretty uncanny, this is certainly going to be the way of the future. RDNA 5 has neural tech built into the hardware, based on the Xbox presentation. They can also fine-tune the AI intensity and such, along with excluding objects.
So while it does look uncanny, the tech behind it is impressive and it will improve as time goes on.
PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850
I'm 50/50 on it... on one hand it does look more natural... on the other hand, sometimes the changes to characters seem a bit too far from the original.
This is like some cam phone filter BS ai thingy.
| VersusEvil said: We won lads, we can finally stick it to the devs who insist on making every character an ugly monster 😌 |
hahahaha lmao... that might actually be an upside if this is just like a beauty filter.