As they suspected, there is a lighter version of DLSS as well... see 22:17 for the breakdown.
Switch 2 is out! How do you classify it?

| Option | Votes | % |
| --- | --- | --- |
| Terribly outdated! | 3 | 5.26% |
| Outdated | 1 | 1.75% |
| Slightly outdated | 14 | 24.56% |
| On point | 31 | 54.39% |
| High tech! | 7 | 12.28% |
| A mixed bag | 1 | 1.75% |
| Total | 57 | |
| sc94597 said: So the two types of DLSS have been confirmed. Given this, I suspect that by the end of this generation we'll have many more DLSS versions, including some that cost as much as type II but have the quality characteristics of type I or even better. Models improve over time at the same parameter count, and not just because of architectural changes; you could ostensibly use transfer learning from a ViT to a CNN to improve it, and likewise from large models to smaller ones. As we get more RT titles, I would also like to see some ray reconstruction, even if it is the older, lower-cost preview version. That's probably the most exciting part about SW2: Nvidia's evolving feature set, which has very much been backward compatible, benefits it over time, akin to the old "optimizations" that consoles would experience over the course of a generation. My guess is an SW3 will go hard on neural rendering if it releases in 2031-2033. |
Interesting stuff.
I feel like we should see some significant graphical growth for Switch 2 over the next few years, not just because of this, but also because third-party devs apparently didn't even have the final specs for much of the development of its early titles, and because Nintendo's own games so far have basically all been Switch 1 games on steroids.
curl-6 said:
Interesting stuff. I feel like we should see some significant graphical growth for Switch 2 over the next few years, not just because of this, but also because third-party devs apparently didn't even have the final specs for much of the development of its early titles, and because Nintendo's own games so far have basically all been Switch 1 games on steroids. |
Honestly, my takeaway from this is that devs who use DLSS should stick to outputting at 1080p with the "full" DLSS model, because that "tiny" model really breaks apart, as a lot of people were noticing from the start, with Hogwarts being the first suspect from the very first trailer.
HoloDust said:
Honestly, my takeaway from this is that devs who use DLSS should stick to outputting at 1080p with the "full" DLSS model, because that "tiny" model really breaks apart, as a lot of people were noticing from the start, with Hogwarts being the first suspect from the very first trailer. |
Having played games like Cyberpunk, Hogwarts, Fast Fusion, and Star Wars Outlaws on Switch 2, Cyberpunk's "full fat" DLSS does look notably smoother, so I would tend to agree, though for something that's already pushing the system really hard, the CNN-like model might have too steep a performance penalty.
I wouldn't say Outlaws for instance looks bad, so the "Lite" DLSS can still look better than nothing as long as the base res isn't sub-HD. 720p to 1080p in Outlaws is passable.
I'd say it depends on the game and on how much processing budget the devs have to work with.
Hopefully as time goes on we will see the algorithms improve.
curl-6 said:
Having played games like Cyberpunk, Hogwarts, Fast Fusion, and Star Wars Outlaws on Switch 2, Cyberpunk's "full fat" DLSS does look notably smoother, so I would tend to agree, though for something that's already pushing the system really hard, the CNN-like model might have too steep a performance penalty. I wouldn't say Outlaws for instance looks bad, so the "Lite" DLSS can still look better than nothing as long as the base res isn't sub-HD. 720p to 1080p in Outlaws is passable. I'd say it depends on the game and on how much processing budget the devs have to work with. Hopefully as time goes on we will see the algorithms improve. |
As I said, I think they should really stick to outputting at 1080p with the full DLSS model rather than going for higher (1440p or 4K) resolutions with the lite model; it just looks really bad, with a complete lack of AA on some edges when it breaks apart, and it breaks apart too often.
An in-game option would probably be the best solution, though that would require more effort from devs.
Yeah, I'd rather have a soft and stable 1080p than a messy, jaggy 1440p.
I think they should still develop and use the lite models, insofar as the feedback on where they're lacking improves the models over time (everywhere). I'd love to see what a low-parameter model can achieve if distilled from better, larger models. This lite model definitely seems like a v1 on the way to a viable v2 or v3.
We've already seen SWO give better results than Hogwarts Legacy with pretty minor changes to the post-processing.
CNN autoregressors scale linearly in time complexity with parameter count, while following an inverse power law for performance improvements (measured by loss). So if you shift that curve a bit to the left by having the model implicitly learn heuristics from a better, bigger one (such as a leading-edge transformer model), you could probably exceed DLSS 3's ability while using half the parameters (again, CNNs scale linearly), like the current small model.
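If it helps to see the arithmetic, here's a toy Python sketch of that argument; every constant (a, alpha, k) and parameter count below is invented for illustration, not fitted to any real DLSS model:

```python
# Toy illustration of the scaling-law argument above; constants are
# hypothetical, not fitted to any real model.
# Assume loss follows an inverse power law in parameter count N:
#     L(N) = a * N ** (-alpha)
# Distillation from a bigger teacher is modeled as an effective
# capacity multiplier k, i.e. the student behaves like k*N params.

a, alpha = 50.0, 0.12  # hypothetical power-law fit constants
k = 4.0                # hypothetical effective gain from distillation

def loss(n_params: float, shift: float = 1.0) -> float:
    """Power-law loss; `shift` models distillation's left-shift."""
    return a * (shift * n_params) ** -alpha

full = 20e6  # hypothetical "full" model size
lite = 10e6  # hypothetical "lite" model at half the parameters

print(f"full:           {loss(full):.4f}")
print(f"lite:           {loss(lite):.4f}")
print(f"lite distilled: {loss(lite, shift=k):.4f}")
# Any k > 2 makes the half-size distilled model beat the full one,
# while its runtime cost still scales linearly with its own N.
```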
I'm not familiar with how vision transformer models scale down toward parameter counts in the tens of millions, but if we extrapolate from language models, it probably isn't viable. You usually need a few hundred million parameters before a language transformer outperforms "hard-coded" heuristic or frequency-based algorithms, and I'm guessing it's similar for vision. That's probably why DLSS 4 is twice as large as DLSS 3 rather than the same size. So a direct transformer model on SW2 probably isn't likely unless there's some architectural change.
But a CNN-ViT hybrid (especially through distillation) is definitely viable, and I think we'll see improvements to the "lite" model over time, some of which could easily be back-ported to old games through updates that just swap out the model.
Note that the above concerns classification tasks, but the results/improvements should generalize to generation as well.
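To make the distillation route concrete, here's a minimal PyTorch sketch of training a small CNN student against a frozen larger teacher; both networks are tiny stand-ins I made up, since the actual DLSS architectures aren't public:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal distillation sketch for a reconstruction task, under the
# assumptions above. Both models are placeholder stand-ins.

teacher_vit = nn.Sequential(  # stand-in for a large "full" model
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(), nn.Conv2d(64, 3, 3, padding=1)
)
student_cnn = nn.Sequential(  # stand-in for a small "lite" CNN
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 3, 3, padding=1)
)
optimizer = torch.optim.Adam(student_cnn.parameters(), lr=1e-4)

def distill_step(frames: torch.Tensor) -> float:
    """One distillation step: train the small student to match the
    frozen teacher's output (add a ground-truth loss if available)."""
    with torch.no_grad():
        target = teacher_vit(frames)  # high-quality reference output
    loss = F.mse_loss(student_cnn(frames), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example: one step on a random batch of small frames.
print(distill_step(torch.rand(2, 3, 128, 128)))
```

Since only the student's weights change, shipping an improved "lite" model really could be as simple as a patch that swaps the weight file.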
Last edited by sc94597 - on 05 October 2025

| Otter said: I'm not sure why Digital Foundry is catching all this heat lol. Everything they said is true |
Your comment about Outlaws did not age well at all, buddy.
| borisr said: Your comment about Outlaws did not age well at all, buddy. |
It was true for the state of the game at the time. Outlaws on S2 easily had the biggest optimisation glow-up we've seen of any game in recent memory.
And equally, my defence of Digital Foundry was pretty sound. They sang the game's praises on release because it actually looked and ran well, unlike the trailer; they weren't just being "out of touch" as many suggested. Although I'll admit I didn't think S2 would be able to handle RT GI as well as it did.
I think the lesson with SWO is that optimization is usually the last thing to happen in any coding project, games included, and the state of a project before release probably shouldn't be used to extrapolate general trends about how the platform will fare with certain technologies, or about relative sales.
SWO releasing on Switch 2 pretty much revived the game and people were giving it a second look because of it.
Hell, I would never have bought the game at all, given that I'm neither a Star Wars fan nor a fan of Ubisoft-style games, but it's enjoyable enough that I might buy it for PC too when there's a sale. The little bite-sized segments of gameplay fit handheld gaming pretty nicely.