haxxiy said:
That's just what Nvidia calls their proprietary interpolation algorithms. In essence, what they're doing isn't qualitatively any different from any AI processor inside high-end screens. The main difference is that Reflex is mandated on DLSS3, which alleviates some of the intrinsic added latency in FG. But then other proprietary FG modes will also turn off most post-processing to decrease latency with FG, so yeah. To me, this is another G-Sync scenario where a lot of the added proprietary Nvidia stuff to supposedly create a more premium experience ends up promptly ignored in most scenarios and eventually matched by the open standard solution in a few years anyway.
I think that's underselling the tech behind DLSS quite a lot tbh. The AI processor in high-end screens doesn't have any awareness of motion vectors or anything of that nature, because the game engine doesn't provide the screen any of that information. The AI processor just looks at the frame data and makes a quick guess at what the in-between frame is supposed to look like before putting out the next frame. DLSS 3, on the other hand, actually gets motion vectors and such directly from the game engine, which allows it to guess significantly more accurately than the AI processor in those TVs ever could, and that's the key. It's like saying DLSS is just an upscaler like checkerboard rendering and FSR, but then you see differences like this:
And sure, they are all upscalers, but DLSS is significantly better. And maybe in 4-6 years an open-source version might match DLSS, but what are you gonna do till then? Just settle for a subpar experience when you could have enjoyed the tech 4-6 years earlier? Remember that DLSS 2.0 released three years ago and we still haven't seen a single upscaler that's on par with it.
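To make the motion-vector point concrete, here's a toy sketch (my own illustration, not Nvidia's actual algorithm; the function names and the constant-motion setup are made up). A display-side interpolator only has two decoded frames to work with, so the zero-information baseline is blending them. An engine-fed frame generator can instead warp the previous frame along exact per-pixel motion vectors to reconstruct the midpoint frame:

```python
import numpy as np

def blend_interpolate(frame_a, frame_b):
    """Roughly what a display-side chip starts from: pixels only.
    (Real TV processors estimate motion optically; a plain 50/50 blend
    is the no-information baseline, and it's where ghosting comes from.)"""
    return (frame_a.astype(np.float32) + frame_b.astype(np.float32)) / 2.0

def mv_interpolate(frame_a, motion_vectors):
    """Engine-fed frame generation, grossly simplified: warp frame_a
    halfway along per-pixel motion vectors (in pixels per frame, x/y order)
    to build the midpoint frame with no guessing about where things moved.
    Nearest-neighbour backward warp; DLSS 3 is far more sophisticated."""
    h, w = frame_a.shape[:2]
    ys, xs = np.indices((h, w))
    src_y = np.clip(np.round(ys - motion_vectors[..., 1] / 2.0).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs - motion_vectors[..., 0] / 2.0).astype(int), 0, w - 1)
    return frame_a[src_y, src_x]
```

With a bright pixel moving 2 px to the right per frame, the blend smears it across both positions, while the motion-vector warp puts it cleanly at the midpoint.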
PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850