haxxiy said:
Captain_Yuri said:

AMD FSR 3 Might Generate Up To 4 Interpolated Frames & Be Enabled On Driver-Side

https://wccftech.com/amd-fsr-3-might-generate-up-to-4-interpolated-frames-be-enabled-on-driver-side/

I don't believe the final product will be a driver-side implementation, because if FSR 3 is driver-side only, it will be bad, like really bad. The reason Nvidia's DLSS 3 works as well as it does is that it gets motion vector and optical flow information directly from the engine, which lets it guess what the in-between frame is supposed to look like; that's why it needs to be implemented on a per-game basis. Implementing FSR 3 at the driver level would make it a glorified interpolation technology, similar to what we see on TVs.

That's just what Nvidia calls their proprietary interpolation algorithms. In essence, what they're doing isn't qualitatively different from what the AI processors inside high-end screens do. The main difference is that Reflex is mandated with DLSS 3, which alleviates some of the intrinsic added latency of frame generation. But then other proprietary FG modes also turn off most post-processing to decrease latency, so yeah.

To me, this is another G-Sync scenario: a lot of the added proprietary Nvidia stuff meant to create a more premium experience ends up promptly ignored in most scenarios and eventually matched by the open-standard solution within a few years anyway.
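For a sense of where that intrinsic added latency comes from, here's a back-of-envelope sketch with made-up but plausible numbers (nothing below is a measured DLSS 3 or FSR 3 figure): the generated frame has to be shown between two real frames, so the newest real frame waits before it can be presented, on top of the time it takes to produce the in-between image.

```python
# Back-of-envelope sketch of frame generation's built-in latency cost.
# Illustrative assumptions only, not measured DLSS 3 / FSR 3 numbers.
base_fps = 60                       # assumed base render rate
frame_time_ms = 1000 / base_fps     # ~16.7 ms between real frames
# The generated frame is slotted in between two real frames, so with even
# pacing the newest real frame is held back about half a base frame time.
hold_back_ms = frame_time_ms / 2
generate_ms = 3.0                   # assumed cost of producing the in-between frame
added_latency_ms = hold_back_ms + generate_ms
print(f"~{added_latency_ms:.1f} ms extra vs. presenting real frames as soon as they finish")
```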

I think that's underselling the tech behind DLSS quite a lot, tbh. The AI processor in high-end screens doesn't have any awareness of motion vectors or anything of that nature, because the game engine doesn't provide the screen any information. The AI processor just looks at the frame data and makes a quick guess at what the in-between frame is supposed to look like before putting out the next frame. DLSS 3, on the other hand, gets motion vectors and other data directly from the game engine, which lets it guess significantly more accurately than anything the AI processor in those TVs can do, and that's the key. It's like saying DLSS is an upscaler just like checkerboard rendering and FSR, but then you see differences like this:

And sure, they are all upscalers, but DLSS is significantly better. And maybe in 4-6 years an open-source version might match DLSS, but what are you gonna do till then? Just be happy with a subpar experience when you could have enjoyed the tech 4-6 years in advance? Remember that DLSS 2.0 was released 3 years ago and we still have not seen a single upscaler that is on par with DLSS.
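To put the motion-vector point in concrete terms, here's a toy sketch (nothing like any vendor's shipping algorithm; the array shapes, the (dy, dx) convention and the crude forward warp are assumptions for illustration): an interpolator that only sees two finished frames can basically just average them, while one fed per-pixel motion vectors can push content to roughly the right halfway position.

```python
# Toy sketch, not any shipping algorithm: contrast a "blind" interpolator that
# only sees two finished frames with one that also gets per-pixel motion
# vectors from the engine. Shapes and the crude warp are illustrative assumptions.
import numpy as np

def blend_interpolate(prev_frame: np.ndarray, next_frame: np.ndarray) -> np.ndarray:
    """All a display processor can do without engine data: average the frames.
    Anything that moves turns into a ghosted double image."""
    mid = (prev_frame.astype(np.float32) + next_frame.astype(np.float32)) / 2
    return mid.astype(prev_frame.dtype)

def motion_vector_interpolate(prev_frame: np.ndarray, motion_vectors: np.ndarray) -> np.ndarray:
    """With engine data: push each moving pixel halfway along its motion vector.

    motion_vectors[y, x] = (dy, dx), how far the content at (y, x) moves by the
    next frame. Static pixels stay put; this crude warp ignores the holes and
    occlusion that real implementations (and their optical-flow passes) handle.
    """
    h, w = prev_frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    moving = np.abs(motion_vectors).sum(axis=-1) > 0
    ty = np.clip(np.rint(ys + motion_vectors[..., 0] * 0.5).astype(int), 0, h - 1)
    tx = np.clip(np.rint(xs + motion_vectors[..., 1] * 0.5).astype(int), 0, w - 1)
    out = prev_frame.copy()
    out[ty[moving], tx[moving]] = prev_frame[ys[moving], xs[moving]]
    return out

# Single bright pixel moving 10 px to the right per frame: the blend smears it
# across both positions at half brightness, the warp puts it at the halfway point.
prev = np.zeros((8, 16, 3), dtype=np.uint8); prev[4, 2] = 255
nxt = np.zeros_like(prev); nxt[4, 12] = 255
mv = np.zeros((8, 16, 2), dtype=np.float32); mv[4, 2] = (0, 10)
print(blend_interpolate(prev, nxt)[4, [2, 7, 12], 0])         # [127   0 127]
print(motion_vector_interpolate(prev, mv)[4, [2, 7, 12], 0])  # [255 255   0]
```

Even this crude warp lands the moving pixel where it belongs, which blending can't do; the hard parts real frame generation still has to solve on top, like occlusion and disocclusion, are exactly what the per-game integration and optical-flow work are for.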


PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850