LegitHyperbole said:

Damn, I hope you're right. I suppose looking at it all like that in one area does warp perspective but seeing that DD2 comparison is wild, I have to question why it's so terrible. 

Seems like the whole "engines are scalable now" thing is falling apart. 

FSR seems to be the problem. I wonder can that get better this generation. Like why doesn't Sony work with AMD on that POS. 

FSR has been getting "improvements" over time - i.e., FSR 1.0 vs 2.0 vs 3.0 vs 3.1.
However we need to understand what FSR is... And what FSR isn't.

FSR isn't using machine learning algorithms to enhance image quality... It's a chain of post-process filters: smoothing jagged edges on geometry, scaling the image up, then sharpening the result.
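To make the "upscale, then sharpen" idea concrete, here's a minimal sketch of that filter chain in Python/NumPy. This is a toy illustration, not AMD's actual EASU/RCAS passes: it uses plain bilinear interpolation and a simple unsharp mask with made-up parameter values.

```python
import numpy as np

def bilinear_upscale(img, scale):
    """Upscale a 2D grayscale image by 'scale' using bilinear interpolation."""
    h, w = img.shape
    new_h, new_w = int(h * scale), int(w * scale)
    ys = np.linspace(0, h - 1, new_h)
    xs = np.linspace(0, w - 1, new_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

def sharpen(img, amount=0.5):
    """Unsharp mask: boost the difference between the image and a blurred copy."""
    blur = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
            np.roll(img, 1, 1) + np.roll(img, -1, 1) + img) / 5.0
    return np.clip(img + amount * (img - blur), 0.0, 1.0)

low = np.random.rand(8, 8)              # pretend low-resolution render
high = sharpen(bilinear_upscale(low, 2.0))
print(high.shape)                       # twice the input resolution
```

The point is that every step here is a fixed mathematical filter over the pixels it's given; nothing is learned from training data.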
FSR 2.0 started grabbing "information" from previous frames to enhance the current and future frames.

And FSR 3.0 takes a few extra approaches, most notably frame generation.

FSR's advantage is that it doesn't require tensor cores or specialized compute: it's cheap and it runs on everything... It could even run on the Xbox 360 if a developer wanted.
Current PS4 and Xbox One games are even leveraging it, which works well since Graphics Core Next is a very compute-centric GPU architecture.

However... The reason why FSR exists is that AMD's GPUs are a few generations behind nVidia's, so AMD needed to "invent" an approach that would run on its existing technology until it could scale up hardware capable of a machine learning approach.

DLSS and PSSR use machine learning, which is an entirely different and superior approach. PSSR is still a generation behind DLSS, but there are massive gains over FSR.
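A toy way to see the difference: FSR's filters use fixed, hand-tuned weights, while an ML upscaler learns its weights from low-res/high-res training pairs. The sketch below is vastly simplified (a 4-weight linear model fit by gradient descent, with invented data) and stands in for what is really a large neural network in DLSS/PSSR.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((100, 4))            # pretend low-res neighborhood samples
true_w = np.array([0.1, 0.4, 0.4, 0.1])
y = X @ true_w                      # pretend high-res target pixels

w = np.zeros(4)                     # reconstruction weights, learned not hand-tuned
for _ in range(2000):               # plain gradient descent on squared error
    grad = X.T @ (X @ w - y) / len(y)
    w -= 0.5 * grad
print(np.round(w, 2))               # recovers weights close to the targets
```

The hardware catch the post alludes to: running a real network like this every frame is only fast enough with dedicated matrix hardware (tensor cores or equivalent), which is exactly what FSR avoids depending on.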

FSR has its place, no doubt... And it's absolutely brilliant on handhelds/integrated graphics because it's so cheap to implement and doesn't require additional expensive silicon.



--::{PC Gaming Master Race}::--