BraLoD said:
Pemalite said:

FSR has been getting "improvements" over time (FSR 1.0 vs 2.0 vs 3.0 vs 3.1).
However we need to understand what FSR is... And what FSR isn't.

FSR isn't using machine learning algorithms to enhance image quality... It's a chain of post-process filters: smoothing edges on geometry, scaling the image up, then sharpening.
FSR 2.0 started grabbing "information" from previous frames to enhance current and future frames.
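That "upscale, then sharpen" pipeline can be sketched in a few lines. This is a toy illustration only: real FSR 1.0 uses an edge-adaptive upscaler (EASU) and a contrast-adaptive sharpener (RCAS), not the naive nearest-neighbour scaling and unsharp mask used here, and the `amount` value is an arbitrary choice.

```python
# Toy sketch of a spatial "upscale then sharpen" pass on a grayscale
# image stored as a list of lists of floats in [0.0, 1.0].
# NOT real FSR math -- just the shape of the pipeline described above.

def upscale_nearest(img, factor):
    """Upscale a 2D grayscale image by an integer factor (nearest neighbour)."""
    out = []
    for row in img:
        scaled_row = []
        for px in row:
            scaled_row.extend([px] * factor)
        for _ in range(factor):
            out.append(list(scaled_row))
    return out

def sharpen(img, amount=0.5):
    """Unsharp mask: push each pixel away from its 3x3 neighbourhood mean."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, n = 0.0, 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += img[yy][xx]
                        n += 1
            blur = acc / n
            # Clamp so sharpening never leaves the valid [0, 1] range.
            out[y][x] = max(0.0, min(1.0, img[y][x] + amount * (img[y][x] - blur)))
    return out

low_res = [[0.0, 1.0],
           [1.0, 0.0]]
up = upscale_nearest(low_res, 2)  # 2x2 -> 4x4
final = sharpen(up)
```

The cheapness the post mentions comes from exactly this structure: a couple of fixed-function-style filter passes over pixels, with no neural network inference anywhere.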

And FSR 3.0 takes a few extra approaches on top of that, most notably frame generation.
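The temporal idea mentioned above (FSR 2.0 "grabbing information from previous frames") boils down to accumulating a history buffer. A minimal sketch, assuming a plain exponential blend: real temporal upscalers also reproject the history with motion vectors and reject mismatched samples, and the `alpha` weight here is an illustrative choice, not a value FSR uses.

```python
# Toy temporal accumulation: blend each new (noisy) frame into a history
# buffer. Low alpha keeps more history -> smoother, more stable image;
# high alpha trusts the current frame -> more responsive but noisier.

def accumulate(history, current, alpha=0.25):
    """Exponentially blend the current frame into the accumulated history."""
    return [[(1.0 - alpha) * h + alpha * c for h, c in zip(hrow, crow)]
            for hrow, crow in zip(history, current)]

# A stream of noisy 1x1 "frames" of a static scene whose true value is 0.5:
frames = [[[0.4]], [[0.6]], [[0.5]], [[0.45]], [[0.55]]]
history = frames[0]
for frame in frames[1:]:
    history = accumulate(history, frame)
```

After a few frames the history converges toward the true value, which is why temporal techniques can recover detail and stability that a single-frame filter cannot.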

FSR's advantage is that it doesn't require tensor cores or specialized compute; it's cheap and it runs on everything... It could even run on the Xbox 360 if a developer wanted.
Current PS4 and Xbox One games are even leveraging it, which works well as Graphics Core Next is a very compute-centric GPU architecture.

However... The reason why FSR exists is because AMD's GPUs are a few generations behind nVidia, so AMD needed to "invent" an approach that would run on its existing technology until it scaled up hardware that could take a machine learning approach.

DLSS and PSSR use machine learning, which is an entirely different and superior approach. PSSR is still a generation behind DLSS, but there are massive gains over FSR.

FSR has its place, no doubt... And it's absolutely brilliant on handhelds/integrated graphics due to how cheap it is to implement; it doesn't require additional expensive silicon.

How come AMD can't use machine learning on their GPUs until they scale up their hardware, but Sony, using an AMD APU, can use machine learning and get way better results on AMD hardware than AMD does with FSR? Clearly the hardware is capable of it if Sony is pulling it off.

I'm far from an expert, but I think AMD means well. FSR works on almost all hardware. It is more versatile, but being open source and hardware-agnostic means lower image quality.