Chazore said: Also, completely forgot to bring it up in my original post, but when AMD came out with their first two iterations of FSR, they extended that olive branch to older Nvidia card users like myself. Now no more than 2 years later, they have taken back that branch, snapped it in half and say "our latest cards only, just like how Nvidia does it, or fuck off".
It's just so weird to see AMD continuously shooting themselves in the foot time and time again. "Here Nvidia users, have something to tide you over", then it's "we're the $50 less company, oh and no FSR for you Nvidia users", like how am I supposed to take either of those as net positives as a consumer?
Depends on the hardware requirements of FSR4.
FSR 1, 2 and 3 don't require specialized hardware like Tensor cores, which is why they work on pretty much everything: they're relatively simple algorithms that can be computed on the regular shaders.
RDNA3 can perform "tensor" operations (which are mostly just matrix ops leveraging INT4, INT8, BF16 and FP16 for inference), but it lacks dedicated hardware for them...
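To illustrate what I mean by "mostly just matrix ops at low precision": BF16 is essentially FP32 with the bottom 16 bits of the mantissa dropped, which is why hardware without dedicated matrix units can still emulate it on ordinary shaders. Rough C++ sketch, purely for illustration (real implementations would round rather than truncate, and none of this is AMD's actual code):

```cpp
// Sketch: bfloat16 is just float32 with the low 16 bits removed.
#include <cstdint>
#include <cstring>
#include <cstdio>

// Truncate a float to bfloat16 precision (keep sign, exponent, top 7 mantissa bits).
// Proper conversion uses round-to-nearest-even; truncation is the simplest emulation.
float to_bf16(float x) {
    uint32_t bits;
    std::memcpy(&bits, &x, sizeof(bits));
    bits &= 0xFFFF0000u;               // zero the low 16 bits
    std::memcpy(&x, &bits, sizeof(bits));
    return x;
}

int main() {
    float a = 3.14159265f;
    printf("fp32: %.7f  bf16: %.7f\n", a, to_bf16(a));
    return 0;
}
```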
And this is where RDNA4 could deviate significantly... It's set to offer FP8 operations, where RDNA3 would likely need to fall back to BF16... And that is probably what broke the camel's back with regard to FSR4 support.
It sucks, but we will need to see whether they roll FSR4 into GPUOpen; then people could get it working on other GPUs that don't support FP8 operations, or perhaps someone will write a BFloat16 fallback for older hardware.
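If a fallback did happen, I'd expect it to boil down to a capability check that picks FP8 where the hardware has it and drops to BF16 otherwise, at roughly double the weight/activation footprint and extra shader time. A rough sketch of the idea; every name here is hypothetical, not AMD's actual API:

```cpp
// Hypothetical sketch: choosing an inference precision for an FSR4-style
// upscaler based on what the GPU reports. Names are made up for illustration.
#include <cstdio>

enum class MatrixPrecision { FP8, BF16, FP16 };

struct GpuCaps {
    bool has_fp8_matrix;   // e.g. RDNA4-class FP8 matrix support
    bool has_bf16_matrix;  // e.g. RDNA3-class BF16 matrix support
};

// Pick the cheapest precision the hardware supports; FP8 halves the
// footprint relative to BF16, which is why the fallback costs more
// bandwidth and shader time on older cards.
MatrixPrecision pick_precision(const GpuCaps& caps) {
    if (caps.has_fp8_matrix)  return MatrixPrecision::FP8;
    if (caps.has_bf16_matrix) return MatrixPrecision::BF16;
    return MatrixPrecision::FP16;  // plain shader-math fallback
}

int main() {
    GpuCaps rdna4_like{true, true};
    GpuCaps rdna3_like{false, true};
    printf("RDNA4-like path: %d\n", static_cast<int>(pick_precision(rdna4_like)));  // FP8
    printf("RDNA3-like path: %d\n", static_cast<int>(pick_precision(rdna3_like)));  // BF16
    return 0;
}
```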
There is a third alternative... Make a Radeon "tensor" accelerator as a separate add-in card for AI upscaling and inference that works with everything... But I would rather see the return of dedicated physics than more AI stuff.
--::{PC Gaming Master Race}::--