Pemalite said:
Jizz_Beard_thePirate said:

Sure, it comes with trade-offs, but let the users decide whether or not the trade-offs are worth it. From the reviews of the leaked version, it's a pretty big improvement in image quality even when run on RDNA 3 via INT8, compared to FSR 3, which is currently the only official choice.

And sure, Radeon hasn't promised anything to anyone, so the logic of "you got what you paid for" certainly applies. But personally speaking, when a company like Nvidia gives 2060 users from 2019 the option to run the latest DLSS models while Radeon won't give those who spent $1000 on a 7900XTX the time of day, even though Radeon had plenty of marketing around the so-called "AI capabilities" of RDNA 3... Yeah, it really shows how each company supports its user base, and it affects the reputation of each company.

Radeon should be working towards a "we are better than Nvidia" reputation in any way possible given their single-digit market share. But if they are going to keep acting like they have in the past year, they absolutely deserve that market share imo.

It does come with tangible benefits. But I would argue that you could just use XeSS anyway.

The difference with the 2060 is that... nVidia included tensor cores, dedicated hardware units for inference tasks, something AMD never did with GCN or RDNA 1/2/3 hardware.
Anything you run on GCN/RDNA 1-2-3 takes resources away from something else.

Still, they could run things on the FP16 hardware or the INT hardware, but if it takes away resources from RDNA4 or newer... then they are just better off making it open source and letting the community manage it.

RDNA4 though has been terrific, one of the best upgrades I have done in recent years from a price/performance/power perspective relative to the competition... And I don't even use the A.I upscaling junk. - RDNA2, which I had prior, was getting long in the tooth but had terrific support for the duration of its life.

Yeah, you could use XeSS, but why should that be the only option, or even an option in the first place, when we know FSR4 works? Did RDNA 3 users pay Intel when they spent $1000 on a 7900XTX? Or $500 when they got a 7800XT? No. In fact, if anything, XeSS shows that if Radeon actually tried, they could come up with a solution that works on RDNA 3 and even RDNA 2.

And yes, I understand that Turing had Tensor cores, but the point is that both Nvidia and Radeon, with Turing and RDNA 3, started making a big spiel about AI. The difference is that Nvidia actually showed Turing users that its AI spiel wasn't for nothing. And yes, it takes away resources from other things, but again, we know exactly how well it works: every review that tested FSR4 on RDNA 3 said it's well worth the performance hit vs FSR3, because while FSR4 won't give you as much FPS as FSR3, it will still give you more performance than Native with genuinely great image quality on top.

They can make multiple versions of FSR4: one that runs on INT8 and another that runs on FP8. If Intel can make multiple versions of XeSS, one running on dp4a for everyone else and one on XMX for Intel's own hardware, then surely Radeon can make multiple versions of FSR4 for its own hardware, right? Surely it can't be too much to ask for Radeon to support its own products like Nvidia and Intel do?
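To make the XeSS comparison concrete, here is a minimal sketch of the kind of runtime backend selection being described: prefer dedicated matrix hardware when present, fall back to a packed-INT8 (dp4a-style) path on shader cores, then to FP16. All names here are hypothetical illustrations, not actual XeSS or FSR4 APIs.

```python
# Hypothetical sketch: how an upscaler could pick a code path at runtime
# based on GPU capabilities. None of these names come from a real SDK.
from dataclasses import dataclass

@dataclass
class GpuCaps:
    has_matrix_units: bool  # dedicated AI blocks (e.g. Intel XMX-style units)
    has_dp4a: bool          # packed INT8 dot product on the shader ALUs
    has_fp16: bool          # half-precision shader math

def pick_backend(caps: GpuCaps) -> str:
    """Prefer dedicated hardware; fall back to shared shader resources."""
    if caps.has_matrix_units:
        return "matrix-int8"  # fastest: no contention with rendering work
    if caps.has_dp4a:
        return "dp4a-int8"    # runs on shader ALUs, shares them with the frame
    if caps.has_fp16:
        return "fp16"         # widest compatibility, biggest performance hit
    return "disabled"

# An RDNA 3-class GPU in this model: no matrix units, but dp4a and FP16
print(pick_backend(GpuCaps(False, True, True)))  # -> dp4a-int8
```

The point of the tiered fallback is exactly the trade-off discussed above: the lower tiers cost shader time that rendering would otherwise use, but they still run.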

And when it comes to RDNA 2 getting terrific support... like with what? Driver updates, the most basic thing any manufacturer can do these days? A basic upscaler disliked by every reviewer? A frame gen feature with frame pacing issues that still haven't been fixed after 2+ years? And yeah, RDNA 4 is getting great support now since it's their latest and greatest, but are they gonna treat RDNA 4 like RDNA 3 once RDNA 5 comes out?

PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850