Pemalite said:
Cyran said:

A little correction: while it's true RDNA 2 did not have anything like Tensor cores, RDNA 3 did add Matrix cores, which are AMD's version of Tensor cores. Nvidia's 20 and 30 series Tensor cores did not support FP8 either, yet those users are still given the option to use DLSS 4.5. There is a trade-off of roughly a 30% performance drop, since they don't have native FP8 support. There's no reason AMD couldn't have given RDNA 3 users the option of that same trade-off, since the Nvidia 20/30 series was in the same position hardware-wise that RDNA 3 is.

I will have to correct your correction.

RDNA 3 did not have dedicated Matrix "cores" like CDNA in AMD's Instinct line. Instead, AMD included Matrix Accelerators within the Compute Units that work alongside the vector units, essentially borrowing the resources that would otherwise be used by the FP16 units.
There is still the issue of contention within the CUs, since the schedulers and cache are needed for other tasks; it's a shared-resource approach.
However, it still doesn't support FP8... And that's the big issue.
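The FP8 gap is roughly this: hardware without native FP8 can store the narrow values, but it has to upcast them to FP16 before doing the math, paying conversion cost and running at FP16 rate. Here's a minimal NumPy sketch of that idea; the e4m3 quantizer is my own crude approximation for illustration, not any vendor's implementation:

```python
import numpy as np

def quantize_to_fp8_e4m3(x):
    """Crude simulation of FP8 e4m3 quantization: clamp to the format's
    range and round the significand to 4 bits (1 implicit + 3 stored).
    Illustrative only; real FP8 subnormal/NaN handling is not modeled."""
    x = np.clip(x, -448.0, 448.0)          # e4m3 max normal is +/-448
    m, e = np.frexp(x)                     # x = m * 2**e, m in [0.5, 1)
    m = np.round(m * 16.0) / 16.0          # keep 4 significant bits
    return np.ldexp(m, e)

# Weights/activations stored at "FP8" precision, but the math runs in
# FP16, mirroring hardware that must upcast each operand.
a = quantize_to_fp8_e4m3(np.random.randn(16, 16)).astype(np.float16)
b = quantize_to_fp8_e4m3(np.random.randn(16, 16)).astype(np.float16)
c = a @ b   # executes at FP16 rate, plus conversion overhead
```

With native FP8 the multiply-accumulate itself runs on the narrow operands, which is where the throughput advantage (and the ~30% gap on emulating hardware) comes from.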

RDNA 2 could also do Matrix multiplications in its vector units in order to support Machine Learning.

...And that is partly the issue here: RDNA 1 (which couldn't do it at all), RDNA 2, RDNA 3, and RDNA 4 all handle Matrix operations differently. That's AMD's own fault, but RDNA 4 has at least aligned itself with the general industry trend.

And I agree that AMD could support FSR4 on older Radeon architectures by bifurcating the upscaler. But if we are talking about what's best for the consumer, the best approach would be for AMD to make FSR4 open source, let the community maintain it, and focus all their resources on FSR5.

In the end though, I have never bought a GPU with the idea that I am going to get some "new" feature years down the line; it's simply unrealistic. You judge the hardware by how it presents at release and at the time of purchase.

I.E. when S3 promised to provide TnL on its Savage line and never did, as the TnL unit was buggy.

Fair enough, they added AI hardware acceleration in the CUs instead of separate cores like they did in CDNA 3 and CDNA/RDNA 4. Personally, having some kind of long-term support matters to me, as I tend to keep a GPU for a number of years.

Considering AMD's recent history with GPUs, and the fact that RDNA 5 is going to be a major architecture change according to many rumors, I would personally be very worried that as soon as RDNA 5 is out, AMD will basically abandon RDNA 4 and earlier when it comes to driver optimizations and new features. Add in my guess that integrated GPUs are going to skip RDNA 4 and go straight to RDNA 5; that is a guess, but based on the roadmaps I've seen I think it's a good one, and it's not like they haven't skipped generations before on their integrated GPUs. On top of that, the PS6 and the next Xbox will be RDNA 5.

Shit happens, like the S3 example, but that's different: it's choosing not to support TnL versus not being able to. I'm fine with an architecture shift preventing past GPUs from supporting some new feature once in a while, but not every generation.

I'm hoping RDNA 5 will be AMD's GPU Zen moment, one that gets them to support RDNA 5 and future GPUs more like Nvidia does. On paper, from all the rumors, RDNA 5 is looking very good, but personally I'm not taking a chance on an AMD GPU till RDNA 5 is out. Depending on how they handle RDNA 5, I would be willing to go AMD over Nvidia.

I feel the same way about Intel with CPUs: I'm not touching anything from them till Nova Lake (desktop). Depending on how they handle Nova Lake, they could get me back.