JEMC said:
True. But if they only do entry-level cards (if $200 can be called that), the perception most people will get is that they only make good-enough products, and that can harm them in the future if/when they ever try to put up a fight in other segments. It's like people arguing that AMD should always do higher-end cards, even if they don't sell, because not competing in that segment damages their image.
The problem with Radeon competing in the high end is that they never seem to do it consistently. They have these one-hit wonders and then spend the rest of the successive generations sliding down in GPU segments:
RDNA 2: Top "102" (GA102) competitor
RDNA 3: Top "103" (AD103) competitor
RDNA 4: Mid "203" (GB203) competitor
Personally, instead of Radeon investing in designs that don't seem to work properly, like MCM with RDNA 3 and HBM with GCN, they need to figure out what works and scale it up. Otherwise they are largely wasting R&D on failed projects such as HBM, which is no longer used in consumer GPUs, and by the looks of it neither will MCM be anytime soon, or maybe ever if AI is used to generate frames. Whereas with Nvidia, their R&D into ray tracing, Tensor Cores and AI is paying off many times over.
I think if they had scaled RDNA 2 up to, say, 120 CUs on 5nm for RDNA 3, they probably would have achieved similar if not better results than going MCM. Then, had they put that R&D into their own Tensor core equivalent, ray tracing and AI, RDNA 3 likely would have competed against the 4090 in raster instead of the 4080, and in features as well. But that's just theory of course, where it's easier said than done.
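As a rough sanity check on that 120 CU idea, here's a back-of-envelope sketch in Python. The 6900 XT's 80 CUs scaled to 120 is a 1.5x increase in compute units, but GPUs scale sub-linearly with CU count (memory bandwidth, front-end limits), so the scaling_efficiency factor below is an assumed guess, not measured data:

```python
# Hypothetical back-of-envelope estimate, not a real performance model.
# Baseline: RX 6900 XT with 80 CUs (RDNA 2). The 0.8 efficiency factor
# is an assumption to model diminishing returns from adding CUs.

def estimate_relative_performance(base_cus: int, target_cus: int,
                                  scaling_efficiency: float = 0.8) -> float:
    """Estimate relative raster performance from CU count alone."""
    ratio = target_cus / base_cus
    # Only the added CUs are discounted by the efficiency factor.
    return 1.0 + (ratio - 1.0) * scaling_efficiency

# 80 CUs (6900 XT) -> hypothetical 120-CU RDNA 2 derivative on 5nm
print(f"{estimate_relative_performance(80, 120):.2f}x the 6900 XT")  # ~1.40x
```

Under those assumed numbers a 120 CU part lands around 1.4x the 6900 XT before any clock gains from the 5nm node, which is roughly the ballpark where the scaled-up monolithic argument comes from.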