Chazore said:

Going by the chart data, they kept their momentum up for around 5-6 GPU gens (Nvidia gens, not AMD), and then they just gave up. It seems they gave up big time around the GTX 900 series, which is nearly a decade ago now, and that's a hella long time in the tech space, let alone in console/hardware years.

Nvidia are technically fucking themselves atm by bolting extras onto their branding (like RTX or RTX IO, or how this gen's numbering is completely muddled up), but they still hold general consumer mindshare, just not the hardcore dedicated crowd (since most of us seem to be aware that it's 4090 or nothing this gen).

I just don't see AMD bothering at this point, tbh. Look at the chart: they haven't cared about keeping up for nearly a decade now. That's just bad, dire even, to see AMD go on that long and not do shit about it.

If Intel doesn't do something, we're really going to get fucked hard over the next 5+ years.

I blame a lot of AMD's woes on Graphics Core Next.

When you release the Radeon HD 7870, then re-release it as the HD 8870, then as the R9 270X, then as the R9 370X... you know you are just being shafted.

Or take the Radeon 7510 > R7 240 > R5 330 > R5 435 > Radeon 520 chain... that is the same GPU being on the market for 5+ years.

They did get a *really* good return on investment out of it, since one design can last half a decade, but they lost revenue, profit and marketshare in the process.

Jizz_Beard_thePirate said:

I think AMD's current issue is that they are resource-constrained due to AI hype in the datacenter. At least, that's the only explanation I can think of for why they basically skimped out on a generation that should be an easy win for them in virtually every area under a 4090. I don't think the industry was prepared for the AI boom to come so quickly, and because of this, AMD and Intel are trying to play catch-up against Nvidia in the datacenter.

You look at something like ROCm for CDNA. For like the past decade or so, that software stack has largely been a shit pickle that barely worked properly. In the past year or so, the advancements have been so quick that, while it hasn't caught up to Nvidia's software stack quite yet, it is at a point where you could use it and it is competitively good enough, as long as you are using it with CDNA chips. AMD's and Intel's presentations over the last year have largely been about AI as well, similar to Nvidia's. You don't hear about it quite as much as with Nvidia, because Nvidia loves their AI in the gaming space with DLSS, VSR, etc., while Radeon has largely ignored AI in gaming. In the datacenter, though, they have been getting increasingly focused on it, and rightfully so. That, along with continued advancements in Epyc to make sure Intel isn't catching up anytime soon, means that something has gotta give.

And what gave is Radeon. I think Radeon is waving the white flag this generation and not really fighting at all, because AMD needs to catch up to Nvidia in AI in the datacenter space. The MI300X is going to come out like next year, while H100s have been selling in droves this year. Nvidia's roadmap says they will be releasing the next generation of their datacenter GPUs next year, so Radeon needs to start advancing quickly in that space. In gaming, though, it's a stark contrast to RDNA 1 and RDNA 2, which were very competitive in their respective categories.

Considering CDNA is based on Graphics Core Next, which has been around for over 10 years, you would think the software stack for that architecture would be very mature by now.

In saying that, GCN was always stupidly good at compute, so that architecture is well positioned to capitalise on AI. But AMD is fumbling the software ball.

They just don't have the engineers to develop two architectures in tandem (even if RDNA is regressing and adopting VLIW-like design philosophies), along with the software stacks to go with them.
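For a sense of what that stack involves at the bottom, here's a minimal HIP vector-add sketch (HIP being AMD's CUDA-like C++ API on top of ROCm; this assumes a working ROCm install with hipcc, and is purely illustrative, not anything CDNA-specific):

```cpp
// Minimal HIP vector add; build with: hipcc vec_add.cpp -o vec_add
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

// The simplest kind of kernel the ROCm stack has to compile and
// launch correctly before anyone trusts it with real AI workloads.
__global__ void vec_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n);

    // Device allocations and host-to-device copies, mirroring CUDA 1:1.
    float *da, *db, *dc;
    hipMalloc(&da, bytes);
    hipMalloc(&db, bytes);
    hipMalloc(&dc, bytes);
    hipMemcpy(da, ha.data(), bytes, hipMemcpyHostToDevice);
    hipMemcpy(db, hb.data(), bytes, hipMemcpyHostToDevice);

    const int block = 256;
    hipLaunchKernelGGL(vec_add, dim3((n + block - 1) / block), dim3(block),
                       0, 0, da, db, dc, n);

    hipMemcpy(hc.data(), dc, bytes, hipMemcpyDeviceToHost);
    std::printf("c[0] = %.1f (expect 3.0)\n", hc[0]);

    hipFree(da); hipFree(db); hipFree(dc);
    return 0;
}
```

Getting even basic plumbing like this to build and run reliably across GPU generations is exactly the kind of thing ROCm historically struggled with.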

In saying that... I do prefer AMD's driver control panel over Nvidia's; Nvidia's reminds me of something from the 90s.

hinch said:

Yeah, it's also a bad look because they're doing the same thing as Nvidia with the naming shenanigans and keeping the status quo of high prices. Only they aren't really competitive with Nvidia this gen in the dGPU space when it comes to overall performance, efficiency, or supported feature sets. Had they been aggressive with pricing on the RX 7000 series cards and marketed them right, saying "look, we're cheaper (by some margin) and offer way more VRAM", people would sweep them up, much like RDNA 2 is doing now in the lower end/mid range. But nope. They'd rather be greedy and have their cake and eat it too lol.

The Radeon RX 7600 is just as bad a product as the GeForce RTX 4060.

It cannot soundly beat the RX 6650 XT, it still only has 8GB of VRAM, and it's more expensive... Compared to the vanilla 6600 it's a better product, but it's also $100 AUD more expensive, so the price/performance goalpost never got shifted.
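Back-of-the-envelope on why that goalpost didn't move (the prices and the ~30% uplift below are my own ballpark assumptions, not measured figures):

```cpp
#include <cstdio>

int main() {
    // Ballpark, assumed numbers: vanilla 6600 vs 7600 in AUD.
    const double price_6600 = 329.0;  // assumed street price
    const double price_7600 = 429.0;  // assumed, ~$100 AUD more
    const double perf_6600  = 1.00;   // baseline
    const double perf_7600  = 1.30;   // assumed ~30% faster

    const double ppd_6600 = perf_6600 / price_6600;
    const double ppd_7600 = perf_7600 / price_7600;

    // A ratio of ~1.00 means perf-per-dollar didn't improve at all.
    std::printf("perf/$ ratio (7600 vs 6600): %.2f\n", ppd_7600 / ppd_6600);
    return 0;
}
```

With those assumed numbers the ratio comes out at roughly 1.00: you pay ~30% more for ~30% more performance, i.e. no generational progress.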

Had they released the 7600 on a 96-bit memory bus, they *could* easily have had 12GB of RAM with 250GB/s of bandwidth or more. Throw in an extra 16-32MB of Infinity Cache to make up for the narrower bus, come in at $199, and they would have sold GPUs like hotcakes.
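The napkin math behind that (the 21 Gbps GDDR6 speed and the clamshell chip layout are assumptions on my part, not any announced config):

```cpp
#include <cstdio>

int main() {
    // GDDR6 bandwidth: bus width (bits) / 8 * per-pin data rate (Gbps).
    // 21 Gbps modules assumed; 18-20 Gbps parts land a little lower.
    const int bus_bits = 96;
    const double gbps_per_pin = 21.0;
    const double bandwidth_gbs = bus_bits / 8.0 * gbps_per_pin;  // ~252 GB/s

    // Capacity: a 96-bit bus is three 32-bit channels. With 16Gb (2GB)
    // chips run in clamshell (two chips per channel), 6 x 2GB = 12GB.
    const int channels = bus_bits / 32;
    const int gb_total = channels * 2 /* chips per channel */ * 2 /* GB each */;

    std::printf("%d-bit @ %.0f Gbps -> %.0f GB/s, %d GB possible\n",
                bus_bits, gbps_per_pin, bandwidth_gbs, gb_total);
    return 0;
}
```

That prints 252 GB/s and 12 GB, which is where the "250GB/s or more" figure comes from.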



--::{PC Gaming Master Race}::--