hinch said:
Jizz_Beard_thePirate said:

Yea, Nvidia is effectively dominating the industry and cucking the competition through the sheer force of their software advancements. While Radeon shifts focus to getting ROCm to catch up to Nvidia in the datacenter space as the AI boom runs at full throttle, Nvidia continues to advance both their datacenter and gaming lineups with more and more software features, since they have the resources to do so.

This is what happens when you have a CEO like Jensen who doesn't let the company rest on its laurels. While AMD was focused on climbing out of near bankruptcy and making Ryzen a success (which basically led to Radeon getting cucked), and Intel had leadership that would rather smell its own butthole than actually bring innovation to the space... Nvidia has been building toward the AI revolution on the backbone of CUDA the entire time. Now they are reaping the rewards while neither Intel nor AMD really has the software stack to compete. (Yes, Intel did have innovations like Optane, NUC, etc., but nothing truly groundbreaking imo.)

In the gaming space, we are pretty much at the point where buying a GPU comes down to: how many features are you willing to lose? Because that list keeps growing every six months. It's hard for Radeon to catch up because AMD wants that datacenter money, and it's hard for Intel to catch up because they just started. So it's basically getting to the point of: buy the best Nvidia GPU you can with the money you have, or buy a console.

True, it's insane to see how much Nvidia's bet on AI has paid off over the last few generations. Most of us thought Turing was a lackluster POS compared to Pascal from a value/performance and consumer point of view (DLSS was kind of crap at the time, and RT wasn't ready for prime time). But they were really forward-thinking in adding dedicated AI cores to their GPUs and doubling down on them. And their focus on software only got better as the years went on. Which is why I find it strange that, even though AMD's main focus was on Epyc and Ryzen to win that server and consumer business and market share, they did not see this coming. They've had so many years to respond, but nothing. Until quite recently.

What I'm hoping is that the next big RDNA (5?) launch from AMD will be their Zen moment: going big and all out with an MCM design plus AI that rivals Nvidia's best or even betters it, AND having a software stack ready that competes with Nvidia's, plus their own new technologies.

And yeah, the gap keeps growing, and it's not like RT and DLSS are small things anymore. I just hope that whatever AMD is working on next at least gets them on par with those; otherwise it's just... like you said, you might as well buy Nvidia. Unless you don't mind missing out on a few things and don't want to pay the Nvidia tax.

Yea, pretty much. Radeon has two main issues they need to fix, and Nvidia has one big issue that, if they don't overcome it, will give Radeon a chance to take the crown on a lot of fronts.

Radeon's first issue is the interconnect technology for GCDs when it comes to gaming. This isn't something Radeon can just fix, because you need very exotic and expensive materials to make the interconnects not shit the bed, but a ton of R&D is being put into it not just by Radeon but by Intel, TSMC, etc. The second issue is their software stack. Their datacenter GPUs with chiplet designs are pretty great, but most companies won't even consider them, because in the race to AI, the more time you waste trying to get software working, the bigger the time and first-to-market advantage the competition gets. GPUs at the end of the day are tools, and if with Radeon you are fighting ROCm issues while with Nvidia the software stack is far more feature-rich, stable, and well integrated into industry-standard software... it's easy to see why companies are spending billions on H100s.
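To put the software-stack point in perspective: the kernel code itself isn't really the moat. A toy CUDA kernel like the sketch below ports to AMD's HIP almost verbatim (AMD's hipify tools mostly just rename the runtime calls); the gap buyers actually care about is everything around it: libraries, framework integration, tooling, stability. This is just an illustrative vector add, not anyone's production code.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Toy kernel: add two vectors element by element. Under HIP this kernel
// body is unchanged; the host calls below become hipMallocManaged, etc.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    float *a, *b, *c;
    // Unified (managed) memory keeps the sketch short.
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    const int threads = 256;
    vecAdd<<<(n + threads - 1) / threads, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);  // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Porting that is the easy part. The hard part is whether the equivalents of Nvidia's libraries, profilers, and framework backends exist and just work on the other side, and that is exactly where CUDA's years of head start show.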

But Nvidia does have one big issue, and that is the monowall. The largest die area a monolithic GPU can have is around 800mm² (the lithography reticle limit caps a single exposure at roughly 858mm²), and that is where all of Nvidia's top-end datacenter GPUs have been. Gaming GPUs have been around 600mm², with the exception of the 2080 Ti at roughly 750mm². Up until now, node shrinks have come at a steady enough pace, but they are slowing down drastically. This means Nvidia will soon hit a wall on how many more transistors they can fit inside that die area. So you have AMD, who has been at the forefront of chiplet design, and even Intel coming out with chiplet-based GPUs in the datacenter, while Nvidia has been lagging behind and will soon hit a brick wall. So far it's been fine because they can use their NVLink switches to "connect" GPUs together, but that is more of a bandaid than anything.
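Rough numbers make the monowall concrete. Below is a host-only back-of-the-envelope sketch (same CUDA/C++ toolchain as above, no GPU needed) using public die sizes and transistor counts; the ~858mm² figure is the commonly cited single-exposure reticle limit, and the last column is a naive extrapolation, not a real projection.

```cuda
#include <cstdio>

// Public die-size / transistor figures for recent big Nvidia dies.
struct Die { const char* name; double area_mm2; double transistors_b; };

int main() {
    const Die dies[] = {
        {"TU102 (2080 Ti)", 754.0, 18.6},
        {"GA102 (3090)",    628.0, 28.3},
        {"AD102 (4090)",    608.0, 76.3},
        {"H100 (Hopper)",   814.0, 80.0},
    };
    const double reticle_mm2 = 858.0;  // ~26mm x 33mm single-exposure limit

    for (const Die& d : dies) {
        double mtr_per_mm2 = d.transistors_b * 1000.0 / d.area_mm2;
        // Naive ceiling: same density, die blown up to the reticle limit.
        double ceiling_b = mtr_per_mm2 * reticle_mm2 / 1000.0;
        printf("%-16s %5.0f mm^2  %5.1fB  %6.1f MTr/mm^2  ~%5.1fB max\n",
               d.name, d.area_mm2, d.transistors_b, mtr_per_mm2, ceiling_b);
    }
    return 0;
}
```

The takeaway: once a die is already reticle-sized, the only way a monolith gains transistors is through density, and with node shrinks slowing, that ceiling barely moves. Chiplets sidestep the limit by spreading the silicon across multiple dies.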

So that is basically Radeon's chance. If the interconnect technology advances enough for GCDs to work in gaming, if Radeon's software stack catches up to (or gets close enough to) Nvidia's in the datacenter, and if Nvidia keeps lagging on chiplet design and hits that monowall... Radeon can certainly dominate the GPU market through sheer brute force.

But as they say, never bet against Nvidia, because Jensen has proven he knows what he is doing. And it's not like Nvidia isn't putting a lot of R&D into chiplets themselves. So we will see what happens.

Last edited by Jizz_Beard_thePirate - on 23 August 2023

                  

PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850