Jizz_Beard_thePirate said:

Lol, if true, can't say I am surprised after RDNA 3. 

RDNA3 is starting to come into its own now, after extensive driver updates.
...which is why AMD is probably a little more confident in rolling out the rest of the product lineup now.

Jizz_Beard_thePirate said:

Honestly, I think that while the chiplet design was the saving grace of Ryzen, it was a death knell for Radeon. In the CPU space, Intel was too cocky and their foundries hit a brick wall, struggling to advance past 14nm with good yields. They also had very short socket support. While first-gen Ryzen wasn't competitive in gaming at the top end, the i5 7600 (4 cores, 4 threads) vs the Ryzen 5 1600 ended up going in Ryzen's favor as games became better optimized for multithreading. Because Intel had a bad rep for churning out quad cores for 4-5+ years, people really wanted something new in the CPU space, and with promises of socket support through 2020 and much better power efficiency, a lot of people were willing to bite despite some initial issues.

AMD had the right product at the right time with Ryzen; it's something they have managed to do several times...

E.g. they took Intel by surprise in the race to 1GHz, so Intel pushed Coppermine as far as it could go, to the point where they introduced bugs requiring a revision of that core, which is how we ended up with Tualatin; both eventually got replaced by NetBurst.

Then again with Clawhammer: efficiency over clockspeed. After that AMD stagnated with its core design, adjusting the HyperTransport link, caches, frequencies and number of CPU cores until Core 2 got introduced, which took the best parts of the P6 core and NetBurst with a few extra twists and dominated until Zen.

Chiplets are an amazing cost-reduction exercise: you get more working pieces of silicon per wafer, but they do push a ton of extra cost onto packaging and design.
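
To put a rough number on the yield side of that argument, here's a quick sketch using the textbook Poisson yield model; the wafer size, defect density and die sizes below are illustrative assumptions, not actual TSMC/AMD figures:

[CODE]
# Rough sketch of why smaller dies yield better (Poisson model:
# yield = exp(-die_area * defect_density)). All numbers here are
# illustrative assumptions, not foundry figures.
from math import exp, pi, sqrt

WAFER_DIAMETER_MM = 300
DEFECTS_PER_CM2 = 0.1            # assumed defect density

def dies_per_wafer(die_area_mm2):
    # Common approximation: gross dies minus edge losses.
    d = WAFER_DIAMETER_MM
    return pi * (d / 2) ** 2 / die_area_mm2 - pi * d / sqrt(2 * die_area_mm2)

def good_dies_per_wafer(die_area_mm2):
    yield_rate = exp(-(die_area_mm2 / 100) * DEFECTS_PER_CM2)  # mm^2 -> cm^2
    return dies_per_wafer(die_area_mm2) * yield_rate

# Hypothetical 600 mm^2 monolithic GPU vs. 8 x 75 mm^2 chiplets adding up
# to the same silicon area (and needing 8 good chiplets per GPU).
print(good_dies_per_wafer(600))        # ~50 usable monolithic GPUs per wafer
print(good_dies_per_wafer(75) / 8)     # ~100 usable chiplet GPUs per wafer
[/CODE]

In that toy example the chiplet route roughly doubles the usable GPUs per wafer, and that saving is exactly what then has to pay for the extra packaging and design work.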

Jizz_Beard_thePirate said:

With Radeon, that unique advantage that Ryzen provided is simply not there. MCM, as stated by AMD themselves, did not have enough bandwidth between the interconnects to make multi-die GCDs work for gaming, so they settled on MCDs instead. But all this R&D spending on making MCM designs work clearly resulted in other issues. The GCD isn't all that performant compared to a 4090, whereas last gen a 6900XT could take on a 3090 in raster. They needed to make a separate driver branch (according to yuzu devs) specifically for RDNA 3, likely because of the MCM design, while RDNA 2 and the rest are all on a unified driver branch. And RDNA 3 is very inefficient despite being on the same node as Ada, and it has abnormally high idle power as you increase the resolution/refresh rate and the number of monitors.

This was something I said many years ago, when Zen popularized MCM as a chip-design approach and people wanted it in GPUs.

GPUs are just different and require orders of magnitude more bandwidth than the fabric can sustain... And even if AMD introduces a fabric that is enough for today, GPUs are constantly taking large strides in memory bandwidth, so it's likely not a good long-term solution.
For example, RDNA1 topped out at 448GB/s of memory bandwidth and RDNA3 topped out at 960GB/s; getting a fabric that can keep pace with that is a hard ask, let alone factoring in outlier technologies like Infinity Fabric with 3,400GB/s of bandwidth.
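
For anyone wondering where those two memory bandwidth figures come from, it's just bus width times per-pin data rate (RX 5700 XT: 256-bit at 14Gbps; RX 7900 XTX: 384-bit at 20Gbps):

[CODE]
# bandwidth (GB/s) = bus width (bits) / 8 * per-pin data rate (Gbps)
def mem_bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

print(mem_bandwidth_gb_s(256, 14))   # RDNA1 (RX 5700 XT): 448.0 GB/s
print(mem_bandwidth_gb_s(384, 20))   # RDNA3 (RX 7900 XTX): 960.0 GB/s
[/CODE]

That is the order of traffic any GCD-to-GCD fabric would have to keep up with.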

However, we also need to remember that splitting into MCDs and a GCD actually *introduces* inefficiencies: you need to power those interconnects, which adds heat and power draw in an already hot and power-hungry environment, and that is TDP that could have been spent on higher clockspeeds for more performance. They also increase latencies and decrease bandwidth compared to a consolidated, single-chip design.

Zen today, if it were monolithic rather than chiplet-based, could be faster and use less power, which is a scary thought for Intel...

And this is why it's the wrong approach: you *will* lose if any part of your design introduces a bottleneck or an efficiency reduction, as nVidia is more than happy to make a chip as big as possible and as efficient as possible.

AMD, of course, decided on the MCD approach as it let them break up the bandwidth demands, which was the right call; you still reduce your efficiency, but because memory transactions can be parallelized across the multiple memory chiplets, it allows you to scale the number of chips up or down per product as needed.

But to keep things in perspective, the 7900XTX has a ~300mm² GCD with six ~37mm² MCDs, so in die-size terms it's basically up against the GeForce 4070 Ti, which is also a ~300mm² die; nVidia's chip may be slightly slower outside of RT, but it is far more cost effective.
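
As a back-of-the-envelope comparison of the silicon involved (using the approximate die sizes above, and remembering the MCDs sit on the cheaper N6 node while the GCD is on N5, so raw area isn't the whole cost story):

[CODE]
# Rough silicon totals from the approximate die sizes mentioned above.
gcd_mm2 = 300                      # Navi 31 GCD (approx.)
mcd_mm2 = 37                       # per MCD (approx.), 6 of them
ad104_mm2 = 295                    # AD104 (4070 Ti), approx.

navi31_total = gcd_mm2 + 6 * mcd_mm2
print(navi31_total)                # ~522 mm^2 spread across 7 dies
print(ad104_mm2)                   # ~295 mm^2 in one die
[/CODE]

That extra area and packaging is the gap the yield advantage from the earlier sketch has to make up before the chiplet part actually comes out cheaper.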

And ultimately, if you can build a smaller, more cost effective die, then that will beat any chiplet design. Period.

Jizz_Beard_thePirate said:

And of course, the software stack is nowhere near as comprehensive. We are still waiting for FSR 3 nine months after it was announced, while FSR 2 is losing to XeSS.

And worst of all, because of all those issues, reviewers dunked on RDNA 3 pretty hard, which led to discounts very quickly, which means all those savings likely went out the window. I think if they had stuck with a monolithic design like Nvidia and continued the trajectory they started with RDNA 2, this generation would have been a lot more competitive imo.

Instead, the 4090 will likely be the 1080 Ti of this generation even if it's expensive, as 5000-series prices will likely be going through the roof.

I would argue AMD's driver front-end is far better than nVidia's; it doesn't look like it's been dragged out of the 90's kicking and screaming.

It just lacks features like FSR3; otherwise the drivers themselves are actually really solid.

I personally don't use DLSS or FSR, as there are always some kinds of artifacts I pick up on, like texture/shader shimmer. But I get why people use them; I prefer to just run games raw.



--::{PC Gaming Master Race}::--