
Carzy Zarx’s PC Gaming Emporium - Catch Up on All the Latest PC Gaming Related News

Jizz_Beard_thePirate said:
hinch said:

So we're fucked basically lol. Well, it was a good time while it lasted. Going to be a rough couple of gens.

Lol, if true, can't say I am surprised after RDNA 3. Honestly, I think that while the chiplet design was the saving grace of Ryzen, it was a death knell for Radeon. In the CPU space, Intel was too cocky and their foundries hit a brick wall, struggling to advance past 14nm with good yields. They also had very short socket support. While first-gen Ryzen wasn't competitive in gaming at the top end, the 4-core/4-thread i5-7600 vs Ryzen 5 1600 matchup ended up in Ryzen's favor as games became better optimized for multi-threading. Because Intel had a bad rep for churning out quad cores for 4-5+ years, people really wanted something new in the CPU space, and with promises of socket support through 2020 and much better power efficiency, a lot of people were willing to bite despite some initial issues.

With Radeon, that unique advantage Ryzen provided simply isn't there. MCM, as stated by AMD themselves, did not have enough bandwidth between the interconnects to make multi-die GCDs work for gaming, so they settled for MCDs instead. But all this R&D spending on making MCM designs work clearly resulted in other issues. The GCD isn't all that performant compared to the 4090, while last gen a 6900 XT could take on a 3090 in raster. They needed to make a separate driver branch (according to yuzu devs) specifically for RDNA 3, likely because of the MCM design, while RDNA 2 and the rest are all on a unified driver branch. And RDNA 3 is very inefficient despite being on the same node as Ada, with abnormally high idle power as you increase the resolution/refresh rate and the number of monitors. And of course, the software stack is nowhere near as comprehensive. We are still waiting for FSR 3 nine months after it was announced, while FSR 2 is losing to XeSS.

And worst of all, because of all those issues, reviewers dunked on RDNA 3 pretty hard, which led to discounts very quickly, which means all those savings likely went out the window. I think if they had stuck with a monolithic design like Nvidia and continued the trajectory they started with RDNA 2, this generation would have been a lot more competitive imo.

Instead, the 4090 will likely be the 1080 Ti of this generation even if it's expensive, as 5000 series prices will likely go through the roof.

Yeah, it's clear they have bitten off more than they can chew, whether it's the overly complex engineering that trickled down to the rest of the product, the software, or the marketing. It just feels like they are scrambling this generation. I mean, they can't even be bothered to release the rest of the stack (under Navi 31) because of how little progress they've made, so they're just trying to extract as much money as they can from us consumers by selling off the rest of their old stock. And the announced features that, like you say, are still MIA (FSR 3) seem like a knee-jerk reaction, and the longer they take to release, the more incompetent they look. Heck, FSR was late, as was FSR 2, and now FSR 3. And that's not even looking at how they perform.

It's all so messy. But yeah, it's a massive L from RTG with RDNA 3. They probably would've been better off just iterating on RDNA 2, since that works, adding a few things and moving to a new node, and they'd have achieved similar or better results. Ironically, it most likely would've been more efficient, at least at idle and in less demanding games and loads.

Maybe going after the mid range is the smarter move, as they know it's going to be a David vs Goliath situation regardless of whether the next architecture is competitive, idk. I think going for the value-focused mid-range may be the better move for them. But it's also going to be shit because Nvidia won't be pressured into releasing the best GPUs they can for us consumers in the mid range, heck even the high end (not flagship). Feels like the CPU situation in the 2000s when Intel had no equal, prior to Ryzen's release.

And yeah, the 4090, though highly priced, is a very decent offering and by far the most worthwhile GPU in a long time, considering its specs and performance. That will easily stand the test of time. And considering its success, its successor will be quite a bit more expensive.

Last edited by hinch - on 04 August 2023

hinch said:

Maybe going after the mid range is the smarter move, as they know it's going to be a David vs Goliath situation regardless of whether the next architecture is competitive, idk. I think going for the value-focused mid-range may be the better move for them.

Yea exactly. At least on the bright side, the mid-range should get very competitive, with RDNA 4 and Battlemage both targeting it. The $100-$500 GPU class has been pretty horrid, so if Radeon is going to focus on that area, then they will be going back to volume over margins, which in turn should hopefully mean a Polaris vs Pascal era for entry level to mid-range. Maybe we will get the old RX 480 vs GTX 1060 level of competitiveness, but now with a third player in that segment. The rest of the lineup above that will be Nvidia's territory, and with the AI boom, yea, I can't see Nvidia being too kind to our wallets going forward lol.

Who knows, when PS6 launches, maybe Radeon will have another go at the high end. Hopefully they don't goof it up then.



                  

PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850

To be honest, this could be the right move from AMD to get back on track.

AMD has been so focused on trying to beat Nvidia that they've lost focus on what they were good at: competitive products at lower prices. That's where they excelled, and it's been proven time and time again that Nvidia owns the high-end market, so why not go back to their roots?

So yes, it will be sad not to see an 8900 series of cards trying to compete with Nvidia's 5090/5080, but if they can launch an 8800/8700 card that gives us the same or 90% of the performance of a 5070 Ti at the price of a 5070, and so on, all the better for all of us.

But well, we'll have to wait and see how things develop going forward.



Please excuse my bad English.

Currently gaming on a PC with an i5-4670k@stock (for now), 16Gb RAM 1600 MHz and a GTX 1070

Steam / Live / NNID : jonxiquet    Add me if you want, but I'm a single player gamer.

Jizz_Beard_thePirate said:

PowerColor leaks Radeon RX 7800 XT Red Devil, Navi 32 with 3840 cores and 16GB confirmed

https://videocardz.com/newz/powercolor-leaks-radeon-rx-7800-xt-red-devil-navi-32-with-3840-cores-and-16gb-confirmed

60 CUs? Are they serious? This will perform like a 6800 XT at best, which has 72 CUs. So they are launching a 4070 competitor 5 months after the 4070? Another miss lol

It would have been acceptable as a 7800 non-XT, since at least the number of CUs wouldn't have gone down, and they could then have brought a GPU more comparable to the GRE as a true 7800 XT. But not like this.
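
Quick napkin math on why 60 CUs reads like 6800 XT territory: raw shader throughput scales roughly with CU count times clock speed. The clocks below are assumptions (the 6800 XT's ~2.25 GHz boost is official, the ~2.4 GHz for the 60 CU Navi 32 part is just a guess, nothing is confirmed yet), so treat this as a rough sketch, not a spec comparison.

    def raw_tflops(cus, clock_ghz, flops_per_cu_per_clock=128):
        # 64 shader ALUs per CU x 2 ops (FMA) per clock = 128 FLOP per CU per clock
        # (RDNA 3 can dual-issue for 256 in theory, but games rarely see that)
        return cus * clock_ghz * flops_per_cu_per_clock / 1000

    # Assumed clocks: ~2.25 GHz boost for the 6800 XT, guessed ~2.4 GHz for Navi 32
    print(raw_tflops(72, 2.25))  # 6800 XT       -> ~20.7 TFLOPS
    print(raw_tflops(60, 2.4))   # 60 CU Navi 32 -> ~18.4 TFLOPS

So even with RDNA 3's per-CU improvements it lands right around 6800 XT raw throughput, which is the whole problem.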

Last edited by Bofferbrauer2 - on 04 August 2023

Bofferbrauer2 said:

60 CUs? Are they serious? This will perform like a 6800 XT at best, which has 72 CUs. So they are launching a 4070 competitor 5 months after the 4070? Another miss lol

Every god dang opportunity Nvidia has given them, they decided to cuck themselves. Nvidia legit has all their cards out in the wild and all Radeon needed to do was position their GPUs accordingly and it would have been an easy win. Yet all they decided to do was fuck it up.



                  

PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850


With RDNA 1 the names were less confusing. If they won't compete at the top with RDNA 4, they should go back to that naming scheme.

RDNA 1 vs RDNA 3 naming

RDNA 1               RDNA 3
Radeon RX 5700 XT    Radeon RX 7900 XTX
Radeon RX 5700       Radeon RX 7900 XT
Radeon RX 5600 XT    Radeon RX 7900 GRE
Radeon RX 5600       Radeon RX 7800
Radeon RX 5500 XT    Radeon RX 7700 XT
Radeon RX 5500       Radeon RX 7700
Radeon RX 5300 XT    Radeon RX 7600
Radeon RX 5300       Radeon RX 7500 XT

Last edited by Chicho - on 04 August 2023

Jizz_Beard_thePirate said:

Lol, if true, can't say I am surprised after RDNA 3. 

RDNA3 is starting to come into its own now, after extensive driver updates.
...which is why AMD is probably a little more confident in rolling out the rest of the product lineup now.

Jizz_Beard_thePirate said:

Honestly, I think that while the chiplet design was the saving grace of Ryzen, it was a death knell for Radeon. In the CPU space, Intel was too cocky and their foundries hit a brick wall, struggling to advance past 14nm with good yields.

AMD had the right product at the right time with Ryzen; that is something they have managed to do several times...

E.g., they took Intel by surprise in the race to 1 GHz, so Intel pushed Coppermine as far as it could go, to the point where they introduced bugs requiring a revision of that core, hence why we ended up with Tualatin; both eventually got replaced by Netburst.

Then again with Clawhammer: efficiency over clock speed. Then AMD stagnated with its core design, tweaking HyperTransport, caches, frequency and the number of CPU cores until Core 2 got introduced, which took the best parts of the P6 core and Netburst with a few extra twists and dominated until Zen.

Chiplets are an amazing cost-reduction exercise: you get more working pieces of silicon per wafer, but it does push a ton of extra cost onto packaging and design.
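
To put rough numbers on that, here's a back-of-the-envelope sketch in Python; the 0.1 defects/cm² defect density and the ~530 mm² "what if it were monolithic" die are assumed values for illustration, not AMD figures.

    import math

    def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
        # Classic approximation for whole candidate dies on a round wafer
        radius = wafer_diameter_mm / 2
        return (math.pi * radius**2 / die_area_mm2
                - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

    def poisson_yield(die_area_mm2, defects_per_mm2=0.001):
        # Simple Poisson yield model: bigger dies catch more defects
        return math.exp(-die_area_mm2 * defects_per_mm2)

    def good_dies(die_area_mm2):
        return dies_per_wafer(die_area_mm2) * poisson_yield(die_area_mm2)

    # Hypothetical ~530 mm^2 monolithic design vs a ~300 mm^2 GCD
    # (the six ~37 mm^2 MCDs sit on a cheaper, older node, so they add little cost)
    print(round(good_dies(530)))  # roughly 61 good big dies per wafer
    print(round(good_dies(300)))  # roughly 146 good GCDs per wafer

More than double the good dies per leading-edge wafer, which is exactly the appeal, and none of that shows the packaging and design overhead on the other side of the ledger.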

Jizz_Beard_thePirate said:

MCM, as stated by AMD themselves, did not have enough bandwidth between the interconnects to make multi-die GCDs work for gaming, so they settled for MCDs instead.

This was something I stated many years ago when Zen brought MCM back into the mainstream as a chip design concept and people wanted it in GPUs.

GPUs are just different and require orders of magnitude more bandwidth than the Fabric can sustain... And even if AMD introduces a fabric that is enough for today, GPUs are constantly taking large strides in memory bandwidth, so it's likely not a good long-term solution.
For example... RDNA1 topped out at 448 GB/s of memory bandwidth, while RDNA3 tops out at 960 GB/s; getting a fabric that can keep pace with that is a hard ask, let alone factoring in outlier technologies like Infinity Fabric at 3,400 GB/s of bandwidth.
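
Those two headline numbers fall straight out of data rate times bus width, for what it's worth:

    def mem_bandwidth_gb_s(data_rate_gbps_per_pin, bus_width_bits):
        # GB/s = (Gbit/s per pin * number of pins) / 8 bits per byte
        return data_rate_gbps_per_pin * bus_width_bits / 8

    print(mem_bandwidth_gb_s(14, 256))  # RX 5700 XT:  14 Gbps GDDR6, 256-bit -> 448 GB/s
    print(mem_bandwidth_gb_s(20, 384))  # RX 7900 XTX: 20 Gbps GDDR6, 384-bit -> 960 GB/s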

However, we also need to remember that the MCD and GCD split actually *introduces* inefficiencies: you need to power those interconnects, which adds heat and power consumption in an already hot and power-hungry environment, and that is TDP that could be spent on higher clock speeds for more performance. They also increase latencies and decrease bandwidth compared to a consolidated, single-chip design.
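
For a rough sense of scale on that interconnect power: AMD quotes roughly 5.3 TB/s of peak fanout bandwidth for Navi 31, and the energy-per-bit figure below is an assumed ballpark for organic fanout links, not an official number, so this is only an order-of-magnitude sketch.

    def link_power_watts(bandwidth_tb_s, picojoules_per_bit):
        # watts = bits per second * joules per bit
        bits_per_second = bandwidth_tb_s * 1e12 * 8
        return bits_per_second * picojoules_per_bit * 1e-12

    # 5.3 TB/s peak fanout bandwidth, assumed ~0.4 pJ/bit link energy
    print(link_power_watts(5.3, 0.4))  # ~17 W just to shuttle data between the dies

That's a chunk of board power that a monolithic die simply doesn't have to spend.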

Zen today, if it were a monolithic design, could be faster and use less power than it does with chiplets, which is a scary thought for Intel...

And this is why it's the wrong approach: you *will* lose if any part of your design introduces a bottleneck or an efficiency reduction, as nVidia is more than happy to make a chip as big as possible and as efficient as possible.

AMD, of course, decided on the MCD approach as it lets them break up the bandwidth demands, which was the right call; you still lose some efficiency, but because memory traffic can be parallelized across multiple memory controllers, it allows you to scale the number of chips up or down on a per-product basis.

But to keep things in perspective, the 7900 XTX has a ~300 mm² GCD plus six ~37 mm² MCDs, and in silicon terms it's basically the GeForce 4070 Ti's competitor, which is also a ~300 mm² die. nVidia's chip may be slightly slower outside of RT, but it is far more cost effective.

And ultimately, if you can build a smaller, more cost effective die, then that will beat any chiplet design. Period.

Jizz_Beard_thePirate said:

And of course, the software stack is nowhere near as comprehensive. We are still waiting for FSR 3 nine months after it was announced, while FSR 2 is losing to XeSS.

I would argue AMD's driver front-end is far better than nVidia's; it doesn't look like it's been dragged kicking and screaming out of the 90's.

It just lacks features like FSR 3, but otherwise the drivers themselves are actually really solid.

I personally don't use DLSS or FSR, as there is always some kind of artifact that I pick up on, like texture/shader shimmer, but I get why people do use it; I prefer to just run it raw.



--::{PC Gaming Master Race}::--

AMD Ryzen 8000 “Strix Point” APUs Feature 4 Zen 5, 8 Zen 5C CPU Cores, 16 RDNA 3.5 GPU Cores & 16 MB Cache

https://wccftech.com/amd-ryzen-8000-strix-point-apus-hybrid-12-zen-5-zen-5c-cpu-rdna-3-5-gpu-cores/

CoreWeave Acquires $2.3 Billion Debt By Putting NVIDIA H100 GPUs as “Collateral”

https://wccftech.com/coreweave-accquires-2-3-billion-debt-by-putting-nvidia-h100-gpus-as-collateral/

Idk if that makes sense since GPUs depreciate very heavily, but someone certainly took the bait.

Intel Meteor Lake Core Ultra 9 CPUs Break Past 5 GHz Clocks & Core Ultra 7 Around 5 GHz

https://wccftech.com/intel-meteor-lake-core-ultra-9-cpus-over-5-ghz-clocks-core-ultra-7-around-5-ghz/

AMD CEO already teases next-generation Instinct MI400 accelerator series

https://videocardz.com/newz/amd-ceo-already-teases-next-generation-instinct-mi400-accelerator-series

When you look across those workloads and the investments that we’re making, not just today, but going forward with our next generation MI400 series and so on and so forth, we definitely believe that we have a very competitive and capable hardware roadmap. I think the discussion about AMD, frankly, has always been about the software roadmap, and we do see a bit of a change here on the software side.

— AMD CEO Dr Lisa Su


At least they recognize where the main issue generally lies. Idk if saying "and we do see a bit of a change here on the software side" inspires the highest of confidence, but at least they are making changes.

Intel Embree Delivers Massive Boost In Ray Tracing Performance For Arc GPUs

https://wccftech.com/intel-embree-delivers-massive-boost-in-ray-tracing-performance-for-arc-gpus/



                  

PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850

Pemalite said:

I would argue AMD's driver front-end is far better than nVidia's; it doesn't look like it's been dragged kicking and screaming out of the 90's.

I personally don't use DLSS or FSR, as there is always some kind of artifact that I pick up on, like texture/shader shimmer, but I get why people do use it; I prefer to just run it raw.

Yea, I do agree their front-end looks a lot more modern. Especially since when you apply settings in the Nvidia driver, it freezes for a brief moment, and you also need to log into GeForce Experience if you want to use that front-end.

The thing with upscaling is that more and more modern games are really using it as a crutch. While a 4090 can generally brute-force even ray tracing games at 4K native at 70-90 fps, that is a $1,600 GPU. As you go down the stack to more affordable GPUs, either you are sticking with 1440p/1080p or you are using upscaling to get to 4K. I will say that upscaling to 4K generally gives a different result than upscaling to 1440p: based on my experience and evidence from sites like DF and HUB, the upscalers do a much better job at 4K with a lot less artifacting, since they have more information to play with and can make more accurate guesses. But for multiplayer games, I won't ever touch upscaling.
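
That "more information to play with" point is easy to put numbers on: at the same Quality preset (assuming the usual ~67% per-axis render scale that DLSS and FSR 2 use for Quality mode), the 4K upscale starts from well over twice the pixels of the 1440p one.

    def internal_render_res(target_w, target_h, per_axis_scale=1/1.5):
        # "Quality" mode in DLSS/FSR 2 renders at roughly 67% of the target per axis
        w = round(target_w * per_axis_scale)
        h = round(target_h * per_axis_scale)
        return w, h, round(w * h / 1e6, 1)  # width, height, megapixels of real input

    print(internal_render_res(3840, 2160))  # 4K Quality    -> (2560, 1440, 3.7)
    print(internal_render_res(2560, 1440))  # 1440p Quality -> (1707, 960, 1.6)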



                  

PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850

Do you remember AMD's claim of 54% increased efficiency for RDNA3? Here is an explanation of how they got to that fictitious number.
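
For anyone curious about the mechanics: a perf-per-watt claim is just a ratio of ratios, and the operating points you choose to measure at can swing it enormously. The numbers below are hypothetical, purely to show that sensitivity; they are not AMD's actual test data.

    def perf_per_watt_gain(perf_new, watts_new, perf_old, watts_old):
        # relative perf/W uplift of the new part over the old one
        return (perf_new / watts_new) / (perf_old / watts_old) - 1

    # Measured at stock power limits, a hypothetical +35% faster card at 355 W
    # vs a baseline at 300 W only gains ~14% perf/W...
    print(perf_per_watt_gain(1.35, 355, 1.00, 300))  # ~0.14
    # ...but compare a power-capped, lower-clocked configuration of the new chip
    # against the stock old one, and the very same silicon "gains" ~53% perf/W.
    print(perf_per_watt_gain(1.20, 235, 1.00, 300))  # ~0.53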