Pemalite said:
Captain_Yuri said:
7900XTX                     4080
24GB, 384-bit bus           16GB, 256-bit bus
192 ROPs                    112 ROPs
960.0 GB/s                  716.8 GB/s
96 CU                       76 SM
96 RT Cores                 76 RT Cores
61.42 Teraflops             48.74 Teraflops
355 Watt TDP                320 Watt TDP
Chiplet                     Monolithic


Yet somehow, these two are within single-digit % of each other in Raster and around 30-40% apart in Ray Tracing in favor of the 4080. This is like the inverse of Ampere vs RDNA 2, except that Ampere had the Ray Tracing and DLSS advantage while RDNA 3 has no advantage other than price. But when a person is spending $1000 and the competition is $200 more but gives you superior Ray Tracing + DLSS while being more power efficient... AMD just might have sold the 4080...
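
For reference, the teraflop figures above fall straight out of the shader counts and boost clocks. A minimal sketch of the arithmetic (assuming the commonly quoted ~2.5 GHz boost clocks and counting RDNA 3's dual-issue FP32 the way the spec-sheet numbers do):

```python
# Back-of-the-envelope FP32 throughput, derived the way the spec-sheet numbers are.
# Boost clocks are the commonly quoted figures; real game clocks sit lower.

def fp32_tflops(shaders, boost_ghz, ops_per_clock=2):
    """shaders * ops_per_clock (FMA = 2 ops) * clock, in TFLOPS."""
    return shaders * ops_per_clock * boost_ghz / 1000

# 7900 XTX: 96 CU * 64 shaders, with dual-issue doubling FP32 per clock
xtx = fp32_tflops(96 * 64, boost_ghz=2.5, ops_per_clock=4)

# RTX 4080: 76 SM * 128 CUDA cores
ada = fp32_tflops(76 * 128, boost_ghz=2.505)

print(f"7900 XTX ~{xtx:.2f} TFLOPS, RTX 4080 ~{ada:.2f} TFLOPS")
# ~61.44 vs ~48.74 -- matching the table above, and clearly not translating
# into a matching raster gap, which is the point being made.
```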

Idk how a company can create a chiplet architecture that is more inefficient than a monolithic one, but here we are... Even their transient spikes feel like Ampere spikes, just on TSMC.

Overall, both the 7900XTX and 4080 are terrible products for the price. Hopefully both will get a price cut sooner rather than later.

Chiplets by their very nature will always be less efficient than a monolithic die.

- You are moving caches and memory controllers further away from compute... And adding in an interconnect/fabric which consumes energy and increases latencies.
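
As a rough illustration of that fabric cost (my own back-of-the-envelope numbers; the ~5.3 TB/s peak GCD-to-MCD bandwidth and ~0.4 pJ/bit link energy are the ballpark figures AMD has quoted for Navi 31's Infinity Fanout links, so treat both as assumptions):

```python
# Rough estimate of the power spent just moving data between the GCD and the MCDs.
# Both inputs are assumptions based on AMD's publicly quoted ballpark figures.

peak_bandwidth_tb_s = 5.3        # TB/s aggregate across the Infinity Fanout links
energy_per_bit_pj   = 0.4        # pJ per bit transferred over the fanout links

bits_per_second = peak_bandwidth_tb_s * 1e12 * 8
link_watts = bits_per_second * energy_per_bit_pj * 1e-12

print(f"~{link_watts:.0f} W at full fanout utilisation")   # ~17 W
# A monolithic die spends essentially none of that, and also skips the extra
# hop of latency to reach its caches and memory controllers.
```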

haxxiy said:

That TUF is more like what I had expected. The OC would even manage to just about match the 4080 in RT (on average, still behind on the more demanding titles).

This lends some credence to the idea that AMD found some frequency curve issues late in development and that this is why the clocks were much lower than expected. Even then, I wonder if they'd be better off just adding another hundred watts to the cards. Could be bad PR trauma from the Vishera/Hawaii days, but now everyone's doing it, so yeah.

Too bad these won't retail for the MSRP at least for now.

I think the biggest issue currently is just the drivers.
In theory, in raster, RDNA3 should be beating the GeForce 4080, not just matching it.

In some benchmarks we see a glimpse of its potential and it looks great, but in other titles it falls short... which to me shows that the immature drivers are holding it back, which is par for the course for AMD and its launch GPUs.

But in saying that, we can only take the GPU on the basis of how it performs today, not how it may perform in 6 months time.

I am personally not happy with it costing $1,800 AUD, hoping the 7800 series is a little more sensible... But I don't think they will come under $1,000 AUD either.

It had been reported before launch that the drivers were not fully ready, with some bugs still needing work and optimization incomplete. So I'm pretty sure the RDNA3 cards will get better performance and have these power bugs ironed out over time (the transient spikes, maybe not).

For 1,000 Aussie dollars, I suspect you'll have to go with a Navi 32-based card. I expect the top-end 7800 to still be made from Navi 31, as otherwise there's too large a gap (from 84 CU down to just 60 CU) and an actual regression in CU count gen over gen. As such, I think Navi 32 will be either a 7700XTX or a 7800XT, with a 7800XTX made from a cut-down Navi 31 (probably 72 CU, so there's a linear progression in CU counts, which would mean the 7800XTX and XT have the same CU counts as the 6800XT and 6800).
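
To hang some rough numbers on that speculation (my own sketch; the CU counts are the hypothetical configurations above and the 2.5 GHz boost clock is just an assumption carried over from the 7900 XTX):

```python
# Theoretical FP32 throughput for the hypothetical RDNA 3 cut-downs discussed above,
# using the same dual-issue formula as the 7900 XTX. CU counts and the 2.5 GHz
# boost clock are speculative placeholders, not confirmed specs.

def rdna3_tflops(cus, boost_ghz=2.5):
    return cus * 64 * 4 * boost_ghz / 1000   # 64 shaders/CU, 4 FP32 ops per clock

for name, cus in [("Navi 31 full (7900 XTX)", 96),
                  ("Navi 31 cut-down (72 CU)", 72),
                  ("Navi 32 (60 CU)", 60)]:
    print(f"{name}: ~{rdna3_tflops(cus):.1f} TFLOPS")
# ~61.4, ~46.1 and ~38.4 TFLOPS -- which is why a 60 CU die sitting directly
# under the 96 CU part would leave an awkward hole in the line-up.
```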

haxxiy said:
Pemalite said:

I think the biggest issue currently is just the drivers.
In theory, in raster, RDNA3 should be beating the GeForce 4080, not just matching it.

In some benchmarks we see a glimpse of its potential and it looks great, but in other titles it falls short... which to me shows that the immature drivers are holding it back, which is par for the course for AMD and its launch GPUs.

But in saying that, we can only take the GPU on the basis of how it performs today, not how it may perform in 6 months time.

I am personally not happy with it costing $1,800 AUD, hoping the 7800 series is a little more sensible... But I don't think they will come under $1,000 AUD either.

This might be the case, but I still think AMD should have brute-forced it to 3 GHz with an extra 100W. It'd be near the 4090 instead of the 4080 in raster for $600 less.

That seems much more marketable to me, and one would get 4080 levels of RT for $200 less too. I'm not sure people would care if it's rated 450W given the 4090 is there (despite it not consuming that much in gaming without RT).

Maybe AMD considered all of this but was just too late to change the reference design.
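
For what it's worth, whether an extra 100 W actually buys 3 GHz depends on where the card sits on its voltage/frequency curve. A crude first-order sketch of the bounds (assuming dynamic power scales roughly with f·V²; the numbers here are my assumptions, not measurements):

```python
# Crude bounds on the power needed to push a ~2.5 GHz, 355 W 7900 XTX to 3 GHz.
# Dynamic power scales roughly with f * V^2; reality sits between the two cases.

base_watts, base_ghz, target_ghz = 355, 2.5, 3.0
ratio = target_ghz / base_ghz

best_case  = base_watts * ratio        # voltage held flat:         P ~ f
worst_case = base_watts * ratio ** 3   # voltage rising with clock: P ~ f^3

print(f"best case ~{best_case:.0f} W, worst case ~{worst_case:.0f} W")
# ~426 W vs ~613 W. A 450-455 W card only reaches 3 GHz if the voltage barely
# has to rise -- i.e. if the shipped frequency curve, not the silicon, is the limit.
```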

AMD probably needs a new stepping or even a full-on silicon revision to achieve this.

I believe that RDNA3 wasn't ready yet, on both a hardware and a software level, but got pushed out the door ASAP because AMD needed something to counter NVidia's new cards. As such, RDNA3 reminds me a bit of the first Ryzen chips, which were also pushed out the door ASAP because AMD was on the brink of bankruptcy at the time, and which only reached their full potential a year later with the launch of Zen+, which contained all the fixes that couldn't make it into the original Zen chips.

If this proves to be correct, I expect AMD to come out with faster 7x50 versions later down the line, probably next fall or early 2024. Until then, the driver issues should be resolved and performance should already have improved from driver updates alone. Either way, I expect the gap to the 4080 to grow over time, not shrink, as the chips are clearly held back by their unfinished drivers.