Pemalite said:
Bofferbrauer2 said:

Probably GDDR6, as it theoretically consumes much less power than GDDR5(X) for the same bandwidth. That leaves quite a few more watts to spend on other things, like the CPU or GPU.

Not to mention the increase in chip densities, which will be important if you want to aim for 16GB/24GB/32GB of DRAM next gen and still keep it affordable.

Bofferbrauer2 said:

Even then, in dual channel we're still below 60GB/s, which is barely enough to feed 16 CUs at most without starving them with the low bandwidth. There's a reason why APUs stayed at 8 CUs for so long and are only going up to 10 with Raven Ridge now.
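A quick sanity check on that figure: peak DRAM bandwidth is just data rate times bus width times channel count. A sketch in Python, using dual-channel DDR4-3200 as an assumed example configuration (the post doesn't name a specific speed grade):

```python
# Peak theoretical DRAM bandwidth in GB/s: transfers per second
# times bus width in bytes times number of channels.
def ddr_bandwidth_gbs(mt_per_s: int, bus_bits: int = 64, channels: int = 1) -> float:
    return mt_per_s * 1e6 * (bus_bits / 8) * channels / 1e9

# Dual-channel DDR4-3200 (assumed example config):
print(ddr_bandwidth_gbs(3200, channels=2))  # 51.2 GB/s -- under the 60 GB/s ceiling
```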

DDR5 won't start at that speed. Just like DDR4 started off at 1866, I expect DDR5 to start as low as 4000 or even just 3600, while it would need 6400 to hit 51.2GB/s per channel. So it will take a while until that bandwidth is actually available for APUs, and by then their standing relative to high-end GPUs won't have budged very much - it might even drop further, depending on how fast the top of the line evolves.
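To double-check the 6400 figure, invert the same bandwidth arithmetic to get the data rate needed for a target per-channel bandwidth (a sketch assuming a standard 64-bit channel, i.e. 8 bytes per transfer, as on DDR4 DIMMs):

```python
# Data rate (MT/s) needed to reach a target per-channel bandwidth,
# assuming a 64-bit channel, i.e. 8 bytes per transfer.
def required_mts(target_gbs: float, bus_bytes: int = 8) -> float:
    return target_gbs * 1e9 / bus_bytes / 1e6

print(round(required_mts(51.2)))  # 6400 MT/s, matching the figure above
```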

You are only comparing raw bandwidth, which is going to be inaccurate.

nVidia and AMD are constantly inventing new bandwidth-saving technologies, you know - various forms of culling and compression.
So your "16 CU Maximum" claim is without basis.

The main reason APU CU counts haven't exploded is entirely down to cost... AMD typically reserves roughly 50% or more of an APU's die space entirely for the GPU. And that's with current CU counts on chips like Kaveri, with up to 8 CUs. Eight.

That could have held true up to Bristol Ridge, but then with Raven Ridge the jump should have been bigger than just going up to 10 NCUs.

I'm not just comparing raw bandwidth. While I didn't mention it, I'm also taking clock speeds and GCN evolution into account. The GPU clocks in the APUs have risen substantially since Trinity (I'm leaving out Llano because its VLIW5 design isn't really comparable to its successors): Trinity came with VLIW4 at 800MHz, Kaveri with GCN 2 at 866MHz, and Bristol Ridge pushed it to GCN 3 at 1108MHz. That's a roughly 40% increase in clock speed and an over 60% increase in performance, and hence throughput, without increasing the actual number of compute units. So while I believe the number of CUs will only increase by 60%, performance and throughput should rise by about 100% at the same time, if not more. I'm expecting those 16 CUs to run at close to 1500MHz, for instance, with GCN being much more performant by then than it is now.
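That projection can be put in rough numbers. For GCN, peak FP32 throughput is CUs × 64 shader lanes × 2 ops per clock (fused multiply-add) × clock. A sketch comparing Raven Ridge's 10 CUs at Bristol Ridge's 1108MHz (an assumed baseline clock, since the post doesn't give Raven Ridge's) with the hypothetical 16 CUs at 1500MHz, ignoring any per-clock architectural gains:

```python
# Peak FP32 throughput of a GCN-style GPU in GFLOPS:
# CUs * 64 shader lanes * 2 ops/clock (fused multiply-add) * clock in GHz.
def gcn_gflops(cus: int, clock_mhz: float) -> float:
    return cus * 64 * 2 * clock_mhz / 1000

baseline = gcn_gflops(10, 1108)    # assumed 10-CU baseline at 1108MHz
projected = gcn_gflops(16, 1500)   # the hypothetical 16 CUs at ~1500MHz
print(projected / baseline)        # ~2.17x -- over a 100% jump before any IPC gains
```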