HoloDust said:
Bofferbrauer2 said:

Well, clock rates and the IPC I would say.

But yeah, being able to compare all of these in a suite of games that all of them should be able to run (like Crysis, for instance) and see what FPS they'd get would be a great way to see the evolution of 32 Compute Units.

This is my go-to for old GPUs, Witcher 3 on Guru3D, since I can't find a more comprehensive chart that goes that far back and still gets updated with new GPUs. Maybe someone knows of more charts similar to this one.

I will concentrate on the 1440p data, as the 1080p FPS numbers get way too high and 4K is too harsh on the early GPUs in this comparison.

  • HD 7970: 26 FPS
  • R9 380X: 29 FPS
  • RX 470: 36 FPS
  • RX 570: 43 FPS
  • RX 5600: N/A, but the 5600XT gets 63 FPS, which hints at probably around 56 FPS, beating the Fury cards (see the quick scaling sketch after this list)
  • RX 6600XT: 87 FPS
  • RX 7600: 95 FPS
  • RX 7600XT: 101 FPS
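
A quick way to sanity-check that ~56 FPS guess for the 5600 (my own back-of-the-envelope scaling, not a number from the chart) is to scale the 5600XT's result by the CU ratio, since the vanilla 5600 has 32 CUs versus the XT's 36:

```python
# Back-of-the-envelope estimate for the missing RX 5600 (32 CUs) from the
# RX 5600 XT (36 CUs) result, assuming FPS scales roughly with CU count
# at similar clocks -- a simplification, since clocks and memory also differ.
fps_5600xt = 63
estimate_5600 = fps_5600xt * 32 / 36
print(f"Estimated RX 5600: ~{estimate_5600:.0f} FPS")  # ~56 FPS
```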

So yeah, performance in this game nearly quadrupled (26 → 101 FPS) with the same number of CUs. Most of it is down to clock speeds (7970 GHz Edition: 1000 MHz base, 7600XT: 2755 MHz boost), but there must have been some IPC improvements as well.
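
To put rough numbers on the clocks-vs-IPC split, here is a small sketch using the rated clocks quoted above; sustained in-game clocks will differ, so treat the split as an estimate:

```python
# Split the HD 7970 -> RX 7600 XT gain into a clock part and a leftover
# per-clock ("IPC") part. Rated clocks only; real sustained clocks differ.
fps_7970, fps_7600xt = 26, 101         # 1440p Witcher 3 results from the chart
clk_7970, clk_7600xt = 1000, 2755      # MHz: GHz Edition base vs. rated boost

fps_ratio = fps_7600xt / fps_7970      # ~3.9x overall
clock_ratio = clk_7600xt / clk_7970    # ~2.8x from clocks alone
per_clock = fps_ratio / clock_ratio    # ~1.4x left for IPC, memory, etc.

print(f"Overall: {fps_ratio:.2f}x, clocks: {clock_ratio:.2f}x, per-clock: {per_clock:.2f}x")
```

On that rough reading, frequency accounts for most of the gain, leaving something like a 1.4x per-clock improvement spread across the architecture generations.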

What I'm surprised at is the gap between the 470 and 570, because it's functionally the same chip and the rated boost clocks differ only slightly (38 MHz); quick numbers in the sketch below. Probably the higher TDP of 150W compared to 120W allowed it to boost higher and hold that boost for longer. Meanwhile one can clearly see the sharp increase in clock speeds with RDNA 2 resulting in a doubling of the FPS over the 570. Too bad the 5600 is MIA; it would have been a nice point of reference.
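
For the 470 vs 570 gap, a minimal check (assuming the chart used reference cards with 1206 MHz vs 1244 MHz rated boost) shows the paper-spec clock difference explains only a fraction of the FPS delta, which fits the power-limit explanation:

```python
# RX 470 vs RX 570: compare the FPS gain to the rated boost-clock gain.
# Reference boost clocks assumed; actual cards and sustained clocks may differ.
fps_470, fps_570 = 36, 43
boost_470, boost_570 = 1206, 1244          # MHz, reference boost clocks

print(f"FPS gain:   {fps_570 / fps_470 - 1:.1%}")     # ~19%
print(f"Boost gain: {boost_570 / boost_470 - 1:.1%}")  # ~3%
```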

And now we'll have to wait and see if the 9060XT will be added to the chart.