Pemalite said:
OlfinBedwere said:

It makes sense to use FLOPS when comparing modern-day consoles, since they're all based on the same underlying GPU architecture (apart from the Switch, and even that's similar enough to the others to be at least a useful ballpark figure).

No it doesn't. People need to stop believing this.

AMD, for example, has consistently iterated upon its Graphics Core Next design...
A 4 Teraflop Graphics Core Next 1.0 GPU will lose to a 4 Teraflop Graphics Core Next 5.0 GPU. - I can even demonstrate this if you want.

Here we have the Radeon 7970 (4.0 - 4.3 Teraflop) against the Radeon 280 (2.96 - 3.34 Teraflop).
The Radeon 7970 should be able to wipe the floor with its almost 1 Teraflop advantage, right? Wrong.
https://www.anandtech.com/bench/product/1722?vs=1751

They are both Graphics Core Next.
Again, FLOPS is irrelevant.

FLOPS is a theoretical number, not a real-world one. The GPUs in the PlayStation 4 Pro and Xbox One X can do more work per flop than the ones in the base Xbox One and PlayStation 4, and that's a fact, due to efficiency tweaks in other areas.

Those two GPUs have exactly the same chip. The R9 280s were a rebrand of the HD 7900s. Besides, we've no idea which versions of the cards are being compared, and it probably isn't the GHz edition.
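(For anyone wondering where those Teraflop figures come from: the theoretical number is just stream processor count × 2 FLOPs per clock (one fused multiply-add) × clock speed. A rough sketch below, using the commonly listed shader counts and clocks for these cards - not taken from the linked benches, so which clock applies depends on exactly which version/bin of the card you have.)

```python
# Theoretical single-precision throughput: each stream processor retires one
# FMA (2 floating-point ops) per clock, so GFLOPS = shaders * 2 * clock_in_GHz.
def theoretical_gflops(shaders: int, clock_ghz: float) -> float:
    return shaders * 2 * clock_ghz

# Commonly listed specs; which clock you plug in (base vs. boost vs. GHz
# Edition) is what produces the ranges quoted above.
cards = {
    "HD 7970 @ 925 MHz":              (2048, 0.925),  # ~3.79 TFLOPS
    "HD 7970 GHz Edition @ 1050 MHz": (2048, 1.050),  # ~4.30 TFLOPS
    "R9 280 @ 827 MHz (base)":        (1792, 0.827),  # ~2.96 TFLOPS
    "R9 280 @ 933 MHz (boost)":       (1792, 0.933),  # ~3.34 TFLOPS
}

for name, (shaders, clock_ghz) in cards.items():
    print(f"{name}: {theoretical_gflops(shaders, clock_ghz) / 1000:.2f} TFLOPS")
```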

https://www.anandtech.com/bench/product/1772?vs=1872

I believe this is a better comparison, since it takes GPUs with very, very similar theoretical FLOPS numbers from different GCN generations - two generations apart, in fact. The results are not very flattering, though.

https://www.anandtech.com/bench/product/1718?vs=1771

Nvidia makes a better case for their generational improvements, since here the Founders Edition GTX 1060 should only have a ~10% edge or so based on FLOPS numbers alone. Though the Kepler GPUs... are a case of their own when it comes to aging poorly.
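(If anyone wants to put a number on the "more work per flop" point, a crude way is to divide a measured benchmark result by the card's theoretical TFLOPS. A sketch below with placeholder fps values - the real numbers would have to come from the AnandTech links above, I haven't copied any results here.)

```python
# Crude "work per flop" metric: measured frame rate divided by theoretical
# TFLOPS. A higher value means the architecture extracts more real performance
# from the same raw compute. The fps figures below are placeholders only.
def fps_per_tflop(fps: float, tflops: float) -> float:
    return fps / tflops

benchmarks = {
    # name: (measured fps in some common benchmark, theoretical TFLOPS)
    "Older GCN card (placeholder)": (55.0, 4.0),
    "Newer GCN card (placeholder)": (70.0, 4.0),
}

for name, (fps, tflops) in benchmarks.items():
    print(f"{name}: {fps_per_tflop(fps, tflops):.1f} fps per theoretical TFLOP")
```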