Pemalite said:

Flops are relevant, they give a good ballpark estimation of what the hardware can do.

No they don't.
Here are two GeForce GT 730s. Roughly the same flops, yet a 50-65% performance difference.

Here are two GeForce GT 1030s... Roughly the same flops, yet a 100-125% performance difference.

Here are two Radeon HD 7750s. Roughly the same flops... a 40-50% performance difference.

How about the Radeon HD 5870 versus the Radeon HD 7850? The 5870 is 2.72 teraflops versus the 7850's 1.76 teraflops. The 5870 should win, right? It has almost a full teraflop more?
Nope. It loses by about 8-55%.

Tell me again how much accuracy flops give you in determining a GPU's capabilities. The evidence overwhelmingly says you are wrong on that claim. Like... Really wrong.

Flops tells us only one thing...
The single-precision theoretical floating-point performance of a part. It tells us absolutely nothing about the half-precision, double-precision, quarter-precision, integer, geometry, bandwidth, texturing, compression, or culling capabilities of a part... In short, just like bits, it's a useless metric on its own.
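For what it's worth, that single-precision number everyone quotes is just a back-of-the-envelope multiplication: shader cores x clock x 2 (one fused multiply-add per core per cycle counts as two flops). A quick sketch, using the commonly quoted core counts and clocks for the two Radeons above (my assumption, not something stated in the thread):

```python
def theoretical_tflops(shader_cores: int, clock_ghz: float, flops_per_cycle: int = 2) -> float:
    """Peak single-precision TFLOPS: cores x clock x FLOPs per core per cycle (FMA = 2)."""
    return shader_cores * clock_ghz * flops_per_cycle / 1000.0

# Commonly quoted specs (assumed here for illustration):
# Radeon HD 5870: 1600 shaders @ 0.85 GHz
# Radeon HD 7850: 1024 shaders @ 0.86 GHz
print(round(theoretical_tflops(1600, 0.85), 2))  # 2.72
print(round(theoretical_tflops(1024, 0.86), 2))  # 1.76
```

Which is exactly the point: the formula only counts ALUs and clock speed, so two very different architectures can land on similar numbers.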

The performance differences are due to the different memory types used:

1. One is DDR3 and the other is GDDR5.

2. DDR4 and GDDR5.

3. GDDR3 and GDDR5.

There are other reasons as well, such as the difference in VRAM capacity between the GT 730s. Flops are a good indicator of performance, but drivers, APIs and the rest of the configuration matter too. Next-gen consoles will likely use GDDR5 or GDDR6, and that, together with the flop figures, should give a fair indication of their performance.
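The bandwidth gap those memory types imply is easy to put a number on: peak bandwidth = effective transfer rate x bus width / 8. Using the commonly listed specs for the two GT 730 variants (assumed here for illustration, not taken from the thread):

```python
def peak_bandwidth_gbs(effective_mts: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: transfers per second x bytes per transfer."""
    return effective_mts * bus_width_bits / 8 / 1000.0

# Commonly listed specs, assumed for illustration:
# GT 730 DDR3:  64-bit bus, 1800 MT/s effective
# GT 730 GDDR5: 64-bit bus, 5000 MT/s effective
print(peak_bandwidth_gbs(1800, 64))  # 14.4
print(peak_bandwidth_gbs(5000, 64))  # 40.0
```

Roughly a 2.8x bandwidth difference on the same bus width, which goes a long way toward explaining how two cards with identical flops can perform so differently.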