GOWTLOZ said:
Pemalite said:

No they don't.
Here are two GeForce GT 730s. Roughly the same flops, yet a 50-65% performance difference.
http://www.jagatreview.com/2015/04/gt-730-ddr3-vs-gt-730-gddr5/2/

Here are two GeForce GT 1030s... Roughly the same flops, yet a 100-125% performance difference.
https://www.gamersnexus.net/hwreviews/3330-gt-1030-ddr4-vs-gt-1030-gddr5-benchmark-worst-graphics-card-2018

Here are two Radeon HD 7750s. Roughly the same flops... a 40-50% performance difference.
https://www.goldfries.com/computing/gddr3-vs-gddr5-graphic-card-comparison-see-the-difference-with-the-amd-radeon-hd-7750/
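
If you want to see why same-flops cards land that far apart, a rough back-of-the-envelope on memory bandwidth tells the story. Here's a minimal sketch using the commonly quoted bus width and data rates for the two GT 1030 variants (treat the exact figures as approximate):

```python
# Rough memory-bandwidth comparison for the two GT 1030 variants.
# bandwidth (GB/s) = bus width (bits) / 8 * effective data rate (GT/s)
def bandwidth_gb_s(bus_width_bits: int, data_rate_gtps: float) -> float:
    return bus_width_bits / 8 * data_rate_gtps

gddr5 = bandwidth_gb_s(64, 6.0)  # ~6 Gbps GDDR5 on a 64-bit bus
ddr4 = bandwidth_gb_s(64, 2.1)   # ~2.1 GT/s DDR4 on a 64-bit bus

print(f"GT 1030 GDDR5: {gddr5:.1f} GB/s")    # ~48 GB/s
print(f"GT 1030 DDR4:  {ddr4:.1f} GB/s")     # ~16.8 GB/s
print(f"Ratio:         {gddr5 / ddr4:.1f}x")  # ~2.9x
```

Same chip, same flops on paper, roughly a third of the bandwidth... which lines up with the 100%+ gaps GamersNexus measured.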

How about the Radeon HD 5870 and Radeon HD 7850? The 5870 is 2.72 teraflops versus the 7850's 1.76 teraflops. - The 5870 should win, right? It has almost a full teraflop more.
Nope. It loses by about 8-55%.

https://www.anandtech.com/bench/product/1062?vs=1076
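
Those teraflop numbers aren't magic either; they're just shaders x clock x 2 (one fused multiply-add counts as two operations per cycle). A quick sanity check with the usual published shader counts and clocks reproduces the figures above:

```python
# Theoretical single-precision FLOPS = shaders * clock * 2 (one FMA = 2 ops per cycle).
def fp32_tflops(shaders: int, clock_ghz: float) -> float:
    return shaders * clock_ghz * 2 / 1000

hd5870 = fp32_tflops(1600, 0.850)  # 1600 ALUs @ 850 MHz
hd7850 = fp32_tflops(1024, 0.860)  # 1024 ALUs @ 860 MHz

print(f"HD 5870: {hd5870:.2f} TFLOPS")  # ~2.72
print(f"HD 7850: {hd7850:.2f} TFLOPS")  # ~1.76
```

The 5870 "wins" that calculation by almost a full teraflop and still loses the actual benchmarks, because the formula says nothing about how efficiently those ALUs get fed.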

Tell me again how accurately flops determine a GPU's capabilities. - The evidence overwhelmingly says you are wrong on that claim. Like... really wrong.

Flops tells us only one thing...
The theoretical single-precision floating-point performance of a part. It tells us absolutely nothing about the half-precision, double-precision, quarter-precision, integer, geometry, bandwidth, texturing, compression, or culling capabilities of a part... In short, just like bits, it's a useless metric on its own.
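
To make the precision point concrete: the FP64 rate is a separate architectural choice, and the ratio to FP32 varies wildly between parts. The sketch below uses commonly published ratios (roughly 1/32 on consumer Pascal, 1/16 on consumer GCN, 1/5 on Cypress), so treat the exact numbers as illustrative:

```python
# FP32 FLOPS implies nothing about FP64 throughput: the FP64:FP32 ratio is an
# architectural choice, so two cards with similar FP32 numbers can be miles apart.
def fp64_gflops(fp32_tflops: float, fp64_ratio: float) -> float:
    return fp32_tflops * 1000 * fp64_ratio

# Illustrative ratios (commonly published figures; treat as approximate):
print(f"GT 1030 (1/32 rate): {fp64_gflops(1.13, 1 / 32):.0f} GFLOPS FP64")  # ~35
print(f"HD 7850 (1/16 rate): {fp64_gflops(1.76, 1 / 16):.0f} GFLOPS FP64")  # ~110
print(f"HD 5870 (1/5 rate):  {fp64_gflops(2.72, 1 / 5):.0f} GFLOPS FP64")   # ~544
```

So a single FP32 flops figure can't even tell you which card is faster at double precision, let alone in games.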

The performance differences are due to the different memory used.

Well, I mean... that was Pemalite's point, wasn't it?