Pemalite said:
Zippy6 said:

No, TF alone is not accurate in representing the difference in GPU performance when there are other differences such as memory bandwidth, or a completely different architecture such as your example of the 5870 vs the 7850. But I've seen you complain about others using TF a few times, and it's not going to change anyone's mind, especially when you use examples of a completely different architecture, or DDR3 vs GDDR5, to prove your point when the person using TFs is comparing RDNA2 to RDNA2.

People will keep using TFs and that will never change. There's no point in getting annoyed about it.

So basically you are shifting the goalposts to include other aspects of a GPU as determiners of performance? Then why use Teraflops at all?

If Teraflops/Gigaflops were useful, it wouldn't matter what the architecture was... 1 Teraflop of compute should be equivalent to 1 Teraflop of compute regardless of the environment; the architecture, memory bandwidth etc. should be irrelevant. A Teraflop is a Teraflop, it's no different whether it's on a CPU or a GPU, it is still the *exact* same single-precision floating point operation. It doesn't change.
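To make that concrete, the headline number is pure paper arithmetic: shader cores, times two FP32 ops per clock (one fused multiply-add), times clock speed. Nothing about memory bandwidth, caches, or the rest of the pipeline enters into it. A minimal sketch, with a made-up core count and clock purely for illustration:

```python
def peak_fp32_tflops(shader_cores: int, clock_ghz: float) -> float:
    """Theoretical peak single-precision throughput in TFLOPS.

    One fused multiply-add (FMA) per core per clock counts as 2 ops,
    which is the convention behind every marketing TFLOPS figure.
    """
    return shader_cores * 2 * clock_ghz / 1000

# A hypothetical ~1 TFLOP part: the formula says nothing about how
# quickly memory can actually feed those cores.
print(peak_fp32_tflops(384, 1.3))  # -> ~1.0
```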

If it's a case of Teraflops relative to the amount of DRAM and bandwidth you have, then nVidia and AMD release GPUs with all sorts of different ratios... So we can't even use it within the same product lineup/GPU architecture, because how do you normalize/equalize that crap?

For example... Take the GeForce GT 1030, GDDR5 vs DDR4... Same 1 Teraflop GPU. Yet the DDR4 variant performs at around half the speed.
Again... Signifying how useless Teraflops is even between GPUs of the exact same type.
https://www.techspot.com/review/1658-geforce-gt-1030-abomination/
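For a rough sketch of why those two cards land so far apart despite nearly identical paper compute, here are approximate public specs plugged into the same formula (the core counts, clocks and bandwidth figures are approximations used for illustration, not numbers pulled from the review above):

```python
def peak_fp32_tflops(cores: int, clock_ghz: float) -> float:
    # Peak FP32 throughput: 2 ops (one FMA) per core per clock.
    return cores * 2 * clock_ghz / 1000

# Approximate public specs for the two GT 1030 variants.
gddr5_tflops = peak_fp32_tflops(384, 1.468)   # ~1.13 TFLOPS
ddr4_tflops = peak_fp32_tflops(384, 1.379)    # ~1.06 TFLOPS

gddr5_bandwidth_gbs = 48.0   # 6 Gbps GDDR5 on a 64-bit bus
ddr4_bandwidth_gbs = 16.8    # 2100 MT/s DDR4 on a 64-bit bus

# Compute is within ~7%, but the DDR4 card has roughly a third of the
# memory bandwidth, which is what shows up in the benchmarks.
print(gddr5_tflops / ddr4_tflops)                # ~1.06
print(gddr5_bandwidth_gbs / ddr4_bandwidth_gbs)  # ~2.9
```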

The fact is, we cannot take two pieces of hardware with even the *slightest* difference between them and compare them based on Teraflops.

Ergo, it is a useless and bullshit metric.

It seems like a bad metric to use, but isn't there still some use to it that keeps it from being outright useless? For example, comparing the teraflops of the Series X and Series S easily conveys the really big power difference between the two, which is useful.
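As a rough back-of-the-envelope using the published CU counts and clocks (a sketch, not a measurement), that comparison at least holds the architecture constant, which is why the ratio means something here:

```python
def rdna2_tflops(compute_units: int, clock_ghz: float) -> float:
    # RDNA 2: 64 shader ALUs per CU, 2 FP32 ops (one FMA) per ALU per clock.
    return compute_units * 64 * 2 * clock_ghz / 1000

series_x = rdna2_tflops(52, 1.825)  # ~12.15 TFLOPS
series_s = rdna2_tflops(20, 1.565)  # ~4.01 TFLOPS

# Same RDNA 2 architecture on both sides, so the roughly 3x paper gap
# is at least a meaningful first-order indicator of the compute gap.
print(series_x, series_s, series_x / series_s)
```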