Zippy6 said:
Pemalite said:


Again. Teraflops is absolute bullshit. It literally represents nothing.

It represents clock multiplied by core count as you know well.

It is actually clock multiplied by cores multiplied by floating point operations per clock cycle.

It's theoretical. - If you are going to try and "educate me" then at least try and be factual.
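To put numbers on the formula, here's a minimal sketch (Python). The shader count and clock are the commonly published Xbox Series X GPU figures, and the factor of 2 assumes one fused multiply-add (two FP32 operations) per shader per clock:

```python
# Minimal sketch of where a "theoretical teraflops" headline number comes from.
# Figures are the commonly published Xbox Series X GPU specs; the factor of 2
# assumes one fused multiply-add (2 FP32 operations) per shader per clock.

def theoretical_tflops(shader_cores: int, clock_ghz: float, flops_per_clock: int = 2) -> float:
    """Cores x clock x FP operations per clock, expressed in teraflops."""
    return shader_cores * clock_ghz * flops_per_clock / 1000.0

# 52 CUs x 64 shaders per CU = 3328 shaders at 1.825 GHz
print(theoretical_tflops(3328, 1.825))  # ~12.15 TF
```

That's all the number is: a peak rate that assumes every shader issues an FMA on every single cycle, which no real workload sustains.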

Zippy6 said:

If it truly was completely useless as you would have people believe then it wouldn't exist in the first place. I also wouldn't be able to say with 100% confidence that lowering the clock of my GPU till it is 4TF will make it perform worse than running it at it's stock 5.3TF.

It exists because it does have a purpose. Just not what you think it is.

Again, I already provided substantial evidence of a GPU with the exact same amount of DRAM and the exact same amount of bandwidth, but with almost 1 teraflop less... ending up being faster. Even in compute.
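For reference, that was the HD 5870 vs HD 7850 comparison mentioned further down. A quick sketch of the paper numbers, using the published shader counts and clocks and assuming 2 FP32 operations per shader per clock:

```python
# Paper numbers behind the HD 5870 vs HD 7850 comparison (published specs;
# both cards use a 256-bit GDDR5 bus at ~153.6 GB/s).

def theoretical_tflops(shaders: int, clock_ghz: float) -> float:
    return shaders * clock_ghz * 2 / 1000.0  # 2 FP32 ops per shader per clock

print(theoretical_tflops(1600, 0.850))  # Radeon HD 5870 (VLIW5): ~2.72 TF
print(theoretical_tflops(1024, 0.860))  # Radeon HD 7850 (GCN):   ~1.76 TF
```

Almost a full teraflop on paper in the 5870's favour, yet the newer architecture is the card that wins in practice.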

How can you justify Teraflops as a legitimate metric after I've shown you that? You can't use it to accurately compare anything, not even hardware from the same manufacturer.

Zippy6 said:

No TF alone is not accurate in representing the difference in GPU performance when there are other differences such as memory bandwidth or a completely different architecture such as your example of the 5870 vs the 7850. But I've seen you complain about others using TF a few times and it's not going to change anyone's mind especially when you use examples of a completely different architecture or ddr3 vs gddr5 to prove your point when the person using TF's is comparing RDNA2 to RDNA2.

People will keep using TF's and that will never change. There's no point you getting annoyed about it.

So basically you are shifting the goalposts to include other aspects of a GPU as determiners of performance? Then why use Teraflops at all?

If Teraflops/Gigaflops were a useful metric, it wouldn't matter what the architecture was... 1 Teraflop of compute should be equivalent to 1 Teraflop of compute regardless of the environment; the architecture, memory bandwidth etc. should be irrelevant. - A Teraflop is a Teraflop, it's not any different whether it's on a CPU or GPU, it is still the *exact* same single precision floating point operation. It doesn't change.
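As an aside, the same arithmetic applies to a CPU. The sketch below assumes a hypothetical 8-core x86 part with two 256-bit FMA units per core (AVX2), i.e. 32 FP32 operations per core per clock; the resulting teraflop is the same unit as any GPU's:

```python
# The same peak-FLOPs arithmetic applied to a CPU. Assumes a hypothetical
# 8-core x86 chip with two 256-bit FMA units per core (AVX2):
# 2 units x 8 FP32 lanes x 2 ops (multiply + add) = 32 FP32 ops per core per clock.

def cpu_peak_tflops(cores: int, clock_ghz: float, flops_per_core_per_clock: int = 32) -> float:
    return cores * clock_ghz * flops_per_core_per_clock / 1000.0

print(cpu_peak_tflops(8, 4.0))  # ~1.02 TF of FP32 -- the exact same unit as any GPU figure
```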

If it's a case of Teraflops relative to the amount of DRAM and bandwidth you have, then nVidia and AMD release GPUs with all sorts of different compute-to-bandwidth ratios... So we can't even use it within the same product lineup/GPU architecture, because how do you normalize/equalize that crap?

For example... Take the GeForce GT 1030 GDDR5 vs DDR4... The same 1 Teraflop GPU. - Yet the DDR4 variant performs at around half the speed.
Again... Signifying how useless Teraflops is even with GPUs of the exact same type.
https://www.techspot.com/review/1658-geforce-gt-1030-abomination/
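For anyone who wants the paper numbers behind that article: both GT 1030 variants land at roughly the same theoretical compute, but the DDR4 card has about a third of the memory bandwidth. A sketch using the approximate reference specs (boost clocks vary slightly by board partner):

```python
# Approximate reference specs of the two GT 1030 variants: near-identical
# theoretical compute, wildly different memory bandwidth.

def tflops(cuda_cores: int, boost_ghz: float) -> float:
    return cuda_cores * boost_ghz * 2 / 1000.0  # 2 FP32 ops per core per clock

def bandwidth_gb_s(bus_width_bits: int, data_rate_gtps: float) -> float:
    return bus_width_bits / 8 * data_rate_gtps  # bytes per transfer x transfers per second

variants = {
    "GT 1030 GDDR5": {"cores": 384, "boost_ghz": 1.468, "bus_bits": 64, "rate_gtps": 6.0},
    "GT 1030 DDR4":  {"cores": 384, "boost_ghz": 1.379, "bus_bits": 64, "rate_gtps": 2.1},
}

for name, v in variants.items():
    print(f"{name}: {tflops(v['cores'], v['boost_ghz']):.2f} TF, "
          f"{bandwidth_gb_s(v['bus_bits'], v['rate_gtps']):.1f} GB/s")

# GT 1030 GDDR5: ~1.13 TF, 48.0 GB/s
# GT 1030 DDR4:  ~1.06 TF, 16.8 GB/s
```

Same "one teraflop" GPU on paper, roughly half the performance in games once the memory can't feed it.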

The fact is, we cannot take two pieces of hardware with even the *slightest* difference between them and compare them based on Teraflops.

Ergo. It is a useless and bullshit metric.

Zippy6 said:

People will keep using TF's and that will never change. There's no point you getting annoyed about it.

People kept using "bits" as a way to gauge a console's capabilities. How did that turn out?

Over time, taking any single number from these gaming devices and running with it as the be-all-end-all will just result in it becoming a joke... And the individual being flat-out wrong.



--::{PC Gaming Master Race}::--