It is important to remember that comparing TFLOPS across architectures is a dangerous game.

TFLOPS, as a real-world metric, makes sense for FLOP-heavy tasks like scientific computing (e.g., solving matrix differential equations).
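
For context, the headline figure is just peak arithmetic throughput: core count x clock x 2, counting a fused multiply-add as two FLOPs. A minimal sketch in Python, using the RTX 3080's published specs (8704 CUDA cores, 1.71 GHz boost) as the example:

def peak_tflops(cores, clock_ghz, flops_per_cycle=2):
    # Theoretical peak, assuming one FMA (= 2 FLOPs) per core per cycle.
    return cores * clock_ghz * flops_per_cycle / 1000  # GFLOPS -> TFLOPS

print(peak_tflops(8704, 1.71))  # ~29.8, matching the advertised FP32 figure

Note this is a ceiling, not a measurement; real workloads rarely sustain it.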

Gaming is a mixed load, where roughly one integer operation is issued for every two floating-point ones.
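
To see why that hurts, here's a back-of-envelope sketch assuming an architecture where integer and floating-point ops issue on the same execution units (as on pre-Turing Nvidia parts), using the rough 1-int-per-2-fp ratio above rather than any measured workload:

fp_ops = 2.0   # floating-point ops in the mix
int_ops = 1.0  # integer ops competing for the same units
effective = fp_ops / (fp_ops + int_ops)
print(f"~{effective:.0%} of peak FP32 throughput left for FP work")  # ~67%

So a chip's paper TFLOPS overstates what a game can actually extract from it.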

There is also the complication that RT cores and tensor cores introduce: they accelerate specific types of operations (ray tracing, DLSS upscaling, etc.) that the headline number doesn't capture. Hence Nvidia tries to bring in metrics like "RT TFLOPS" and "Tensor TFLOPS" to accommodate.