Soundwave said:
Pemalite said:

What makes nVidia flops different or better than AMD's?

Xavier is built at 12nm, not 16nm.


The 12nm process is based on 14nm anyway, which in turn is based upon 20nm, whose BEOL it retains.

Even so, a 7nm Turing-based chip is going to be a significant improvement over that.

The Nvidia/AMD FLOPS thing, I think, stems from AMD GPUs underperforming against Nvidia GPUs while claiming the same FLOPS, or even more in some cases. People are saying this about the PS5 leaks too: that its 9 TFLOPS (or whatever it turns out to be) relative to an Nvidia GPU like a 2080 is worth more than 9 TFLOPS of a PS4-era AMD GCN part.

That's not telling us how AMD's flops are different or worse than nVidia's though, which is what I asked...

Cobretti2 said:
HoloDust said:

That 6 TFLOPS is for AI, I'd assume it comes from 48 Tensor cores, not CUDA cores. So indeed, at 10W (which assumes an 800 MHz clock), FP32 performance is around 0.6 TFLOPS.

Similarly, 2080Ti has 107.5 TFLOPS from its 544 Tensor cores and 13.5 TFLOPS of CUDA FP32 performance.

far out, talk about confusing lol.  two different measures 

Actually even more measures.
FP16, FP32, FP64, INT4, INT8, INT16, Rays/s, RTX Ops, and so on.

Then you have Texture and Pixel Fillrate and more...

Which is why anyone who uses just plain-jane teraflops is either not being comprehensive or doesn't know what they are talking about.
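For what it's worth, the theoretical peaks quoted earlier in the thread fall straight out of core counts and clocks, which is exactly why a lone teraflop number is ambiguous: you have to know *which* units and *which* precision it counts. A rough back-of-the-envelope sketch for the 2080 Ti (the ~1.545 GHz boost clock is an assumed figure):

```python
# Theoretical peak FP32: CUDA cores x 2 ops per FMA x clock (GHz) / 1000
cuda_cores = 4352       # 2080 Ti CUDA core count
boost_ghz = 1.545       # assumed boost clock for this sketch
fp32_tflops = cuda_cores * 2 * boost_ghz / 1000
print(round(fp32_tflops, 1))  # ~13.4, in line with the 13.5 TFLOPS quoted above

# Each Tensor core does a 4x4x4 matrix FMA per clock: 64 FMAs = 128 ops
tensor_cores = 544      # 2080 Ti Tensor core count
tensor_tflops = tensor_cores * 64 * 2 * boost_ghz / 1000
print(round(tensor_tflops, 1))  # ~107.6, in line with the 107.5 quoted above
```

Same chip, same clock, an 8x difference depending on whether you count CUDA FP32 or Tensor FP16 throughput - which is the whole point.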



