| HoloDust said: I honestly have no idea how you got to 6.6TFLOPS with 512 cores...if you're talking FP32. That must be either some epic cores or epic clock at which they're running. |
DLTOPS is the PMPO of the computer industry. It can be calculated in a variety of ways, and there is no doubt in my mind that Nvidia is talking up its own future product by making it seem like the equivalent of 6.6 TFLOPS for deep learning. It's only a logical conclusion if Volta is better than Pascal or Maxwell at deep learning, especially given their dismal FP64 performance on everything other than the Titan chips.
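For what it's worth, here's a quick back-of-the-envelope check of HoloDust's point, assuming the usual 2 FP32 FLOPs per core per cycle (one fused multiply-add) that Nvidia's own peak-TFLOPS figures are based on:

```python
# Peak FP32 throughput: cores * clock * FLOPs-per-cycle.
# Standard assumption: 2 FLOPs per core per cycle (one FMA).
def fp32_tflops(cores, clock_ghz, flops_per_cycle=2):
    return cores * clock_ghz * flops_per_cycle / 1000.0

# What clock would 512 cores need to actually hit 6.6 TFLOPS of FP32?
required_clock_ghz = 6.6 * 1000.0 / (512 * 2)
print(f"Required clock: {required_clock_ghz:.2f} GHz")
```

That works out to roughly 6.4 GHz, which no GPU ships at, so the 6.6 number pretty clearly isn't plain FP32 TFLOPS.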
