Kyuu said:


If you think Switch's more advanced tech will translate to better performance per TFLOPS (vs PS5), you're just setting yourself up for disappointment. Lower your expectations and you won't be disappointed.

It gets even trickier than that. Depending on the data paths available in the shading units, "maximum TFLOPS" can be counted in a way that inflates the number (with respect to gaming workloads, not, say, scientific computing). Since Ampere, for example, Nvidia counts both the FP32 cores and the shared FP32/INT32 cores toward its TFLOPS figure. This is despite the fact that in a gaming workload roughly 26% of the calculations are likely INT32 rather than FP32 (according to Nvidia's own estimate).

See: https://www.neogaf.com/threads/nvidia-ampere-teraflops-and-how-you-cannot-compare-them-to-turing.1564257/
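As a rough back-of-the-envelope sketch (my own toy model, not an official Nvidia formula), you can see how much the headline number shrinks if you simply scale it by the share of instructions that are actually FP32, using the ~26% INT32 estimate above. The 30 TFLOPS input is just an illustrative figure for a hypothetical Ampere-style GPU:

```python
# Toy model (an assumption for illustration, not Nvidia's methodology):
# INT32 instructions occupy the shared FP32/INT32 data path, so they
# can't contribute to FP32 throughput in a gaming workload.

def effective_fp32_tflops(headline_tflops: float, int32_fraction: float = 0.26) -> float:
    """Scale the headline (dual-issue) TFLOPS by the share of work that is
    actually FP32. 0.26 is the gaming-workload INT32 estimate cited above;
    the linear scaling itself is a simplification."""
    return headline_tflops * (1.0 - int32_fraction)

# Hypothetical GPU marketed at 30 TFLOPS:
print(effective_fp32_tflops(30.0))  # ~22.2 TFLOPS of FP32 actually usable by games
```

So a "30 TFLOPS" marketing number would look more like ~22 TFLOPS when counted the way pre-Ampere architectures were counted, which is exactly why cross-generation and cross-brand TFLOPS comparisons fall apart.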

And then there is the added complication of ray-tracing and "ray-tracing TFLOPS."

Using TFLOPS as a metric of real-world gaming performance (again, as opposed to, say, scientific computing) across different architectures, platforms, and especially brands is just not a good idea.

I might have missed it, though: where were people discussing performance per TFLOPS?

Last edited by sc94597 - on 11 September 2023