sc94597 said:
Kyuu said:


If you think Switch's more advanced tech will translate to better performance per TFLOPS (vs PS5), you're just setting yourself up for disappointment. Lower your expectations and you won't be disappointed.

It gets even trickier than that. Depending on the data paths available in the shading units, "maximum TFLOPS" can be counted in a way that inflates the number (with respect to gaming workloads, not, say, scientific computing). Since Ampere, for example, Nvidia has been counting both the dedicated FP32 cores and the shared FP32/INT32 cores in its TFLOPS figure, even though in a gaming workload roughly 26% of the calculations are likely to be INT32 rather than FP32 (according to Nvidia's own estimate).

See: https://www.neogaf.com/threads/nvidia-ampere-teraflops-and-how-you-cannot-compare-them-to-turing.1564257/
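To put rough numbers on that, here is a back-of-the-envelope sketch of the difference between the headline figure and the FP32 throughput actually left over for a gaming workload. The SM count and clock below are placeholder assumptions for illustration, not anyone's real specs:

```python
# Sketch: headline vs. "effective" FP32 TFLOPS on an Ampere-style GPU, where
# half of each SM's FP32 lanes are shared with INT32. All configuration
# numbers here are illustrative assumptions only.

def headline_tflops(sm_count, clock_ghz):
    # Ampere's marketing number counts all 128 FP32-capable lanes per SM
    # (64 dedicated FP32 + 64 shared FP32/INT32), 2 FLOPs per lane per clock (FMA).
    return sm_count * 128 * 2 * clock_ghz / 1000

def effective_fp32_tflops(sm_count, clock_ghz, int32_fraction=0.26):
    # INT32 instructions have to run on the shared lanes, so that slice of
    # issue bandwidth never shows up as FP32 throughput in a game.
    return headline_tflops(sm_count, clock_ghz) * (1 - int32_fraction)

# Purely hypothetical configuration: 12 SMs at 1.0 GHz.
sms, clock = 12, 1.0
print(headline_tflops(sms, clock))        # ~3.07 "marketing" TFLOPS
print(effective_fp32_tflops(sms, clock))  # ~2.27 TFLOPS of actual FP32 work
```

In other words, with Nvidia's own 26% INT32 estimate, roughly a quarter of the headline Ampere TFLOPS isn't doing FP32 work in games, which is exactly why the number can't be compared 1:1 with older architectures or other vendors.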

And then there is the added complication of ray tracing and "ray-tracing TFLOPS."

Using TFLOPS as a metric of real-world gaming performance across different architectures, platforms, and especially brands is just not a good idea.

I might have missed it, though: where were people discussing performance per TFLOPS?

Maybe I misread, and I don't feel like going back through every comment, but I'm pretty sure Soundwave suggested that the Switch 2 could do more per TFLOP than the Series S/X and PS5 (which he at least twice referred to as "RDNA 1.5") due to its more advanced architecture and technologies like DLSS. Basically, he thinks the Switch 2 can "effectively" do nearly as much with 3 or 3.5 TFLOPS as the Series S can with 4 TFLOPS.
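Just to put numbers on that claim, taking the figures in this thread at face value (Series S rounded to 4 TFLOPS, hypothetical Switch 2 at 3 or 3.5 TFLOPS):

```python
# What the claim implies in per-TFLOP terms, using the thread's own numbers.
series_s = 4.0
for switch2 in (3.0, 3.5):
    advantage = series_s / switch2 - 1
    print(f"At {switch2} TFLOPS, ~{advantage:.0%} more work per TFLOP "
          f"would be needed to match {series_s} TFLOPS")
# -> roughly +33% at 3.0 TFLOPS and +14% at 3.5 TFLOPS, before counting DLSS.
```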

I know very well that TFLOPS is a misleading metric, but it's what countless people still use to gauge how powerful a console is on paper. And they judge TFLOPS "efficiency" by how it materializes in games (resolution, fps, settings, etc.). I know it's not that simple and that countless factors go into this.

Last edited by Kyuu - on 11 September 2023