Kyuu said:
It gets even trickier than that. Depending on the data paths available in the shading units, "maximum TFLOPS" can be counted in a way that inflates the number (with respect to gaming workloads, as opposed to, say, scientific computing). Since Ampere, for example, Nvidia has counted both its dedicated FP32 cores and its shared FP32/INT32 cores toward its TFLOPS figure, even though by Nvidia's own estimate roughly 26% of the instructions in a gaming workload are INT32 rather than FP32.
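To make the arithmetic concrete, here's a back-of-the-envelope sketch (in Python, all numbers illustrative) of what that shared datapath implies: if each INT32 instruction displaces an FP32 instruction on a shared lane, the achievable FP32 throughput scales down from the headline figure by the INT32 fraction.

```python
# Rough model of "effective" FP32 throughput on an Ampere-style GPU,
# where half the shader lanes are shared FP32/INT32 and the headline
# TFLOPS figure counts every lane as FP32. A sketch under stated
# assumptions, not a real performance model.

def effective_fp32_tflops(headline_tflops: float, int32_fraction: float) -> float:
    """Estimate achievable FP32 TFLOPS for a given instruction mix.

    Assumes perfect scheduling and that each INT32 instruction
    occupies a shared lane that could otherwise issue an FP32 op.
    """
    if not 0.0 <= int32_fraction <= 0.5:
        # Shared lanes are only half the total, so an INT32 share
        # above 50% would bottleneck differently; out of scope here.
        raise ValueError("int32_fraction must be in [0, 0.5] for this model")
    return headline_tflops * (1.0 - int32_fraction)

# Hypothetical card marketed at 30 TFLOPS, with Nvidia's ~26% INT32 estimate:
print(effective_fp32_tflops(30.0, 0.26))  # ~22.2 "gaming-effective" TFLOPS
```

Even under these generous assumptions, two cards with identical headline TFLOPS can land well apart in gaming throughput once the instruction mix is factored in.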
And then there is the further complication of ray tracing and "ray-tracing TFLOPS."
Using TFLOPS as a metric of real-world gaming performance (again, as opposed to, say, scientific computing) across different architectures, platforms, and especially brands is just not a good idea.
I might have missed it, though; where were people discussing performance per TFLOPS?