FunFan said:


When Eurogamer reported that the Tegra X1 is the chip currently powering the NX dev units, many people were quick to compare it to the XBOne. Except that the extent of said scrutiny consisted of comparing only one factor: Flops. The difference in performance was deduced using only this number. It seems many people only look at how many Teraflops a console can push when sizing up its power. But GPUs aren’t Magikarps and there is more to them than flops.

That said, I want to know how the Maxwell architecture used in the Tegra X1 actually compares to the older GCN architecture found in both the PS4 and XBOne. Of course, I can’t simply take the Tegra X1 as implemented in the NX and compare it to an Xbox One or PS4, and there are simply too many factors involved in comparing an Android-based Shield tablet to a console or even a Windows PC. Luckily, there are plenty of desktop GPUs using these architectures, so a comparison between them can be made under the exact same conditions: same OS, same CPU, same RAM.

Of course, I wasn’t going to go through all the trouble of doing this research myself, so I simply went to Anandtech and compared their data on a couple of GPUs, one of each architecture and with similar Teraflop capabilities. I used the HD 7850 as the GCN representative due to it having a similar shader throughput to the PS4. From the Maxwell camp, the GTX 950 was the closest match. Here is how they stack up:

              AMD HD 7850 (GCN)    NVIDIA GTX 950 (Maxwell)
Teraflops:    1.76128 (Single)     1.573 (Single)
Bandwidth:    153.6 GB/s           106 GB/s
Max TDP:      130 Watts            90 Watts
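(If you’re wondering where those flop numbers come from, it’s just shader count × 2 ops per clock × clock speed. Quick Python sketch below; the 860 MHz and 1024 MHz reference clocks are from memory, so double-check them.)

```python
# Single-precision FLOPS = shaders x 2 (an FMA counts as 2 ops) x clock.
# Clocks are the reference base clocks, as far as I remember.
def tflops(shaders, clock_mhz):
    return shaders * 2 * clock_mhz / 1_000_000  # MFLOPS -> TFLOPS

print(tflops(1024, 860))   # HD 7850:  1.76128
print(tflops(768, 1024))   # GTX 950:  1.572864 (~1.573)
```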

That’s a 10.69% Teraflop deficit for the GTX 950 against the HD 7850 ((1.76128 − 1.573) ÷ 1.76128 ≈ 10.69%).

The HD 7850 also has 47.6GB/s more Memory Bandwidth.
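Here’s the same delta math as a quick sanity check, using nothing but the table values above:

```python
# Spec deltas straight from the table above.
hd7850_tf, gtx950_tf = 1.76128, 1.573   # single-precision Teraflops
hd7850_bw, gtx950_bw = 153.6, 106.0     # memory bandwidth in GB/s

flop_deficit = (hd7850_tf - gtx950_tf) / hd7850_tf
print(f"GTX 950 flop deficit: {flop_deficit:.2%}")         # 10.69%
print(f"Bandwidth gap: {hd7850_bw - gtx950_bw:.1f} GB/s")  # 47.6
```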

 

How do these differences translate into actual game performance? Let’s look at the Anandtech benchmark comparisons:

[Anandtech benchmark charts]

I’m feeling too lazy to calculate the exact average across all these benchmarks, but eyeballing them it’s around 30% in favor of the GTX 950. Factor in its 10.69% Teraflop deficit and I think it’s pretty safe to assume that the Maxwell architecture somehow delivers at the very least 40% more performance per flop compared to GCN (strictly, 1.30 ÷ 0.893 ≈ 1.46, so 40% is a conservative floor; see the sketch after this list). If that makes any sense, then:

You would only need a ~936 Gflops Maxwell GPU to match the performance of a 1.31 Tflops GCN GPU (1.31 ÷ 1.4). *Wink*

You would only need a ~1.31 Tflops Maxwell GPU to match the performance of a 1.84 Tflops GCN GPU.

You would need a 716.8 Gflops GCN GPU to match the performance of a 512 Gflops Maxwell GPU.
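And here’s the arithmetic behind the per-flop claim and those conversions, as a rough sketch; remember the 30% benchmark lead is my eyeball estimate, not a measured average:

```python
# Implied per-flop advantage: ~30% faster in games on ~10.7% fewer flops.
perf_ratio = 1.30                 # assumed average benchmark lead (eyeballed)
flop_ratio = 1.573 / 1.76128      # GTX 950 flops relative to HD 7850
print(f"per-flop advantage: {perf_ratio / flop_ratio - 1:.1%}")  # ~45.6%

# Conversions using the conservative 1.4x factor:
FACTOR = 1.4
print(1310 / FACTOR)  # ~936  Gflops Maxwell to match 1.31 Tflops GCN
print(1840 / FACTOR)  # ~1314 Gflops Maxwell to match 1.84 Tflops GCN
print(512 * FACTOR)   # 716.8 Gflops GCN to match 512 Gflops Maxwell
```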

^ That is kinda misleading though.

PS4 & XB1 make use of close-to-the-metal code.... if you want a comparison, you should be using benchmarks that also run on close-to-the-metal APIs.

Find some Vulkan & DX12 benchmarks and use those to compare Gflops-vs-Gflops performance instead; that will get you a MUCH more realistic picture.