
_____________________________UPDATE_______________________________

Ok people. Give me some GCN 1st-gen vs Maxwell DX-12/Vulkan benchmarks and I will update this with new analysis and everything. Remember it has to be GCN 1st-gen vs Maxwell. This isn't an NVIDIA vs AMD maximum performance dick-measuring comparison but more of an "EFFICIENCY" comparison between the GPU architectures of the PS4/XBOne and the NX's currently rumored one, specifically focused on actual performance vs claimed teraflop throughput. Since the NX rumor mill is now pointing more towards Pascal, I'll accept GCN 1st-gen vs Pascal benchmarks too.

 

_____________________________Original_______________________________

When Eurogamer reported that the Tegra X1 is the GPU currently powering the NX dev units, many people were quick to compare it to the XBOne. Except that the extent of said scrutiny consisted of comparing only one factor: Flops. The difference in performance was deduced using only this number. It seems like many people only compare the amount of Teraflops a console can push when figuring out power. But GPUs aren't Magikarps and there is more to them than flops.

That said, I want to know how the Maxwell architecture used in the Tegra X1 actually compares to the older GCN architecture found in both the PS4 and XBOne. Of course, I can't simply take the Tegra X1 as implemented on the NX and compare it to an Xbox One or PS4. And there are simply too many factors involved in taking an Android-based Shield tablet and comparing it to a console or even a Windows PC. Luckily there are plenty of desktop GPUs using these architectures, so a comparison between them can be made under the exact same conditions, meaning same OS and same CPU/RAM.

Of course I wasn't going to go through all the trouble of doing all this research myself, so I simply went to Anandtech and compared their data on a couple of GPUs, one of each architecture and with similar Teraflop capabilities. I used the HD 7850 as the GCN representative due to having a similar shader throughput to the PS4. From the Maxwell camp, the GTX 950 was the closest match. Here is how they stack up:

AMD HD 7850 (GCN)                NVIDIA GTX 950 (Maxwell)
Teraflops: 1.76128 (Single)      Teraflops: 1.573 (Single)
Memory Bandwidth: 153.6 GB/s     Memory Bandwidth: 106 GB/s
Max TDP: 130 Watts               Max TDP: 90 Watts

That means the GTX 950 has 10.69% fewer Teraflops than the HD 7850.

The HD 7850 also has 47.6 GB/s more Memory Bandwidth.
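If anyone wants to check that arithmetic, here's a quick Python sketch of it. The inputs are just the Anandtech specs from the table above; the script and its variable names are my own napkin tooling, nothing official:

```python
# Paper-spec deltas between the two cards, using the numbers from the table above.
hd7850_tflops, gtx950_tflops = 1.76128, 1.573   # single-precision TFLOPS
hd7850_bw, gtx950_bw = 153.6, 106.0             # memory bandwidth in GB/s

# How many fewer flops the GTX 950 has, relative to the HD 7850
flop_deficit = (hd7850_tflops - gtx950_tflops) / hd7850_tflops
print(f"GTX 950 flop deficit: {flop_deficit:.2%}")          # ~10.69%

# Raw memory bandwidth gap
print(f"Bandwidth gap: {hd7850_bw - gtx950_bw:.1f} GB/s")   # 47.6 GB/s
```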

 

How do these differences translate into actual game performance? Let's look at the Anandtech benchmark comparisons:

I'm feeling too lazy to calculate the average difference across all these benchmarks, but I'm going to guess it is around 30% in favor of the GTX 950. Factor in its 10.69% Teraflop disadvantage and the per-flop ratio works out to about 1.30 / 0.89 ≈ 1.46, so I think it's pretty safe to assume that the Maxwell architecture somehow delivers at the very least 40% more performance per flop compared to GCN (there's a quick sketch of this napkin math right after the list below). If that makes any sense then:

You would only need a ~940 Gflops Maxwell GPU to match the performance of a 1.31 Tflops GCN GPU. *Wink*

You would only need a ~1.3 Tflops Maxwell GPU to match the performance of a 1.84 Tflops GCN GPU.

You would need a 716.8 Gflops GCN GPU to match the performance of a 512 Gflops Maxwell GPU.
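For anyone who wants to poke at those numbers, here's the same napkin math as a Python sketch. The 30% benchmark gap is my eyeballed guess and the flat 1.4x per-flop edge is the assumption from above, not a measurement:

```python
# Estimated Maxwell-vs-GCN performance per flop, then the flop-equivalence numbers above.
bench_advantage = 1.30            # guessed ~30% GTX 950 lead in the benchmarks
flop_ratio = 1.573 / 1.76128      # GTX 950 has ~89% of the HD 7850's flops

perf_per_flop = bench_advantage / flop_ratio
print(f"Maxwell performance per flop vs GCN: {perf_per_flop:.2f}x")   # ~1.46x

# Flops needed to match a given GPU, assuming a flat 1.4x Maxwell per-flop edge
maxwell_edge = 1.4
for gcn_tflops in (1.31, 1.84):          # XBOne-class and PS4-class GCN GPUs
    print(f"{gcn_tflops} Tflops GCN ~ {gcn_tflops / maxwell_edge:.2f} Tflops Maxwell")

maxwell_gflops = 512                     # the Maxwell GPU from the last line above
print(f"{maxwell_gflops} Gflops Maxwell ~ {maxwell_gflops * maxwell_edge:.1f} Gflops GCN")
```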

 

Edit: I can only use the resources I have and I definitely don't have the resources to compare these cards down to the metal. Also, I think it's pretty clear that I'm speculating and making plenty of assumptions in the process.



“Simple minds have always confused great honesty with great rudeness.” - Sherlock Holmes, Elementary (2013).

"Did you guys expected some actual rational fact-based reasoning? ...you should already know I'm all about BS and fraudulence." - FunFan, VGchartz (2016)