
Maxwell (NX) vs GCN (PS4/XBOne): Real World Teraflop Performance Comparison

Yes, flop for flop, nVidia usually has the upper hand in DX11 benchmarks... their architectures and drivers seem to be much more efficient than AMD's.

One of the comparisons I always find interesting is this one: 390X vs 980 Ti, a 2816:176:64 vs 2816:176:96 config with similar clock speeds and a memory bandwidth advantage for AMD, yet a 40-50% advantage for nVidia in DX11 benchmarks.

http://www.anandtech.com/bench/product/1746?vs=1715

Still, DX12 benchmarks are much closer, and that is what console APIs look more like, so I'm not so sure about your conclusions.



FunFan said:

 

When Eurogamer reported that the Tegra X1 is the GPU currently powering the NX dev units, many people were quick to compare it to the XBOne. Except that the extent of said scrutiny consisted of comparing only one factor: flops. The difference in performance was deduced using only this number. It seems like many people only compare the number of Teraflops a console can push when figuring out power. But GPUs aren't Magikarps and there is more to them than flops.

That said, I want to know how the Maxwell architecture used in the Tegra X1 actually compares to the older GCN architecture found in both the PS4 and the XBOne. Of course, I can't simply take the Tegra X1 as implemented in the NX and compare it to an Xbox One or PS4. And there are simply too many factors involved in taking an Android-based Shield tablet and comparing it to a console or even a Windows PC. Luckily there are plenty of desktop GPUs using these architectures, so a comparison between them can be made under the exact same conditions, meaning the same OS and the same CPU/RAM.

Of course I wasn't going to go through all the trouble of doing all this research myself, so I simply went to Anandtech and compared their data on a couple of GPUs, one of each architecture and with similar Teraflop capabilities. I used the HD 7850 as the GCN representative because it has a shader throughput similar to the PS4's. From the Maxwell camp, the GTX 950 was the closest match. Here is how they stack up:

AMD HD 7850 (GCN): 1.76128 Teraflops (single precision), 153.6 GB/s memory bandwidth, 130 W max TDP

NVIDIA GTX 950 (Maxwell): 1.573 Teraflops (single precision), 106 GB/s memory bandwidth, 90 W max TDP

That’s a 10.69% Teraflop advantage the HD 7850 has over the GTX 950.

The HD 7850 also has 47.6GB/s more Memory Bandwidth.
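For anyone who wants to double-check where those two numbers come from, here's a quick back-of-the-envelope sketch in Python using the spec figures above (the only choice I'm making is measuring the percentage against the 7850's own figure):

    # Spec figures from the comparison above.
    hd7850_tflops, gtx950_tflops = 1.76128, 1.573   # single-precision Teraflops
    hd7850_bw, gtx950_bw = 153.6, 106.0             # memory bandwidth, GB/s

    # Flop advantage of the HD 7850, measured against the 7850's own figure.
    flop_advantage = (hd7850_tflops - gtx950_tflops) / hd7850_tflops
    print(f"HD 7850 flop advantage: {flop_advantage:.2%}")            # ~10.69%

    # Raw memory bandwidth difference.
    print(f"Bandwidth difference: {hd7850_bw - gtx950_bw:.1f} GB/s")  # 47.6 GB/s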

 

How do these differences translate into actual game performance? Let's look at the Anandtech benchmark comparisons:

[Anandtech benchmark charts omitted]

I'm feeling too lazy to calculate the average difference across all these benchmarks, but I'm going to guess it is 30% in favor of the GTX 950. Adding its 10.69% Teraflop disadvantage, I think it's pretty safe to assume that the Maxwell architecture somehow delivers at the very least 40% more performance per flop compared to GCN (rough conversion sketched after the list). If that makes any sense, then:

You would only need a 790 Gflops Maxwell GPU to match the performance of a 1.31 Tflops GCN GPU. *Wink*

You would only need a 1.1 Tflops Maxwell GPU to match the performance of a 1.84 Tflops GCN GPU.

You would need a 716.8 Gflops GCN GPU to match the performance of a 512 Gflops Maxwell GPU.
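And for anyone who wants to follow the conversion, here's a rough sketch of the scaling I'm doing, assuming that eyeballed ~40% per-flop edge for Maxwell (it's a guess from the charts above, not a measured constant):

    # Rough flop-equivalence sketch. MAXWELL_EDGE is the eyeballed ~40% per-flop
    # advantage guessed above; it is an assumption, not a measured value.
    MAXWELL_EDGE = 0.40

    def gcn_to_maxwell(gcn_gflops):
        # Maxwell Gflops needed to roughly match a GCN GPU (~40% fewer flops).
        return gcn_gflops * (1 - MAXWELL_EDGE)

    def maxwell_to_gcn(maxwell_gflops):
        # GCN Gflops needed to roughly match a Maxwell GPU (~40% more flops).
        return maxwell_gflops * (1 + MAXWELL_EDGE)

    print(f"{gcn_to_maxwell(1310):.0f}")  # ~786 Gflops (roughly 790) to match a 1.31 Tflops GCN GPU
    print(f"{gcn_to_maxwell(1840):.0f}")  # ~1104 Gflops (roughly 1.1 Tflops) to match a 1.84 Tflops GCN GPU
    print(f"{maxwell_to_gcn(512):.1f}")   # 716.8 Gflops of GCN to match a 512 Gflops Maxwell GPU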

^ That is kinda misleading though.

The PS4 & XB1 make use of close-to-the-metal code... if you want a comparison, you should be using benchmarks that also use close-to-the-metal APIs.

Find some Vulkan & DX12 benchmarks and use those to compare Gflops vs Gflops performance instead; that will get you a MUCH more realistic picture.



Mowco said:
FunFan said:

You would only need a 790 Gflops Maxwell GPU to match the performance of a 1.31 Tflops GCN GPU. *Wink*

You would only need a 1.1 Tflops Maxwell GPU to match the performance of a 1.84 Tflops GCN GPU.

You would need a 716.8 Gflops GCN GPU to match the performance of a 512 Gflops Maxwell GPU.

That's not how it works... at all. You complain about people comparing just Tflops... but this is just as bad.

Let's take the 7770 and the 750 Ti, for example.

The 7770 is a GCN card, the 750 Ti is a Maxwell card.

The 7770 has 1.28 Tflops.

The 750 Ti has 1.3 Tflops.

And as everyone knows, a Maxwell GPU needs fewer Tflops to equal a GCN GPU, so this should be a complete STOMPING.

oh wait....

Actually, the 750 Ti really does stomp the 7770:

http://www.anandtech.com/bench/product/1130?vs=1079



czecherychestnut said:
FunFan said:

The point is comparing architectures as they are implemented in current-gen consoles. Neither the PS4 nor the Xbox One uses an R7 370. The closest cards to them are the HD 7000 series.

It's the same architecture, same silicon. Both have 1024 SPs running at ~1 GHz, with a 256-bit wide memory bus running at 5.7 GHz. The difference you think you are seeing as an architectural issue is actually a driver issue.

Nope, the card you mention is a 1996.8 Gflops card. The point of this thread is not to see which card is faster overall, but flop per flop. Yes, your card performs better than a stock 7850, but it also has a big Teraflop increase.
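For reference, that figure just falls out of the usual theoretical-throughput math, 2 ops (a fused multiply-add) per shader per clock; here's a quick sketch, assuming the stock clocks (975 MHz for the R7 370, 860 MHz for the HD 7850):

    # Theoretical single-precision throughput: 2 ops per shader per clock.
    def gflops(shaders, clock_ghz):
        return 2 * shaders * clock_ghz

    print(f"{gflops(1024, 0.975):.1f}")   # R7 370 at 975 MHz  -> 1996.8 Gflops
    print(f"{gflops(1024, 0.860):.2f}")   # HD 7850 at 860 MHz -> 1761.28 Gflops (the 1.76128 Tflops in the OP)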



“Simple minds have always confused great honesty with great rudeness.” - Sherlock Holmes, Elementary (2013).

"Did you guys expected some actual rational fact-based reasoning? ...you should already know I'm all about BS and fraudulence." - FunFan, VGchartz (2016)

FunFan said:
KLAMarine said:
Link to article?

What do you mean article? This is 100% FunFan BS!

But here is the link to the Anandtech benchmark comparison tool I used, if that's what you wanted.

^ Fair note: Anandtech basically never updates their benchmarks.

(If you found newer benches you'd see the 7850-7870 perform MUCH better today than when Anandtech did those benches.)

They do benches when a card is reviewed and that's it.

AMD typically has poorly optimised drivers at launch that over time gain much more performance than nvidia cards do (which typically launch with better-optimised drivers).

That's on top of the fact that "close to the metal" APIs should be used, since consoles will make use of them.

*Basically Anandtech use nvidia-biased benchmarks and never update them.

They're a bad site to use to get a realistic view of card performance.



Mowco said:
FunFan said:

You would only need a 790 Gflops Maxwell GPU to match the performance of a 1.31 Tflops GCN GPU. *Wink*

You would only need a 1.1 Tflops Maxwell GPU to match the performance of a 1.84 Tflops GCN GPU.

You would need a 716.8 Gflops GCN GPU to match the performance of a 512 Gflops Maxwell GPU.

That's not how it works... at all. You complain about people comparing just Tflops... but this is just as bad.

Let's take the 7770 and the 750 Ti, for example.

The 7770 is a GCN card, the 750 Ti is a Maxwell card.

The 7770 has 1.28 Tflops.

The 750 Ti has 1.3 Tflops.

And as everyone knows, a Maxwell GPU needs fewer Tflops to equal a GCN GPU, so this should be a complete STOMPING.

oh wait....

Yet how much of that "6.9" is made up of actual gaming benchmarks?




JRPGfan said:

^ Fair note: Anandtech basically never updates their benchmarks.

(If you found newer benches you'd see the 7850-7870 perform MUCH better today than when Anandtech did those benches.)

They do benches when a card is reviewed and that's it.

AMD typically has poorly optimised drivers at launch that over time gain much more performance than nvidia cards do (which typically launch with better-optimised drivers).

That's on top of the fact that "close to the metal" APIs should be used, since consoles will make use of them.

*Basically Anandtech use nvidia-biased benchmarks and never update them.

They're a bad site to use to get a realistic view of card performance.

Nvidia cards also increase in performance with driver updates as time goes on, and the GTX 950 is a newer card compared to the matured HD 7850 and its drivers.

Anandtech is the one I had on hand and I think it is trustworthy.

The "close to the metal" argument is inconsequential because the NX will also be "close to the metal". This is not a PC vs console comparison, but a flop-per-flop performance analysis with the limited resources we have.




FunFan said:
JRPGfan said:

Nvidia cards also increase in performance with driver updates as time goes on, and the GTX 950 is a newer card compared to the matured HD 7850 and its drivers.

Anandtech is the one I had on hand and I think it is trustworthy.

The "close to the metal" argument is inconsequential because the NX will also be "close to the metal". This is not a PC vs console comparison, but a flop-per-flop performance analysis with the limited resources we have.

No it's not, because they don't "scale" or perform equally in close-to-the-metal benchmarks.

AMD does much better in DX12 & Vulkan than it does in DX11 games (which usually favor nvidia), on top of the fact that you're wrong about the performance increase via drivers for nvidia vs AMD. AMD usually scales higher: they launch with terrible drivers that over time improve more than nvidia's.

If you're trying to get an idea of how a 512 Gflop Tegra X1 would compare to an AMD card, you should be using DX12 + Vulkan benchmarks.



Soooooo........does this mean that Breath of the Wild will look even better on NX than Wii U? :P



JRPGfan said:

No it's not, because they don't "scale" or perform equally in close-to-the-metal benchmarks.

AMD does much better in DX12 & Vulkan than it does in DX11 games (which usually favor nvidia), on top of the fact that you're wrong about the performance increase via drivers for nvidia vs AMD. AMD usually scales higher: they launch with terrible drivers that over time improve more than nvidia's.

If you're trying to get an idea of how a 512 Gflop Tegra X1 would compare to an AMD card, you should be using DX12 + Vulkan benchmarks.

Both still increase performance as time goes on. And the idea that AMD starts worse and improves much more is a notion from the old ATI days that can't be proven true today. Current AMD drivers are really good even when the cards come out, as far as I know.

Neither of us has those "close to the metal" benchmarks, so we can't do anything about it but use DX11.


