FunFan said:
The point is comparing architectures as they are implemented in the current-gen consoles. Neither the PS4 nor the Xbox One uses an R7 370. The closest cards to them are the HD 7000 series.
|
The majority of the 300 series cards were rebadged hardware from the 200 series, which were in turn mostly rebadged parts from the 7000 series.
The Radeon R7 370 is a rebadged Radeon R7 265, which in turn is a rebadged Radeon HD 7850... which is almost identical to the PlayStation 4's GPU in capability.
torok said: You can't use the 950 as a comparison because it is a 90-watt GPU. A large portable device, such as a tablet, can use 10 to 12 W tops. So NX will probably be way behind X1, unless it has some extra stuff when docked. |
Agreed.
It's silly using the GeForce GTX 950 as some sort of gauge of Tegra's capability... for one, it also has far more memory bandwidth.
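To put a rough number on that bandwidth gap, peak theoretical bandwidth is just bus width (in bytes) times effective data rate. A sketch using the commonly cited specs (128-bit GDDR5 at 6.6 Gbps for the GTX 950, 64-bit LPDDR4 at 3.2 Gbps for Tegra X1 -- treat these figures as assumptions, not vendor-verified numbers):

```python
# Rough peak memory-bandwidth comparison.
# Specs are commonly cited figures, used here as assumptions.
def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    """Peak theoretical bandwidth = bus width in bytes * effective data rate."""
    return (bus_width_bits / 8) * data_rate_gbps

gtx950 = bandwidth_gb_s(128, 6.6)    # GDDR5 @ 6.6 Gbps effective
tegra_x1 = bandwidth_gb_s(64, 3.2)   # LPDDR4 @ 3.2 Gbps effective

print(f"GTX 950:  {gtx950:.1f} GB/s")            # ~105.6
print(f"Tegra X1: {tegra_x1:.1f} GB/s")          # ~25.6
print(f"Ratio:    {gtx950 / tegra_x1:.1f}x")     # ~4.1x
```

Roughly a 4x gap before either chip executes a single shader instruction, which is why the 950 tells you very little about Tegra.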
JRPGfan said:
^ That is kinda misleading though.
PS4 & XB1 make use of close-to-the-metal code... if you want a comparison you should be using benchmarks that also run on close-to-metal APIs.
|
Not really. That might be true for a few first-party titles, but most games stick to the stock high-level APIs these days for ease of development.
FunFan said:
Nvidia cards also gain performance from driver updates as time goes on, and the GTX 950 is a newer card compared to the matured HD 7850 and its drivers.
Anandtech is the one I had on hand, and I think it is trustworthy.
The "close to the metal" argument is inconsequential because the NX will also be "close to the metal". This is not a PC vs. console comparison, but a flop-for-flop performance analysis with the limited resources we have.
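For what the flop-for-flop framing is worth, the headline GFLOPS numbers everyone throws around come from one formula: shader cores × 2 ops per cycle (one fused multiply-add) × clock. A sketch using published reference clocks (the clocks are assumptions; real boost behavior varies):

```python
# Theoretical peak FP32 throughput: cores * 2 (FMA = 2 ops/cycle) * clock in GHz.
# Clocks are published reference/base clocks -- treat them as assumptions.
def peak_gflops(cores, clock_ghz, ops_per_cycle=2):
    return cores * ops_per_cycle * clock_ghz

print(f"HD 7850:  {peak_gflops(1024, 0.860):.0f} GFLOPS")  # ~1761
print(f"GTX 950:  {peak_gflops(768, 1.024):.0f} GFLOPS")   # ~1573 (base clock)
print(f"Tegra X1: {peak_gflops(256, 1.000):.0f} GFLOPS")   # 512
```

Note how the formula says nothing about bandwidth, fillrate, or API overhead, which is exactly why two chips with similar GFLOPS can perform very differently.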
|
Anandtech is very trustworthy... But they are also very slow.
When push comes to shove I will always place Anandtech above all other sites.
On the flip side, though, they don't tend to update benchmarks or compare drivers very often like, say, Tom's Hardware does.
AMD did a ton of work on their drivers fairly recently, overhauling them during their crusade to fix the frame-pacing issues... The 7000 series saw some pretty massive gains at that point in time, and the older Anandtech benches don't seem to reflect that.
JRPGfan said:
No it's not, because they don't "scale" or perform equally in close-to-metal benchmarks.
AMD does much better in DX12 & Vulkan than it does in DX11 games (which usually favor Nvidia), on top of the fact that you're wrong about the performance increase via drivers for Nvidia vs. AMD. AMD usually scales higher: they launch with terrible drivers that, over time, improve more than Nvidia's.
If you're trying to get an idea of how a 512 GFLOP Tegra X1 would compare to an AMD card, you should be using DX12 + Vulkan benchmarks.
|
It's mostly because of asynchronous compute that AMD does so well in DirectX 12 and Vulkan.
A 512Gflop Tegra cannot be compared to any desktop chip, plain and simple.
There is a completely different software and hardware ecosystem... Tegra is a "relative" of desktop GPUs on the PC, but it's still far from the same; it has a completely different memory hierarchy, for one.
czecherychestnut said:
What you have proved is that flops is a silly metric to use to compare performance unless you are only doing compute. But that's all you can really conclude from your data, nothing more.
|
Even for compute it's a stupid number. Not all compute scenarios use FP32.
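One concrete case: Tegra X1 can pack two FP16 operations into each FP32 lane, so the same chip is "512 GFLOPS" or "1024 GFLOPS" depending on which precision you count. The 2x FP16 packing is nVidia's published Tegra X1 feature; the function itself is just an illustration:

```python
# Peak throughput depends on precision: Tegra X1 packs 2 FP16 ops per FP32 lane,
# doubling the headline number. fp16_packing models that (illustrative only).
def peak_gflops(cores, clock_ghz, ops_per_cycle=2, fp16_packing=1):
    return cores * ops_per_cycle * clock_ghz * fp16_packing

fp32 = peak_gflops(256, 1.0)                   # 512 GFLOPS at FP32
fp16 = peak_gflops(256, 1.0, fp16_packing=2)   # 1024 GFLOPS at FP16
print(f"FP32: {fp32:.0f} GFLOPS, FP16: {fp16:.0f} GFLOPS")
```

So a single "flops" figure without a stated precision is close to meaningless for compute comparisons.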
HoloDust said:
I don't know why you think that the GPUs inside the PS4 and XOne are GCN 1st gen. PS4's GPU (due to the number of ACEs) looks like a 290X/390X cut in half with 2 CUs disabled, and XOne's is like a 7790/260X with 2 CUs disabled (or 360)... they are all 2nd gen GCN.
So you already have pretty good benchmark for 2nd Gen GCN vs Maxwell - 390X vs 980Ti:
http://www.anandtech.com/bench/product/1746?vs=1715
DX12 are Ashes and Hitman.
Now, it is hardly even debatable that nVidia generally has the more efficient architecture (looking at the 1070 @ 150 W vs. the 480 @ 150 W is pretty much all that is needed); it's just that the difference under DX12 or Vulkan (which is the best we can hope for in terms of benchmarks) is not as big as in DX11 tests, where nVidia usually smokes AMD.
|
The PS4 and Xbox One are GCN 1st gen "with a twist"; they have some features of GCN 1.1/Gen 2.
A desktop Maxwell is not the same as mobile Tegra. There are similarities for sure, but there are more differences.
nVidia's main edge could be entirely due to its tile-based rasterization; more info on that needs to surface first.
****************
I still believe that in real-world gaming, Tegra will likely be half the performance of the Xbox One in a best-case scenario, provided resolutions are kept low so as not to hit the bandwidth/fillrate limitations of the chip.
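A back-of-the-envelope on why resolution matters so much here: per-pixel memory traffic scales with pixel count times framerate, and Tegra X1's ~25.6 GB/s budget runs out fast. The bytes-per-pixel figure below is a deliberately crude illustrative assumption (covering color, depth, overdraw, and post passes), not a measurement:

```python
X1_BANDWIDTH = 25.6  # GB/s -- Tegra X1's commonly cited peak, an assumption here

def frame_traffic_gb_s(width, height, fps, bytes_per_pixel=256):
    # bytes_per_pixel is a rough stand-in for ALL per-pixel memory traffic
    # (color, depth, overdraw, post-processing) -- illustrative, not measured.
    return width * height * fps * bytes_per_pixel / 1e9

for label, (w, h, fps) in {
    "720p30":  (1280, 720, 30),
    "1080p30": (1920, 1080, 30),
    "1080p60": (1920, 1080, 60),
}.items():
    need = frame_traffic_gb_s(w, h, fps)
    print(f"{label}: ~{need:.1f} GB/s of a ~{X1_BANDWIDTH} GB/s budget")
```

Under these assumptions 1080p60 alone would blow past the chip's entire bandwidth budget, while 720p30 leaves comfortable headroom, which is the intuition behind keeping resolutions low.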