eva01beserk said:
@Pemalite
I didn't say the PS5 would have 2080 Ti performance. I said the leak claimed that a discrete PC card had 2080 Ti performance for $430. I never claimed that would be in a console.

I never mentioned the Playstation 5.
I was comparing Vega 7/Navi/2080Ti.

Navi isn't going to match a GeForce RTX 2080 Ti. Simple as that... Regardless of whether it's PC or console, Navi is Polaris' successor, not Vega's.

fatslob-:O said:
Pemalite said:

Okay...

So... Either Architecture matters or it doesn't?

Considering the market (consoles) we're talking about, it sure doesn't for the most part, because they have a lot more control on the software side ...

Well it does matter to a degree. Always has.
If all the 8th gen consoles had Rapid Packed Math, for example, then developers would use it; sadly that isn't the case, as it wasn't bolted onto Graphics Core Next until after the 8th gen base consoles launched.
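To make that concrete: Rapid Packed Math is just packing two FP16 values into one 32-bit register and operating on both of them with a single instruction. A rough sketch of the same idea using CUDA's half2 intrinsics rather than GCN ISA (purely for illustration, since it's something people can actually compile; needs an sm_53+ GPU):

#include <cuda_fp16.h>

// Each thread handles TWO half-precision elements per instruction by
// operating on a __half2: two FP16 values packed into one 32-bit register.
__global__ void packed_fp16_fma(const __half2* a, const __half2* b,
                                const __half2* c, __half2* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        // One fused multiply-add covers both packed halves at once,
        // which is where the "double rate" FP16 throughput comes from.
        out[i] = __hfma2(a[i], b[i], c[i]);
    }
}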

fatslob-:O said:
Pemalite said:

Maxwell, Pascal and Turing have a plethora of techniques that simply give nVidia a massive step up in regards to efficiency... These were all lessons that nVidia learned whilst building Tegra.

Things like asynchronous compute are in the Xbox One/Playstation 4... And on the PC that hasn't really translated into AMD having a leg up over nVidia in the gaming landscape by any meaningful measure.

Turing was arguably a step backwards in efficiency compared to Pascal, so I'm not seeing a 'massive' step up compared to before ...

As for the last line, I'm not surprised, considering PC has shit tools, so many developers continue with shit practices, and it doesn't help that AMD killed their own gfx API (Mantle) ...

PS4 is arguably a developer's wet dream, since it has goodies like Sony's in-house Razor CPU/GPU profiler, and using GNM is almost like CUDA except for graphics, so you get the benefits of a single-source programming model with more low-level access than either DX12/Vulkan could provide. Graphics programmers are a lot more productive with a single-source model like CUDA, and they get better performance since they have access to more features ...
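On the single-source point, for anyone unfamiliar with what it buys you: your GPU and CPU code live in the same file and can share types, constants and helper functions, instead of shaders being compiled and bound through a separate pipeline like DX12/Vulkan. GNM itself is behind Sony's NDA, so this minimal sketch only shows the CUDA side of the comparison:

#include <cuda_runtime.h>
#include <cstdio>

// Device code sits in the same translation unit as the host code
// and could share structs/constants with it directly.
__global__ void scale(float* data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main()
{
    const int n = 1024;
    float* d = nullptr;
    cudaMalloc(&d, n * sizeof(float));
    cudaMemset(d, 0, n * sizeof(float));

    // Kernel launched straight from C++: no separate shader compiler,
    // no reflection layer, no descriptor/binding boilerplate.
    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);
    cudaDeviceSynchronize();

    cudaFree(d);
    printf("done\n");
    return 0;
}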

Turing is actually more efficient than Pascal on an SM-to-SM comparison... However, Turing introduced a lot of new hardware designed for other tasks... Once games start to leverage Ray-Tracing more abundantly, Turing's architecture will shine far more readily.

It's a chicken-and-egg scenario.

Whether nVidia's approach was the right one... Still remains to be seen. Either way, AMD still isn't able to match Turing, despite Turing's pretty large investment in non-rasterization technologies that take up a ton of die space... Which is extremely telling.

fatslob-:O said:
Pemalite said:

Turing has Rapid Packed Math... Or rather, nVidia's version of it.

Hence why Turing's half-precision is double its single-precision in theoretical flops.

Even some Pascal parts had it.
https://www.anandtech.com/show/10222/nvidia-announces-tesla-p100-accelerator-pascal-power-for-hpc
https://www.anandtech.com/show/13282/nvidia-turing-architecture-deep-dive/4

It has also been a feature of Tegra for a while too. Rapid Packed Math, as AMD calls it, is AMD's marketing term for packing two FP16 operations together.

Only GTX Turing supports rapid packed math; the RTX Turing series has Tensor Cores, which are FAR more limited in flexibility, so they're nearly useless to game programmers ...

Well, it's early days yet. Turing is only the start of nVidia's investment in Tensor Cores.

In saying that... Routing FP16 through the Tensor cores has one massive advantage... It means that Turing can dual-issue FP16 and FP32/INT32 operations at the same time, giving the Warp Scheduler another option to keep the SM partition busy.

So there are certainly a few "Pros" to the "Cons" you have outlined.
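To put rough numbers on the double-rate claim, using the RTX 2080 Ti's commonly cited reference figures (treat these as ballpark): 4,352 CUDA cores × 2 ops per clock (FMA) × ~1.545 GHz ≈ 13.4 TFLOPs of FP32. Route FP16 through the Tensor cores at twice that rate and you land at ≈ 26.9 TFLOPs of FP16.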


fatslob-:O said:
Pemalite said:

Metro on PC is a step up over the Xbox One X version on Turing-equivalent hardware.

Consoles can punch ahead of equivalent PC hardware; that holds true whether you use nVidia's or AMD's solutions... But that precedent is already done and dusted... Despite games being built with 8th gen Graphics Core Next hardware in mind... nVidia still holds a seriously catastrophic advantage over AMD in almost every regard... With the exception of price.

Considering that the X1X matches a GTX 1070 (GP104), I'd guess the Turing equivalent would be a little bit under the GTX 1660 Ti (TU116) ... (not a surprise when looking at the die sizes of the two: 314 mm^2 vs 284 mm^2)

Nvidia holds an advantage over AMD on PC, I don't deny that much is true, but on consoles the advantages don't appear to be all that compelling to the manufacturers.

In some instances the GTX 1070 pulls ahead of the Xbox One X, sometimes rather significantly. (Remember, I also own the Xbox One X.)
The Xbox One X often matches my old Radeon RX 580... No way would I be willing to say it's matching a 1070 across the board though... Especially when the Xbox One X is generally sacrificing effects for resolution/framerate.

fatslob-:O said:

When we see technical comparisons between Switch and PS4 (which is at least theoretically 4x faster), benchmarks seem to show that code manages to run about 4x better on the PS4, so even in similar development environments GCN seems to perform similarly to the Nvidia parts relative to theoretical performance ...

Switch and PS4 are consoles with specialized graphics APIs (NVN and GNM respectively) tailored to them, but amazingly enough they pack a similar punch relative to their weight ...

I would place the Playstation 4 at more than 4x faster. It has far more functional units at its disposal, granted Maxwell is also a far more efficient architecture... The Playstation 4 also has clockspeed and bandwidth on its side.
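Back-of-the-envelope with the commonly reported clocks (Switch's docked GPU clock assumed at 768 MHz):

PS4: 1,152 shaders × 2 ops per clock × 0.8 GHz ≈ 1.84 TFLOPs.
Switch docked: 256 CUDA cores × 2 ops per clock × 0.768 GHz ≈ 0.39 TFLOPs.

That's roughly 4.7x on paper, before you even factor in the bandwidth gap. (176 GB/s of GDDR5 versus 25.6 GB/s of LPDDR4.)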

I am surprised the Switch gets as close as it does to be honest.




www.youtube.com/@Pemalite