fatslob-:O said:
Pemalite said:

Well it does matter to a degree. Always has.
If all the 8th-gen consoles had Rapid Packed Math, for example, then developers would use it; sadly that isn't the case, as it wasn't bolted onto Graphics Core Next until after the 8th-gen base consoles launched.

Then the argument should be about features instead of the architecture. No reason why we couldn't opt in for hardware extensions for the same effect ... 

Sure.
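
Since Rapid Packed Math keeps coming up: it's AMD's name for packing two FP16 operations into each 32-bit ALU slot, doubling half-precision throughput. A minimal sketch of what that looks like from the kernel side, written in CUDA with half2 intrinsics (the kernel and its names are purely illustrative, not taken from any shipping game):

```cuda
#include <cuda_fp16.h>

// Illustrative kernel: __half2 packs two FP16 values into 32 bits, and
// __hfma2 issues a single instruction that performs two fused
// multiply-adds at once. This is the same 2x-FP16-rate idea that Rapid
// Packed Math brought to GCN (Vega) after the base 8th-gen consoles shipped.
__global__ void scale_bias_fp16x2(const __half2* in, __half2* out,
                                  __half2 scale, __half2 bias, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = __hfma2(in[i], scale, bias);  // two FMAs per lane, one op
}
```

On hardware without packed math the compiler has to split this into two plain FP32-rate operations, which is why the feature only really pays off once the whole console baseline supports it.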

fatslob-:O said:

Higher performance per SM came at the expense of a 40%+ larger die area compared to its predecessor, so Nvidia is not as flawless in its execution of efficiency as you seem to believe ... 

The reason for the blow-up in die size is pretty self-explanatory: lots of die area spent on functional units for specific tasks.
It's actually a similar design paradigm to the one the Geforce FX took.

But even with the 40%+ larger die area, nVidia is still beating AMD hands down... And I am not pretending that's a good thing either.

fatslob-:O said:
As far as ray tracing is concerned, there's no reason to believe that either AMD or Intel couldn't one-up whatever Turing has, because there's still potential to improve it with new extensions such as traversal shaders, more efficient acceleration structures, and beam tracing! It's far from guaranteed that Turing will be built for the future of ray tracing when the yet-to-be-released new consoles could very well obsolete the way games design ray tracing around Turing hardware with a possibly superior feature set ... 

I agree. Never said anything to the contrary... However, we simply aren't there yet, so basically everything is speculation.

In saying that... Intel's Xe GPU hardware will have GPU-accelerated ray tracing support; how that will look, and whether it will take the approach Turing has, remains to be seen.
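
To make the "traversal shaders" point above concrete: Turing's RT cores run the BVH traversal loop in fixed-function hardware, and a traversal-shader extension would hand a programmable version of roughly that loop back to developers. Below is a heavily simplified CUDA sketch with an invented node layout, just to show the shape of what is being accelerated (a real implementation would intersect triangles at the leaves and shorten tMax):

```cuda
#include <cuda_runtime.h>  // float3; fminf/fmaxf are available in device code

// Hypothetical, simplified BVH node; real layouts are far more compact.
struct BVHNode {
    float3 bmin, bmax;  // axis-aligned bounding box
    int left;           // index of first child; -1 means this is a leaf
    int prim;           // primitive index when this is a leaf
};

// Slab test: does the ray (origin o, inverse direction invD) hit the
// node's box before tMax?
__device__ bool hitAABB(float3 bmin, float3 bmax, float3 o, float3 invD,
                        float tMax)
{
    float t0 = fminf((bmin.x - o.x) * invD.x, (bmax.x - o.x) * invD.x);
    float t1 = fmaxf((bmin.x - o.x) * invD.x, (bmax.x - o.x) * invD.x);
    t0 = fmaxf(t0, fminf((bmin.y - o.y) * invD.y, (bmax.y - o.y) * invD.y));
    t1 = fminf(t1, fmaxf((bmin.y - o.y) * invD.y, (bmax.y - o.y) * invD.y));
    t0 = fmaxf(t0, fminf((bmin.z - o.z) * invD.z, (bmax.z - o.z) * invD.z));
    t1 = fminf(t1, fmaxf((bmin.z - o.z) * invD.z, (bmax.z - o.z) * invD.z));
    return t0 <= t1 && t0 < tMax && t1 > 0.0f;
}

// The stack-based loop that RT cores execute in fixed function. Children
// are assumed to sit at indices left and left+1. A traversal-shader
// extension would let code like this customise traversal (LOD selection,
// instance culling) instead of baking it all into silicon.
__device__ int traverse(const BVHNode* nodes, float3 o, float3 invD,
                        float tMax)
{
    int stack[64];
    int sp = 0;
    stack[sp++] = 0;  // push the root node
    int hit = -1;
    while (sp > 0) {
        BVHNode n = nodes[stack[--sp]];
        if (!hitAABB(n.bmin, n.bmax, o, invD, tMax))
            continue;              // ray misses this whole subtree
        if (n.left < 0)
            hit = n.prim;          // leaf: record candidate primitive
        else {
            stack[sp++] = n.left;  // interior: descend into both children
            stack[sp++] = n.left + 1;
        }
    }
    return hit;
}
```

Everything in that loop is frozen in Turing's hardware, which is why a console with a more flexible traversal stage could plausibly obsolete how games currently structure their ray tracing around it.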

fatslob-:O said:

Turing invested just as much elsewhere, such as tensor cores, texture-space shading, mesh shaders, independent thread scheduling, variable rate shading, and some GCN features (barycentric coordinates, flexible memory model, scalar ops) that can all directly enhance rasterization as well, so it's mainstream perception that overhypes its focus on ray tracing ... 

There are other ways to bloat Nvidia's architectures in the future with features from consoles they still haven't adopted, like global ordered append and shader-specified stencil values ...

I have already expressed my opinion on all of this.
I would personally prefer it if the individual compute units were made more flexible and could thus continue to lend themselves to traditional rasterization, rather than dedicating hardware to ray tracing. But I digress.

At the end of the day, Turing is simply better than Vega or Polaris; it's not the leap many expected after the resounding success that was Pascal, but it is what it is.
Whether nVidia's gamble is the right one remains to be seen, but it's hard not to be impressed: even though die sizes have bloated outwards and performance has only marginally increased, it still resoundingly beats AMD.

And this comes from someone who has historically only bought AMD GPUs and will likely continue to do so. Even my notebook is AMD.

fatslob-:O said:
Pemalite said:

In some instances the GTX 1070 pulls ahead of the Xbox One X, and sometimes rather significantly. (Remember, I also own the Xbox One X.)
The Xbox One X often matches my old Radeon RX 580... No way would I be willing to say it matches a 1070 across the board though... Especially when the Xbox One X is generally sacrificing effects for resolution/framerate.

Generally speaking, you're going to need a GTX 1070 to get the same experience, as the X1X is pretty definitively ahead of the GTX 1060 at the same settings, and by extension the RX 580 as well ...

Doesn't generally happen.
The Xbox One X really isn't doing much that a Radeon RX 580/590 can't do... Granted, it's generally able to hit higher resolutions than those parts, likely thanks to its higher theoretical bandwidth (even if it's on a crossbar!) and lower overheads. However, it does so at the expense of image quality, with most games sitting around a medium-quality preset.

I would rather take an RX 580 and run most games at 1440P with the settings dialed up than live with the dynamic-resolution implementation most Xbox One X games use at medium-quality settings. Games simply look better that way.

Still not convinced an Xbox One X is equivalent to a 1070. Just haven't seen it push the same levels of visuals at high resolutions as that part.

In fact... in Gears of War 4, Forza 7, Fortnite, The Witcher 3, Final Fantasy XV, Dishonored 2, Resident Evil 2 and so on, a Geforce 1060 6GB turns in similar (and sometimes superior) results to the Xbox One X.

A Geforce 1070 would be a step up again.
Obviously some games will run better on one platform than another... I mean, Final Fantasy runs better on the Playstation 4 Pro than on the Xbox One X... But as a general trend with multiplats: 1060 6GB > Xbox One X > Playstation 4 Pro > Playstation 4 > Xbox One > Nintendo Switch.

fatslob-:O said:

From a GPU compute perspective the PS4 is roughly ~4.7x faster (similarly for texture sampling, depending on formats), but its geometry performance is only a little over 2x that of the Switch, so it's not a total slam dunk in theoretical performance; developers need to use features like async compute to mask the relatively low geometry performance ... 

The Switch gets as 'close' as it does (it still can't run many AAA games) because NV's driver/shader compiler team is willing to take responsibility for performance; it doesn't matter what platform you develop on for Nvidia hardware when their entire software stack is more productive ...  

For AMD, on the PC side they can't change practices as easily, so I can only imagine their envy of Sony being able to shove a whole new gfx API down every developer's throat ...  

Never argued anything to the contrary, to be honest.

The Switch does have some pros and cons. It's well known that Maxwell is generally more efficient in gaming workloads than anything Graphics Core Next provides outside of asynchronous compute, but considering the Xbox One and Playstation 4 simply have more hardware overall, it's really a moot point.
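
For reference, the ~4.7x compute figure quoted above falls straight out of the paper specs, assuming the commonly cited clocks (PS4 GPU at 800 MHz; Switch's Tegra X1 GPU at 768 MHz docked):

PS4:    18 CUs × 64 ALUs × 2 FLOPs × 0.800 GHz ≈ 1.84 TFLOPS
Switch:  2 SMs × 128 ALUs × 2 FLOPs × 0.768 GHz ≈ 0.39 TFLOPS
Ratio:  1.84 / 0.39 ≈ 4.7x

(At the Switch's ~307 MHz portable GPU clock the gap widens to roughly 11-12x, which is why docked vs. portable matters so much in these comparisons.)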



--::{PC Gaming Master Race}::--