Pemalite said:

The reason for the blow-up in die size is pretty self explanatory. Lots of functional units spent for specific tasks.
It's actually a similar design paradigm that the Geforce FX took.

But even with the 40%+ larger die area, nVidia is still beating AMD hands down... And I am not pretending that's a good thing either.

I never pretended it was a good thing; I was only trying to present a counterpoint to your perception that Nvidia somehow has a near-perfect record on efficiency ...


I never argued that it was a good thing, but I assume I've gotten my point across by now ...

Pemalite said:

I agree. Never said anything to the contrary... However we simply aren't there yet so basically everything is speculation.

In saying that... Intel's Xe GPU hardware will have GPU-accelerated Ray Tracing support; how that will look, and whether it will take the approach Turing has, remains to be seen.

That wasn't my impression, so I'm not sure if you realize this explicitly, but when betting on a new technology to be standardized, there's always a risk that an 'overengineered' solution ends up being inferior on a technical performance basis, because, like it or not, there will be a set of trade-offs depending on each competitor's strategy ...

Let's take a more sympathetic view of AMD for a moment and not disregard their achievements so far over every downside they have, because at the end of the day they still managed to pivot Nvidia a little towards their direction. By no means is AMD worse off technologically after the release of Turing ... (AMD were arguably far worse off against Pascal, because unlike with Turing, where they can be similarly competitive on a performance/area basis, they couldn't compete against Pascal on ANY metric.)
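
Just to be concrete about what I mean by a 'performance/area basis', here's a minimal sketch of the metric; the inputs are placeholder numbers for illustration only, not real benchmark results or die sizes:

# Sketch of a performance-per-die-area comparison. The figures below are
# hypothetical placeholders, NOT measured benchmark or die-size data.
def perf_per_area(relative_perf: float, die_area_mm2: float) -> float:
    """Average relative gaming performance divided by die area in mm^2."""
    return relative_perf / die_area_mm2

# Illustrative usage: a part that is 40% larger but only 30% faster
# actually comes out behind on this metric.
gpu_a = perf_per_area(relative_perf=1.00, die_area_mm2=100.0)
gpu_b = perf_per_area(relative_perf=1.30, die_area_mm2=140.0)
print(gpu_a, gpu_b)  # 0.0100 vs ~0.0093, so gpu_b is less area-efficient

The point is simply that raw performance leadership and area efficiency are separate questions.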

Meh, Xe won't be interesting to talk about until it gets closer to release, if it ever releases at all given the current situation at Intel ...

Pemalite said:

I have already expressed my opinion on all of this.
I would personally prefer it if the individual compute units were made more flexible and could thus continue to lend themselves to traditional rasterization, rather than dedicating hardware to Ray Tracing. But I digress.

At the end of the day, Turing is simply better than Vega or Polaris. It's not the leap many expected after the resounding success that was Pascal, but it is what it is.
Whether nVidia's gamble is the right one remains to be seen, but it's hard not to be impressed: even with die sizes bloating outwards and performance only marginally increasing, it still resoundingly beats AMD.

And this comes from someone who has historically only bought AMD GPUs and will likely continue to do so. Even my notebook is AMD.

Throwing more compute units at the problem isn't sustainable if we want a ray-traced future, as a look at Volta shows. I don't deny that Turing still has an advantage over AMD's offerings, but it would be prudent not to assume that Nvidia will retain this advantage forever when, ultimately, they can't solely control the direction the entire industry is headed ...

Pemalite said:

Doesn't generally happen.
The Xbox One X really isn't doing much that a Radeon RX 580/590 can't do... Granted, it's generally able to hit higher resolutions than those parts... Likely thanks to its higher theoretical bandwidth (even if it's on a crossbar!) and lower overheads. However... it does so at the expense of image quality, with most games sitting around a medium quality preset.

I would rather take an RX 580 and run most games at 1440P with the settings dialed up than the dynamic-resolution implementation most Xbox One X games take with medium quality settings. Games simply look better.

Still not convinced an Xbox One X is equivalent to a 1070. Just haven't seen it push the same levels of visuals at high resolutions as that part.

In fact... in Gears of War 4, Forza 7, Fortnite, Witcher 3, Final Fantasy XV, Dishonored 2, Resident Evil 2 and so on, a Geforce 1060 6GB turns in similar (and sometimes superior) results to the Xbox One X.

*snip*

Geforce 1070 would be a step up again.
Obviously some games will run better on one platform than another... I mean, Final Fantasy runs better on the Playstation 4 Pro than on the Xbox One X... But this is the general trend with multiplats: 1060 6GB > Xbox One X > Playstation 4 Pro > Playstation 4 > Xbox One > Nintendo Switch.

The source at hand doesn't seem all that rigorous in its analysis compared to Digital Foundry: he doesn't present a frame rate counter, omits information, and sometimes changes settings, which raises a big red flag for me. Let's use higher quality sources of data instead for better insight ...

Going by DF's video on SWBF2, you practically need a 1080Ti (maybe you could get away with a 1080?) holding 4K60FPS on ultra settings to do better than an X1X, which I imagine to be the high preset with a dynamic res between 75-100% of 4K. An X1X soundly NUKES a 1060 out of orbit at 4K MEDIUM preset settings and nearly DOUBLES the frame rate according to Digital Trends ...
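
To put that 75-100% dynamic res window in perspective, here's a quick back-of-the-envelope calculation; I'm assuming the percentage is a per-axis scale (the usual convention in DF pixel counts), so take it as a rough sketch rather than measured data:

# Rough pixel-count arithmetic for a 75-100% per-axis dynamic resolution window at 4K.
# Assumes the percentage scales each axis, which is an assumption on my part.
native_w, native_h = 3840, 2160
for scale in (0.75, 1.00):
    w, h = int(native_w * scale), int(native_h * scale)
    pixels = w * h
    print(f"{scale:.0%}: {w}x{h} = {pixels / 1e6:.2f} MPix "
          f"({pixels / (native_w * native_h):.0%} of native 4K)")
# 75%: 2880x1620 = 4.67 MPix (56% of native 4K)
# 100%: 3840x2160 = 8.29 MPix (100% of native 4K)

So even at its lower bound the X1X is still pushing more than half the pixel load of native 4K, and well above 1440p's ~3.7 MPix.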

In a DF article, it's stated that FFXV on the X1X runs at the equivalent of the 'average' preset on PC. Once again, DT was nice enough to provide information about the performance of other presets, and if we take 1440p medium settings as our reference point, a 1060 nets us ~40FPS, which makes an X1X at least neck and neck with it once you factor in the extra headroom being used for dynamic resolution ...

When we make a side-by-side DF comparison in Wolfenstein 2, the X1X runs at a lower bound of 1656p at a near-maximum preset equivalent on PC. A 1060 was nowhere near the X1X's performance profile on its best day, running at a lower resolution of 1440p on the max preset while still falling far short of the 60FPS target according to Guru3D. A 1080 is practically necessary to do away with the uncertainty of delivering a lower-than-X1X experience, because an X1X is nearly twice as fast as a 1060 in the pessimistic case ...
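
For a sense of how much extra work that 1656p lower bound represents versus the 1060's 1440p, here's the pixel ratio; I'm assuming both figures are 16:9 frames (so 2944x1656 vs 2560x1440):

# Pixel-count ratio between a 1656p lower bound and 1440p, assuming 16:9 for both.
x1x_pixels = 2944 * 1656   # ~4.88 MPix
pc_pixels = 2560 * 1440    # ~3.69 MPix
print(f"ratio: {x1x_pixels / pc_pixels:.2f}x")  # ~1.32x more pixels per frame

That's roughly a third more pixels per frame before you even account for the frame rate gap.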

I don't know why you picked Forza 7 for comparison when it's one of the more favourable titles for the X1X against a 1060. It pretty much matches PC at maximum settings while maintaining a perfect 4K60FPS with a better-than-4x MSAA solution, while a 1060 can't even come close to maintaining a locked 60FPS at max settings on the PC side, going by Guru3D's reporting of another source ... (a 1070 looks bad as well when we look at the 99th percentile)

For The Witcher 3, given that the base consoles previously delivered a preset between PC's medium and high settings, I imagine DF would put the X1X solidly within the high settings. From Techspot's data, seeing how much of a disaster The Witcher 3 at high settings and 4K is on a 980, I think we can safely say it won't end all that well for a 1060 even with dynamic res ...

With Fortnite, the game ran a little under 1800p on the X1X, while a 1060 ran moderately better at a lower resolution of 1440p according to Tom's Hardware. Both run the same EPIC preset, so they're practically neck and neck in this case as well ...

There's not enough quality data collected for Dishonored 2 to really say how the X1X compares against PC ...

In Ubisoft's Tom Clancy's The Division, the X1X was running dynamic 4K with a range between 88-100%, and past analysis reveals that it's on par with a 980Ti! (The X1X had slightly higher settings, but the 980Ti ran at full native 4K.)

In Far Cry 5, even Alex from DF said he couldn't maintain X1X settings on either a 1060 or a 580 ...

Even in the pessimistic scenario you give the 1060 waaay more credit than it's truly worth, when an X1X is more than a match for it. Is a 1070 ahead of an X1X? Sure, I might give you that, since it happens often enough, but in no way would I place an X1X below a 1060, since that doesn't seem to happen much, if ever, when we take good sources of data into account ... (the future becomes even darker for the 1060 with DX12-only titles)

Pemalite said:

Never argued anything to the contrary to be honest.

The Switch does have some pros and cons. It's well known that Maxwell is generally more efficient than anything Graphics Core Next provides in gaming workloads outside of asynchronous compute, but considering that the Xbox One and Playstation 4 generally have more hardware overall, it's really a moot point.

The Switch is a neat kit of hardware alright, but it's too bad that it may very well soon be obsolete from an architectural standpoint, which puts a spanner in Nintendo's potential plans for backwards compatibility ...