fatslob-:O said:
Pemalite said:
The reason for the blow-up in die size is pretty self-explanatory: lots of functional units spent on specific tasks. It's actually a similar design paradigm to the one the Geforce FX took.
But even with the 40%+ larger die area, nVidia is still beating AMD hands down... And I am not pretending that's a good thing either.
|
Never did pretend that it was a good thing, but I was only trying to present a counterpoint to your perception that Nvidia somehow has a near-perfect record on efficiency ...
|
I haven't stated or claimed that nVidia has a perfect record on efficiency... In fact, I even provided examples to the contrary earlier. (E.g. the Geforce FX.)
But right now, and for years prior, nVidia has dominated AMD hands down in that single facet, and that is far from a good thing for the industry.
fatslob-:O said:
That wasn't my impression, so I'm not sure if you realize this explicitly, but when betting on a new technology becoming standardized, there's always a risk of an 'over-engineered' solution ending up inferior on a technical performance basis, because, like it or not, there's a set of trade-offs depending on each competitor's strategy ...
|
At the end of the day though, it doesn't matter.
nVidia is offering something that AMD isn't, and nVidia is still faster and more efficient than AMD's equivalent offerings in almost every bracket... (Minus the crappy Geforce 1650; get the Radeon 570 every day, even if it's a bigger power hog.)
Whether Turing's hardware will pay off in the long term remains to be seen, but one thing is for sure... Lots of modders are building mods for older games that implement Ray Tracing. Minecraft, Crysis, you name it. - Even if it is more rudimentary Path Tracing.
It will be interesting to see if they implement Turing-specific features going forward.
fatslob-:O said:
Let's take a more sympathetic approach to AMD for a moment and not disregard their achievements so far for every downside they have, because at the end of the day they still managed to pivot Nvidia a little bit towards their direction, so by no means is AMD worse off technologically after the release of Turing ... (AMD were arguably far worse off against Pascal, because unlike with Turing, where they can be similarly competitive on a performance/area basis, they couldn't compete against Pascal on ANY metric.)
|
You are correct that AMD against Turing is a better match-up than AMD against Pascal; Pascal's chips were far more efficient and even smaller than their AMD equivalents. nVidia could theoretically have priced AMD out of the market entirely, which wouldn't have boded well for anyone, especially the consoles.
And I am not downplaying any of AMD's achievements... But when a company is bolting features onto a 7+ year old GPU architecture and trying to shovel it as something new, novel, and revolutionary... Well. That doesn't really sit well. Especially when AMD has a history of re-badging older GCN parts as a new series.
Take the RX 520, for example... It uses a GPU from 2013. (Oland.)
Granted, it's low-end stuff, so it doesn't matter as much, but it's highly disingenuous of AMD... Plus such a part misses out on technologies that are arguably more important for low-end hardware, like Delta Colour Compression to bolster effective bandwidth.
nVidia isn't immune to such practices either, but they have shied away from them in recent times.
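For anyone unfamiliar with why Delta Colour Compression matters so much on bandwidth-starved low-end parts, here's a minimal sketch of the idea. To be clear, this is only illustrative: real hardware DCC is a lossless, per-tile scheme handled by the memory controller, and the tile size, delta widths, and numbers here are my own assumptions, not AMD's or nVidia's actual format.

```cpp
// Illustrative-only sketch of delta colour compression: store one anchor
// pixel per tile plus small per-pixel deltas, so smooth gradients cost far
// fewer bits of memory traffic than raw 8-bit-per-channel pixels.
#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
    // An 8x8 tile of a smooth gradient, one 8-bit channel for simplicity.
    std::vector<uint8_t> tile(64);
    for (int i = 0; i < 64; ++i) tile[i] = uint8_t(100 + i / 4);

    // Encode: anchor value, then each pixel's delta from the previous pixel.
    int maxDelta = 0;
    for (int i = 1; i < 64; ++i) {
        int d = int(tile[i]) - int(tile[i - 1]);
        if (d < 0) d = -d;
        if (d > maxDelta) maxDelta = d;
    }
    // Deltas of 0..1 fit in 2 signed bits instead of 8 raw bits per pixel.
    int bitsPerDelta = maxDelta <= 1 ? 2 : 8;
    int rawBits = 64 * 8;
    int dccBits = 8 + 63 * bitsPerDelta;
    std::printf("raw: %d bits, delta-compressed: %d bits (%.1fx less traffic)\n",
                rawBits, dccBits, double(rawBits) / dccBits);
    return 0;
}
```

On a 64-bit-bus part like Oland, cutting framebuffer traffic by even 1.2-1.5x effectively buys back bandwidth the card simply doesn't have, which is why its absence stings most at the low end.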
fatslob-:O said:
Meh, Xe won't be interesting to talk about at all until it gets closer to release, if it ever releases at all given the current situation at Intel ...
|
I think it's interesting now, especially the technologies Intel is hinting at.
For IGPs, though, AMD should still hold a sizeable advantage; AMD simply dedicates more transistors to GPU duties in its IGPs than Intel is willing to.
My Ryzen notebook (ignoring the shit battery life that plagues all Ryzen notebooks!) has been amazing from a performance and support standpoint.
fatslob-:O said:
More compute units isn't sustainable if we want a ray-traced future, as a look at Volta shows, but I don't deny that Turing still has an advantage compared to AMD's offerings. However, it would be prudent not to assume that Nvidia will retain this advantage forever, when ultimately they can't solely control the direction the entire industry is headed ...
|
Ray Tracing is an inherently compute-bound workload. Turing's approach is to make such workloads more efficient by including extra hardware to reduce that load via various means. (See the sketch below.)
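To make the "compute bound" point concrete, here's a rough brute-force sketch of the work a single frame of primary rays generates. The scene, the counts, and the lack of a BVH are illustrative assumptions, not how Turing's RT cores actually operate; the point is simply that intersection testing dominates, and that traversal/intersection work is exactly what dedicated hardware tries to take off the shader cores.

```cpp
// Brute-force primary-ray count for one 1080p frame; illustrative only.
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };
struct Sphere { Vec3 c; float r; };

// Ray-sphere test: does |o + t*d - c|^2 = r^2 have a real solution?
// For an un-normalized direction the quadratic is a*t^2 + 2b*t + k = 0.
bool hit(const Vec3& o, const Vec3& d, const Sphere& s) {
    Vec3 oc{o.x - s.c.x, o.y - s.c.y, o.z - s.c.z};
    float a = d.x * d.x + d.y * d.y + d.z * d.z;
    float b = oc.x * d.x + oc.y * d.y + oc.z * d.z;
    float k = oc.x * oc.x + oc.y * oc.y + oc.z * oc.z - s.r * s.r;
    return b * b - a * k >= 0.0f; // discriminant only; no shading, no bounces
}

int main() {
    const int W = 1920, H = 1080;
    // 100 primitives, brute force: real scenes have millions, plus a BVH.
    std::vector<Sphere> scene(100, Sphere{{0.0f, 0.0f, -5.0f}, 1.0f});
    long long tests = 0;
    volatile bool sink = false; // keep the optimizer from deleting the loop
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            Vec3 o{0.0f, 0.0f, 0.0f};
            Vec3 d{(x - W / 2) / float(W), (y - H / 2) / float(W), -1.0f};
            for (const Sphere& s : scene) { sink = hit(o, d, s); ++tests; }
        }
    std::printf("%lld intersection tests for one 1080p frame of primary rays\n",
                tests);
    (void)sink;
    return 0;
}
```

That's over 200 million tests for primary rays alone, before bounces, shadows, or shading; multiply that out and you can see why nobody wants it running on general-purpose compute units.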
AMD has pretty much stuck to 64 CUs or fewer with GCN, and I don't expect that to change anytime soon... And Graphics Core Next has shown itself unable to keep pace with nVidia, whether against Maxwell, Pascal, or Turing; it's simply full of limitations for gaming-oriented workloads.
A lot of the features introduced with Vega generally didn't pan out as one would hope, which didn't help things for AMD either.
Will nVidia retain its advantage? Who knows. I don't like their approach of dedicating hardware to driving Ray Tracing... I would rather nVidia had taken a more unified approach that would have lent itself to rasterized workloads as well. - Whether this ends up being another Geforce FX moment for nVidia remains to be seen... But with Navi being a Polaris replacement and not a high-end part... I don't have my hopes up until AMD's next-gen architecture.
fatslob-:O said:
The source at hand doesn't seem all that rigorous in its analysis compared to Digital Foundry, since he doesn't present a frame rate counter, omits information, and sometimes changes settings, which raises a big red flag for me. Let's use higher-quality sources of data instead for better insight ...
|
I too would prefer a Digital Foundry comparison... But they aren't exactly pitting Geforce 1060 6GB cards against the Xbox One X; they usually run with high-end PC hardware.
fatslob-:O said:
Going by DF's video on SWBF2, you practically need a 1080 Ti (maybe you could get away with a 1080?) holding 4K/60FPS on ultra settings to do better than an X1X, which I imagine runs the high preset with a dynamic res between 75-100% of 4K. An X1X soundly NUKES a 1060 out of orbit at 4K MEDIUM preset settings and nearly DOUBLES the frame rate, according to Digital Trends ...
|
But the Xbox One X isn't doing 4K at ultra settings @ 60fps.
I am also going to hazard a guess that they are using a Geforce 1060 3GB, not the 6GB variant; the 6GB card tends to track a little closer to the RX 580, not almost 20% slower... But they never really elaborated on any of that... Nor did Digital Trends actually state anything about the Xbox One X?
fatslob-:O said:
In a DF article, it states that FFXV on the X1X runs the equivalent of the 'average' preset on PC. Once again, DT was nice enough to provide us with information about the performance of other presets, and if we take 1440p medium settings as our reference point, then a 1060 nets ~40FPS, which makes an X1X at least neck and neck with it, factoring in the extra headroom being used for dynamic resolution ...
|
...Which reinforces my point: that the Xbox One X GPU is comparable to a Geforce 1060... That's not intrinsically a bad thing either; the Geforce 1060 is a fine mid-range card that has shown itself to be capable.
In saying that... The Geforce 1060 plays at around the same level as the Radeon RX 580, which is roughly equivalent to the Xbox One X GPU anyway.
fatslob-:O said:
When we make a side-by-side DF comparison in Wolfenstein 2, the X1X is running at a lower bound of 1656p at a near-maximum preset equivalent on PC. A 1060 was nowhere near the X1X's performance profile on its best day, running at a lower resolution of 1440p on the max preset while still far from the 60FPS target, according to Guru3D. A 1080 is practically necessary to do away with the uncertainty of delivering a lower-than-X1X level of experience, because an X1X is nearly twice as fast as a 1060 in the pessimistic case ...
|
Wolfenstein 2, and by extension most id Tech-powered games, loves DRAM. Absolutely loves it... That's thanks to its texturing setup.
The Geforce 1060 3GB's performance, for example, will absolutely tank in that game.
There is a reason why a Radeon RX 470 8GB beats the Geforce 1060 there... No one in their right mind would state the RX 470 is the superior part, would they?
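To illustrate why that texturing setup punishes cards with small memory pools, here's a hedged back-of-the-envelope sketch. Every number in it (page size, working set, pool sizes) is a made-up assumption for illustration, not measured from Wolfenstein 2; the point is the cliff you fall off once the per-frame working set no longer fits in VRAM.

```cpp
// Toy model of texture-page streaming under two VRAM budgets.
#include <cstdio>

int main() {
    // Hypothetical virtual-texturing numbers; purely illustrative.
    const int pageKiB = 128;           // assumed texture page size
    const int workingSetPages = 40000; // assumed pages touched per frame (~5 GiB)
    for (int poolMiB : {2048, 5120}) { // rough usable pools on a 3GB vs 6GB card
        int poolPages = poolMiB * 1024 / pageKiB;
        // With an LRU pool and a working set larger than the pool, roughly
        // every page beyond the pool misses and must be re-uploaded over PCIe.
        int misses = workingSetPages > poolPages ? workingSetPages - poolPages : 0;
        double uploadMiB = misses * (pageKiB / 1024.0);
        std::printf("pool %4d MiB: ~%5d page misses/frame -> ~%.0f MiB of PCIe traffic\n",
                    poolMiB, misses, uploadMiB);
    }
    return 0;
}
```

The 6GB-class pool absorbs the whole working set; the 3GB-class pool thrashes every frame, which is the behaviour that drags a 1060 3GB below an RX 470 8GB without making the 470 the better GPU.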
fatslob-:O said:
I don't know why you picked Forza 7 for comparison when it's one of the more favourable titles for the X1X against a 1060. It pretty much matches PC at maximum settings while maintaining perfect 4K/60FPS performance with a better-than-4x MSAA solution, while a 1060 can't even come close to maintaining a perfect 60FPS at max settings on the PC side, going by Guru3D's reporting from another source ... (a 1070 looks bad as well when we look at the 99th percentile)
|
You need to check the dates and patch versions; successive patches for Forza 7 dramatically improved things on the PC. It wasn't the best port out of the gate.
No way in hell should you require a Geforce 1080 to have a similar experience to the Xbox One X. It's chalk and cheese; the 1080 every day.
fatslob-:O said:
For The Witcher 3, given that the base consoles previously delivered a preset between PC's medium and high settings, I imagine that DF would put the X1X resoundingly within high settings. From Techspot's data, seeing what a disaster The Witcher 3 at high settings and 4K is on a 980, I think we can safely say it won't end all that well for a 1060, even with dynamic res ...
|
A Geforce 1060 6GB does 42.9fps @ 1440p with ultra settings.
You can bet your ass that same part can do 4K/30fps at medium... without the killer that is HairWorks.
But the Xbox One X is dropping things to 1800p in taxing areas as well.
https://www.anandtech.com/show/10540/the-geforce-gtx-1060-founders-edition-asus-strix-review/10
In fact, Guru3D has The Witcher 3 at 27fps @ 4K with ultra settings, so a 1060 6GB could potentially drive better visuals than the Xbox One X version if you settle for some high details rather than ultra.
https://www.guru3d.com/articles-pages/geforce-gtx-1060-review,23.html
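Since dynamic resolution keeps coming up, here's a hedged sketch of how a controller of that sort typically behaves: render height is scaled against the last frame's GPU time so taxing areas drop towards a floor (1800p here, to mirror the drops above) instead of blowing the frame budget. The budget, frame times, and gain are all illustrative assumptions, not how the Xbox One X's implementation actually works.

```cpp
// Toy dynamic-resolution controller driven by a fake GPU-time trace.
#include <algorithm>
#include <cmath>
#include <cstdio>

int main() {
    const double budgetMs = 33.3;        // assumed 30fps frame budget
    const int maxH = 2160, minH = 1800;  // resolution window mirroring the drops above
    double h = maxH;
    // Fake GPU times for a few frames: a calm scene, then a taxing area.
    const double gpuMs[] = {30.0, 31.0, 36.0, 38.0, 34.0, 31.0, 29.0};
    for (double t : gpuMs) {
        // Cost scales ~linearly with pixel count, i.e. with height squared at a
        // fixed aspect ratio, so correct the height by the square root of the error.
        double error = budgetMs / t;
        h = std::clamp(h * std::sqrt(error), double(minH), double(maxH));
        std::printf("gpu %.1f ms -> render at %4.0fp\n", t, h);
    }
    return 0;
}
```

This is also why "the X1X runs 4K" and "the X1X drops to 1800p" are both true statements; the resolution is a moving target tied to scene load.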
fatslob-:O said:
With Fortnite, the game ran a little under 1800p on the X1X, while a 1060 ran moderately better at a lower resolution of 1440p, according to Tom's Hardware. Both run the same Epic preset, so they're practically neck and neck in this case as well ...
|
Fortnite varies from 1152p to 1728p on the Xbox One X, with 1440p being the general ballpark, according to Eurogamer... And that is right about where the Geforce 1060 also sits.
https://www.eurogamer.net/articles/digitalfoundry-2018-fortnites-new-patch-really-does-deliver-60fps
And if we look at the Techspot benchies, we can see that 4K/30fps is feasible on the 1060.
https://www.techspot.com/article/1557-fortnite-benchmarks/
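For context, it helps to translate all of those resolution figures into raw pixel counts, since that's what the GPU actually pays for. A quick sketch (assuming 16:9 widths for every mode, which is an assumption on my part):

```cpp
// Pixel counts for the render heights discussed above, relative to native 4K.
#include <cstdio>

int main() {
    // Heights pulled from the discussion above; widths assume 16:9.
    const int heights[] = {1152, 1440, 1728, 1800, 2160};
    const double fourK = 3840.0 * 2160.0;
    for (int h : heights) {
        long w = long(h) * 16 / 9;
        double px = double(w) * h;
        std::printf("%4dp = %4ld x %4d = %5.2f Mpix (%5.1f%% of 4K)\n",
                    h, w, h, px / 1e6, 100.0 * px / fourK);
    }
    return 0;
}
```

1152p is only ~28% of the pixels of native 4K, which is why a dynamic-res title can look "neck and neck" on paper while the actual per-frame workloads differ wildly.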
fatslob-:O said:
There's not enough quality data collected about Dishonored 2 to really say anything about the X1X compared against PC ...
|
Probably not. But I threw it in there... Because cats.
fatslob-:O said:
In Ubisoft's Tom Clancy's The Division, the X1X was running dynamic 4K with a range between 88-100%, and past analysis reveals that it's on par with a 980 Ti! (The X1X had slightly higher settings, but the 980 Ti ran full native 4K.)
|
Like I stated prior... Regardless of platform, there will always be a game or two which runs better regardless of power. E.g. Final Fantasy 12 running better on the Playstation 4 Pro than the Xbox One X. - Essentially the exact same hardware architecture, and the Xbox One X is vastly superior in almost every metric, yet it returns inferior results.
https://www.youtube.com/watch?v=r9IGpIehFmQ
fatslob-:O said:
In Far Cry 5, even Alex from DF said he couldn't maintain X1X settings on either a 1060 or a 580 ...
|
The 1060 does 25fps @ 4K with ultra settings.
https://www.techspot.com/article/1600-far-cry-5-benchmarks/page2.html
And just as Digital Foundry states, the Xbox One X isn't running ultra settings, but high settings with a few settings on low... And dropping the visuals from high to low nets only ~5fps, which means the engine just isn't scaling things appropriately on PC.
But hey. Like I said above.
fatslob-:O said:
Even in the pessimistic scenario you give the 1060 waaay more credit than it's truly worth, when an X1X is more than a match for it. Is a 1070 ahead of an X1X? Sure, I might give you that, since it happens often enough, but in no way would I place an X1X below a 1060, since that doesn't seem to happen much, if ever, when we take good sources of data into account ... (the future becomes even darker for the 1060 with DX12-only titles)
|
Or maybe I'm not giving the 1060 6GB enough credit?
I am not saying the Xbox One X is below a 1060; it's that they're roughly in the same ballpark. - That is... medium-quality settings at 1440p-4K, which is what I expect out of a Radeon RX 580/590, the rough Radeon equivalent of the Xbox One X's GPU.
You were the one who stated that you need a 1070 to get a similar experience to the Xbox One X. :P
I mean, the Radeon RX 590 is a faster GPU than what is in the Xbox One X... It's Polaris pushed to its clock limits, yet... the Geforce 1070 still craps on it.
https://www.anandtech.com/show/13570/the-amd-radeon-rx-590-review/6
Like I said though... I am a Radeon RX 580 owner.
I am also an Xbox One X owner.
I can do side-by-side comparisons in real time.
The Radeon RX 580 is roughly equivalent to the Geforce 1060 6GB... And by extension... Neither part is doing anything that I don't see my Xbox One X doing; they are all at roughly the same level of expected capability overall.