Pemalite said:

At the end of the day though, it doesn't matter.
nVidia is offering something that AMD isn't, and nVidia is still faster and more efficient than AMD's equivalent offerings in almost every bracket... (Minus the crappy Geforce 1650; get the Radeon 570 every day, even if it's a bigger power hog.)

Whether Turing's hardware will pay off in the long term remains to be seen, but one thing is for sure... Lots of modders are putting together mods for older games and implementing ray tracing: Minecraft, Crysis, you name it. - Even if it is more rudimentary path tracing.

It will be interesting to see if they implement Turing-specific features going forward.

On PC? That might very well be true, but on consoles? Many ISVs don't see eye to eye with them and will just gladly use the features their competitor has to offer instead ...

Also, you do not want to do ray tracing with just a depth buffer like in Crysis's case ...
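To illustrate why: depth-buffer ('screen-space') ray tracing can only march against what the camera already rendered, so off-screen and occluded geometry simply cannot be hit. A minimal 1D sketch in Python (the function name and toy depth buffer are my own, purely illustrative):

```python
# Toy 1D model of screen-space ray marching against a depth buffer.
# Real implementations march in 2D screen space, but the limitation is the
# same: rays can only hit surfaces already visible in the depth buffer.

def screen_space_trace(depth_buffer, start_x, start_depth, dx, dz, max_steps=64):
    """March a ray with step (dx, dz); return the hit pixel index, or None."""
    x, z = float(start_x), float(start_depth)
    for _ in range(max_steps):
        x += dx
        z += dz
        px = int(x)
        if px < 0 or px >= len(depth_buffer):
            return None  # ray left the screen: off-screen geometry can't be hit
        if z >= depth_buffer[px]:
            return px    # ray crossed the stored depth: count it as a hit
    return None

scanline = [5.0, 5.0, 3.0, 3.0, 9.0, 9.0]               # toy depth values
print(screen_space_trace(scanline, 0, 4.0, 1.0, -0.5))  # hits pixel 2
print(screen_space_trace(scanline, 5, 4.0, 1.0, 0.1))   # None: ray exits the screen
```

Anything behind the first depth layer or outside the frame is a guaranteed miss, which is why screen-space reflections fade out at screen edges.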

Pemalite said:

You are correct that AMD against Turing is a better fit than AMD against Pascal. Pascal's chips were far more efficient and even smaller than the AMD equivalents; nVidia could theoretically have priced AMD out of the market entirely, which wouldn't bode well for anyone, especially the consoles.


And I am not downplaying any of AMD's achievements... But when a company is bolting features onto a 7+ year old GPU architecture and trying to sell it as something new, novel and revolutionary... Well, that doesn't really sit well. Especially when AMD has a history of re-badging older GCN parts as a new series.

Take the RX 520 for example... Using a GPU from 2013. (Oland.)
Granted, it's low-end stuff, so it doesn't matter as much, but it's highly disingenuous of AMD... Plus such a part misses out on technologies that are arguably more important for low-end hardware, like delta colour compression to bolster bandwidth.

nVidia isn't immune from such practices either, but they have shied away from them in recent times.

New features make the hardware revolutionary, not new architectures, IMO ... (it makes nearly no sense for AMD to dump all the work that console developers have invested in coding for GCN when the long-term strategy is for those investments to translate to PC as well, which probably won't come to fruition until the release of next-gen consoles)

AMD is trying to adopt an Intel-style strategy where developers DON'T have to migrate, much like on x86, because they believe there'll come a point where the industry realizes it is better to value compatibility and more features than to deal with an entirely new architecture, since drivers and software are getting far more complex than ever. AMD doesn't share Nvidia's vision of the GPU landscape as a wild west, and isn't interested in the chaos that comes with it ... (Sony was punished before for believing that developers would strive to get the most out of its 'exotic' hardware, and a similar situation occurred with AMD on Bulldozer for being so bullish that multithreaded apps would offset its single-threaded bottleneck)

AMD FAILING to deliver on its obligations to Sony or Microsoft's desire for backwards compatibility would mean being automatically branded an enemy of the entire developer community, since the ISVs can't be very happy to find that their biggest investments thus far are LOST and that all their released software and tools need to be either reworked or gone for good. It is my belief that AMD's current strategy of striving for openness, collaboration, and goodwill with developers is what will lead them to the salvation they're looking for, so purely seeking out superior hardware designs would run counter to the strategy they've built up thus far. By ruining its developer-community relationships, AMD as it is now would be thrown to the wolves to fend for itself ... (where bigger players such as Intel or Nvidia would push them out of the market for good if AMD were all alone and isolated)

Both AMD's and Nvidia's strategies have their own merits and drawbacks. The former is trying to cultivate a special relationship with the ISVs, while the latter is using its incumbent position as leader in PC graphics hardware to maintain its dominance. We need to understand that the long-term relationship AMD forms with some of its customers requires a level of trust that goes above and beyond a merely corporate one: bailing AMD out once in a while should also be in their interest, because if each party seeks to be interdependent on the other, then that risk should be shared between them ... (as crazy as it might sound in the business world, both need to serve as safety nets for each other!)

It's true that they haven't seen many of the pros yet and have mostly only seen the cons so far, but know that the industry has SOME STAKE in keeping GCN. If they can't succeed in the PC hardware market, then surely they can still see success in providing console hardware and cloud gaming hardware?

Even if AMD can't take the performance crown in the PC space, maybe one day mGPU could become ubiquitous solely on console platforms, so that you can ONLY get the best gaming experience out of them compared to a generic high-end PC, where developers likely won't ever touch mGPU ... (it seems like a radical idea for AMD to somehow exclusively dominate a feature such as mGPU for gaming)

Pemalite said:

I think it's interesting now, especially the technologies Intel is hinting at.
For IGPs though, AMD should still hold a sizable advantage; AMD simply reserves more transistors in its IGPs for GPU duties than Intel is willing to.

My Ryzen notebook (Ignoring the shit battery life that plagues all Ryzen notebooks!) has been amazing from a performance and support standpoint.

Honestly, I wouldn't be surprised if Intel reserves a similar amount of die space on their highest-end parts as AMD does! Intel has a VERY HIGH amount of flexibility in their GPU architecture, so if AMD is 'wasting' a LOT of transistors, then wait until you hear about what Intel does, haha. Those guys have 4(!) different SIMD modes (SIMD4x2/8/16/32), while AMD and Nvidia each only have one, SIMD64 and SIMD32 respectively. SIMD4x2 is especially good for what is effectively a deprecated feature known as geometry shaders. And that's not all the goodies Intel hardware has. They also support framebuffer/render-target reads inside a shader (the most powerful way to do programmable blending), and they solve one fundamental issue with hardware tiled resources by being able to update the tile mappings with GPU commands instead of CPU commands! (Not being able to update the mappings from the GPU was a big complaint from developers, since it introduced a ton of latency.)
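On the SIMD-width point, here's a toy Python model of why narrower SIMD can waste fewer lanes under branch divergence (the mask and the simple two-pass cost model are my own simplifying assumptions, not measurements of any real GPU):

```python
# Toy model: under branch divergence, a SIMD group executes both sides of a
# branch if any of its lanes diverge, so wider groups waste more lane-cycles.

def wasted_lane_fraction(branch_mask, simd_width):
    """Fraction of lane-cycles idle when `branch_mask` lanes take the branch."""
    total = wasted = 0
    for i in range(0, len(branch_mask), simd_width):
        group = branch_mask[i:i + simd_width]
        taken = sum(group)
        if 0 < taken < len(group):    # divergent group: runs both paths
            total += 2 * len(group)   # two passes over the group
            wasted += len(group)      # each lane idles during exactly one pass
        else:                         # uniform group: single pass, no waste
            total += len(group)
    return wasted / total

# 64 threads where only the first 8 take an expensive branch:
mask = [1] * 8 + [0] * 56
print(wasted_lane_fraction(mask, 8))   # 0.0: every 8-wide group is uniform
print(wasted_lane_fraction(mask, 64))  # 0.5: the whole 64-wide wave diverges
```

Under this model the narrow groups isolate the divergent threads, while the single wide wave pays for both branch paths across all 64 lanes.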

Nvidia currently strikes a very good balance between flexibility and efficiency. AMD is a harder sell on the PC side with its unused extra flexibility, but Intel takes the word 'bloat' to a whole new level with its complex register file, since it shares a lot of GCN's capabilities plus its own amazing unicorn sauce that developers could only dream of exploiting ... (I wonder if Xe is going to cut out the fun stuff from Intel's Gen lineup to make it more efficient from a competitive standpoint)

Also, developers pay very little attention to Intel's graphics stack. Intel pays a lot more than AMD does for this flexibility, but the sub-par graphics stack just scares developers away from even trying ...

Pemalite said:

Ray Tracing is inherently a compute-bound scenario. Turing's approach is to try to make such workloads more efficient by including extra hardware to reduce that load via various means.

AMD has pretty much stuck to 64 CUs or below with GCN, and I don't expect that to change anytime soon... And Graphics Core Next has shown itself unable to keep pace with nVidia, whether against Maxwell, Pascal or Turing; it's simply full of limitations for gaming-oriented workloads.

A lot of the features introduced with Vega generally didn't pan out as one would hope, which didn't help things for AMD either.

Will nVidia retain its advantage? Who knows. I don't like their approach of dedicating hardware to driving ray tracing... I would rather nVidia had taken a more unified approach that would have lent itself to rasterized workloads as well. - Whether this ends up being another Geforce FX moment for nVidia and AMD remains to be seen... But with Navi being a Polaris replacement and not a high-end part... I don't have my hopes up until AMD's next-gen architecture.

I guess we feel the opposite on whether dedicated fixed-function units should be used for hardware-accelerated ray tracing, but Volta is vastly less efficient at 3DMark Port Royal, losing to even an RTX 2060 ... (I imagine Port Royal will be the gold-standard target visuals for next-gen consoles)

My hope is for consoles to double down on dedicated units for ray tracing by one-upping Turing's ray tracing feature set, because I'd care more about performance in this instance than about flexibility, since it has tons of very useful applications for real-time computer graphics ... (I wouldn't take tensor cores over FP16 support, though, since the payoff is questionable with current applications)

Hardware design decisions like these are ultimately going to be about the payoff, and I think it's worth it, since it will significantly increase visual quality at a much lower performance impact ...
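For context on what those dedicated units actually accelerate: BVH traversal boils down to enormous numbers of ray vs axis-aligned-box tests, the classic 'slab' test. A Python sketch of one such test (illustrative only; the hardware does this in fixed function, and real code handles zero direction components with IEEE infinities rather than the large-constant shortcut used here):

```python
def ray_aabb_hit(origin, inv_dir, box_min, box_max):
    """Slab test: does the ray hit the axis-aligned box? inv_dir = 1/direction."""
    tmin, tmax = 0.0, float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t1 = (lo - o) * inv
        t2 = (hi - o) * inv
        tmin = max(tmin, min(t1, t2))  # latest entry across the three slabs
        tmax = min(tmax, max(t1, t2))  # earliest exit across the three slabs
    return tmin <= tmax                # intervals overlap => the ray hits the box

# Unit box centred at (5, 0, 0); rays travel along +x (1e30 stands in for 1/0):
print(ray_aabb_hit((0, 0, 0), (1.0, 1e30, 1e30), (4.5, -0.5, -0.5), (5.5, 0.5, 0.5)))  # True
print(ray_aabb_hit((0, 2, 0), (1.0, 1e30, 1e30), (4.5, -0.5, -0.5), (5.5, 0.5, 0.5)))  # False
```

Each ray may run dozens of these tests while descending the BVH, which is why doing them on general-purpose compute units eats so much of the frame budget.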

Pemalite said:

But the Xbox One X isn't doing 4K, ultra settings @60fps.

I am also going to hazard a guess that they are using a Geforce 1060 3GB, not the 6GB variant; the 6GB card tends to track a little closer to the RX 580, not almost 20% slower... But they never really elaborated on any of that... Nor did Digital Trends actually state anything about the Xbox One X?

I don't believe so, going by other independent tests verifying the 1060's 4K ultra numbers. When I did a quick sanity check, Guru3D got 28 FPS at 4K ultra while DT got 25 FPS at the same settings, so the numbers aren't too dissimilar. I maintain that DT were indeed testing the 6GB version of the 1060, so it's likely the 1060 just does badly at 4K in this game regardless, but it's a massive win for the mighty X1X nonetheless ...

Pemalite said:

...Which reinforces my point. That the Xbox One X GPU is comparable to a Geforce 1060... That's not intrinsically a bad thing either, the Geforce 1060 is a fine mid-range card that has shown itself to be capable.

In saying that... The Geforce 1060 is playing around the same level as the Radeon RX 580, which is roughly equivalent to the Xbox One X GPU anyway.

It's at minimum a 1060 ...

Pemalite said:

Wolfenstein 2, and by extension most id Tech powered games, love DRAM. Absolutely love it... That's thanks to its texturing setup.
The Geforce 1060 3GB's performance, for example, will absolutely tank in that game.

There is a reason why a Radeon RX 470 8GB is beating the Geforce 1060... No one in their right mind would state the RX 470 is the superior part, would they?

@Bold I don't know? Maybe it is, when we take a look at newer games with DX12/Vulkan, where it's getting slower relative to the comparable AMD parts ... (I know I wouldn't want to be stuck with a 1060 for the next 2 years, because graphics code is starting to get more hostile toward Maxwell/Pascal, with a flood of DX12/Vulkan-only titles practically on the horizon)

Even when we drop down to 1440p, the 1060 still CAN'T hit 60FPS like the X1X; what's more, an RX 580 equipped with the same amount of memory as Vega tanks harder than you'd think in comparison to the X1X ...

Pemalite said:

You need to check the dates and patch version; successive patches for Forza 7 dramatically improved things on the PC, it wasn't the best port out of the gate.

No way in hell should you require a Geforce 1080 to have a similar experience to the Xbox One X. It's chalk and cheese, 1080 every day.

I don't believe it was a patch that dramatically improved performance in Forza 7; I think it was a driver update from Nvidia that did the trick. But you'll still need a 1070 either way to hit 60FPS at the 99th percentile and get a similar experience, because a 1060 is still noticeably slower ... (the 580 wasn't all that competitive, so massive kudos to Turn 10 for leveraging the true strength of console optimizations)

Pemalite said:


A Geforce 1060 6Gb is doing 42.9fps @ 1440P with ultra settings. 

You can bet your ass that same part can do 4K, 30fps at medium... Without the killer that is HairWorks.
But the Xbox One X is dropping things to 1800P in taxing areas as well.

https://www.anandtech.com/show/10540/the-geforce-gtx-1060-founders-edition-asus-strix-review/10

In fact, Guru3D says the Witcher 3 runs at 27fps @ 4K + ultra settings, so a 1060 6GB could potentially drive better visuals than the Xbox One X version if you settle for high details rather than ultra.

https://www.guru3d.com/articles-pages/geforce-gtx-1060-review,23.html

Even when we turn down to MEDIUM quality, which disables Hairworks entirely, a 980 is still very much getting eaten alive. Although Guru3D's numbers are strangely on the higher side, I don't think many will lose sleep over it, since a 1060 is very like-for-like with the X1X ...

Pemalite said:

Fortnite will vary from 1152P - 1728P on the Xbox One X with 1440P being the general ballpark, according to Eurogamer... And that is about right where the Geforce 1060 will also sit.

https://www.eurogamer.net/articles/digitalfoundry-2018-fortnites-new-patch-really-does-deliver-60fps

And if we look at the techspot benchies, we can see that 4k, 30fps is feasible on the 1060.
https://www.techspot.com/article/1557-fortnite-benchmarks/

It 'depends' on which 'Fortnite' we're talking about. I am talking about its more demanding single-player "Save the World" campaign, while you are talking about its more pedestrian "Battle Royale" mode, which is inherently designed with different bottleneck characteristics ... (it's the reason Switch/Android/iOS can run 'Fortnite' at all when they strip away the campaign mode)

Since the X1X ends up being roughly equivalent to the 1060 in both modes, you're getting a good experience regardless ...

Pemalite said:

Like I stated prior... Regardless of platform, there will always be a game or two which runs better regardless of power. E.g. Final Fantasy 12 running better on PlayStation 4 Pro than on the Xbox One X. - Essentially the exact same hardware architecture, and the Xbox One X is vastly superior in almost every metric, yet it returns inferior results.

https://www.youtube.com/watch?v=r9IGpIehFmQ

@Bold Depends, and I don't deny these being the more uncommon cases, but what you may think of as 'pathological' is more common than you realize, so it's definitely going to skew things some in the X1X's favour ...

Pemalite said:

The 1060 is doing 25fps @4k with Ultra settings.
https://www.techspot.com/article/1600-far-cry-5-benchmarks/page2.html

And just like Digital Foundry states, the Xbox One X isn't running ultra settings, but high settings with a few low settings... But dropping the visuals from high to low nets only ~5 fps, which means the engine just isn't scaling things appropriately on PC.

But hey. Like what I said above.

But take a look at the 99th percentile (the lowest 1% of frames), which explains why Alex wasn't able to hold a steady 30FPS on the 1060 like the X1X did with the same settings ...

Pemalite said:

Or am I not giving the 1060 6GB enough credit?
I am not saying the Xbox One X is below a 1060. It's that it's roughly in the same ballpark. - That is... medium quality settings at 1440P-4K, which is what I expect out of a Radeon RX 580/590, which is roughly equivalent to the Xbox One X in terms of GPU capability on the Radeon side.

You were the one who stated that you need a 1070 to get a similar experience to the Xbox One X. :P

I mean, the Radeon RX 590 is a faster GPU than what is in the Xbox One X... It's Polaris pushed to its clock limits, yet... the Geforce 1070 still craps on it.
https://www.anandtech.com/show/13570/the-amd-radeon-rx-590-review/6

Like I said though... I am a Radeon RX 580 owner. 
I am also an Xbox One X owner. 

I can do side by side comparisons in real time.

If I had to give a distribution of the X1X's performance in AAA games, here's how it would look ...

- 10% chance it performs more than 10% below a 1060.
- 50% chance it performs within +/-5% of a 1060.
- 20% chance it performs better than a 1060 by a margin of 10%.
- 20% chance it performs as good as or better than a 1070.
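Taking that distribution at face value, the expected value is easy to compute. One assumption is mine: the "as good or better than a 1070" bucket is modelled as +30% over a 1060, which is my own ballpark, not a figure from this thread:

```python
# Expected X1X performance relative to a GTX 1060, using the probabilities
# above. The 1070-class bucket is modelled as +30% (my assumption, not data).
outcomes = [
    (0.10, -10),  # 10% chance: ~10% slower than a 1060
    (0.50,   0),  # 50% chance: within +/-5% of a 1060 (modelled as even)
    (0.20, +10),  # 20% chance: ~10% faster than a 1060
    (0.20, +30),  # 20% chance: roughly 1070-class (assumed +30%)
]
assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9  # probabilities sum to 1
expected = sum(p * delta for p, delta in outcomes)
print(f"Expected: {expected:+.0f}% vs a GTX 1060")
```

Under those assumptions the X1X lands around 7% faster than a 1060 on average, which fits the "roughly 1060-class, occasionally 1070-class" picture.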

A 1060 will certainly give you a worse experience than an X1X by a fair margin, but when we take the more uncommon cases into account, an X1X isn't all that far behind a 1070. Maybe 10-15% slower on average?

A 1060 is overrated anyway, since it's not going to stand much of a chance against either a 580 or an X1X over the next 2 years as new releases come out ... (a 1070 will be needed in case the 1060 starts tanking)