Pemalite said: At the end of the day though, it doesn't matter.
On PC? That might very well be true, but on consoles? Many ISVs don't see eye to eye with them and will just gladly use the features their competitor has to offer instead ...
Also, you do not want to do ray tracing with just a depth buffer like in Crysis's case ...
Pemalite said: You are correct that AMD against Turing is a better fit rather than AMD against Pascal, Pascal's chips were far more efficient and even smaller than the AMD equivalent, nVidia could have theoretically priced AMD out of the market entirely, which wouldn't bode well for anyone, especially the consoles.
New features make the hardware revolutionary, not new architectures, IMO ... (it makes nearly no sense for AMD to dump all the work that console developers have invested in coding for GCN when it's a long-term strategy for these investments to translate to PC as well, which probably won't come to fruition until the release of next-gen consoles)
AMD is trying to adopt an Intel-like strategy where developers DON'T have to move around so much, just as on x86, because they believe there'll come a point where the industry realizes it is better to value compatibility and more features than to deal with an entirely new architecture altogether, since drivers and software are getting far more complex than ever. AMD doesn't share Nvidia's vision of the GPU landscape as a wild west and isn't interested in the chaos that comes with it ... (Sony was punished before for arrogantly believing that developers would strive to get the most out of its 'exotic' hardware, and a similar situation occurred with AMD on Bulldozer, where it was so bullish that multithreaded apps would exonerate its single-threaded bottleneck)
AMD FAILING to deliver on its obligations to Sony, or on Microsoft's desire for backwards compatibility, would mean being automatically branded an enemy of the entire developer community, since the ISVs can't be very happy to LOSE their biggest investments to date once they figure out that all their released software and tools need to be either reworked or gone for good. It is my belief that AMD's current strategy of striving for openness, collaboration, and goodwill with developers is what will lead them to the salvation they're looking for, so purely seeking out superior hardware designs would run counter to the strategy they've built up thus far. By ruining its developer-community relationships, AMD as it is now would be thrown to the wolves to fend for itself ... (where bigger players such as Intel or Nvidia would push it out of the market for good if AMD were all alone and isolated)
Both AMD's and Nvidia's strategies have their own merits and drawbacks. The former is trying to elevate and cultivate a special relationship between itself and the ISVs, while the latter is using its incumbent position as the leader in PC graphics hardware to maintain its dominance. We need to understand that the long-term relationship AMD forms with some of its customers requires a level of trust that goes above and beyond a merely corporate one: bailing AMD out once in a while should also be in their interest, because if each party seeks to be interdependent on the other, then that risk should be shared between them ... (as crazy as it might sound in the business world, both need to serve as safety nets for each other!)
It's true that they haven't seen many of the pros yet and have mostly only seen the cons so far, but know this: the industry has SOME STAKE in keeping GCN. If it can't succeed in the PC hardware market, then surely it can still see success in providing console hardware and cloud gaming hardware?
Even if AMD can't take the performance crown in the PC space, maybe one day mGPU could become ubiquitous solely on console platforms, so that you can ONLY get the best gaming experience out of them compared to a generic high-end PC, where developers will likely never touch mGPU ... (it seems like a radical idea for AMD to somehow exclusively dominate a feature such as mGPU for gaming)
Pemalite said: I think it's interesting now, especially the technologies Intel is hinting at.
Honestly, I wouldn't be surprised if Intel reserves a similar amount of die space on their highest-end parts as AMD does! Intel has a VERY HIGH amount of flexibility in their GPU architecture, so if AMD is 'wasting' a LOT of transistors, wait until you hear about what Intel does, haha. Those guys have 4(!) different SIMD modes, SIMD4x2/8/16/32, while AMD and Nvidia each only have one, SIMD64 and SIMD32 respectively. SIMD4x2 is especially good for what is effectively a deprecated feature known as geometry shaders. That's not all there is to Intel hardware, though. They also support framebuffer/render target reads inside a shader (the most powerful way to do programmable blending) and solve one fundamental issue with hardware tiled resources by being able to update the tile mappings with GPU commands instead of CPU commands! (not being able to update the mappings from the GPU was a big complaint from developers since it introduced a ton of latency)
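To make the SIMD-width point concrete, here's a rough back-of-the-envelope sketch (my own illustration, not from any vendor doc) of why having narrower SIMD modes helps small batches of work, like the small primitive counts geometry shaders tend to emit: a wide wave dispatched for only a handful of items leaves most of its lanes idle.

```python
import math

def lane_utilization(work_items, simd_width):
    """Fraction of dispatched SIMD lanes doing useful work.

    Illustrative only: real GPUs also juggle occupancy, register
    pressure and divergence, but padding waste alone shows why a
    narrow SIMD mode helps small batches.
    """
    waves = math.ceil(work_items / simd_width)   # dispatches needed
    return work_items / (waves * simd_width)     # useful / total lanes

# A small batch of 20 items (geometry-shader-style output):
for width in (8, 16, 32, 64):
    print(f"SIMD{width}: {lane_utilization(20, width):.1%} of lanes busy")
```

With only 20 items, a SIMD8 machine wastes just 4 padded lanes, while a SIMD64 machine leaves 44 of 64 lanes idle, which is the kind of case where Intel's narrow modes shine.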
Nvidia currently strikes a very good balance between flexibility and efficiency. AMD is a harder sell on the PC side with its unused extra flexibility, but Intel takes the word 'bloat' to a whole new level with its complex register file, since it shares a lot of GCN's capabilities plus its own amazing unicorn sauce that developers could only dream of exploiting ... (I wonder if Xe is going to cut out the fun stuff from Intel's GEN lineups to make it more efficient from a competitive standpoint)
Developers also pay very little attention to Intel's graphics stack. Intel pays a lot more than AMD does for this flexibility, but the sub-par graphics stack just scares developers away from even trying ...
Pemalite said: Ray Tracing is inherently a compute bound scenario. Turing's approach is to try and make such workloads more efficient by including extra hardware to reduce that load via various means.
I guess we feel the opposite about whether dedicated fixed-function units should be used for hardware-accelerated ray tracing, but Volta is vastly less efficient in 3DMark Port Royal, where it performs like an RTX 2060 ... (I imagine Port Royal will be the gold-standard target for next-gen console visuals)
My hope is for consoles to double down on dedicated units for ray tracing by one-upping Turing's ray tracing feature set, because I'd care more about performance in this instance than worry about flexibility, since it has tons of very useful applications for real-time computer graphics ... (going the other way, I wouldn't take tensor cores over FP16 support, since the payoff is questionable with current applications)
Hardware design decisions like these are ultimately going to be about the payoff, and I think it's worth it, since it will significantly increase visual quality at a much lower performance impact ...
Pemalite said: But the Xbox One X isn't doing 4K, ultra settings @60fps.
I don't believe so, going by other independent tests verifying the 1060's 4K ultra numbers. When I did a quick sanity check, guru3D got 28 FPS at 4K ultra while DT got 25 FPS at the same settings, so the numbers aren't too dissimilar. I maintain that DT were indeed testing the 6GB version of the 1060, so it's likely that the 1060 simply does badly at 4K in this game regardless, but a massive win for the mighty X1X nonetheless ...
Pemalite said: ...Which reinforces my point. That the Xbox One X GPU is comparable to a Geforce 1060... That's not intrinsically a bad thing either, the Geforce 1060 is a fine mid-range card that has shown itself to be capable.
It's at minimum a 1060 ...
Pemalite said: Wolfenstein 2 and by extension most Id Tech powered games love DRAM. Absolutely love it... That's thanks to it's texturing setup.
@Bold I don't know? Maybe it is, when we take a look at the newer DX12/Vulkan games, where it's getting slower relative to the comparable AMD parts ... (I know I wouldn't want to be stuck with a 1060 for the next 2 years, because graphics code is starting to get more hostile towards Maxwell/Pascal, with a flood of DX12/Vulkan-only titles practically around the corner)
Even when we drop down to 1440p, the 1060 still CAN'T hit 60 FPS like the X1X can, and what's more, an RX 580 equipped with the same amount of memory as Vega tanks harder than you'd think in comparison to the X1X ...
Pemalite said: You need to check the dates and patch version, later successive patches for Forza 7 dramatically improved things on the PC, it wasn't the best port out of the gate.
I don't believe it was a patch that dramatically improved performance for Forza 7; I think it was a driver update from Nvidia that did the trick. But you'll still need a 1070 either way to hit 60 FPS at the 99th percentile and get a similar experience, because a 1060 is still noticeably slower ... (the 580 wasn't all that competitive either, so massive kudos to Turn 10 for leveraging the true strength of console optimizations)
Pemalite said:
Even when we turn down to MEDIUM quality, which disables Hairworks entirely, a 980 is still very much getting eaten alive. And although guru3D's numbers are strangely on the high side, I don't think many will lose sleep over it, since a 1060 is very like-for-like with the X1X ...
Pemalite said: Fortnite will vary from 1152P - 1728P on the Xbox One X with 1440P being the general ballpark, according to Eurogamer... And that is about right where the Geforce 1060 will also sit.
It 'depends' on the 'Fortnite' we're talking about. I am talking about its more demanding single-player "Save the World" campaign, while you are talking about its more pedestrian "battle royale" mode, which is inherently designed with different bottleneck characteristics ... (it's the reason the Switch/Android/iOS can run 'Fortnite' at all once the campaign mode is stripped away)
Since X1X ends up being roughly equivalent to the 1060 in both modes, you're getting a good experience regardless ...
Pemalite said: Like I stated prior... Regardless of platform there will always be a game or two which will run better regardless of power. I.E. Final Fantasy 12 running better on Playstation 4 Pro than the Xbox One X. - Essentially the exact same hardware architecture, but the Xbox One X is vastly superior in almost every metric but returns inferior results.
@Bold Depends, and I don't deny these are the more uncommon cases, but what you may think of as 'pathological' is more common than you realize, so it's definitely going to skew things some in the X1X's favour ...
Pemalite said: The 1060 is doing 25fps @4k with Ultra settings.
But take a look at that 99th percentile (or the 1% lowest frames), which explains why Alex wasn't able to hold a steady 30 FPS on the 1060 like the X1X did at the same settings ...
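For anyone unfamiliar with the metric, here's one common way the "1% low" figure gets computed from raw frame times: average the slowest 1% of frames and convert back to FPS. This is my own illustrative sketch; review sites differ in exact methodology (some report the single 99th-percentile frame time instead).

```python
def one_percent_low_fps(frame_times_ms, pct=1.0):
    """Average FPS over the slowest pct% of frames.

    Hypothetical helper, not any site's exact formula: averages the
    worst frames rather than taking a single percentile sample.
    """
    worst = sorted(frame_times_ms, reverse=True)   # slowest frames first
    n = max(1, int(len(worst) * pct / 100))        # how many frames count
    avg_ms = sum(worst[:n]) / n
    return 1000.0 / avg_ms

# 99 smooth ~60 FPS frames plus one 33.3 ms hitch:
times = [16.7] * 99 + [33.3]
print(f"average FPS: {1000 * len(times) / sum(times):.1f}")
print(f"1% low FPS:  {one_percent_low_fps(times):.1f}")
```

The point of the metric: a single hitchy frame barely moves the average, but it dominates the 1% low, which is why the average can look fine while the experience stutters.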
Pemalite said: Or I am not giving the 1060 6GB enough credit? |
If I had to give a distribution of the X1X's performance in AAA games, here's how it would look ...
10% chance that its performance will be lower than a 1060's by 10%. 50% chance that it'll perform within ±5% of a 1060. 20% chance that it'll perform better than a 1060 by a margin of 10%, and a 20% chance that it'll perform as well as or better than a 1070 ...
A 1060 will certainly give you a worse experience than an X1X by a fair margin, but when we take the more uncommon cases into account, an X1X isn't all that far behind a 1070. Maybe 10-15% slower on average?
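That distribution can be turned into a quick back-of-the-envelope expected value. The bucket probabilities are the ones from my distribution; normalizing a 1060 to 1.0 and pegging a 1070 at roughly 1.3x a 1060 are my own rough assumptions.

```python
# (probability, X1X performance relative to a 1060 = 1.0)
# Buckets are from the distribution above; the 1070 ~= 1.3x figure
# is an assumed ballpark, not a benchmarked number.
buckets = [
    (0.10, 0.90),  # ~10% slower than a 1060
    (0.50, 1.00),  # within +/-5% of a 1060 (taking the midpoint)
    (0.20, 1.10),  # ~10% faster than a 1060
    (0.20, 1.30),  # roughly 1070-class
]

assert abs(sum(p for p, _ in buckets) - 1.0) < 1e-9  # probabilities sum to 1

expected = sum(p * perf for p, perf in buckets)
print(f"expected X1X performance vs a 1060: {expected:.2f}x")  # -> 1.07x
```

So under these assumptions the X1X lands a few percent ahead of a 1060 on average, with the 1070-class outliers doing most of the lifting.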
A 1060 is overrated anyway, since it's not going to stand much of a chance against either a 580 or an X1X over the next 2 years as new releases come out ... (a 1070 will be needed in case the 1060 starts tanking)