Navi Made in Colab with sony, MS still Using it?


Pricing of Xbox VS PS5

Xbox +$150 > PS5 — 0 (0.00%)

Xbox +$100 > PS5 — 5 (14.71%)

Xbox +$50 > PS5 — 4 (11.76%)

PS5 = Xbox with slight performance boost — 7 (20.59%)

PS5 = Xbox with no performance boost — 2 (5.88%)

Xbox will not have the pe... — 3 (8.82%)

Still too early, wait for MS PR — 13 (38.24%)

Total: 34
fatslob-:O said:
Pemalite said:

The reason for the blow-up in die size is pretty self explanatory. Lots of functional units spent for specific tasks.
It's actually a similar design paradigm that the Geforce FX took.

But even with the 40%+ larger die area, nVidia is still beating AMD hands down... And I am not pretending that's a good thing either.

Never did pretend that it was a good thing but I was only trying to present a counterpoint to your perception that Nvidia somehow has a near perfect record on efficiency ... 

I haven't stated or announced that nVidia has a perfect record on efficiency... In fact, I even provided examples to the contrary earlier. (e.g. the GeForce FX.)
But right now, and for years prior, nVidia has hands down dominated AMD in that single facet, and that is far from a good thing for the industry.

fatslob-:O said:

That wasn't my impression, so I'm not sure if you realize this explicitly, but when betting on a new technology to be standardized, there's always going to be a stake of an 'overengineered' solution ending up inferior on a technical performance basis, because like it or not there's going to be a set of trade-offs depending on each competitor's strategy ...

At the end of the day though, it doesn't matter.
nVidia is offering something that AMD isn't, nVidia is still faster and more efficient than AMD's equivalent offerings at almost every bracket... (Minus the crappy Geforce 1650, get the Radeon 570 every day, even if it's a bigger power hog.)

Whether Turing's hardware will pay off in the long term remains to be seen, but one thing is for sure... Lots of modders are building mods for older games that implement ray tracing; Minecraft, Crysis, you name it. - Even if it is more rudimentary path tracing.

It will be interesting to see whether developers implement Turing-specific features going forward.

fatslob-:O said:

Let's take a more sympathetic approach to AMD for a moment to not disregard their achievements so far for every downside they have because at the end of the day they still managed to pivot Nvidia a little bit towards their direction so by no means is AMD worse off than they were technologically speaking after the release of Turing ... (AMD were arguably far worse off against Pascal because unlike Turing where they could be similarly competitive on a performance/area basis, they couldn't compete against Pascal on ANY metric)

You are correct that AMD against Turing is a better fit than AMD against Pascal. Pascal's chips were far more efficient and even smaller than the AMD equivalents; nVidia could have theoretically priced AMD out of the market entirely, which wouldn't bode well for anyone, especially the consoles.

And I am not downplaying any of AMD's achievements... But when a company is bolting features onto a 7+ year old GPU architecture and trying to pass it off as something new, novel and revolutionary... Well. That doesn't really sit well. Especially when AMD has a history of re-badging older GCN parts as a new series.

Take the RX 520 for example... It uses a GPU from 2013. (Oland.)
Granted, it's low-end stuff so it doesn't matter as much, but it's highly disingenuous of AMD... Plus such a part misses out on technologies that are arguably more important for low-end hardware, like delta colour compression to bolster bandwidth.

nVidia isn't immune from such practices either, but they have shied away from such practices in recent times.

fatslob-:O said:

Meh, Xe won't be interesting at all to talk about until it gets closer to release or if it ever releases at all under the current situation with Intel ... 

I think it's interesting now, especially the technologies Intel is hinting at.
For IGPs though, AMD should still hold a sizable advantage; AMD simply reserves more transistors in its IGPs for GPU duties than Intel is willing to.

My Ryzen notebook (Ignoring the shit battery life that plagues all Ryzen notebooks!) has been amazing from a performance and support standpoint.

fatslob-:O said:

More compute units isn't sustainable if we want a ray traced future when we take a look at Volta but I don't deny that Turing still has an advantage compared to AMD's offerings, however it would be prudent to not assume that Nvidia will forever retain this advantage when ultimately they can't solely control the direction the entire industry is headed towards ... 

Ray Tracing is inherently a compute bound scenario. Turing's approach is to try and make such workloads more efficient by including extra hardware to reduce that load via various means.

AMD has pretty much stuck to 64 CUs or below with GCN, and I don't expect that to change anytime soon... Graphics Core Next has shown itself unable to keep pace with nVidia, whether it's Maxwell, Pascal or Turing; it's simply full of limitations for gaming-oriented workloads.

A lot of the features introduced with Vega generally didn't pan out as one would hope, which didn't help things for AMD either.

Will nVidia retain its advantage? Who knows. I don't like their approach of dedicating hardware to driving ray tracing... I would rather nVidia had taken a more unified approach that would have lent itself to rasterized workloads as well. - Whether this ends up being another GeForce FX moment for nVidia and AMD remains to be seen... But with Navi being a Polaris replacement and not a high-end part, I don't have my hopes up until AMD's next-gen architecture.

fatslob-:O said:

The source at hand doesn't seem all that rigorous in its analysis compared to Digital Foundry, since he doesn't present a frame rate counter, omits information, and sometimes changes settings, which raises a big red flag for me. Let's use higher quality sources of data instead for better insight ...

I too would prefer a Digital Foundry comparison... But they aren't exactly pitting GeForce 1060 6GB cards against the Xbox One X; they usually run with high-end PC hardware.

fatslob-:O said:

Going by DF's video on SWBF2, you practically need a 1080 Ti (maybe you could get away with a 1080?) to hold 4K60FPS on ultra settings to do better than an X1X, which I imagine to be the high preset with a dynamic res between 75-100% of 4K. An X1X soundly NUKES a 1060 out of orbit with the 4K MEDIUM preset and nearly DOUBLES the frame rate according to Digital Trends ...

But the Xbox One X isn't doing 4K, ultra settings @60fps.

I am also going to hazard a guess that they are using a GeForce 1060 3GB, not the 6GB variant; the 6GB card tends to track a little closer to the RX 580, not almost 20% slower... But they never really elaborated on any of that... Nor did Digital Trends actually state anything about the Xbox One X.

fatslob-:O said:

In a DF article, it states that FFXV on the X1X runs at the 'average' preset equivalent on PC. Once again, DT was nice enough to provide us information about the performance of other presets, and if we take 1440p medium settings as our reference point then a 1060 nets us ~40FPS, which makes an X1X at least neck and neck with it, factoring in the extra headroom being used for dynamic resolution ...

...Which reinforces my point. That the Xbox One X GPU is comparable to a Geforce 1060... That's not intrinsically a bad thing either, the Geforce 1060 is a fine mid-range card that has shown itself to be capable.

In saying that... The Geforce 1060 is playing around the same level as the Radeon RX 580, which is roughly equivalent to the Xbox One X GPU anyway.

fatslob-:O said:

When we make a side by side DF comparison in Wolfenstein 2, the X1X is running at a lower bound of 1656p at a near-maximum preset equivalent on PC. A 1060 was nowhere near in sight of the X1X's performance profile on its best day, running at a lower resolution of 1440p/max preset while still far from the 60FPS target according to Guru3D. A 1080 is practically necessary to do away with the uncertainties of delivering a lower-than-X1X level of experience, because an X1X is nearly twice as fast as a 1060 in the pessimistic case ...

Wolfenstein 2, and by extension most id Tech powered games, love VRAM. Absolutely love it... That's thanks to its texturing setup.
The GeForce 1060 3GB's performance, for example, will absolutely tank in that game.

There is a reason why a Radeon RX 470 8GB is beating the Geforce 1060... No one in their right mind would state the RX 470 is the superior part, would they?

fatslob-:O said:

I don't know why you picked Forza 7 for comparison when it's one of the more favourable titles for the X1X against a 1060. It pretty much matches PC at maximum settings while maintaining perfect 4K60FPS performance with a better-than-4x MSAA solution, while a 1060 can't even come close to maintaining a perfect 60FPS on max settings on the PC side, per Guru3D reporting from another source ... (a 1070 looks bad as well when we look at the 99th percentile)

You need to check the dates and patch versions; successive patches for Forza 7 dramatically improved things on PC. It wasn't the best port out of the gate.

No way in hell should you require a Geforce 1080 to have a similar experience to the Xbox One X. It's chalk and cheese, 1080 every day.

fatslob-:O said:

For The Witcher 3, given that the base consoles previously delivered a preset between PC's medium/high settings, I imagine that DF would put the X1X resoundingly within the high settings. From Techspot's data, seeing how much of a disaster The Witcher 3 at high settings and 4K is on a 980, I think we can safely say it won't end all that well for a 1060 even with dynamic res ...


A GeForce 1060 6GB is doing 42.9fps @ 1440P with ultra settings.

You can bet your ass that same part can do 4K, 30fps at medium... Without the killer that is HairWorks.
But the Xbox One X is dropping things to 1800P in taxing areas as well.

https://www.anandtech.com/show/10540/the-geforce-gtx-1060-founders-edition-asus-strix-review/10

In fact, Guru3D says The Witcher 3 runs at 27fps @ 4K + ultra settings, so a 1060 6GB could potentially drive better visuals than the Xbox One X version if you settle for some high details rather than ultra.

https://www.guru3d.com/articles-pages/geforce-gtx-1060-review,23.html

fatslob-:O said:

With Fortnite, the game ran a little under 1800p on the X1X while a 1060 ran moderately better with a lower resolution of 1440p according to Tom's Hardware. Both run the same EPIC preset so they're practically neck and neck in this case as well ... 

Fortnite will vary from 1152P - 1728P on the Xbox One X with 1440P being the general ballpark, according to Eurogamer... And that is about right where the Geforce 1060 will also sit.

https://www.eurogamer.net/articles/digitalfoundry-2018-fortnites-new-patch-really-does-deliver-60fps

And if we look at the techspot benchies, we can see that 4k, 30fps is feasible on the 1060.
https://www.techspot.com/article/1557-fortnite-benchmarks/

fatslob-:O said:

There's not enough collected quality data about Dishonored 2 to really say anything about X1X to compare against PC ... 

Probably not. But I threw it in there... Because cats.

fatslob-:O said:

In Ubisoft's Tom Clancy's The Division, X1X was running dynamic 4K with a range between 88-100% and past analysis reveals that it's on par with a 980Ti! (X1X had slightly higher settings but 980Ti came with full native 4K) 

Like I stated prior... Regardless of platform, there will always be a game or two which runs better regardless of power. E.g. Final Fantasy 12 running better on PlayStation 4 Pro than on Xbox One X. - Essentially the exact same hardware architecture, and the Xbox One X is vastly superior in almost every metric, but it returns inferior results.

https://www.youtube.com/watch?v=r9IGpIehFmQ
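As an aside, the dynamic-resolution figures being traded in this thread ("88-100% of 4K", "1800p in taxing areas") are easier to compare as raw pixel counts. A quick sketch; the heights below are common 16:9 render targets, and the percentages are computed here rather than quoted from any DF analysis:

```python
# Express common 16:9 render resolutions as a fraction of native 4K pixels.
def pixels(height, aspect=16 / 9):
    """Total pixel count for a 16:9 frame of the given height."""
    return int(height * aspect) * height

native_4k = pixels(2160)  # 3840 x 2160 = 8,294,400 pixels

for height in (1440, 1656, 1800, 2160):
    print(f"{height}p = {pixels(height) / native_4k:.0%} of native 4K")
```

On these numbers, 1800p works out to roughly 69% of native 4K's pixels and 1440p to roughly 44%, which is why a near-native dynamic 4K target is a substantially heavier workload than a 1440p one.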

fatslob-:O said:

At Far Cry 5, even Alex from DF said he couldn't maintain X1X settings on either a 1060 or the 580 ... 

The 1060 is doing 25fps @4k with Ultra settings.
https://www.techspot.com/article/1600-far-cry-5-benchmarks/page2.html

And just as Digital Foundry states, the Xbox One X isn't running ultra settings, but high settings with a few low settings... But dropping the visuals from high to low nets only ~5 fps, which means the engine just isn't scaling things appropriately on PC.

But hey. Like what I said above.

fatslob-:O said:

Even in the pessimistic scenario you give the 1060 waaay more credit than it's truly worth, when an X1X is more than a match for it. Is a 1070 ahead of an X1X? Sure, I might give you that since it happens often enough, but in no way would I place an X1X below a 1060, since that doesn't seem to happen much if ever at all when we take into account good sources of data ... (the future becomes even darker for the 1060 with DX12-only titles)

Or am I not giving the 1060 6GB enough credit?
I am not saying the Xbox One X is below a 1060. It's that it's roughly in the same ballpark. - That is... Medium quality settings, 1440P-4K, which is what I expect out of a Radeon RX 580/590, which is roughly equivalent to the Xbox One X in terms of GPU capability on the Radeon side.

You were the one who stated that you need a 1070 to get a similar experience to the Xbox One X. :P

I mean, the Radeon RX 590 is a faster GPU than what is in the Xbox One X... It's Polaris pushed to its clock limits, yet... The GeForce 1070 still craps on it.
https://www.anandtech.com/show/13570/the-amd-radeon-rx-590-review/6

Like I said though... I am a Radeon RX 580 owner.
I am also an Xbox One X owner.

I can do side by side comparisons in real time.

The Radeon RX 580 is roughly equivalent to the Geforce 1060 6GB... And by extension... Neither part is doing anything that I don't see my Xbox One X doing, they are all roughly the same level of expected capability overall.



eva01beserk said:

So you really believe Sony will launch a new gen that is not even 2 times faster than its latest console, the PS4 Pro? But for some reason you believe the Anaconda will be 2x the X1X, making it almost 1.5x the power of the PS5, assuming it all comes at the same time. What are you expecting the prices to be for them?

Honestly, a new gen console coming in at only 1.3x the power of the previous gen's top console would be just ridiculous. Especially not weaker than Stadia.

PS4 Pro is 4.2 teraflops, so 8.3 is about 2x. It will probably be slightly above 8.3; it's a rounded number, dude.

Keep in mind this is speculation, but yes, PS5 will have 8 Teraflops at the price $400, Xbox lockhart 4TF (disc-less) at $300 and xbox anaconda 12TF at $500. They will all have 1TB NVMe/SSD drive (ultra super-fast storage) and 8core zen2 CPU and will be released in fall 2020. This is 100% certain.

I've been saying something close to this for a few months now; only Brad Sams' and Jason Schreier's (Kotaku reporter) comments threw me off for a while. Nothing is 100% until it's 100%, but after a while speculation becomes fact, and all we can do now is wait for confirmation.
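For what it's worth, the teraflop figures quoted throughout the thread fall out of one simple formula: FP32 TFLOPS = shaders × 2 ops (FMA) × clock. A sketch; the PS4 Pro numbers (36 CUs at 911 MHz) are its documented specs, while the second line is purely illustrative speculation chosen to land near the 8.3 TF guess above:

```python
def teraflops(cus, clock_ghz, shaders_per_cu=64):
    """FP32 TFLOPS for a GCN-style GPU: shaders x 2 ops (FMA) x clock in GHz."""
    return cus * shaders_per_cu * 2 * clock_ghz / 1000

print(teraflops(36, 0.911))  # PS4 Pro: ~4.2 TF
# A hypothetical 36 CUs at 1.8 GHz would land near 8.3 TF;
# other CU/clock combinations reach the same figure.
print(teraflops(36, 1.8))
```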

Last edited by Trumpstyle - on 07 May 2019

"Donald Trump is the greatest president that god has ever created" - Trumpstyle

6x master league achiever in starcraft2

Beaten Sigrun on God of war mode

Beaten DOOM ultra-nightmare with NO endless ammo-rune, 2x super shotgun and no decoys on ps4 pro.

1-0 against Grubby in Wc3 frozen throne ladder!!

Trumpstyle said:

PS4 Pro is 4.2 teraflops, so 8.3 is about 2x. It will probably be slightly above 8.3; it's a rounded number, dude.

So you don't expect efficiency to change at all? So you expect the Playstation 5 to be as efficient as a base Xbox One on a per-flop basis? Wow.

Trumpstyle said:

Keep in mind this is speculation, but yes, PS5 will have 8 Teraflops at the price $400, Xbox lockhart 4TF (disc-less) at $300 and xbox anaconda 12TF at $500. They will all have 1TB NVMe/SSD drive (ultra super-fast storage) and 8core zen2 CPU and will be released in fall 2020. This is 100% certain.

You can't call something speculation with one hand and assert it as definitive with the other.
You actually have no idea what the hardware is going to be capable of, it hasn't been revealed yet.



ironmanDX said:
DonFerrari said:

We are talking about a 2 year window and you want to look at a 2 month window?

Sony sold much more than MS from the very beginning, and the PS5 would do the same. Sony can easily have a contract matching their average sales for a 2 year period (which would be over 30M) while MS can't; they never got anywhere near those numbers. Unless the console landscape changes a lot (which you seem determined to believe, for reasons), the next Xbox will sell about half of the PS5.

Of course I am... Sales are going to be massive at launch for both of these machines. PS5, simply because it's PlayStation, and Xbox for the reasons I mentioned above. It'll be a significant chunk of the consoles sold in the first 2 years... And set the tone for how well they're likely to sell against one another.

Of course the PS4 outsold the Xbox One... I just stated very good reasons why in the post you replied to... Any company that puts out a product with that many missteps is bound to be outsold, no matter the market.

The landscape HAS changed. Look at the Switch: a hybrid console is taking names... Iterating consoles now exist too, it's looking like at least one of them will launch with multiple SKUs, and MS is making all the right moves to claw back some ground next generation. Google is coming with Stadia... There's PS Now and Game Pass... Though, again... I'll agree that the PS5 is going to sell quite a lot more than the neXtbox.


I don't see how you can say that the landscape hasn't changed.

Companies work on projections that are based largely on historical data. There isn't a single iota of evidence that the next Xbox could do better than 50% of PS5 sales, so there is absolutely no reason to think Sony wouldn't have much better bargaining power when purchasing parts.



duduspace11 "Well, since we are estimating costs, Pokemon Red/Blue did cost Nintendo about $50m to make back in 1996"

http://gamrconnect.vgchartz.com/post.php?id=8808363

Mr Puggsly: "Hehe, I said good profit. You said big profit. Frankly, not losing money is what I meant by good. Don't get hung up on semantics"

http://gamrconnect.vgchartz.com/post.php?id=9008994

thismeintiel said:
eva01beserk said:

More leaks.

AMD won't have anything matching the 2080 until 2020, for $500. Who knows about the 2080 Ti.

The 2070 match will be $330: Navi 10 with 56 compute units, coming out in the third quarter of this year. Could this be the one in the PS5? At that price? It's still a whole year before the PS5 actually uses it. And we have to remember that Sony will be getting them much cheaper.

So this scales down the rumors. Like Pemalite said before, it seems like AMD is still ways behind. Nvidia just has to update the RTX line to 7nm and keep their big advantage in both power and TDP. But it seems like AMD will still have a better price for performance.

I for one think we are getting at least 2070 performance in the PS5. After all, a $500 PS5 seems likely. Who knows about the Anaconda. Could be $600 with 2080 performance. But will that matter? I don't think next gen us casual gamers will be able to tell. We already go by what Digital Foundry tells us.

If the leak is correct, then Navi 10 sounds right. I wonder if the real reason Sony skipped E3 was because they will be at AMD's E3 showing where Navi is revealed. They may show a few games that are in production, but they aren't far enough along yet to have a floor full of demos. It also allows AMD to shoulder much of the price of an E3 showing, while Sony is focused on spending money on the development of the PS5.

EricHiggin said:

Since it matches up with AdoredTV's most recent info, and it looks to be more in line with what one would expect, along with being closer to the rumored July 7th announcement, I'd say there's probably some validity to it. Not to mention Cerny failing to mention anything about the PS5's Navi specs. PS themselves may not know exactly where Navi is going to land just yet, since it's possibly still being tweaked, so there's no point in giving specs for it, on top of the other strategic reasons they may not want to officially announce yet.

The prices are deceiving for consoles though. A $330 3080XT, with RTX 2070 specs, is the price of the entire card, and is MSRP. PS will only be buying the silicon chip, so the price they would pay would already be much less than that, plus no middleman, plus buying in bulk.

Is Navi 10 just a cut down version of Navi 20 or a different die? If the 56CU 3080XT is a perfect die, I'd have to imagine the 52CU 3080 would be the console die, leaving room to disable some CU's.



The Canadian National Anthem According To Justin Trudeau

 

Oh planet Earth! The home of native lands, 
True social law, in all of us demand.
With cattle farts, we view sea rise,
Our North sinking slowly.
From far and snide, oh planet Earth, 
Our healthcare is yours free!
Science save our land, harnessing the breeze,
Oh planet Earth, smoke weed and ferment yeast.
Oh planet Earth, ell gee bee queue and tee.

Pemalite said:
Trumpstyle said:

PS4 Pro is 4.2 teraflops, so 8.3 is about 2x. It will probably be slightly above 8.3; it's a rounded number, dude.

So you don't expect efficiency to change at all? So you expect the Playstation 5 to be as efficient as a base Xbox One on a per-flop basis? Wow.

Trumpstyle said:

Keep in mind this is speculation, but yes, PS5 will have 8 Teraflops at the price $400, Xbox lockhart 4TF (disc-less) at $300 and xbox anaconda 12TF at $500. They will all have 1TB NVMe/SSD drive (ultra super-fast storage) and 8core zen2 CPU and will be released in fall 2020. This is 100% certain.

You can't call something speculation with one hand and assert it as definitive with the other.
You actually have no idea what the hardware is going to be capable of, it hasn't been revealed yet.

I'm not sure where you're going with this efficiency thing. I expect the PS5 GPU to pull about 120 watts including memory, and the Xbox One GPU probably pulls about 80 watts with its DDR3 memory, so that's upwards of 5x flops/watt efficiency when you compare the TFs.

When I say 100% it's just for fun :) hehe, but I'm pretty certain we will be getting something very close to what I wrote. Right now it's the 8-core Zen 2 CPU for the Xboxes I feel most uncertain about; I think there is a pretty decent chance that Xbox Lockhart will only have 4 Zen 2 CPU cores, and maybe even Xbox Anaconda, but I want to make precise predictions, so I won't write that down.
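Trumpstyle's efficiency claim can be sanity-checked as flops per watt. The 120 W and 80 W figures are his own estimates above, and 1.31 TF is the base Xbox One's GPU throughput; nothing here is a measurement:

```python
# Flops-per-watt ratio using the post's own wattage estimates.
ps5_tf, ps5_w = 8.3, 120    # speculated PS5 GPU + memory power draw
xb1_tf, xb1_w = 1.31, 80    # base Xbox One GPU + memory (estimate)

ratio = (ps5_tf / ps5_w) / (xb1_tf / xb1_w)
print(f"~{ratio:.1f}x flops per watt")  # prints "~4.2x flops per watt"
```

On these numbers the jump is closer to ~4x than 5x, though it's in the same ballpark as the claim.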



"Donald Trump is the greatest president that god has ever created" - Trumpstyle

6x master league achiever in starcraft2

Beaten Sigrun on God of war mode

Beaten DOOM ultra-nightmare with NO endless ammo-rune, 2x super shotgun and no decoys on ps4 pro.

1-0 against Grubby in Wc3 frozen throne ladder!!

eva01beserk said:

More leaks.

AMD won't have anything matching the 2080 until 2020, for $500. Who knows about the 2080 Ti.

The 2070 match will be $330: Navi 10 with 56 compute units, coming out in the third quarter of this year. Could this be the one in the PS5? At that price? It's still a whole year before the PS5 actually uses it. And we have to remember that Sony will be getting them much cheaper.

So this scales down the rumors. Like Pemalite said before, it seems like AMD is still ways behind. Nvidia just has to update the RTX line to 7nm and keep their big advantage in both power and TDP. But it seems like AMD will still have a better price for performance.

I for one think we are getting at least 2070 performance in the PS5. After all, a $500 PS5 seems likely. Who knows about the Anaconda. Could be $600 with 2080 performance. But will that matter? I don't think next gen us casual gamers will be able to tell. We already go by what Digital Foundry tells us.

I saw that leak before, but it doesn't add up.

1. Why make 2 different chips when they are just 8 CUs apart? That would make sense at the low end/entry level, say 8 and 16 CUs, but not at a level where there's a high chance that many 64-CU chips would need to be binned down to 56 working CUs anyway. It's the exact opposite problem of the previous leak, where they had a huge range of GPUs and CU counts on just one chip.

2. The pricing doesn't add up, either. For just 4 additional CUs you're paying a $90 premium; in other terms, for 7% more CUs you're paying a 27% premium. Nvidia's prices are high, but there the difference in power roughly matches the difference in price percentage-wise. Not so between those 3080XT/3090 models.

3. The TDP. The 3090 is supposed to have a lower TDP but more performance out of just 4 more CUs. That doesn't add up unless there are more changes to the architecture or extremely aggressive binning - which would make it a very rare card.
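Point 2's percentages check out against the rumored figures (a 56 CU part at $330 with a +4 CU, +$90 step up; these are the leak's numbers, not confirmed specs):

```python
base_cus, base_price = 56, 330   # rumored 3080XT
extra_cus, premium = 4, 90       # rumored step up to the 3090

cu_increase = extra_cus / base_cus        # ~7% more CUs
price_increase = premium / base_price     # ~27% higher price
print(f"{cu_increase:.0%} more CUs for {price_increase:.0%} more money")
```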



Pemalite said:

At the end of the day though, it doesn't matter.
nVidia is offering something that AMD isn't, nVidia is still faster and more efficient than AMD's equivalent offerings at almost every bracket... (Minus the crappy Geforce 1650, get the Radeon 570 every day, even if it's a bigger power hog.)

Whether Turing's hardware will pay off in the long term remains to be seen, but one thing is for sure... Lots of modders are building mods for older games that implement ray tracing; Minecraft, Crysis, you name it. - Even if it is more rudimentary path tracing.

It will be interesting to see whether developers implement Turing-specific features going forward.

On PC? That might very well be true, but on consoles? Many ISVs don't see eye to eye with them and will gladly use the features their competitor has to offer instead ...

Also, you do not want to do ray tracing with just a depth buffer, as in Crysis's case ...

Pemalite said:

You are correct that AMD against Turing is a better fit than AMD against Pascal. Pascal's chips were far more efficient and even smaller than the AMD equivalents; nVidia could have theoretically priced AMD out of the market entirely, which wouldn't bode well for anyone, especially the consoles.


And I am not downplaying any of AMD's achievements... But when a company is bolting features onto a 7+ year old GPU architecture and trying to pass it off as something new, novel and revolutionary... Well. That doesn't really sit well. Especially when AMD has a history of re-badging older GCN parts as a new series.

Take the RX 520 for example... It uses a GPU from 2013. (Oland.)
Granted, it's low-end stuff so it doesn't matter as much, but it's highly disingenuous of AMD... Plus such a part misses out on technologies that are arguably more important for low-end hardware, like delta colour compression to bolster bandwidth.

nVidia isn't immune from such practices either, but they have shied away from such practices in recent times.

New features make hardware revolutionary, not new architectures, IMO ... (it makes nearly no sense for AMD to dump all the work console developers have invested in coding for GCN, when the long-term strategy is for those investments to translate to PC as well, which probably won't come to fruition until the release of the next gen consoles)

AMD is trying to adopt an Intel-like strategy where developers DON'T have to move around so much, like on x86, because they believe there'll come a point where the industry realizes it is better to value compatibility and more features than to deal with an entirely new architecture altogether, since drivers and software are getting more complex than ever. AMD doesn't share Nvidia's vision of the GPU landscape emulating the wild west and isn't interested in the chaos that comes with it ... (Sony was punished before for believing that developers would arrogantly strive to get the most from their 'exotic' hardware, and a similar situation occurred with AMD on Bulldozer for being so bullish that multithreaded apps would exonerate their single-threaded bottleneck)

AMD FAILING to deliver on their obligations to Sony or Microsoft's desire for backwards compatibility would mean being automatically branded an enemy of the entire developer community, since the ISVs can't be very happy to find that they've LOST their biggest investments thus far and that all their released software and tools need to be either reworked or gone for good. I believe it is AMD's current strategy of striving for openness, collaboration, and goodwill with developers that will lead them to the salvation they look for, so purely seeking out superior hardware designs would run counter to the strategy they've built up thus far. By ruining developer community relationships, AMD as they are now would be thrown to the wolves to fend for themselves ... (where bigger players such as Intel or Nvidia would push them out of the market for good if AMD were all alone and isolated)

Both AMD's and Nvidia's strategies have their own merits and drawbacks. The former is trying to elevate and cultivate a special relationship between themselves and the ISVs, while the latter is using its incumbent position as leader in PC graphics hardware to maintain its dominance. We need to understand that the long-term relationship AMD forms with some of its customers requires a level of trust that goes above and beyond a merely corporate one, because bailing AMD out once in a while should also be in their interest; if each party seeks to be interdependent on the other, then that risk should be shared between them ... (as crazy as it might sound in the business world, both need to serve as safety nets for each other!)

It's true that they haven't seen much of the pros yet and have mostly only seen the cons so far, but know that the industry has SOME STAKE in keeping GCN. If they can't succeed in the PC hardware market, then surely they can still see success in providing console hardware and cloud gaming hardware?

Even if AMD can't take the performance crown in the PC space, maybe one day mGPU could become ubiquitous solely on console platforms, so that you can ONLY get the best gaming experience out of them compared to a generic high-end PC, where developers likely won't ever touch mGPU ... (it seems like a radical idea for AMD to somehow exclusively dominate a feature such as mGPU for gaming)

Pemalite said:

I think it's interesting now, especially the technologies Intel is hinting at.
For IGPs though, AMD should still hold a sizable advantage; AMD simply reserves more transistors in its IGPs for GPU duties than Intel is willing to.

My Ryzen notebook (Ignoring the shit battery life that plagues all Ryzen notebooks!) has been amazing from a performance and support standpoint.

Honestly, I wouldn't be surprised if Intel reserves a similar amount of die space on their highest end parts as AMD does! Intel has a VERY HIGH amount of flexibility in their GPU architecture, so if AMD is 'wasting' a LOT of transistors, wait until you hear what Intel does, haha. Those guys have 4(!) different SIMD modes, SIMD 4x2/8/16/32, while AMD and Nvidia each only have one, SIMD64 and SIMD32 respectively. SIMD4x2 is especially good for what is effectively a deprecated feature known as geometry shaders. That's not all there is to Intel hardware, though. They also support framebuffer/render target reads inside a shader (the most powerful way to do programmable blending) and solve one fundamental issue with hardware tiled resources by being able to update the tile mappings with GPU commands instead of CPU commands! (not being able to update the mappings from the GPU was a big complaint from developers, since it introduced a ton of latency)

Nvidia currently strikes a very good balance between flexibility and efficiency. AMD is a harder sell on the PC side with their unused higher flexibility but Intel takes the word 'bloat' to a whole new level of meaning with their complex register file since it shares a lot more of GCN's capabilities with their amazing unicorn sauce that developers would only dream of exploiting ... (I wonder if Xe is going to cut out the fun stuff from Intel's GEN lineups to make it more efficient from a competitive standpoint)

Developers also pay very little attention to Intel's graphics stack. Intel pays a lot more than AMD does for this flexibility, but the sub-par graphics stack just scares developers away from even trying ...

Pemalite said:

Ray Tracing is inherently a compute bound scenario. Turing's approach is to try and make such workloads more efficient by including extra hardware to reduce that load via various means.

AMD has pretty much stuck to 64 CUs or below with GCN, and I don't expect that to change anytime soon... And Graphics Core Next has shown it's not an architecture that has been able to keep pace with nVidia, whether it's Maxwell, Pascal or Turing; it's simply full of limitations for gaming-oriented workloads.

A lot of the features introduced with Vega generally didn't pan out as one would hope, which didn't help things for AMD either.

Will nVidia retain its advantage? Who knows. I don't like their approach of dedicating hardware to driving Ray Tracing... I would rather nVidia had taken a more unified approach that would have lent itself to rasterized workloads as well. - Whether this ends up being another Geforce FX moment for nVidia and AMD remains to be seen... But with Navi being a Polaris replacement and not a high-end part... I don't have my hopes up until AMD's next-gen architecture.

I guess we feel the opposite regarding whether dedicated fixed-function units should be used for hardware-accelerated ray tracing or not, but Volta is vastly less efficient at 3DMark Port Royal, where it only performs on par with an RTX 2060 ... (I imagine Port Royal will be the gold standard target visuals for next gen consoles)

My hope is for consoles to double down on dedicated units for ray tracing by one-upping Turing's ray tracing feature set, because I'd care more about performance in this instance than worrying about flexibility, since it has tons of very useful applications for real-time computer graphics ... (conversely, I wouldn't take tensor cores over FP16 support, since the payoff is questionable with current applications)

Hardware design decisions like these are ultimately going to be about the payoff, and I think it's worth it since it will significantly increase visual quality at a much lower performance cost ...

Pemalite said:

But the Xbox One X isn't doing 4K, ultra settings @60fps.

I am also going to hazard a guess they are using a Geforce 1060 3GB, not the 6GB variant, the 6GB card tends to track a little closer to the RX 580, not almost 20% slower... But they never really elaborated upon any of that... Nor did Digital Trends actually state anything about the Xbox One X?

I don't believe so, from other independent tests verifying the 1060's 4K ultra numbers. When I did a slight sanity check, Guru3D got 28 FPS at 4K ultra while DT got 25 FPS for the same settings, so the numbers aren't too dissimilar. I maintain that DT were indeed testing the 6GB version of the 1060, so it's likely that the 1060 simply does badly at 4K in this game regardless; a massive win for the mighty X1X nonetheless ...

Pemalite said:

...Which reinforces my point. That the Xbox One X GPU is comparable to a Geforce 1060... That's not intrinsically a bad thing either, the Geforce 1060 is a fine mid-range card that has shown itself to be capable.

In saying that... The Geforce 1060 is playing around the same level as the Radeon RX 580, which is roughly equivalent to the Xbox One X GPU anyway.

It's at minimum a 1060 ...

Pemalite said:

Wolfenstein 2 and by extension most Id Tech powered games love DRAM. Absolutely love it... That's thanks to its texturing setup.
The Geforce 1060 3GB's performance for example will absolutely tank in that game.

There is a reason why a Radeon RX 470 8GB is beating the Geforce 1060... No one in their right mind would state the RX 470 is the superior part, would they?

@Bold I don't know? Maybe it is, when we take a look at newer games with DX12/Vulkan where it's falling behind the comparable AMD parts ... (I know I wouldn't want to be stuck with a 1060 in the next 2 years, because graphics code is starting to get more hostile against Maxwell/Pascal and a flood of DX12/Vulkan-only titles is practically on the horizon)

Even when we drop down to 1440p the 1060 still CAN'T hit 60FPS like the X1X, but what's more, an RX 580 equipped with the same amount of memory as Vega tanks harder than you'd think in comparison to the X1X ...

Pemalite said:

You need to check the dates and patch version, later successive patches for Forza 7 dramatically improved things on the PC, it wasn't the best port out of the gate.

No way in hell should you require a Geforce 1080 to have a similar experience to the Xbox One X. It's chalk and cheese, 1080 every day.

I don't believe it was a patch that helped improve performance dramatically for Forza 7; I think it was a driver update from Nvidia that did the trick. But you'll still need a 1070 regardless to hit 60FPS in the 99th percentile and get a similar experience, because a 1060 is still noticeably slower ... (the 580 wasn't all that competitive, so massive kudos to Turn 10 for leveraging the true strength of console optimizations)

Pemalite said:


A Geforce 1060 6Gb is doing 42.9fps @ 1440P with ultra settings. 

You can bet your ass that same part can do 4K, 30fps at medium... Without the killer that is HairWorks.
But the Xbox One X is dropping things to 1800P in taxing areas as well.

https://www.anandtech.com/show/10540/the-geforce-gtx-1060-founders-edition-asus-strix-review/10

In fact, Guru3D says The Witcher 3 is 27fps @ 4K + Ultra settings, so a 1060 6GB could potentially drive better visuals than the Xbox One X version if you settle for some high details rather than ultra.

https://www.guru3d.com/articles-pages/geforce-gtx-1060-review,23.html

Even when we turn down to MEDIUM quality which disables Hairworks entirely, a 980 is still very much getting eaten alive. Although guru3D's numbers are strangely on the higher side I don't think many will lose sleep over it since a 1060 is very like for like with the X1X ... 

Pemalite said:

Fortnite will vary from 1152P - 1728P on the Xbox One X with 1440P being the general ballpark, according to Eurogamer... And that is about right where the Geforce 1060 will also sit.

https://www.eurogamer.net/articles/digitalfoundry-2018-fortnites-new-patch-really-does-deliver-60fps

And if we look at the techspot benchies, we can see that 4k, 30fps is feasible on the 1060.
https://www.techspot.com/article/1557-fortnite-benchmarks/

It 'depends' on the 'Fortnite' we're talking about. I am talking about its more demanding single-player "Save the World" campaign, while you are talking about its more pedestrian battle royale mode, which is inherently designed with different bottleneck characteristics ... (it's the reason why the Switch/Android/iOS can run 'Fortnite' at all when they strip away the campaign mode)

Since X1X ends up being roughly equivalent to the 1060 in both modes, you're getting a good experience regardless ... 

Pemalite said:

Like I stated prior... Regardless of platform there will always be a game or two which will run better regardless of power. I.E. Final Fantasy 12 running better on Playstation 4 Pro than the Xbox One X. - Essentially the exact same hardware architecture, but the Xbox One X is vastly superior in almost every metric but returns inferior results.

https://www.youtube.com/watch?v=r9IGpIehFmQ

@Bold Depends, and I don't deny these being the more uncommon cases, but what you may think of as 'pathological' is more common than you realize, so it's definitely going to skew things some in the X1X's favour ...

Pemalite said:

The 1060 is doing 25fps @4k with Ultra settings.
https://www.techspot.com/article/1600-far-cry-5-benchmarks/page2.html

And just like Digital Foundry states, the Xbox One X isn't running Ultra settings, but high settings with a few low settings... But dropping the visuals from high to low nets only ~5 fps, which means the engine just isn't scaling things appropriately on PC.

But hey. Like what I said above.

But take a look at that 99th percentile (or lowest 1% of frames), which explains why Alex wasn't able to hold a steady 30FPS on a 1060 like the X1X did with the same settings ...

Pemalite said:

Or I am not giving the 1060 6GB enough credit?
I am not saying the Xbox One X is below a 1060. It's that it's roughly in the same ballpark. - That is... Medium quality settings. 1440P-4k, which is what I expect out of a Radeon RX 580/590, which is roughly equivalent to the Xbox One X in terms of GPU capability on the Radeon side.

You were the one who stated that you need a 1070 to get a similar experience to the Xbox One X. :P

I mean, the Radeon RX 590 is a faster GPU than what is in the Xbox One X... It's Polaris pushed to its clock limits, yet... The Geforce 1070 still craps on it.
https://www.anandtech.com/show/13570/the-amd-radeon-rx-590-review/6

Like I said though... I am a Radeon RX 580 owner. 
I am also an Xbox One X owner. 

I can do side by side comparisons in real time.

If I had to give a distribution of the X1X's performance in AAA games, here's how it would look ...

10% chance that its performance will be lower than a 1060 by 10%. 50% chance that it'll perform within ±5% of a 1060. 20% chance that it'll perform better than a 1060 by a margin of 10%, and a 20% chance that it'll perform as good as or better than a 1070 ...
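Folding that distribution into a single expected value (treating "1070-class" as roughly +25% over a 1060, which is my own assumption, not a figure from the post):

```python
# Expected X1X performance relative to a 1060 (= 1.0), using the
# distribution above; the +25% for "1070-class" is an assumed figure.
outcomes = [
    (0.10, 0.90),  # 10%: about 10% slower than a 1060
    (0.50, 1.00),  # 50%: within +/-5% of a 1060 (midpoint taken)
    (0.20, 1.10),  # 20%: about 10% faster than a 1060
    (0.20, 1.25),  # 20%: 1070-class (assumed ~25% over a 1060)
]
expected = sum(p * perf for p, perf in outcomes)
print(f"Expected relative performance: {expected:.2f}x a 1060")
```

So even under this generous distribution, the average lands only a few percent above a stock 1060.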

A 1060 will certainly give you a worse experience than an X1X by a fair margin, but when we take into account the more uncommon cases, an X1X isn't all that far behind a 1070. Maybe 10-15% slower on average?

A 1060 is overrated anyway, since it's not going to stand much of a chance against either a 580 or an X1X in the next 2 years as new releases come out over time ... (a 1070 will be needed in case the 1060 starts tanking)



Trumpstyle said:

I'm not sure where you're going with this efficiency thingy. I expect the PS5 GPU to pull about 120 watts, and that includes memory, and the Xbox One GPU probably pulls about 80 watts with its DDR4 memory, so that's upwards of 5x flops/efficiency when you compare the TFs.

Performance per watt. Performance per Teraflop.
You can have a GPU with 1 Teraflop beat a GPU with 2 Teraflops in gaming.

Comparing Teraflops as some kind of absolute determiner of performance between hardware is highly disingenuous.

And where are you pulling this 5x flop/efficiency?
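The "1 Teraflop can beat 2 Teraflops" point can be made concrete with a toy model (illustrative, made-up utilization numbers): delivered throughput is peak FLOPs scaled by how much of the GPU a real game actually keeps busy.

```python
# Toy model: delivered throughput = peak compute x achieved utilization.
# The utilization figures below are invented purely for illustration.
def delivered_tflops(peak_tflops: float, utilization: float) -> float:
    return peak_tflops * utilization

gpu_a = delivered_tflops(1.0, 0.90)  # 1 TF part that games keep 90% busy
gpu_b = delivered_tflops(2.0, 0.40)  # 2 TF part that games keep 40% busy
print(gpu_a > gpu_b)  # the "weaker" GPU delivers more useful work
```

This is exactly why comparing raw TFLOPs across architectures says little about gaming performance.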

Trumpstyle said:

When I say 100% it's just for fun :) hehe, but I'm pretty certain we will be getting something very close to what I wrote. Right now it's the 8-core Zen 2 CPU for the Xboxes I feel most uncertain about; I think there is a pretty decent chance that Xbox Lockhart will only have 4 Zen 2 CPU cores, and maybe even Xbox Anaconda. But I wanna do precise predictions, so I don't write that down.

Speculation is fine, but the way you were wording it... It was as if you were asserting that your stance is 100% factual, when that simply isn't the case yet.

Zen 2 is likely 8-cores per CCX, so my assertion prior was that the next gen will likely leverage a single CCX for the CPU, the Playstation 5 having 8-cores falls into that.
Originally I thought AMD was only going to increase the single CCX core count to 6-cores, so I was happy to be incorrect about that.

Bofferbrauer2 said:

I saw that leak before, but it doesn't add up.

1. Why make 2 different chips when they are just 8 CU apart? That would have made sense at the low end/entry level, say 8 and 16 CU, but not at that level, where there's a high chance that many chips with 64 CU would need to be binned down to just 56 working CU anyway. It's the exact opposite problem of the previous leak, where they had a huge range of GPUs and CU counts on just one chip.

2. The pricing doesn't add up, either. For just 4 additional CU you're paying a $90 premium, or in other terms: for 7% more CU you're paying a 27% premium. Nvidia's prices are high, but the difference in power is also roughly the difference in price percentage-wise. Not so between those 3080X/3090 models.

3. The TDP. The 3090 is supposed to have a lower TDP but more performance out of just 4 more CU. That doesn't add up unless there are more changes to the architecture or extremely aggressive binning - which would make it a very rare card.
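For what it's worth, the percentages in point 2 back-solve to roughly a 56 CU, ~$330 base card (my inference from the ratios, not figures stated in the leak):

```python
# Sanity-checking the ratios in point 2. The base CU count and base price
# are back-solved assumptions, not numbers taken from the leak itself.
base_cu, extra_cu = 56, 4          # 4 more CUs on an assumed 56-CU base
base_price, premium = 330.0, 90.0  # $90 premium on an assumed ~$330 card

cu_gain = extra_cu / base_cu       # ~0.071 -> "7% more CU"
price_gain = premium / base_price  # ~0.273 -> "a 27% premium"
print(f"{cu_gain:.0%} more CUs for {price_gain:.0%} more money")
```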

I tend to take whatever Red Gaming Tech says with a grain of salt; they have an AMD bias and will generally comment on each and every single piece of rumor that comes around. - Regardless of its absurdity.

Do they get some things right? Sure. But they get some things wrong as well. Is what it is, but sticking with sources that are a little more credible is probably the best way to go.

fatslob-:O said:

On PC ? That might very well be true but on consoles ? Many ISV's don't see eye to eye with them and will just gladly use features their competitor has to offer instead ... 

Also you do not want to do ray tracing with just a depth buffer like in Crysis's case ...

In Crysis's case. It's actually pretty awesome, it's just a mod for a game released in 2007. - Are there better approaches? Sure.

But considering how amazing Crysis can look with the Path Tracing via the depth buffer and a heap of graphics mods... The game can look jaw-droppingly gorgeous, despite being 12+ years old.

fatslob-:O said:

New features make the hardware revolutionary, not new architectures IMO ... (it makes nearly no sense for AMD to dump all the work that console developers have invested in coding for GCN when it's a long-term strategy for these investments to translate to PC as well, which probably won't come to fruition until the release of next gen consoles)

Sure. New features can make a difference.
But AMD has been bolting on new features to Graphics Core Next since the beginning and simply has not been able to keep pace with nVidia's efforts.

But to state that something may change and that AMD's long-term efforts might come to fruition in the next console hardware cycle is being a little disingenuous; developers have had how many years with the current Graphics Core Next hardware? Fact of the matter is, we have no idea if the state of things is going to change at all in AMD's favor or if the status quo will continue.

fatslob-:O said:

AMD is trying to adopt an Intel strategy where developers DON'T have to move around so much, like on x86, because they believe there'll come a point where the industry realizes it is better to value compatibility and more features than to deal with an entirely new architecture altogether, since drivers and software are getting far more complex than ever. AMD doesn't share Nvidia's vision of the GPU landscape emulating the wild west and isn't interested in the chaos that comes with it ... (Sony was punished before for believing that developers would arrogantly strive to get the most out of their 'exotic' hardware, and a similar situation occurred with AMD on Bulldozer for being so bullish that multithreaded apps would exonerate their single-threaded bottleneck)

It's actually extremely easy to develop for nVidia hardware though... I mean, the Switch is also a testament to that very fact, outside of the lack of pixel pushing power Tegra has, developers have been praising the Maxwell derived hardware since the very beginning.

Obviously there are some pros and cons to whichever path AMD and nVidia take. nVidia does tend to work with developers, publishers and game engines far more extensively than what AMD has historically done... Mostly that is due to a lack of resources on AMD's behalf.

fatslob-:O said:

AMD FAILING to deliver on their obligations to Sony or Microsoft's desire for backwards compatibility would mean they would automatically be branded as an enemy of the entire developer community, since the ISVs can't be very happy when they figure out they've LOST their biggest investments thus far and that all their released software or tools need to be either reworked or gone for good. It is my belief that it is AMD's current strategy of striving for openness, collaboration, and goodwill with developers that will lead them to the salvation they seek, so purely pursuing superior hardware designs would run counter to the strategy they've built up thus far. By ruining developer community relationships, AMD as they are now would be thrown to the wolves to fend for themselves ... (where bigger players such as Intel or Nvidia would push them out of the market for good if AMD were all alone and isolated)

This is probably one of the biggest arguments for sticking with Graphics Core Next. And it's extremely valid.

fatslob-:O said:

Both AMD and Nvidia's strategies have their own merits and drawbacks. The former is trying to elevate and cultivate a special relationship between itself and the ISVs, while the latter is using its incumbent position as leader in PC graphics hardware to maintain its dominance. We need to understand that the long-term relationships AMD forms with some of its customers require a level of trust that goes above and beyond a merely corporate one; bailing AMD out once in a while should also be in their interest, because if each party seeks to be interdependent on the other, then risk should be shared between them ... (as crazy as it might sound in the business world, both need to serve as safety nets for each other!)

The Pro's and Con's of AMD and nVidia is something I have been weighing for decades, often AMD's Pro's outweigh it's Con's for my own PC builds for various reasons. (Compute, Price and features like Eyefinity and so on.)

Don't take me for someone who only favors nVidia hardware, that will be extremely far from the truth.

I am just at that point where AMD has been recycling the same architecture for an extremely long time... And has been trailing nVidia for a long while, that I just don't have any faith in AMD's hardware efforts until their next-gen hardware comes along, aka. Not Navi.

One thing is for sure... AMD's design wins in the console space are a good thing for the company; they're certainly the counterbalance to nVidia in the video game development community as nVidia dominates the PC landscape... And they've also helped AMD's bottom line significantly over the years, keeping them in the game and viable as a company. Competition is a good thing.

fatslob-:O said:

It's true that they haven't seen much of the pros yet and have mostly only seen the cons so far, but know this: the industry has SOME STAKE in keeping GCN. If AMD can't succeed in the PC hardware market, then surely it can still see success in providing console hardware and cloud gaming hardware?

Even if AMD can't take the performance crown in the PC space, maybe one day mGPU could become ubiquitous solely for console platforms, so that you can ONLY get the best gaming experience out of them compared to a generic high-end PC, where developers likely won't ever touch mGPU ... (it seems like a radical idea for AMD to somehow exclusively dominate a feature such as mGPU for gaming)

The thing is... Console and PC landscapes aren't that different from a gamers point of view anymore, there is significant overlap there, consoles are becoming more PC-like.
You can bet that nVidia is keeping a close eye on AMD as AMD takes design wins in the console and cloud spaces... nVidia has been very focused on the cloud for a very very long time, hence Titan/Tesla... And have seen substantial growth in that sector.

The other issue is that mobile is one of the largest sectors in gaming... Where AMD is non-existent and nVidia has a couple of wet toes; nVidia has leveraged its lessons learned in the mobile space and implemented those ideas into Maxwell/Pascal for great strides in efficiency.

Sure... You have Adreno which is based upon AMD's older efforts, but it's certainly not equivalent to Graphics Core Next in features or capability, plus AMD doesn't own that design anymore anyway.

fatslob-:O said:

Honestly, I wouldn't be surprised if Intel reserves a similar amount of die space on their highest-end parts as AMD does! Intel has a VERY HIGH amount of flexibility in their GPU architecture, so if AMD is 'wasting' a LOT of transistors, wait until you hear about what Intel does, haha. Those guys have 4(!) different SIMD modes, SIMD4x2/8/16/32, while AMD and Nvidia each have only one, SIMD64 and SIMD32 respectively. SIMD4x2 is especially good for what is effectively a deprecated feature known as geometry shaders. That's not all there is to Intel hardware, though. They also support framebuffer/render target reads inside a shader (the most powerful way to do programmable blending) and solve one fundamental issue with hardware tiled resources by being able to update the tile mappings with GPU commands instead of CPU commands! (Not being able to update the mappings from the GPU was a big complaint from developers, since it introduced a ton of latency.)

Intel historically hasn't reserved the same amount of die space as AMD was willing to in regards to its integrated graphics... There are probably some good reasons for that; AMD markets its APUs as being "capable" of gaming, and Intel hasn't historically gone to similar lengths in its graphics marketing.

Intel's efforts in graphics have historically been the laughing stock of the industry as well. i740? Yuck. Larrabee? Failure.
Extreme Graphics? Eww. GMA? No thanks. Intel HD/Iris? Pass.

That doesn't mean Intel isn't capable of some good things, their EDRAM approach proved interesting and also benefited the CPU side of the equation in some tasks... But Intel and decent graphics is something I will need to "see to believe" because honestly... Intel has been promising things for decades and simply hasn't delivered. - And that is before I even touch upon the topic of drivers...

I have done a lot of work prior in getting Intel parts like the Intel 940 running games like Oblivion/Fallout due to various lacking hardware features, so Intel's deficiencies in the graphics space aren't lost on me. Heck, even their X3100 had to have a special driver "switch" to move TnL from being hardware accelerated to being performed on the CPU on a per-game basis, as Intel's hardware implementation of TnL performed extremely poorly.

So when it comes to Intel Graphics and gaming... I will believe it when I see it... Plus AMD and nVidia have invested far more man hours and money into their graphics efforts than Intel has over the decades, that's not a small gap to jump across.

fatslob-:O said:

Nvidia currently strikes a very good balance between flexibility and efficiency. AMD is a harder sell on the PC side with their unused higher flexibility but Intel takes the word 'bloat' to a whole new level of meaning with their complex register file since it shares a lot more of GCN's capabilities with their amazing unicorn sauce that developers would only dream of exploiting ... (I wonder if Xe is going to cut out the fun stuff from Intel's GEN lineups to make it more efficient from a competitive standpoint)

Developers also pay very little attention to Intel's graphics stack. Intel pays a lot more than AMD does for this flexibility, but the sub-par graphics stack just scares developers away from even trying ...

I am being cautious with Xe. Intel has promised big before and hasn't delivered. But some of the ideas being shouted like "Ray Tracing" has piqued my interest.
I doubt AMD will let that go without an answer though, nVidia is one thing, but Integrated Graphics has been one of AMD's biggest strengths for years, even during the Bulldozer days.

fatslob-:O said:

I guess we feel the opposite regarding whether dedicated fixed-function units should be used for hardware-accelerated ray tracing or not, but Volta is vastly less efficient at 3DMark Port Royal, where it only performs on par with an RTX 2060 ... (I imagine Port Royal will be the gold standard target visuals for next gen consoles)

My hope is for consoles to double down on dedicated units for ray tracing by one-upping Turing's ray tracing feature set, because I'd care more about performance in this instance than worrying about flexibility, since it has tons of very useful applications for real-time computer graphics ... (conversely, I wouldn't take tensor cores over FP16 support, since the payoff is questionable with current applications)

Hardware design decisions like these are ultimately going to be about the payoff, and I think it's worth it since it will significantly increase visual quality at a much lower performance cost ...

Yeah. We definitely have different views on how Ray Tracing is supposed to be approached... And that is fine.
I am just looking at the past mistakes nVidia has done with the Geforce FX and to an extent... Turing.

fatslob-:O said:
I don't believe so, from other independent tests verifying the 1060's 4K ultra numbers. When I did a slight sanity check, Guru3D got 28 FPS at 4K ultra while DT got 25 FPS for the same settings, so the numbers aren't too dissimilar. I maintain that DT were indeed testing the 6GB version of the 1060, so it's likely that the 1060 simply does badly at 4K in this game regardless; a massive win for the mighty X1X nonetheless ...

Either way. The Xbox One X is punching around the same level as a 1060, even if the 1060 is a couple frames under 30, the Xbox gets away with lower API and driver overheads.

fatslob-:O said:
It's at minimum a 1060 ...

Certainly not a 1070.

fatslob-:O said:

I don't know? Maybe it is, when we take a look at newer games with DX12/Vulkan where it's falling behind the comparable AMD parts ... (I know I wouldn't want to be stuck with a 1060 in the next 2 years, because graphics code is starting to get more hostile against Maxwell/Pascal and a flood of DX12/Vulkan-only titles is practically on the horizon)

Even when we drop down to 1440p the 1060 still CAN'T hit 60FPS like the X1X, but what's more, an RX 580 equipped with the same amount of memory as Vega tanks harder than you'd think in comparison to the X1X ...

Like what has been established prior... Some games will perform better on AMD hardware than nVidia and vice-versa, that has always been the case. Always.
But... In 2 years time I would certainly prefer a Geforce 1060 6Gb over a Radeon RX 470... The 1060 is in another league entirely with performance almost 50% better in some titles.
https://www.anandtech.com/bench/product/1872?vs=1771

Modern Id Tech powered games love their VRAM; it's been one of the largest Achilles' heels of nVidia's hardware in recent years... Which is ironic, because if you go back to the Doom 3 days, it ran best on nVidia hardware.

fatslob-:O said:
I don't believe it was a patch that helped improve performance dramatically for Forza 7; I think it was a driver update from Nvidia that did the trick. But you'll still need a 1070 regardless to hit 60FPS in the 99th percentile and get a similar experience, because a 1060 is still noticeably slower ... (the 580 wasn't all that competitive, so massive kudos to Turn 10 for leveraging the true strength of console optimizations)

Forza 7's performance issues were notorious in its early days before they got patched out. (Which greatly improved the 99th percentile benches.)
https://www.game-debate.com/news/23926/forza-motorsport-7s-stuttering-appears-to-be-fixed-by-windows-10-fall-creators-update

You are right of course that drivers also improved things substantially as well.
https://www.hardocp.com/article/2017/10/16/forza_motorsport_7_video_card_performance_update/3

In short, a Geforce 1060 6GB can do Forza 7 at 4k with a similar experience to that of the Xbox One X.

fatslob-:O said:
Even when we turn down to MEDIUM quality which disables Hairworks entirely, a 980 is still very much getting eaten alive. Although guru3D's numbers are strangely on the higher side I don't think many will lose sleep over it since a 1060 is very like for like with the X1X ... 

*********

It 'depends' on the 'Fortnite' we're talking about. I am talking about its more demanding single-player "Save the World" campaign, while you are talking about its more pedestrian battle royale mode, which is inherently designed with different bottleneck characteristics ... (it's the reason why the Switch/Android/iOS can run 'Fortnite' at all when they strip away the campaign mode)

Since X1X ends up being roughly equivalent to the 1060 in both modes, you're getting a good experience regardless ... 

**********
Depends, and I don't deny these being the more uncommon cases, but what you may think of as 'pathological' is more common than you realize, so it's definitely going to skew things some in the X1X's favour ...

**********
But take a look at that 99th percentile (or lowest 1% of frames), which explains why Alex wasn't able to hold a steady 30FPS on a 1060 like the X1X did with the same settings ...

We are pretty much just debating semantics now. Haha

I still standby my previous assertion that the Xbox One X is more inline with a Geforce 1060 6Gb in terms of overall capability.

fatslob-:O said:

If I had to give a distribution of the X1X's performance in AAA games, here's how it would look ...

10% chance that its performance will be lower than a 1060 by 10%. 50% chance that it'll perform within ±5% of a 1060. 20% chance that it'll perform better than a 1060 by a margin of 10%, and a 20% chance that it'll perform as good as or better than a 1070 ...

It would have to be a very shit PC port for it to equal or better a Geforce 1070. No doubt about it.

fatslob-:O said:

A 1060 will certainly give you a worse experience than an X1X by a fair margin, but when we take into account the more uncommon cases, an X1X isn't all that far behind a 1070. Maybe 10-15% slower on average?

No way. A 1070 at the end of the day is going to provide you with a far better experience, especially once you dial up the visual settings.

fatslob-:O said:

A 1060 is overrated anyway, since it's not going to stand much of a chance against either a 580 or an X1X in the next 2 years as new releases come out over time ... (a 1070 will be needed in case the 1060 starts tanking)

A 1060 is overrated. But so is the Xbox One X.
The 1060, RX 580, Xbox One X are all in the same rough ballpark on expected capability.
Of course, because the Xbox One X is a console, it does have the advantage of developers optimizing for the specific hardware and its software base, but the fact that the Geforce 1060 is still able to turn in competitive results against the Xbox One X is a testament to that specific part.

And if I was in a position again to choose a Radeon RX 580 or a Geforce 1060 6Gb... It will be the RX 580 every day, which is the Xbox One X equivalent for the most part.



Pemalite said:

In Crysis's case. It's actually pretty awesome, it's just a mod for a game released in 2007. - Are there better approaches? Sure.

But considering how amazing Crysis can look with the Path Tracing via the depth buffer and a heap of graphics mods... The game can look jaw-droppingly gorgeous, despite being 12+ years old.

Trust me, you do not want to know the horrors of how hacky the mod is ... 

The mod does not trace according to lighting information; it traces according to the brightness of each pixel, so bounce lighting even in screen space is already incorrect. If you want proper indirect lighting as well, then you need a global scene representation data structure such as an octree, BVH, or kd-tree for correct ray traversal. Using a local scene representation data structure such as a depth buffer will cause a lot of issues once the rays "go outside" the data structure ...

As decent as Crysis looks today, it hurts painfully that it's still not physically based ... 
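For contrast, a global scene representation lets rays traverse geometry regardless of what the camera currently sees. Below is a minimal, hypothetical BVH sketch (again illustrative only; every name and the toy sphere scene are invented) using a median split and a slab test for traversal:

```python
import math

def vmin(a, b): return tuple(min(x, y) for x, y in zip(a, b))
def vmax(a, b): return tuple(max(x, y) for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))

class Node:
    """BVH node: an AABB plus either two children or a single sphere leaf."""
    def __init__(self, lo, hi, left=None, right=None, sphere=None):
        self.lo, self.hi, self.left, self.right, self.sphere = lo, hi, left, right, sphere

def build(spheres):
    # spheres is a list of ((cx, cy, cz), radius) tuples
    if len(spheres) == 1:
        (cx, cy, cz), r = spheres[0]
        return Node((cx - r, cy - r, cz - r), (cx + r, cy + r, cz + r), sphere=spheres[0])
    spheres = sorted(spheres, key=lambda s: s[0][0])   # median split along x
    mid = len(spheres) // 2
    l, r = build(spheres[:mid]), build(spheres[mid:])
    return Node(vmin(l.lo, r.lo), vmax(l.hi, r.hi), l, r)

def hit_aabb(o, d, lo, hi):
    # slab test: does ray o + t*d (t >= 0) intersect the box?
    tmin, tmax = 0.0, math.inf
    for oi, di, loi, hii in zip(o, d, lo, hi):
        if abs(di) < 1e-12:
            if not (loi <= oi <= hii):
                return False
            continue
        t1, t2 = (loi - oi) / di, (hii - oi) / di
        tmin, tmax = max(tmin, min(t1, t2)), min(tmax, max(t1, t2))
    return tmax >= tmin

def hit_sphere(o, d, c, r):
    # nearest non-negative t, or None; d assumed unit length
    oc = tuple(a - b for a, b in zip(o, c))
    b = dot(oc, d)
    disc = b * b - (dot(oc, oc) - r * r)
    if disc < 0:
        return None
    t = -b - math.sqrt(disc)
    return t if t >= 0 else None

def traverse(node, o, d):
    """Nearest hit distance along the ray o + t*d, or None on a miss."""
    if not hit_aabb(o, d, node.lo, node.hi):
        return None
    if node.sphere:
        return hit_sphere(o, d, *node.sphere)
    hits = [t for t in (traverse(node.left, o, d), traverse(node.right, o, d)) if t is not None]
    return min(hits) if hits else None
```

The same traversal answers queries for rays pointing away from the camera or behind occluders, which a depth buffer cannot; octrees and kd-trees fill the same role with different splitting schemes.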

Pemalite said:

Sure. New features can make a difference.
But AMD has been bolting new features onto Graphics Core Next since the beginning and simply has not been able to keep pace with nVidia's efforts.

But to state that something may change and that AMD's long-term efforts might come to fruition in the next console hardware cycle is being a little disingenuous; developers have had how many years with the current Graphics Core Next hardware? Fact of the matter is, we have no idea whether the state of things is going to change in AMD's favor or whether the status quo will continue.

AMD not being able to keep pace with Nvidia is mostly down to the latter releasing bigger dies. A Radeon VII is nearly like-for-like with the RTX 2080 in performance given their transistor counts ... (it's honestly not as bad as you make it out to be) 

Things have been changing but PCs lag consoles by a generation in terms of graphics programming. DX11 wasn't the standard until the PS4/X1 released and it's likely DX12 will end up being the same. The situation is fine as it is but things can improve if a couple of engines make the jump like the Dunia Engine 2.0, AnvilNEXT 2.0, and especially Bethesda's Creation Engine ... (it would help even more if reviewers didn't use outdated titles like GTA V or Crysis 3 for their benchmark suite) 

Some more extensions in DX12 would help like OoO raster and rectangle primitive ... 

Pemalite said:

It's actually extremely easy to develop for nVidia hardware though... I mean, the Switch is also a testament to that very fact, outside of the lack of pixel pushing power Tegra has, developers have been praising the Maxwell derived hardware since the very beginning.

Obviously there are some pros and cons to whichever path AMD and nVidia take; nVidia does tend to work with developers, publishers, and game engines far more extensively than what AMD has historically done... Mostly that is due to a lack of resources on AMD's behalf.

By targeting a standardized API like DX11 ? Sure. Targeting the low-level details of their hardware ? Not so much, because Nvidia rarely values compatibility, so optimizations can easily break. The Switch is an exception to this since it's a fixed hardware design, so developers can be bothered to invest some effort ... (Switch software is not nearly as investment-heavy in comparison to current home consoles so developers might not care all that much if its successor isn't backwards compatible)

Nvidia dedicates far more resources to maintaining their entire software stack than to working with developers. When they release a new architecture, they need to make a totally different shader compiler, and they waste a lot of other engineering resources as well on non-gaming things such as CUDA and arguably OpenGL ... 

Pemalite said:

The pros and cons of AMD and nVidia are something I have been weighing for decades, and often AMD's pros outweigh its cons for my own PC builds for various reasons. (Compute, price, and features like Eyefinity and so on.)

Don't take me for someone who only favors nVidia hardware; that would be extremely far from the truth.

I am just at that point where AMD has been recycling the same architecture for an extremely long time... And has been trailing nVidia for a long while, that I just don't have any faith in AMD's hardware efforts until their next-gen hardware comes along, aka. Not Navi.

One thing is for sure... AMD's design wins in the console space are a good thing for the company; they're certainly the counterbalance to nVidia in the video game development community, as nVidia dominates the PC landscape... And they've also helped AMD's bottom line significantly over the years, keeping them in the game and viable as a company. Competition is a good thing.

This 'recycling' has its advantages, as seen in x86. Hardware designers get to focus on what's really important, the hardware features, and software developers get to keep compatibility ...

If AMD can't dominate PC gaming performance then they just need to exceed it with higher console performance so hopefully we can see high-end console SKUs at $700 or maybe even up to $1000 to truly take on Nvidia in the gaming space ... 

Pemalite said:

The thing is... Console and PC landscapes aren't that different from a gamer's point of view anymore; there is significant overlap there, and consoles are becoming more PC-like.
You can bet that nVidia is keeping a close eye on AMD as AMD takes design wins in the console and cloud spaces... nVidia has been very focused on the cloud for a very very long time, hence Titan/Tesla... And have seen substantial growth in that sector.

The other issue is that mobile is one of the largest sectors in gaming... Where AMD is non-existent and nVidia has a couple of wet toes; nVidia has leveraged its lessons learned in the mobile space and implemented those ideas into Maxwell/Pascal for great strides in efficiency.

Sure... You have Adreno which is based upon AMD's older efforts, but it's certainly not equivalent to Graphics Core Next in features or capability, plus AMD doesn't own that design anymore anyway.

Both consoles and PCs are taking notes from each other. Consoles are getting more features from PCs like backwards compatibility while PCs are becoming more closed platforms (we don't get to choose our OS or CPU ISA anymore) than ever before ...

Nvidia may very well have been focused on cloud computing but the future won't be GPU compute or closed APIs like CUDA anymore. The future of the cloud is going to be about offloading from x86 or designing specialized AI ASICs, so Nvidia's future is relatively fickle if they can't maintain long-term developer partnerships, and they're also at the mercy of other CPU ISAs like x86 or POWER ... 

Nvidia is just as non-existent as AMD is in the mobile space. In fact, graphics technology is not all that important given that driver quality on Android makes Intel look amazing by comparison! The last time Nvidia had a 'design win' in the 'mobile' (read: phones) space was with the Tegra 4i ? 

Honestly, if anyone has good graphics technology in the mobile space, it is Apple. Their GPU designs are amazing, and it doesn't hurt that the Metal API is a much simpler alternative to either Vulkan or OpenGL ES while also being nearly as powerful as the other modern gfx APIs (DX12/Vulkan), so developers will happily port their games over to Metal. Connectivity is more important, as the latest settlement between Apple and Qualcomm showed us. Despite Apple being a superior graphics system architect in comparison to the Adreno team owned by Qualcomm, the former capitulated to the latter since they couldn't design state of the art mobile 5G modems. 5G is more important than superior graphics performance in the mobile space ... 

The likes of Huawei, Qualcomm, or Samsung are destined to reap the vast majority of the rewards in the mobile space since they have independent 5G technology. The likes of Intel (they couldn't make 5G modems) and Nvidia (their GPUs are too power hungry) have already deserted the mobile space, and others like Apple will have to settle for scraps (even if the most profitable ones) and sit this one out until they can figure out how to make their own 5G modems ... 

Pemalite said:

Intel historically hasn't reserved the same amount of die space for its integrated graphics as AMD has been willing to... There are probably some good reasons for that; AMD markets its APUs as being "capable" of gaming, and Intel hasn't historically gone to similar lengths in its graphics marketing.

Intel's efforts in graphics have historically been the laughing stock of the industry as well. i740? Yuck. Larrabee? Failure.
Extreme Graphics? Eww. GMA? No thanks. Intel HD/Iris? Pass.

That doesn't mean Intel isn't capable of some good things, their EDRAM approach proved interesting and also benefited the CPU side of the equation in some tasks... But Intel and decent graphics is something I will need to "see to believe" because honestly... Intel has been promising things for decades and simply hasn't delivered. - And that is before I even touch upon the topic of drivers...

I have done a lot of work prior in getting Intel parts like the Intel 940 running games like Oblivion/Fallout due to various lacking hardware features, so Intel's deficiencies aren't lost on me in the graphics space. Heck, even their X3100 had to have a special driver "switch" to toggle TnL from being hardware accelerated to being performed on the CPU on a per-game basis, as Intel's hardware implementation of TnL performed extremely poorly.

So when it comes to Intel Graphics and gaming... I will believe it when I see it... Plus AMD and nVidia have invested far more man hours and money into their graphics efforts than Intel has over the decades, that's not a small gap to jump across.

Intel's graphics hardware designs aren't the biggest problem IMO. It's that nearly no developers prioritize Intel's graphics stack, so the poor end user experience is mostly a result of poor drivers and poor developer relations ... (sure, their hardware designs are on the more underwhelming side, but what kills it for people is that the drivers DON'T WORK)

Older Intel integrated graphics hardware designs sure stunk, but Haswell/Skylake changed this dramatically, and they look to be ahead of either AMD or Nvidia from a feature-set standpoint, but whether that'll come in handy in the face of the other aforementioned problems is another matter entirely ... 

More importantly, when are we EVER going to see the equivalent brand/library optimization of either AMD's Gaming Evolved/GPUOpen or Nvidia's TWIMTBP/GameWorks from Intel ?

Pemalite said:

I am being cautious with Xe. Intel has promised big before and hasn't delivered. But some of the ideas being shouted like "Ray Tracing" has piqued my interest.

I doubt AMD will let that go without an answer though, nVidia is one thing, but Integrated Graphics has been one of AMD's biggest strengths for years, even during the Bulldozer days.

------------------------------------------------------------------------------------------------------------------------------------------------

Yeah. We definitely have different views on how Ray Tracing is supposed to be approached... And that is fine.
I am just looking at the past mistakes nVidia has made with the Geforce FX and, to an extent... Turing.

Consoles are going this route regardless so everybody including AMD and Intel will have it ... 

Pemalite said:

Either way, the Xbox One X is punching at around the same level as a 1060; even if the 1060 is a couple frames under 30, the Xbox gets away with lower API and driver overheads.

--------------------------------------------------------------------------------------------------------------------------------------------------

Like what has been established prior... Some games will perform better on AMD hardware than nVidia and vice-versa, that has always been the case. Always.
But... In 2 years time I would certainly prefer a Geforce 1060 6Gb over a Radeon RX 470... The 1060 is in another league entirely with performance almost 50% better in some titles.
https://www.anandtech.com/bench/product/1872?vs=1771

Modern id Tech powered games love their VRAM; it's been one of the largest Achilles' heels of nVidia's hardware in recent years... Which is ironic, because if you go back to the Doom 3 days, id Tech ran best on nVidia hardware.

An X1X demolishes the 1060 in SWBF II and yikes, most of Anandtech's benchmarks are using DX11 titles, especially the dreaded GTA V ... 

An RX 470/570 is nowhere near as bad against the 1060 in DX12 or Vulkan titles ... 

Benchmark suite testing design is a big factor in terms of performance comparisons ... 

Pemalite said:

Forza 7's performance issues were notorious in its early days before they got patched out. (Which greatly improved the 99th percentile benches.)

https://www.game-debate.com/news/23926/forza-motorsport-7s-stuttering-appears-to-be-fixed-by-windows-10-fall-creators-update

You are right of course that drivers also improved things substantially as well.
https://www.hardocp.com/article/2017/10/16/forza_motorsport_7_video_card_performance_update/3

In short, a Geforce 1060 6GB can do Forza 7 at 4k with a similar experience to that of the Xbox One X.

I don't see any benchmarks specific to a 1060 in those links that suggests a 1060 is actually up to par with the X1X ... 

Pemalite said:

It would have to be a very shit PC port for it to equal or better a Geforce 1070. No doubt about it.

----------------------------------------------------------------------------------------------------------------------------------------------------

No way. A 1070 at the end of the day is going to provide you with a far better experience, especially once you dial up the visual settings.

Would it be a very shit PC port if a 580 somehow matched a 1070 ? 

Pemalite said:

A 1060 is overrated. But so is the Xbox One X.

The 1060, RX 580, and Xbox One X are all in the same rough ballpark in expected capability.
Of course, because the Xbox One X is a console, it does have the advantage of having developers optimize for its specific hardware and software base, but the fact that the Geforce 1060 is still able to turn in competitive results against the Xbox One X is a testament to that specific part.

And if I was in a position again to choose between a Radeon RX 580 and a Geforce 1060 6GB... It would be the RX 580 every day, which is the Xbox One X equivalent for the most part.

Sooner or later, a 580 or an X1X will definitively pull ahead of a 1060 by a noticeably bigger margin than they do now ...