
Forums - General Discussion - Navi Made in Collab with Sony, Is MS Still Using It?

 

Poll: Pricing of Xbox vs PS5

Xbox +$150 > PS5: 0 votes (0%)
Xbox +$100 > PS5: 5 votes (14.71%)
Xbox +$50 > PS5: 4 votes (11.76%)
PS5 = Xbox with slight performance boost: 7 votes (20.59%)
PS5 = Xbox with no performance boost: 2 votes (5.88%)
Xbox will not have the pe...: 3 votes (8.82%)
Still too early, wait for MS PR: 13 votes (38.24%)

Total: 34
Trumpstyle said:

PS4 Pro is 4.2 teraflops, so 8.3 is about 2x; it will probably be slightly above 8.3, it's a rounded number, dude.

So you don't expect efficiency to change at all? So you expect the Playstation 5 to be as efficient as a base Xbox One on a per-flop basis? Wow.

Trumpstyle said:

Keep in mind this is speculation, but yes, the PS5 will have 8 teraflops at a $400 price, Xbox Lockhart 4 TF (disc-less) at $300, and Xbox Anaconda 12 TF at $500. They will all have a 1 TB NVMe SSD (ultra super-fast storage) and an 8-core Zen 2 CPU, and will be released in fall 2020. This is 100% certain.

You can't call it speculation with one hand and assert it as definitive with the other.
You actually have no idea what the hardware is going to be capable of; it hasn't been revealed yet.



--::{PC Gaming Master Race}::--

ironmanDX said:
DonFerrari said:

We are talking about a 2 year window and you want to look at a 2 month window?

Sony sold much more than MS from the very beginning, and the PS5 will do the same. Sony can easily sign a contract based on their average sales for the 2-year period (which would be over 30M), while MS can't; they never got anywhere near those numbers. Unless the console landscape changes a lot (which you seem determined to believe, for reasons), the next Xbox will sell about half of what the PS5 does.

Of course I am... Sales are going to be massive at launch for both of these machines: the PS5 simply because it's PlayStation, and the Xbox for the reasons I mentioned above. Launch will account for a significant chunk of the consoles sold in the first 2 years... and will set the tone for how well they're likely to sell against one another.

Of course the PS4 outsold the Xbox One... I just gave very good reasons why in the post you replied to... Any company that puts out a product with that many missteps is bound to be outsold, no matter the market.

The landscape HAS changed. Look at the Switch: a hybrid console is taking names... Iterative consoles now exist too, it's looking like at least one of the next machines will launch with multiple SKUs, and MS is making all the right moves to claw some ground back next generation. Google is coming with Stadia... There's PS Now and Game Pass... Though, again, I'll agree that the PS5 is going to sell quite a lot more than the neXtbox.


I don't see how you can say that the landscape hasn't changed.

Companies work on projections that are based largely on historical data, and there isn't a single iota of evidence that the next Xbox could do better than 50% of PS5 sales. So there is absolutely no reason to think Sony wouldn't have much better bargaining power when purchasing parts.



duduspace11 "Well, since we are estimating costs, Pokemon Red/Blue did cost Nintendo about $50m to make back in 1996"

http://gamrconnect.vgchartz.com/post.php?id=8808363

Mr Puggsly: "Hehe, I said good profit. You said big profit. Frankly, not losing money is what I meant by good. Don't get hung up on semantics"

http://gamrconnect.vgchartz.com/post.php?id=9008994

Azzanation: "PS5 wouldn't sold out at launch without scalpers."

thismeintiel said:
eva01beserk said:

More leaks.

AMD won't have anything matching the 2080 until 2020, at $500. Who knows about the 2080 Ti.

The 2070 match will be $330: Navi 10 with 56 compute units, coming out in the third quarter of this year. Could this be the one in the PS5? At that price? It's still a whole year before the PS5 actually uses it, and we have to remember that Sony will be getting them much cheaper.

So this scales down the rumors. Like Pemalite said before, it seems like AMD is still a ways behind. Nvidia just has to update the RTX line to 7nm and keep their big advantage in both power and TDP. But it seems like AMD will still have better price-for-performance.

I for one think we are getting at least 2070 performance in the PS5. After all, a $500 PS5 seems likely. Who knows about the Anaconda; it could be $600 with 2080 performance. But will that matter? I don't think we casual gamers will be able to tell next gen. We already go by what Digital Foundry tells us.

If the leak is correct, then Navi 10 sounds right. I wonder if the real reason Sony skipped E3 is that they will be at AMD's E3 showing, where Navi is revealed. They may show a few games that are in production, but they aren't far enough along yet to have a floor full of demos. It also allows AMD to shoulder much of the cost of an E3 showing, while Sony focuses its spending on the development of the PS5.

EricHiggin said:

Since it matches up with AdoredTV's most recent info, looks more in line with what one would expect, and is closer to the rumored July 7th announcement, I'd say there's probably some validity to it. Not to mention Cerny failing to mention anything about the PS5's Navi specs. PlayStation themselves may not know exactly where Navi is going to land just yet, since it's possibly still being tweaked, so there's no point in mentioning its specs, on top of the other strategic reasons they may not want to announce officially yet.

The prices are deceiving for consoles, though. A $330 3080XT with RTX 2070 specs is the price of the entire card, and that's MSRP. PlayStation will only be buying the silicon, so the price they would pay is already much less than that, plus no middleman, plus bulk discounts.

Is Navi 10 just a cut-down version of Navi 20, or a different die? If the 56 CU 3080XT is a perfect die, I'd have to imagine the 52 CU 3080 would be the console die, leaving room to disable some CUs.
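As a rough illustration of why a cut-down console configuration helps yields (a toy sketch in Python; the 3% per-CU defect probability is a made-up number purely for illustration, not anything from the leak):

```python
# Toy binning model: probability that a 56 CU die has enough working CUs.
from math import comb

TOTAL_CU = 56          # physical CUs on the rumoured die
NEEDED_CU = 52         # CUs the hypothetical console config would enable
P_CU_BAD = 0.03        # assumed independent defect probability per CU (illustrative only)

def p_at_least(good, total, p_bad):
    """Probability that at least `good` of `total` CUs are defect-free."""
    p_good = 1 - p_bad
    return sum(comb(total, k) * p_good**k * p_bad**(total - k) for k in range(good, total + 1))

print(f"Perfect 56/56 die:       {p_at_least(TOTAL_CU, TOTAL_CU, P_CU_BAD):.1%}")   # ~18%
print(f"At least 52/56 CUs good: {p_at_least(NEEDED_CU, TOTAL_CU, P_CU_BAD):.1%}")  # ~97%
```

Shipping a 52-of-56 configuration lets far more dies qualify than insisting on a perfect die, which is the usual reason console parts leave a few CUs disabled.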



Pemalite said:
Trumpstyle said:

PS4 Pro is 4.2 teraflops, so 8.3 is about 2x; it will probably be slightly above 8.3, it's a rounded number, dude.

So you don't expect efficiency to change at all? So you expect the Playstation 5 to be as efficient as a base Xbox One on a per-flop basis? Wow.

Trumpstyle said:

Keep in mind this is speculation, but yes, the PS5 will have 8 teraflops at a $400 price, Xbox Lockhart 4 TF (disc-less) at $300, and Xbox Anaconda 12 TF at $500. They will all have a 1 TB NVMe SSD (ultra super-fast storage) and an 8-core Zen 2 CPU, and will be released in fall 2020. This is 100% certain.

You can't call it speculation with one hand and assert it as definitive with the other.
You actually have no idea what the hardware is going to be capable of; it hasn't been revealed yet.

I'm not sure where you're going with this efficiency thing. I expect the PS5 GPU to pull about 120 watts, and that includes memory; the Xbox One GPU probably pulls about 80 watts with its DDR4 memory, so that's upwards of 5x the flops per watt when you compare the TFs.

When I say 100% it's just for fun :) hehe, but I'm pretty certain we will be getting something very close to what I wrote. Right now it's the 8-core Zen 2 CPU for the Xboxes that I feel most uncertain about; I think there is a pretty decent chance that Xbox Lockhart will only have 4 Zen 2 CPU cores, and maybe even Xbox Anaconda, but I want to make precise predictions, so I won't write that down.
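Purely as a back-of-the-envelope check of the numbers in this exchange (a sketch in Python): the console teraflop figures follow from the published CU counts and clocks, while the 8.3 TF PS5 figure is the rumour under discussion and the 120 W / 80 W GPU power figures are the assumptions stated above, not measurements.

```python
# FP32 TF = CUs x 64 shaders x 2 ops per clock x clock speed
ps4_pro_tf = 36 * 64 * 2 * 0.911e9 / 1e12    # ~4.2 TF (36 CUs @ 911 MHz)
xbox_one_tf = 12 * 64 * 2 * 0.853e9 / 1e12   # ~1.31 TF (12 CUs @ 853 MHz)
ps5_tf_rumour = 8.3                          # rumoured figure being debated

gflops_per_watt_ps5 = ps5_tf_rumour * 1000 / 120   # assumed 120 W for GPU + memory
gflops_per_watt_xb1 = xbox_one_tf * 1000 / 80      # assumed 80 W for GPU + memory

print(f"PS4 Pro: {ps4_pro_tf:.2f} TF, Xbox One: {xbox_one_tf:.2f} TF")
print(f"Rumoured PS5: {gflops_per_watt_ps5:.0f} GFLOPS/W vs Xbox One: {gflops_per_watt_xb1:.0f} GFLOPS/W"
      f" -> ~{gflops_per_watt_ps5 / gflops_per_watt_xb1:.1f}x per watt")
```

With these assumptions the ratio works out to roughly 4x per watt, in the same ballpark as the "upwards of 5x" figure above.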



6x master league achiever in starcraft2

Beaten Sigrun on God of war mode

Beaten DOOM ultra-nightmare with NO endless ammo-rune, 2x super shotgun and no decoys on ps4 pro.

1-0 against Grubby in Wc3 frozen throne ladder!!

eva01beserk said:

More leaks.

AMD won't have anything matching the 2080 until 2020, at $500. Who knows about the 2080 Ti.

The 2070 match will be $330: Navi 10 with 56 compute units, coming out in the third quarter of this year. Could this be the one in the PS5? At that price? It's still a whole year before the PS5 actually uses it, and we have to remember that Sony will be getting them much cheaper.

So this scales down the rumors. Like Pemalite said before, it seems like AMD is still a ways behind. Nvidia just has to update the RTX line to 7nm and keep their big advantage in both power and TDP. But it seems like AMD will still have better price-for-performance.

I for one think we are getting at least 2070 performance in the PS5. After all, a $500 PS5 seems likely. Who knows about the Anaconda; it could be $600 with 2080 performance. But will that matter? I don't think we casual gamers will be able to tell next gen. We already go by what Digital Foundry tells us.

I saw that leak before, but it doesn't add up.

1. Why make 2 different chips when they are just 8 CU apart? That would have made sense at the low end/entry level, say 8 and 16 CU, but not at that level, where there's a high chance that many 64 CU chips would need to be binned down to just 56 working CU anyway. It's the exact opposite problem of the previous leak, where they had a huge range of GPUs and CU counts on just one chip.

2. The pricing doesn't add up, either. For just 4 additional CU you're paying a $90 premium, or in other terms: for 7% more CU you're paying a 27% premium. Nvidia's prices are high, but the difference in power is also roughly the difference in price, percentage-wise. Not so between those 3080XT/3090 models.

3. The TDP. The 3090 is supposed to have a lower TDP but more performance out of just 4 more CU. That doesn't add up unless there are more changes to the architecture or extremely aggressive binning - which would make it a very rare card.
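A quick check of the percentages in point 2 (a sketch in Python; the 60 CU figure for the larger part is inferred from the "4 additional CU" wording, so treat it as an assumption):

```python
# Compare the relative CU increase with the relative price increase in the rumour.
cu_base, cu_big = 56, 60            # 3080XT vs 3090 compute units (60 = 56 + 4, assumed)
price_base, price_big = 330, 420    # $330 plus the quoted $90 premium

cu_gain = (cu_big - cu_base) / cu_base * 100
price_gain = (price_big - price_base) / price_base * 100
print(f"{cu_gain:.0f}% more CUs for a {price_gain:.0f}% higher price")
# -> roughly 7% more CUs for a 27% higher price, as stated
```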



Pemalite said:

At the end of the day though, it doesn't matter.
nVidia is offering something that AMD isn't, nVidia is still faster and more efficient than AMD's equivalent offerings at almost every bracket... (Minus the crappy Geforce 1650, get the Radeon 570 every day, even if it's a bigger power hog.)

Whether Turing's hardware will pay off in the long term remains to be seen, but one thing is for sure... Lots of modders are putting together mods for older games and implementing ray tracing - Minecraft, Crysis, you name it - even if it is more rudimentary path tracing.

It will be interesting if they implement Turing specific features going forward.

On PC ? That might very well be true but on consoles ? Many ISV's don't see eye to eye with them and will just gladly use features their competitor has to offer instead ... 

Also you do not want to do ray tracing with just a depth buffer like in Crysis's case ... 

Pemalite said:

You are correct that AMD against Turing is a better fit rather than AMD against Pascal, Pascal's chips were far more efficient and even smaller than the AMD equivalent, nVidia could have theoretically priced AMD out of the market entirely, which wouldn't bode well for anyone, especially the consoles.


And I am not downplaying any of AMD's achievements... But when a company is bolting on features to a 7+ year old GPU architecture and is trying to shovel it as something new and novel and revolutionary... Well. Doesn't really sit well. Especially when AMD has a history of re-badging older GCN as a new series.

Take the RX 520 for example... Using a GPU from 2013. (Oland.)
Granted, it's low-end stuff so it doesn't matter as much, but it's highly disingenuous of AMD... Plus such a part misses out on technologies that are arguably more important for low-end hardware, like delta colour compression to bolster bandwidth.

nVidia isn't immune from such practices either, but they have shied away from such practices in recent times.

New features make the hardware revolutionary, not new architectures IMO ... (it makes nearly no sense for AMD to dump all the work that console developers have invested in coding for GCN when it's a long-term strategy for these investments to translate on PC as well which probably won't come into fruition until the release of next gen consoles) 

AMD is trying to adopt an Intel strategy where developers DON'T have to move so much like they do on x86 because they believe that there'll come a point where the industry might realize it is better to value compatibility and more features rather than dealing with an entirely new architecture altogether since drivers or software are getting far more complex than ever. AMD doesn't share the same vision as Nvidia does in the GPU landscape emulating the wild west and aren't interested in the chaos that comes with it ... (Sony was punished before for believing that developers would arrogantly strive to get the most for their 'exotic' hardware and a similar situation occurred with AMD on Bulldozer for being so bullish that multithreaded apps would exonerate their single threaded bottleneck) 

If AMD FAILED to deliver on its obligations to Sony or on Microsoft's desire for backwards compatibility, they would automatically be branded an enemy of the entire developer community, since the ISVs can't be happy to LOSE their biggest investments to date once they figure out that all their released software and tools need to be either reworked or gone for good. It is my belief that AMD's current strategy of striving for openness, collaboration, and goodwill with developers is what will lead them to the salvation they are looking for, so purely seeking out superior hardware designs would run counter to the strategy they've built up thus far. By ruining developer community relationships, AMD as they are now would be thrown to the wolves to fend for themselves ... (where bigger players such as Intel or Nvidia would push them out of the market for good if AMD were all alone and isolated)

Both AMD's and Nvidia's strategies have their own merits and drawbacks. The former is trying to elevate and cultivate a special relationship between itself and the ISVs, while the latter uses its incumbent position as the leader in PC graphics hardware to maintain its dominance. We need to understand that the long-term relationships AMD forms with some of its customers require a level of trust that goes above and beyond a purely corporate one, because bailing AMD out once in a while should also be in those customers' interest; if each party seeks to be interdependent on the other, then that risk should be shared between them ... (as crazy as it might sound in the business world, both need to serve as safety nets for each other!)

It's true that they haven't seen much of the pros yet and that they've mostly only seen the cons so far but know this that the industry has SOME STAKE to keep GCN. If not for being able to succeed in PC hardware market then surely they can still see success in providing console hardware and cloud gaming hardware ? 

Even if AMD can't take the performance crown in the PC space maybe one day mGPU could become ubiquitous solely for console platforms so that you can ONLY get the best gaming experience out of them compared to a generic high-end PC where developers likely won't ever touch mGPU over there ... (it seems like a radical idea for AMD to somehow exclusively dominate a feature such as mGPU for gaming)

Pemalite said:

I think it's interesting now, especially the technologies Intel is hinting at.
For IGP's though AMD should still hold a sizable advantage, AMD simply reserves more transistors in it's IGP's for GPU duties than Intel is willing.

My Ryzen notebook (Ignoring the shit battery life that plagues all Ryzen notebooks!) has been amazing from a performance and support standpoint.

Honestly, I wouldn't be as surprised if Intel reserves a similar amount of die space on their highest end parts as AMD does! Intel has a VERY HIGH amount of flexibility in their GPU architecture so if AMD is 'wasting' a LOT of transistors then wait until you hear about what Intel does, haha. Those guys have 4(!) different SIMD modes such as SIMD 4x2/8/16/32 while AMD or Nvidia only have one with each being SIMD64 and SIMD32 respectively. SIMD4x2 is especially good for what is effectively a deprecated feature known as geometry shaders. That's not all the other goodies there is to Intel hardware though. They also support framebuffer/render target reads inside a shader (most powerful way to do programmable blending) and solve one fundamental issue with hardware tiled resources by being able to update the tile mappings with GPU commands instead of CPU commands! (not being able to update the mappings from the GPU was a big complaint from developers since it introduced a ton of latency)

Nvidia currently strikes a very good balance between flexibility and efficiency. AMD is a harder sell on the PC side with their unused higher flexibility but Intel takes the word 'bloat' to a whole new level of meaning with their complex register file since it shares a lot more of GCN's capabilities with their amazing unicorn sauce that developers would only dream of exploiting ... (I wonder if Xe is going to cut out the fun stuff from Intel's GEN lineups to make it more efficient from a competitive standpoint)

Also, developers pay very little attention to Intel's graphics stack as well. They pay a lot more than AMD does for this flexibility but the sub-par graphics stack just scares developers away from even trying ... 

Pemalite said:

Ray Tracing is inherently a compute bound scenario. Turing's approach is to try and make such workloads more efficient by including extra hardware to reduce that load via various means.

AMD has pretty much stuck to 64 CU's or below with GCN, I don't expect that to change anytime soon.. And Graphics Core Next has shown not to be an architecture that has been able to keep pace with nVidia, whether it's Maxwell, Pascal or Turing, it's simply full of limitations for gaming orientated workloads.

A lot of the features introduced with Vega generally didn't pan out as one would hope, which didn't help things for AMD either.

Will nVidia retain it's advantage? Who knows. I don't like their approach to dedicating hardware to driving Ray Tracing... I would rather nVidia had taken a more unified approach that would have lent itself to rasterized workloads as well. - Whether this ends up being another Geforce FX moment for nVidia and AMD remains to be seen.. But with Navi being a Polaris replacement and not a high-end part... I don't have my hopes up until AMD's next-gen architecture.

I guess we feel the opposite regarding whether dedicated fixed-function units should be used for hardware-accelerated ray tracing or not, but Volta is vastly less efficient at 3DMark Port Royal, where it drops to RTX 2060 levels ... (I imagine Port Royal will be the gold-standard target visuals for next-gen consoles)

My hope is for consoles to double down on dedicated units for ray tracing by one upping Turing's ray tracing feature set because I'd care more about performance in this instance rather than worrying about flexibility since it has tons of very useful applications for real-time computer graphics ... (I wouldn't take tensor cores over FP16 support in the other way since the payoff is questionable as it is with current applications) 

Hardware design decisions like these are ultimately going to be about the payoff and I think it's worth it since it will significantly increase the visual quality at much lower performance impact ... 

Pemalite said:

But the Xbox One X isn't doing 4K, ultra settings @60fps.

I am also going to hazard a guess they are using a Geforce 1060 3GB, not the 6GB variant, the 6GB card tends to track a little closer to the RX 580, not almost 20% slower... But they never really elaborated upon any of that... Nor did Digital Trends actually state anything about the Xbox One X?

I don't believe so from other independent tests verifying the 1060's 4K ultra numbers. When I did a slight sanity check, guru3D got 28FPS at 4K ultra while DT got 25 FPS for the same settings so the numbers aren't too dissimilar. I maintain that DT were indeed testing the 6GB version of the 1060 so it's likely that the 1060 does badly at 4K on this game regardless but a massive win for the mighty X1X nonetheless ... 

Pemalite said:

...Which reinforces my point. That the Xbox One X GPU is comparable to a Geforce 1060... That's not intrinsically a bad thing either, the Geforce 1060 is a fine mid-range card that has shown itself to be capable.

In saying that... The Geforce 1060 is playing around the same level as the Radeon RX 580, which is roughly equivalent to the Xbox One X GPU anyway.

It's at minimum a 1060 ...

Pemalite said:

Wolfenstein 2, and by extension most id Tech powered games, love VRAM. Absolutely love it... That's thanks to its texturing setup.
The Geforce 1060 3GB's performance for example will absolutely tank in that game.

There is a reason why a Radeon RX 470 8GB is beating the Geforce 1060... No one in their right mind would state the RX 470 is the superior part, would they?

@Bold I don't know? Maybe it is, when we take a look at the newer games with DX12/Vulkan where it's falling behind the comparable AMD parts ... (I know I wouldn't want to be stuck with a 1060 in the next 2 years, because graphics code is starting to get more hostile towards Maxwell/Pascal and a flood of DX12/Vulkan-only titles is practically on the horizon)

Even when we drop down to 1440p the 1060 still CAN'T hit 60FPS like the X1X but what's more an RX 580 equipped with the same amount of memory as Vega tanks harder than you'd think in comparison to the X1X ... 

Pemalite said:

You need to check the dates and patch version, later successive patches for Forza 7 dramatically improved things on the PC, it wasn't the best port out of the gate.

No way in hell should you require a Geforce 1080 to have a similar experience to the Xbox One X. It's chalk and cheese, 1080 every day.

I don't believe it was a patch that helped improve performance dramatically for Forza 7, I think it was a driver update from Nvidia that did the trick but you'll still need a 1070 either way regardless to hit 60FPS in the 99th percentile to get a similar experience because a 1060 is still noticeably slower ... (580 wasn't all that competitive so massive kudos to Turn 10 for leveraging the true strength of console optimizations)

Pemalite said:


A Geforce 1060 6Gb is doing 42.9fps @ 1440P with ultra settings. 

You can bet your ass that same part can do 4K, 30fps at medium... Without the killer that is HairWorks.
But the Xbox One X is dropping things to 1800P in taxing areas as well.

https://www.anandtech.com/show/10540/the-geforce-gtx-1060-founders-edition-asus-strix-review/10

In-fact Guru 3D says the Witcher 3 is 27fps @ 4k + Ultra settings, so a 1060 6Gb could potentially drive better visuals than the Xbox One X version if you settle for some high details rather than ultra.

https://www.guru3d.com/articles-pages/geforce-gtx-1060-review,23.html

Even when we turn down to MEDIUM quality which disables Hairworks entirely, a 980 is still very much getting eaten alive. Although guru3D's numbers are strangely on the higher side I don't think many will lose sleep over it since a 1060 is very like for like with the X1X ... 
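As an aside, a rough pixel-count scaling check of the 1440p ultra -> 4K claim quoted above (a sketch in Python; it assumes frame rate scales inversely with pixel count, which is only a crude approximation for a GPU-bound game):

```python
pixels_1440p = 2560 * 1440
pixels_4k = 3840 * 2160
fps_1440p_ultra = 42.9                               # the Anandtech figure quoted above

fps_4k_ultra_est = fps_1440p_ultra * pixels_1440p / pixels_4k
print(f"Estimated 4K ultra: ~{fps_4k_ultra_est:.0f} fps")   # ~19 fps
# Getting from ~19 fps at ultra to 30 fps would need roughly a 1.6x uplift from
# the ultra -> medium settings drop (HairWorks off, etc.), which is the open question.
```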

Pemalite said:

Fortnite will vary from 1152P - 1728P on the Xbox One X with 1440P being the general ballpark, according to Eurogamer... And that is about right where the Geforce 1060 will also sit.

https://www.eurogamer.net/articles/digitalfoundry-2018-fortnites-new-patch-really-does-deliver-60fps

And if we look at the techspot benchies, we can see that 4k, 30fps is feasible on the 1060.
https://www.techspot.com/article/1557-fortnite-benchmarks/

It 'depends' on which 'Fortnite' we're talking about. I am talking about its more demanding single-player "Save the World" campaign, while you are talking about its more pedestrian battle royale mode, which is inherently designed with different bottleneck characteristics ... (it's the reason the Switch/Android/iOS can run 'Fortnite' at all when they strip away the campaign mode)

Since X1X ends up being roughly equivalent to the 1060 in both modes, you're getting a good experience regardless ... 

Pemalite said:

Like I stated prior... Regardless of platform there will always be a game or two which will run better regardless of power. I.E. Final Fantasy 12 running better on Playstation 4 Pro than the Xbox One X. - Essentially the exact same hardware architecture, but the Xbox One X is vastly superior in almost every metric but returns inferior results.

https://www.youtube.com/watch?v=r9IGpIehFmQ

@Bold Depends, and I don't deny these being the more uncommon cases, but what you may think of as 'pathological' is more common than you realize, so it's definitely going to skew things some in the X1X's favour ...

Pemalite said:

The 1060 is doing 25fps @4k with Ultra settings.
https://www.techspot.com/article/1600-far-cry-5-benchmarks/page2.html

And just like Digital Foundry states, the Xbox One X isn't running ultra settings, but high settings with a few low settings... But dropping the visuals from high to low nets only ~5 fps, which means the engine just isn't scaling things appropriately on PC.

But hey. Like what I said above.

But take a look at that 99th percentile (or the 1% lowest frames), which explains why Alex wasn't able to hold a steady 30 FPS on a 1060 like the X1X did with the same settings ...
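For anyone unfamiliar with the metric being referenced, a small illustration of how a "99th percentile / 1% low" figure is computed (a sketch in Python; the frame times are made up purely to show the calculation):

```python
import numpy as np

# 290 frames at a steady 30 ms plus 10 stutter frames at 60 ms
frame_times_ms = np.concatenate([np.full(290, 30.0), np.full(10, 60.0)])

avg_fps = 1000.0 / frame_times_ms.mean()
one_pct_low_fps = 1000.0 / np.percentile(frame_times_ms, 99)   # slowest 1% of frames

print(f"Average: {avg_fps:.1f} fps, 1% low: {one_pct_low_fps:.1f} fps")
# The average sits above 30 fps while the 1% low is far below it, which is
# exactly the kind of gap that shows up as visible stutter despite a "30 fps" average.
```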

Pemalite said:

Or am I not giving the 1060 6GB enough credit?
I am not saying the Xbox One X is below a 1060. It's that it's roughly in the same ballpark. - That is... Medium quality settings. 1440P-4k, which is what I expect out of a Radeon RX 580/590, which is roughly equivalent to the Xbox One X in terms of GPU capability on the Radeon side.

You were the one who stated that you need a 1070 to get a similar experience to the Xbox One X. :P

I mean, the Radeon RX 590 is a faster GPU than what is in the Xbox One X... It's Polaris pushed to its clock limits, yet... The Geforce 1070 still craps on it.
https://www.anandtech.com/show/13570/the-amd-radeon-rx-590-review/6

Like I said though... I am a Radeon RX 580 owner. 
I am also an Xbox One X owner. 

I can do side by side comparisons in real time.

If I had to give a distribution of the X1X's performance in AAA games, here's how it would look ...

A 10% chance that its performance will be lower than a 1060's by 10%, a 50% chance that it'll perform within +/- 5% of a 1060, a 20% chance that it'll perform better than a 1060 by a margin of 10%, and a 20% chance that it'll perform as well as or better than a 1070 ...

A 1060 will certainly give you a worse experience than an X1X by a fair margin, but when we take into account the more uncommon cases, an X1X isn't all that far behind a 1070. Maybe 10-15% slower on average?

A 1060 is overrated anyway, since it's not going to stand much of a chance against either a 580 or an X1X in the next 2 years as new releases come out over time ... (a 1070 will be needed in case a 1060 starts tanking)
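Taking the distribution above at face value, a quick expected-value calculation (a sketch in Python; the ~1.35x figure used for "as good as a 1070" is my own assumption for a 1070's typical lead over a 1060, not something from the post):

```python
# Multiplier relative to a GTX 1060 : probability assigned above
outcomes = {
    0.90: 0.10,   # ~10% slower than a 1060
    1.00: 0.50,   # within +/- 5% of a 1060
    1.10: 0.20,   # ~10% faster than a 1060
    1.35: 0.20,   # roughly 1070-class (assumed multiplier)
}
expected = sum(mult * p for mult, p in outcomes.items())
print(f"Expected: ~{expected:.2f}x a 1060")   # ~1.08x, i.e. 1060-ish on average
```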



Trumpstyle said:

I'm not sure where you're going with this efficiency thing. I expect the PS5 GPU to pull about 120 watts, and that includes memory; the Xbox One GPU probably pulls about 80 watts with its DDR4 memory, so that's upwards of 5x the flops per watt when you compare the TFs.

Performance per watt. Performance per Teraflop.
You can have a GPU with 1 Teraflop beat a GPU with 2 Teraflops in gaming.

Comparing Teraflops as some kind of absolute determiner of performance between hardware is highly disingenuous.

And where are you pulling this 5x flops-per-watt figure from?
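A worked example of the per-flop point, using the GTX 1060 / RX 580 pairing discussed elsewhere in this thread (a sketch in Python; the published FP32 teraflop figures are used, and the shared 60 fps number is just a placeholder since only the relative per-flop gap matters):

```python
cards_tf = {"GTX 1060 6GB": 4.4, "RX 580": 6.2}   # published FP32 teraflops
fps = 60.0                                        # assume both land on roughly the same frame rate

for name, tf in cards_tf.items():
    print(f"{name}: {fps / tf:.1f} fps per TF")
# The 1060 extracts ~40% more gaming performance per flop here, which is why raw
# teraflop counts can't be compared directly across different architectures.
```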

Trumpstyle said:

When I say 100% it's just for fun :) hehe, but I'm pretty certain we will be getting something very close to what I wrote. Right now it's the 8-core Zen 2 CPU for the Xboxes that I feel most uncertain about; I think there is a pretty decent chance that Xbox Lockhart will only have 4 Zen 2 CPU cores, and maybe even Xbox Anaconda, but I want to make precise predictions, so I won't write that down.

Speculation is fine, but the way you were wording it... It was as if you were asserting that your stance is 100% factual, when that simply isn't the case yet.

Zen 2 is likely 8-cores per CCX, so my assertion prior was that the next gen will likely leverage a single CCX for the CPU, the Playstation 5 having 8-cores falls into that.
Originally I thought AMD was only going to increase the single CCX core count to 6-cores, so I was happy to be incorrect about that.

Bofferbrauer2 said:

I saw that leak before, but it doesn't add up.

1. Why make 2 different chips when they are just 8 CU apart? That would have made sense at the low end/entry level, say 8 and 16 CU, but not at that level, where there's a high chance that many 64 CU chips would need to be binned down to just 56 working CU anyway. It's the exact opposite problem of the previous leak, where they had a huge range of GPUs and CU counts on just one chip.

2. The pricing doesn't add up, either. For just 4 additional CU you're paying a $90 premium, or in other terms: for 7% more CU you're paying a 27% premium. Nvidia's prices are high, but the difference in power is also roughly the difference in price, percentage-wise. Not so between those 3080XT/3090 models.

3. The TDP. The 3090 is supposed to have a lower TDP but more performance out of just 4 more CU. That doesn't add up unless there are more changes to the architecture or extremely aggressive binning - which would make it a very rare card.

I tend to take whatever Red Gaming Tech says with a grain of salt; they have an AMD bias and will generally comment on each and every single piece of rumor that comes around - regardless of its absurdity.

Do they get some things right? Sure. But they get some things wrong as well. Is what it is, but sticking with sources that are a little more credible is probably the best way to go.

fatslob-:O said:

On PC ? That might very well be true but on consoles ? Many ISV's don't see eye to eye with them and will just gladly use features their competitor has to offer instead ... 

Also you do not want to do ray tracing with just a depth buffer like in Crysis's case ...

In Crysis's case. It's actually pretty awesome, it's just a mod for a game released in 2007. - Are there better approaches? Sure.

But considering how amazing Crysis can look with the path tracing via the depth buffer and a heap of graphics mods... The game can look jaw-droppingly gorgeous, despite being 12+ years old.

fatslob-:O said:

New features make the hardware revolutionary, not new architectures IMO ... (it makes nearly no sense for AMD to dump all the work that console developers have invested in coding for GCN when it's a long-term strategy for these investments to translate on PC as well which probably won't come into fruition until the release of next gen consoles) 

Sure. New features can make a difference.
But AMD has been bolting on new features to Graphics Core Next since the beginning and simply has not been able to keep pace with nVidia's efforts.

But to state that something may change and that AMD's long-term efforts might come to fruition in the next console hardware cycle is being a little disingenuous; developers have had how many years with the current Graphics Core Next hardware? The fact of the matter is, we have no idea whether the state of things is going to change at all in AMD's favor or whether the status quo will continue.

fatslob-:O said:

AMD is trying to adopt an Intel strategy where developers DON'T have to move so much like they do on x86 because they believe that there'll come a point where the industry might realize it is better to value compatibility and more features rather than dealing with an entirely new architecture altogether since drivers or software are getting far more complex than ever. AMD doesn't share the same vision as Nvidia does in the GPU landscape emulating the wild west and aren't interested in the chaos that comes with it ... (Sony was punished before for believing that developers would arrogantly strive to get the most for their 'exotic' hardware and a similar situation occurred with AMD on Bulldozer for being so bullish that multithreaded apps would exonerate their single threaded bottleneck) 

It's actually extremely easy to develop for nVidia hardware though... I mean, the Switch is also a testament to that very fact, outside of the lack of pixel pushing power Tegra has, developers have been praising the Maxwell derived hardware since the very beginning.

Obviously there are some Pro's and Con's to whichever path AMD and nVidia take, nVidia does tend to work with Developers, Publishers, Game Engines far more extensively than what AMD has historically done... Mostly that is due to a lack of resources on AMD's behalf.

fatslob-:O said:

If AMD FAILED to deliver on its obligations to Sony or on Microsoft's desire for backwards compatibility, they would automatically be branded an enemy of the entire developer community, since the ISVs can't be happy to LOSE their biggest investments to date once they figure out that all their released software and tools need to be either reworked or gone for good. It is my belief that AMD's current strategy of striving for openness, collaboration, and goodwill with developers is what will lead them to the salvation they are looking for, so purely seeking out superior hardware designs would run counter to the strategy they've built up thus far. By ruining developer community relationships, AMD as they are now would be thrown to the wolves to fend for themselves ... (where bigger players such as Intel or Nvidia would push them out of the market for good if AMD were all alone and isolated)

This is probably one of the biggest arguments for sticking with Graphics Core Next. And it's extremely valid.

fatslob-:O said:

Both AMD's and Nvidia's strategies have their own merits and drawbacks. The former is trying to elevate and cultivate a special relationship between itself and the ISVs, while the latter uses its incumbent position as the leader in PC graphics hardware to maintain its dominance. We need to understand that the long-term relationships AMD forms with some of its customers require a level of trust that goes above and beyond a purely corporate one, because bailing AMD out once in a while should also be in those customers' interest; if each party seeks to be interdependent on the other, then that risk should be shared between them ... (as crazy as it might sound in the business world, both need to serve as safety nets for each other!)

The pros and cons of AMD and nVidia are something I have been weighing for decades; often AMD's pros outweigh its cons for my own PC builds for various reasons. (Compute, price and features like Eyefinity and so on.)

Don't take me for someone who only favors nVidia hardware, that will be extremely far from the truth.

I am just at that point where AMD has been recycling the same architecture for an extremely long time... And has been trailing nVidia for a long while, that I just don't have any faith in AMD's hardware efforts until their next-gen hardware comes along, aka. Not Navi.

One thing is for sure... AMD's design wins in the console space are a good thing for the company; they're certainly the counterbalance to nVidia in the video game development community as nVidia dominates the PC landscape... And they've also helped AMD's bottom line significantly over the years, keeping them in the game and viable as a company. Competition is a good thing.

fatslob-:O said:

It's true that they haven't seen much of the pros yet and that they've mostly only seen the cons so far but know this that the industry has SOME STAKE to keep GCN. If not for being able to succeed in PC hardware market then surely they can still see success in providing console hardware and cloud gaming hardware ? 

Even if AMD can't take the performance crown in the PC space maybe one day mGPU could become ubiquitous solely for console platforms so that you can ONLY get the best gaming experience out of them compared to a generic high-end PC where developers likely won't ever touch mGPU over there ... (it seems like a radical idea for AMD to somehow exclusively dominate a feature such as mGPU for gaming)

The thing is... Console and PC landscapes aren't that different from a gamers point of view anymore, there is significant overlap there, consoles are becoming more PC-like.
You can bet that nVidia is keeping a close eye on AMD as AMD takes design wins in the console and cloud spaces... nVidia has been very focused on the cloud for a very very long time, hence Titan/Tesla... And have seen substantial growth in that sector.

The other issue is that mobile is one of the largest sectors in gaming... Where AMD is non-existent and nVidia has a couple of wet toes, having leveraged its lessons learned in the mobile space and implemented those ideas into Maxwell/Pascal for great strides in efficiency.

Sure... You have Adreno which is based upon AMD's older efforts, but it's certainly not equivalent to Graphics Core Next in features or capability, plus AMD doesn't own that design anymore anyway.

fatslob-:O said:

Honestly, I wouldn't be as surprised if Intel reserves a similar amount of die space on their highest end parts as AMD does! Intel has a VERY HIGH amount of flexibility in their GPU architecture so if AMD is 'wasting' a LOT of transistors then wait until you hear about what Intel does, haha. Those guys have 4(!) different SIMD modes such as SIMD 4x2/8/16/32 while AMD or Nvidia only have one with each being SIMD64 and SIMD32 respectively. SIMD4x2 is especially good for what is effectively a deprecated feature known as geometry shaders. That's not all the other goodies there is to Intel hardware though. They also support framebuffer/render target reads inside a shader (most powerful way to do programmable blending) and solve one fundamental issue with hardware tiled resources by being able to update the tile mappings with GPU commands instead of CPU commands! (not being able to update the mappings from the GPU was a big complaint from developers since it introduced a ton of latency)

Intel historically hasn't been willing to reserve the same amount of die space for its integrated graphics as AMD has... There are probably some good reasons for that: AMD markets its APUs as being "capable" of gaming, while Intel hasn't historically gone to similar lengths in its graphics marketing.

Intel's efforts in graphics have historically been the laughing stock of the industry as well. i740? Yuck. Larrabee? Failure.
Extreme Graphics? Eww. GMA? No thanks. Intel HD/Iris? Pass.

That doesn't mean Intel isn't capable of some good things, their EDRAM approach proved interesting and also benefited the CPU side of the equation in some tasks... But Intel and decent graphics is something I will need to "see to believe" because honestly... Intel has been promising things for decades and simply hasn't delivered. - And that is before I even touch upon the topic of drivers...

I have done a lot of work prior in getting Intel parts like the Intel 940 running games like Oblivion/Fallout due to various lacking hardware features, so Intel's deficiencies aren't lost on me in the graphics space. Heck, even their X3100 had to have a special driver "switch" to move TnL from being hardware accelerated to being performed on the CPU on a per-game basis, as Intel's hardware implementation of TnL performed extremely poorly.

So when it comes to Intel Graphics and gaming... I will believe it when I see it... Plus AMD and nVidia have invested far more man hours and money into their graphics efforts than Intel has over the decades, that's not a small gap to jump across.

fatslob-:O said:

Nvidia currently strikes a very good balance between flexibility and efficiency. AMD is a harder sell on the PC side with their unused higher flexibility but Intel takes the word 'bloat' to a whole new level of meaning with their complex register file since it shares a lot more of GCN's capabilities with their amazing unicorn sauce that developers would only dream of exploiting ... (I wonder if Xe is going to cut out the fun stuff from Intel's GEN lineups to make it more efficient from a competitive standpoint)

Also, developers pay very little attention to Intel's graphics stack as well. They pay a lot more than AMD does for this flexibility but the sub-par graphics stack just scares developers away from even trying ... 

I am being cautious with Xe. Intel has promised big before and hasn't delivered. But some of the ideas being shouted like "Ray Tracing" has piqued my interest.
I doubt AMD will let that go without an answer though, nVidia is one thing, but Integrated Graphics has been one of AMD's biggest strengths for years, even during the Bulldozer days.

fatslob-:O said:

I guess we feel the opposite regarding whether dedicated fixed-function units should be used for hardware-accelerated ray tracing or not, but Volta is vastly less efficient at 3DMark Port Royal, where it drops to RTX 2060 levels ... (I imagine Port Royal will be the gold-standard target visuals for next-gen consoles)

My hope is for consoles to double down on dedicated units for ray tracing by one upping Turing's ray tracing feature set because I'd care more about performance in this instance rather than worrying about flexibility since it has tons of very useful applications for real-time computer graphics ... (I wouldn't take tensor cores over FP16 support in the other way since the payoff is questionable as it is with current applications) 

Hardware design decisions like these are ultimately going to be about the payoff and I think it's worth it since it will significantly increase the visual quality at much lower performance impact ... 

Yeah. We definitely have different views on how Ray Tracing is supposed to be approached... And that is fine.
I am just looking at the past mistakes nVidia has made with the Geforce FX and, to an extent... Turing.

fatslob-:O said:
I don't believe so from other independent tests verifying the 1060's 4K ultra numbers. When I did a slight sanity check, guru3D got 28FPS at 4K ultra while DT got 25 FPS for the same settings so the numbers aren't too dissimilar. I maintain that DT were indeed testing the 6GB version of the 1060 so it's likely that the 1060 does badly at 4K on this game regardless but a massive win for the mighty X1X nonetheless ... 

Either way. The Xbox One X is punching around the same level as a 1060, even if the 1060 is a couple frames under 30, the Xbox gets away with lower API and driver overheads.

fatslob-:O said:
It's at minimum a 1060 ...

Certainly not a 1070.

fatslob-:O said:

I don't know? Maybe it is, when we take a look at the newer games with DX12/Vulkan where it's falling behind the comparable AMD parts ... (I know I wouldn't want to be stuck with a 1060 in the next 2 years, because graphics code is starting to get more hostile towards Maxwell/Pascal and a flood of DX12/Vulkan-only titles is practically on the horizon)

Even when we drop down to 1440p the 1060 still CAN'T hit 60FPS like the X1X but what's more an RX 580 equipped with the same amount of memory as Vega tanks harder than you'd think in comparison to the X1X ... 

Like what has been established prior... Some games will perform better on AMD hardware than nVidia and vice-versa, that has always been the case. Always.
But... In 2 years time I would certainly prefer a Geforce 1060 6Gb over a Radeon RX 470... The 1060 is in another league entirely with performance almost 50% better in some titles.
https://www.anandtech.com/bench/product/1872?vs=1771

Modern id Tech powered games love their VRAM; it's been one of the largest Achilles' heels of nVidia's hardware in recent years... Which is ironic, because if you go back to the Doom 3 days, id Tech ran best on nVidia hardware.

fatslob-:O said:
I don't believe it was a patch that helped improve performance dramatically for Forza 7, I think it was a driver update from Nvidia that did the trick but you'll still need a 1070 either way regardless to hit 60FPS in the 99th percentile to get a similar experience because a 1060 is still noticeably slower ... (580 wasn't all that competitive so massive kudos to Turn 10 for leveraging the true strength of console optimizations)

Forza 7's performance issues were notorious in its early days but got patched out. (Which greatly improved the 99th percentile benches.)
https://www.game-debate.com/news/23926/forza-motorsport-7s-stuttering-appears-to-be-fixed-by-windows-10-fall-creators-update

You are right of course that drivers also improved things substantially as well.
https://www.hardocp.com/article/2017/10/16/forza_motorsport_7_video_card_performance_update/3

In short, a Geforce 1060 6GB can do Forza 7 at 4k with a similar experience to that of the Xbox One X.

fatslob-:O said:
Even when we turn down to MEDIUM quality which disables Hairworks entirely, a 980 is still very much getting eaten alive. Although guru3D's numbers are strangely on the higher side I don't think many will lose sleep over it since a 1060 is very like for like with the X1X ... 

*********

It 'depends' on which 'Fortnite' we're talking about. I am talking about its more demanding single-player "Save the World" campaign, while you are talking about its more pedestrian battle royale mode, which is inherently designed with different bottleneck characteristics ... (it's the reason the Switch/Android/iOS can run 'Fortnite' at all when they strip away the campaign mode)

Since X1X ends up being roughly equivalent to the 1060 in both modes, you're getting a good experience regardless ... 

**********
Depends, and I don't deny these being the more uncommon cases, but what you may think of as 'pathological' is more common than you realize, so it's definitely going to skew things some in the X1X's favour ...

**********
But take a look at that 99th percentile (or the 1% lowest frames), which explains why Alex wasn't able to hold a steady 30 FPS on a 1060 like the X1X did with the same settings ...

We are pretty much just debating semantics now. Haha

I still stand by my previous assertion that the Xbox One X is more in line with a Geforce 1060 6GB in terms of overall capability.

fatslob-:O said:

If I had to give a distribution of the X1X's performance in AAA games, here's how it would look ...

A 10% chance that its performance will be lower than a 1060's by 10%, a 50% chance that it'll perform within +/- 5% of a 1060, a 20% chance that it'll perform better than a 1060 by a margin of 10%, and a 20% chance that it'll perform as well as or better than a 1070 ...

It would have to be a very shit PC port for it to equal or better a Geforce 1070. No doubt about it.

fatslob-:O said:

A 1060 will certainly give you a worse experience than an X1X by a fair margin, but when we take into account the more uncommon cases, an X1X isn't all that far behind a 1070. Maybe 10-15% slower on average?

No way. A 1070 at the end of the day is going to provide you with a far better experience, especially once you dial up the visual settings.

fatslob-:O said:

A 1060 is overrated anyway, since it's not going to stand much of a chance against either a 580 or an X1X in the next 2 years as new releases come out over time ... (a 1070 will be needed in case a 1060 starts tanking)

A 1060 is overrated. But so is the Xbox One X.
The 1060, RX 580 and Xbox One X are all in the same rough ballpark in expected capability.
Of course, because the Xbox One X is a console, it does have the advantage of developers optimizing for the specific hardware and its software base, but the fact that the Geforce 1060 is still able to turn in competitive results against the Xbox One X is a testament to that specific part.

And if I were again in a position to choose between a Radeon RX 580 and a Geforce 1060 6GB... it would be the RX 580 every day, which is the Xbox One X equivalent for the most part.



--::{PC Gaming Master Race}::--

Pemalite said:

In Crysis's case. It's actually pretty awesome, it's just a mod for a game released in 2007. - Are there better approaches? Sure.

But considering how amazing Crysis can look with the path tracing via the depth buffer and a heap of graphics mods... The game can look jaw-droppingly gorgeous, despite being 12+ years old.

Trust me, you do not want to know the horrors of how hacky the mod is ... 

The mod does not trace according to lighting information; it traces according to the brightness of each pixel, so bounce lighting even in screen space is already incorrect. If you want proper indirect lighting as well, then you need a global scene representation data structure such as an octree, BVH, or kd-tree for correct ray traversal. Using a local scene representation such as a depth buffer will cause a lot of issues once the rays "go outside" the data structure ...

As decent as Crysis looks today, it hurts painfully that it's still not physically based ... 
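A minimal toy sketch (Python/NumPy, not the actual mod) of what marching a ray against a depth buffer looks like, and of the failure mode described above: once the ray leaves the buffer there is simply no scene data left to intersect.

```python
import numpy as np

H, W = 64, 64
depth = np.ones((H, W))            # hypothetical linear depth buffer, 1.0 = far plane
depth[20:40, 20:40] = 0.5          # a "wall" closer to the camera

def trace_screen_space(origin, direction, steps=128, step_len=1.0):
    """March a ray (pixel x, pixel y, depth) against the depth buffer.
    Returns the hit pixel, or None if the ray exits the screen or never hits."""
    p = np.array(origin, dtype=float)
    d = np.array(direction, dtype=float)
    d /= np.linalg.norm(d)
    for _ in range(steps):
        p += d * step_len
        x, y = int(p[0]), int(p[1])
        if not (0 <= x < W and 0 <= y < H):
            return None                      # ray left the screen: no data to hit
        if p[2] >= depth[y, x]:              # ray passed behind the stored surface
            return (x, y)
    return None

print(trace_screen_space((32, 50, 0.1), (0.0, -1.0, 0.05)))   # hits the "wall"
print(trace_screen_space((32, 50, 0.1), (1.0, -0.2, 0.0)))    # exits the right edge -> None
```

A global structure such as a BVH, octree or kd-tree doesn't have this problem, because it describes the whole scene rather than only what the camera currently sees.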

Pemalite said:

Sure. New features can make a difference.
But AMD has been bolting on new features to Graphics Core Next since the beginning and simply has not been able to keep pace with nVidia's efforts.

But to state that something may change and that AMD's long-term efforts might come to fruition in the next console hardware cycle is being a little disingenuous; developers have had how many years with the current Graphics Core Next hardware? The fact of the matter is, we have no idea whether the state of things is going to change at all in AMD's favor or whether the status quo will continue.

AMD not being able to keep pace with Nvidia is mostly down to the latter releasing bigger dies. A Radeon VII is nearly like for like to the RTX 2080 in performance given both of their transistor counts ... (it's honestly not as bad as you make it out to be) 

Things have been changing but PCs lag consoles by a generation in terms of graphics programming. DX11 wasn't the standard until the PS4/X1 released and it's likely DX12 will end up being the same. The situation is fine as it is but things can improve if a couple of engines make the jump like the Dunia Engine 2.0, AnvilNEXT 2.0, and especially Bethesda's Creation Engine ... (it would help even more if reviewers didn't use outdated titles like GTA V or Crysis 3 for their benchmark suite) 

Some more extensions in DX12 would help like OoO raster and rectangle primitive ... 

Pemalite said:

It's actually extremely easy to develop for nVidia hardware though... I mean, the Switch is also a testament to that very fact, outside of the lack of pixel pushing power Tegra has, developers have been praising the Maxwell derived hardware since the very beginning.

Obviously there are some Pro's and Con's to whichever path AMD and nVidia take, nVidia does tend to work with Developers, Publishers, Game Engines far more extensively than what AMD has historically done... Mostly that is due to a lack of resources on AMD's behalf.

By targeting a standardized API like DX11? Sure. Targeting the low-level details of their hardware? Not so much, because Nvidia rarely values compatibility, so optimizations can easily break. The Switch is an exception to this since it's a fixed hardware design, so developers can be bothered to invest somewhat ... (Switch software is not nearly as investment-heavy compared to current home consoles, so developers might not care all that much if its successor isn't backwards compatible)

Nvidia dedicates far more resources to maintaining their entire software stack than to working with developers. When they release a new architecture they need to make a totally different shader compiler, and they also waste a lot of other engineering resources on non-gaming things such as CUDA and arguably OpenGL ...

Pemalite said:

The pros and cons of AMD and nVidia are something I have been weighing for decades; often AMD's pros outweigh its cons for my own PC builds for various reasons. (Compute, price and features like Eyefinity and so on.)

Don't take me for someone who only favors nVidia hardware, that will be extremely far from the truth.

I am just at that point where AMD has been recycling the same architecture for an extremely long time... And has been trailing nVidia for a long while, that I just don't have any faith in AMD's hardware efforts until their next-gen hardware comes along, aka. Not Navi.

One thing is for sure... AMD's design wins in the console space are a good thing for the company; they're certainly the counterbalance to nVidia in the video game development community as nVidia dominates the PC landscape... And they've also helped AMD's bottom line significantly over the years, keeping them in the game and viable as a company. Competition is a good thing.

This 'recycling' has its advantages, as seen with x86. Hardware designers get to focus on what's really important, which is the hardware features, and software developers get to keep compatibility ...

If AMD can't dominate PC gaming performance then they just need to exceed it with higher console performance, so hopefully we can see high-end console SKUs at $700 or maybe even up to $1000 to truly take on Nvidia in the gaming space ...

Pemalite said:

The thing is... Console and PC landscapes aren't that different from a gamers point of view anymore, there is significant overlap there, consoles are becoming more PC-like.
You can bet that nVidia is keeping a close eye on AMD as AMD takes design wins in the console and cloud spaces... nVidia has been very focused on the cloud for a very very long time, hence Titan/Tesla... And have seen substantial growth in that sector.

The other issue is that mobile is one of the largest sectors in gaming... Where AMD is non-existent and nVidia has a couple of wet toes, having leveraged its lessons learned in the mobile space and implemented those ideas into Maxwell/Pascal for great strides in efficiency.

Sure... You have Adreno which is based upon AMD's older efforts, but it's certainly not equivalent to Graphics Core Next in features or capability, plus AMD doesn't own that design anymore anyway.

Both consoles and PCs are taking notes from each other. Consoles are getting more features from PCs like backwards compatibility while PCs are becoming more closed platforms (we don't get to choose our OS or CPU ISA anymore) than ever before ...

Nvidia may very well have been focused on cloud computing, but the future won't be GPU compute or closed APIs like CUDA anymore. The future of the cloud is going to be about offloading from x86 or designing specialized AI ASICs, so Nvidia's future is relatively fickle if they can't maintain long-term developer partnerships, and they're also at the mercy of other CPU ISAs like x86 or POWER ...

Nvidia is just as non-existent as AMD are in the mobile space. In fact, graphics technology is not all that important given that the driver quality over at Android makes Intel look amazing by comparison! The last time Nvidia had a 'design win' in the 'mobile' (read phones) space was with the Tegra 4i ? 

Honestly, if anyone has good graphics technology in the mobile space then it is Apple because their GPU designs are amazing and it doesn't hurt that the Metal API is a much simpler alternative to either Vulkan or OpenGL ES while also being nearly as powerful as the other (DX12/Vulkan) modern gfx APIs so developers will happily port their games over to Metal. Connectivity is more important like the latest settlement between Apple and Qualcomm showed us. Despite Apple being a superior graphics system architect in comparison to the Adreno team which is owned by Qualcomm, the former capitulated to the latter since they couldn't design state of the art mobile 5G modems. 5G is more important than superior graphics performance in the mobile space ... 

The likes of Huawei, Qualcomm, or Samsung are destined to reap the vast majority of the rewards in mobile space since they have independent 5G technology while the likes of Intel (they couldn't make 5G modems)/Nvidia (GPUs are too power hungry) have already deserted the mobile space and others like Apple will have to settle for scraps (even though most profitable) as they sit this one out whenever they can figure out how to make their own 5G modems ... 

Pemalite said:

Intel historically hasn't been willing to reserve the same amount of die space for its integrated graphics as AMD has... There are probably some good reasons for that: AMD markets its APUs as being "capable" of gaming, while Intel hasn't historically gone to similar lengths in its graphics marketing.

Intel's efforts in graphics have historically been the laughing stock of the industry as well. i740? Yuck. Larrabee? Failure.
Extreme Graphics? Eww. GMA? No thanks. Intel HD/Iris? Pass.

That doesn't mean Intel isn't capable of some good things, their EDRAM approach proved interesting and also benefited the CPU side of the equation in some tasks... But Intel and decent graphics is something I will need to "see to believe" because honestly... Intel has been promising things for decades and simply hasn't delivered. - And that is before I even touch upon the topic of drivers...

I have done a lot of work previously getting Intel parts like the Intel 940 running games like Oblivion/Fallout due to various lacking hardware features, so Intel's deficiencies in the graphics space aren't lost on me. Heck, even their X3100 had to have a special driver "switch" to move TnL from being hardware accelerated to being performed on the CPU on a per-game basis, as Intel's hardware implementation of TnL performed extremely poorly.

So when it comes to Intel Graphics and gaming... I will believe it when I see it... Plus AMD and nVidia have invested far more man hours and money into their graphics efforts than Intel has over the decades, that's not a small gap to jump across.

Intel's graphics hardware designs aren't the biggest problem IMO. It's that nearly no developers prioritize Intel's graphics stack, so the poor end-user experience is mostly a result of poor drivers and poor developer relations ... (sure, their hardware designs are on the underwhelming side, but what kills it for people is that the drivers DON'T WORK)

Older Intel integrated graphics hardware designs sure stunk, but Haswell/Skylake changed this dramatically, and they look to be ahead from a feature-set standpoint compared to either AMD or Nvidia, but whether it'll come in handy in the face of the other aforementioned problems is another matter entirely ...

More importantly, when are we EVER going to see the equivalent brand/library optimization of either AMD's Gaming Evolved/GPUOpen or Nvidia's TWIMTBP/GameWorks from Intel?

Pemalite said:

I am being cautious with Xe. Intel has promised big before and hasn't delivered. But some of the ideas being shouted, like "Ray Tracing", have piqued my interest.

I doubt AMD will let that go without an answer though, nVidia is one thing, but Integrated Graphics has been one of AMD's biggest strengths for years, even during the Bulldozer days.

------------------------------------------------------------------------------------------------------------------------------------------------

Yeah. We definitely have different views on how Ray Tracing is supposed to be approached... And that is fine.
I am just looking at the past mistakes nVidia has made with the Geforce FX and, to an extent... Turing.

Consoles are going this route regardless so everybody including AMD and Intel will have it ... 

Pemalite said:

Either way. The Xbox One X is punching around the same level as a 1060, even if the 1060 is a couple frames under 30, the Xbox gets away with lower API and driver overheads.

--------------------------------------------------------------------------------------------------------------------------------------------------

Like what has been established prior... Some games will perform better on AMD hardware than nVidia and vice-versa, that has always been the case. Always.
But... In 2 years' time I would certainly prefer a Geforce 1060 6GB over a Radeon RX 470... The 1060 is in another league entirely, with performance almost 50% better in some titles.
https://www.anandtech.com/bench/product/1872?vs=1771

Modern id Tech powered games love their VRAM; it's been one of the largest Achilles' heels of nVidia's hardware in recent years... Which is ironic, because if you go back to the Doom 3 days, it ran best on nVidia hardware.

An X1X demolishes the 1060 in SWBF II and yikes, most of Anandtech's benchmarks are using DX11 titles, especially the dreaded GTA V ...

An RX 470/570 is nowhere near as bad against the 1060 in DX12 or Vulkan titles ... 

Benchmark suite testing design is a big factor in terms of performance comparisons ... 

Pemalite said:

Forza 7's performance issues were notorious in its early days, but they got patched out. (Which greatly improved the 99th percentile benches.)

https://www.game-debate.com/news/23926/forza-motorsport-7s-stuttering-appears-to-be-fixed-by-windows-10-fall-creators-update

You are right of course that drivers also improved things substantially as well.
https://www.hardocp.com/article/2017/10/16/forza_motorsport_7_video_card_performance_update/3

In short, a Geforce 1060 6GB can do Forza 7 at 4k with a similar experience to that of the Xbox One X.

I don't see any benchmarks specific to a 1060 in those links that suggests a 1060 is actually up to par with the X1X ... 
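As an aside, since "99th percentile benches" keep coming up: that figure is just the frame time that 99% of frames are faster than, i.e. the slowest ~1% of frames, which is what you actually feel as stutter. A quick illustrative sketch of how it's computed from a frame-time capture (hypothetical numbers, not tied to any of the benchmarks linked above):

// Illustrative only: deriving a "99th percentile" frame-time figure from a
// capture of per-frame times in milliseconds. Average FPS hides stutter;
// the slowest ~1% of frames is what this metric exposes.
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    // Hypothetical capture: mostly ~16 ms frames with a few big hitches.
    std::vector<double> frameTimesMs = {16.6, 16.7, 16.5, 16.8, 16.6,
                                        48.0, 16.7, 16.6, 33.0, 16.5};

    std::sort(frameTimesMs.begin(), frameTimesMs.end());
    // Pick the frame time that 99% of frames in the capture beat.
    size_t idx = static_cast<size_t>(0.99 * (frameTimesMs.size() - 1));
    double p99ms = frameTimesMs[idx];

    std::printf("99th percentile frame time: %.1f ms (~%.0f fps)\n",
                p99ms, 1000.0 / p99ms);
}

Some outlets report the same idea as "1% low FPS" instead; either way it's the hitches, not the average, that the Forza 7 patch and driver updates were cleaning up.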

Pemalite said:

It would have to be a very shit PC port for it to equal or better a Geforce 1070. No doubt about it.

----------------------------------------------------------------------------------------------------------------------------------------------------

No way. A 1070 at the end of the day is going to provide you with a far better experience, especially once you dial up the visual settings.

Would it be a very shit PC port if a 580 somehow matched a 1070 ? 

Pemalite said:

A 1060 is overrated. But so is the Xbox One X.

The 1060, RX 580, Xbox One X are all in the same rough ballpark on expected capability.
Of course, because the Xbox One X is a console, it does have the advantage of having developers optimize for the specific hardware and its software base, but the fact that the Geforce 1060 is still able to turn in competitive results against the Xbox One X is a testament to that specific part.

And if I were in a position again to choose between a Radeon RX 580 or a Geforce 1060 6GB... It would be the RX 580 every day, which is the Xbox One X equivalent for the most part.

Sooner or later, a 580 or an X1X will definitively pull ahead of a 1060 by a noticeably bigger margin than they do now ...



fatslob-:O said:
Pemalite said:

In Crysis's case, it's actually pretty awesome; it's just a mod for a game released in 2007. - Are there better approaches? Sure.

But considering how amazing Crysis can look with the Path Tracing via the Depth Buffer and a heap of graphics mods... The game can look jaw-droppingly gorgeous, despite being 12+ years old.

Trust me, you do not want to know the horrors of how hacky the mod is ... 

The mod does not trace according to lighting information; it traces according to the brightness of each pixel, so bounce lighting even in screen space is already incorrect. If you want proper indirect lighting as well, then you need a global scene representation data structure such as an octree, BVH, or kd-tree for correct ray traversal. Using a local scene representation data structure such as a depth buffer will cause a lot of issues once the rays "go outside" the data structure ...

As decent as Crysis looks today, it hurts painfully that it's still not physically based ...

Which is why I stipulated it's "pretty awesome" for a game that is from "2007".
If a game were released today, I would expect a different approach.

It's no less "hacky" than say... ENB anyway.
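For anyone following along, here's a very rough, hypothetical C++ sketch of what marching a ray against a depth buffer actually looks like (illustrative names and numbers only, nothing to do with the mod's actual code). The failure mode is right there in the bounds check: once a ray leaves the viewport, the depth buffer has nothing left to intersect, whereas a global structure like a BVH or octree would still hold the off-screen and occluded geometry.

// Minimal screen-space ray march against a depth buffer (assumed setup:
// linear view-space depth stored as a flat float array, WIDTH x HEIGHT).
#include <cstdio>
#include <vector>

constexpr int WIDTH = 320, HEIGHT = 180;

// Returns true if the ray "hits" geometry recorded in the depth buffer.
bool screenSpaceMarch(const std::vector<float>& depth,
                      float x, float y, float z,     // start (pixel coords + depth)
                      float dx, float dy, float dz,  // per-step direction
                      int maxSteps = 64)
{
    for (int i = 0; i < maxSteps; ++i) {
        x += dx; y += dy; z += dz;

        // Ray left the screen: the only "scene" we know about is what the
        // depth buffer saw this frame, so there is nothing left to trace
        // against. A BVH/octree/kd-tree would still contain geometry here.
        if (x < 0 || x >= WIDTH || y < 0 || y >= HEIGHT)
            return false;

        float stored = depth[static_cast<int>(y) * WIDTH + static_cast<int>(x)];
        if (z >= stored)   // ray passed behind the visible surface
            return true;   // count it as a hit (no thickness test in this sketch)
    }
    return false;
}

int main() {
    // Hypothetical scene: a flat wall 10 units in front of the camera.
    std::vector<float> depth(WIDTH * HEIGHT, 10.0f);
    bool hit = screenSpaceMarch(depth, 160.0f, 90.0f, 1.0f, 0.5f, 0.0f, 0.2f);
    std::printf("hit: %s\n", hit ? "yes" : "no");
}

And that's the cheap, well-behaved case; it still says nothing about what the off-screen or back-facing surfaces look like, which is why bounce lighting from a depth buffer alone can never be physically correct.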

fatslob-:O said:

AMD not being able to keep pace with Nvidia is mostly down to the latter releasing bigger dies. A Radeon VII is nearly like for like to the RTX 2080 in performance given both of their transistor counts ... (it's honestly not as bad as you make it out to be) 

It is as bad as I make it out to be.
The Radeon VII is packaged with far more expensive HBM2 memory... And despite being built at 7nm, it will still consume 40W+ more energy during gaming.

Not an ideal scenario for AMD to be in... Which is why they couldn't undercut nVidia's already high-priced 2080 to lure gamers in. In short... It's a bad buy.

In saying that, I have to give credit where credit is due... Radeon VII is an absolute compute monster.

fatslob-:O said:

Things have been changing but PCs lag consoles by a generation in terms of graphics programming. DX11 wasn't the standard until the PS4/X1 released and it's likely DX12 will end up being the same. The situation is fine as it is but things can improve if a couple of engines make the jump like the Dunia Engine 2.0, AnvilNEXT 2.0, and especially Bethesda's Creation Engine ... (it would help even more if reviewers didn't use outdated titles like GTA V or Crysis 3 for their benchmark suite) 

Some more extensions in DX12 would help like OoO raster and rectangle primitive ... 

Things are always changing. The Xbox One has DirectX 12 and some developers use it and its features... But any serious developer will target the low-level APIs anyway.

fatslob-:O said:

By targeting a standardized API like DX11? Sure. Targeting low-level details of their hardware? Not so much, because Nvidia rarely values compatibility, so optimizations can easily break. The Switch is an exception to this since it's a fixed hardware design, so developers can be bothered to invest somewhat ... (Switch software is not nearly as investment heavy in comparison to current home consoles, so developers might not care all that much if its successor isn't backwards compatible)

The Switch isn't as fixed as we think it is... Considering its plethora of performance states... But I digress. Maxwell is pretty easy to target anyway.

nVidia has most engines onboard... And this has been a long historical trend, hence their "nVidia, the way it's meant to be played" campaign, CryEngine, Unreal Engine, Unity... List goes on.
They work closely with a lot of industry bodies, more so than what AMD has historically done... Which has both its pros and cons.

It does mean that AMD is less likely to engage in building up technologies which are exclusive to their hardware.

fatslob-:O said:
Nvidia dedicates far more resources to maintaining their entire software stack rather than focusing on working with developers. When they release a new architecture, they need to make a totally different shader compiler, but they also waste a lot of other engineering resources on non-gaming things such as CUDA and arguably OpenGL ...

AMD does the same, hence why they cut off TeraScale support in their drivers a couple of years after they were still releasing TeraScale-based APUs.
There are obviously Pro's and Con's to each companies approach.

fatslob-:O said:

This 'recycling' has its advantages as seen in x86. Hardware designers get to focus on what's really important, which are the hardware features, and software developers get to keep compatibility ...

If AMD can't dominate PC gaming performance then they just need to exceed it with higher console performance so hopefully we can see high-end console SKUs at $700 or maybe even up to $1000 to truly take on Nvidia in the gaming space ... 

The recycling results in stagnation... It's as simple as that. AMD has stagnated for years, nVidia stagnated when they were recycling hardware.
The other issue is... It's not a good thing for the consumer: when you buy a new series of GPUs, you are hoping for something new, not something old with a different sticker... It's far from a good thing.

fatslob-:O said:

Both consoles and PCs are taking notes from each other. Consoles are getting more features from PCs like backwards compatibility while PCs are becoming more closed platforms (we don't get to choose our OS or CPU ISA anymore) than ever before ...

Agreed. There is still room for things to become disrupted in the console space though, if IBM, Intel or nVidia etc. offer a compelling solution to Sony or Microsoft, but the chances of that are pretty slim to non-existent anyway.
No one is able to offer such high performing graphics with a capable CPU other than nVidia... And nVidia is expensive, meaning not ideal for a cost-sensitive platform.

fatslob-:O said:

Nvidia may very well have been focused on cloud computing, but the future won't be GPU compute or closed APIs like CUDA anymore. The future of the cloud is going to be about offloading from x86 or designing specialized AI ASICs, so Nvidia's future is relatively fickle if they can't maintain long-term developer partnerships, and they're also at the mercy of other CPU ISAs like x86 or POWER ...

nVidia does have some options. They don't need x86 or Power to remain relevant; ARM is making inroads into the cloud computing/server space, albeit slowly.
I mean, ARM was such a serious threat that AMD has even invested in it.
https://www.amd.com/en/amd-opteron-a1100

nVidia is also seeing substantial growth in the Datacenter environment with increases of 85% in revenue.
https://www.anandtech.com/show/13235/nvidia-announces-q2-fy-2019-results-record-revenue

So I wouldn't discount them just yet... They have some substantial pull.

fatslob-:O said:

Nvidia is just as non-existent as AMD is in the mobile space. In fact, graphics technology is not all that important given that the driver quality over at Android makes Intel look amazing by comparison! The last time Nvidia had a 'design win' in the 'mobile' (read phones) space was with the Tegra 4i?

Indeed. Although parts like the MX110/MX150 got a TON of design wins in notebooks, which were devices that went up against AMD's Ryzen APUs and often had the advantage in terms of graphics performance.

Mobile is a very fickle space... You have Qualcomm. And that is it... Apple, Huawei, Samsung all build their own SoCs, so there is very little market for nVidia to latch onto... I guess AMD made the right decision years ago to spin off Adreno to Qualcomm.

And even Chinese manufacturers like Xiaomi are entering the SoC game for their budget handsets... Meaning the position of the likes of MediaTek and so on probably looks tenuous over the long term.

However, Tegra isn't done and dusted just yet; nVidia is seeing growth in vehicles, IoT and so on.

fatslob-:O said:

Honestly, if anyone has good graphics technology in the mobile space then it is Apple because their GPU designs are amazing and it doesn't hurt that the Metal API is a much simpler alternative to either Vulkan or OpenGL ES while also being nearly as powerful as the other (DX12/Vulkan) modern gfx APIs so developers will happily port their games over to Metal. Connectivity is more important like the latest settlement between Apple and Qualcomm showed us. Despite Apple being a superior graphics system architect in comparison to the Adreno team which is owned by Qualcomm, the former capitulated to the latter since they couldn't design state of the art mobile 5G modems. 5G is more important than superior graphics performance in the mobile space ... 

Apple not only has impressive graphics technology... But equally as impressive energy efficiency.
Even their CPU cores tend to be extremely efficient... But also have substantial performance ceilings; it's actually impressive what they achieve.

In saying that... They do own everything from top to bottom, so they are able to garner some efficiency advantages that Android just cannot match.

fatslob-:O said:

Intel's graphics hardware designs aren't the biggest problem IMO. It's that nearly no developers prioritize Intel's graphics stack, so the poor end-user experience is mostly a result of poor drivers and poor developer relations ... (sure, their hardware designs are on the underwhelming side, but what kills it for people is that the drivers DON'T WORK)

Intel's graphics have historically been shit as well.
Even when things played out in Intel's favour and they had optimized their graphics for games like Half-Life... They still trailed the likes of ATI/AMD/nVidia.

Even back in the late 90's/early 2000's I would have opted for an S3/Matrox part over an Intel solution... And that says something... And they were arguably more competitive back then!

But drivers are probably Intel's largest Achilles' heel; they are investing more on that front... And they absolutely must if they wish to be a force in the PC gaming market.

fatslob-:O said:

Older Intel integrated graphics hardware designs sure stunk, but Haswell/Skylake changed this dramatically, and they look to be ahead from a feature-set standpoint compared to either AMD or Nvidia, but whether it'll come in handy in the face of the other aforementioned problems is another matter entirely ...

Haswell was a big step up, but still pretty uninspiring... Haswell's Iris Pro did manage to double the performance of AMD's Trinity mobile APUs in some instances... But you would hope so with a chunky amount of eDRAM and without the TDP restrictions.

A large portion of Haswell's advantage in the Integrated Graphics space back then was also partly attributable to Intel's vastly superior CPU capability... Which is partly why the 5800K was starting to catch the Haswell Iris Pro thanks to a dramatic uplift in CPU performance.

However, AMD then pretty much left Intel's Decelerator graphics in the dust going forward... Not to mention better 99th percentiles, frame pacing and game compatibility with AMD's solutions.

I would take Vega 10/Vega 11 integrated graphics over any of Intels efforts currently.

fatslob-:O said:

More importantly, when are we EVER going to see the equivalent brand/library optimization of either AMD's Gaming Evolved/GPUOpen or Nvidia's TWIMTBP/GameWorks from Intel?

They are working on it!
https://www.anandtech.com/show/14117/intel-releases-new-graphics-control-panel-the-intel-graphics-command-center

They have years' worth of catching up to do, but they are making inroads... If anyone can do it though, Intel probably can.

fatslob-:O said:

An X1X demolishes the 1060 in SWBF II and yikes, most of Anandtech's benchmarks are using DX11 titles, especially the dreaded GTA V ...

An RX 470/570 is nowhere near as bad against the 1060 in DX12 or Vulkan titles ... 

Benchmark suite testing design is a big factor in terms of performance comparisons ... 

Depends on the 470... The 1060 is superior in every meaningful metric with an advantage of upwards of 50%.
https://www.anandtech.com/bench/product/1872?vs=1771

Anandtech does need to update its benchmark suite... But even with dated titles like Grand Theft Auto 5... That game is still played heavily by millions of gamers, so I suppose it's important to retain for a while longer yet... Plus it's still a fairly demanding title at 4K, all things considered.

End of the day, a Geforce 1060 is a superior choice for gaming over a Radeon RX 470 or 570, unquestionably.

fatslob-:O said:
I don't see any benchmarks specific to a 1060 in those links that suggests a 1060 is actually up to par with the X1X ... 

There wasn't supposed to be? I was pointing out that Forza 7 had a patch to fix performance issues?

fatslob-:O said:
Would it be a very shit PC port if a 580 somehow matched a 1070 ? 

In short, yes. The 1070 is a step up over an RX 580.

fatslob-:O said:
Sooner or later, a 580 or an X1X will definitively pull ahead of a 1060 by a noticeably bigger margin than they do now ...

By then, the RX 580 and Xbox One X will be irrelevant anyway, with next-gen GPUs and consoles in our hands.

No point playing "what-ifs" on hypotheticals, we can only go by with the information we have for today.



--::{PC Gaming Master Race}::--

Leakers keep saying that it is a lock that the PS5 will be $500. That makes the 56 compute unit figure more realistic. If the PS5 is $500 and Anaconda is supposed to be better, how much more could it be? If it was $600 I think it might price itself out of the market. Especially since I doubt the benefits that extra $100 could make in that higher bracket.



It takes genuine talent to see greatness in yourself despite your absence of genuine talent.