Trumpstyle said:

I'm not sure where you're going with this efficiency thing. I expect the PS5 GPU to pull about 120 watts, and that includes memory, and the Xbox One GPU probably pulls about 80 watts with its DDR4 memory, so that's upwards of 5x flop/efficiency when you compare the TFs.

Performance per watt. Performance per Teraflop.
You can have a GPU with 1 Teraflop beat a GPU with 2 Teraflops in gaming.

Comparing Teraflops as some kind of absolute determinant of performance between different hardware is highly disingenuous.
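To spell out what I mean by those two metrics, here's the kind of back-of-the-envelope math involved. The numbers below are made up purely to illustrate; they are not benchmarks of any real GPU.

```python
# Made-up figures purely to illustrate the two metrics; not real benchmarks.
gpus = {
    "GPU A": {"tflops": 1.0, "fps": 70, "watts": 90},
    "GPU B": {"tflops": 2.0, "fps": 60, "watts": 150},
}

for name, g in gpus.items():
    perf_per_tflop = g["fps"] / g["tflops"]  # delivered frames per theoretical TFLOP
    perf_per_watt = g["fps"] / g["watts"]    # delivered frames per watt of board power
    print(f"{name}: {perf_per_tflop:.1f} fps/TFLOP, {perf_per_watt:.2f} fps/W")
```

With those made-up numbers, the 1 Teraflop part wins on both efficiency metrics despite having half the theoretical throughput, which is exactly the point.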

And where are you pulling this 5x flop/efficiency figure from?

Trumpstyle said:

When I say 100% it's just for fun :) hehe, but I'm pretty certain we will be getting something very close to what I wrote. Right now it's the 8-core Zen 2 CPU for the Xboxes I feel most uncertain about; I think there is a pretty decent chance that Xbox Lockhart will only have 4 Zen 2 CPU cores, and maybe even Xbox Anaconda, but I want to make precise predictions so I don't write that down.

Speculation is fine, but the way you were wording it... It was as if you were asserting that your stance is 100% factual, when that simply isn't the case yet.

Zen 2 is likely 8 cores per CCX, so my earlier assertion was that the next gen will likely leverage a single CCX for the CPU; the PlayStation 5 having 8 cores fits that.
Originally I thought AMD was only going to increase the single-CCX core count to 6 cores, so I was happy to be incorrect about that.

Bofferbrauer2 said:

I saw that leak before, but it doesn't add up.

1. Why make 2 different chips when they are just 8 CU apart? That would have made sense at the low end/entry level, say 8 and 16 CU, but not at that level, where there's a high chance that many 64 CU chips would need to be binned down to just 56 working CU anyway. It's the exact opposite problem of the previous leak, where they had a huge range of GPUs and CU counts on just one chip.

2. The pricing doesn't add up, either. For just 4 additional CU you're paying a $90 premium, or in other terms: for 7% more CU you're paying a 27% premium. Nvidia's prices are high, but the difference in power is also roughly the difference in price, percentage-wise. Not so between those 3080X/3090 models.

3. The TDP. The 3090 is supposed to have a lower TDP but more performance out of just 4 more CU. That doesn't add up unless there are more changes to the architecture or extremely aggressive binning, which would make it a very rare card.
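For what it's worth, the percentages in point 2 are at least internally consistent; working backwards from the quoted $90 and 27% figures gives the implied base price below (inferred from the quote itself, not sourced from the leak):

```python
premium_usd = 90                   # quoted premium for the higher-CU part
premium_pct = 0.27                 # quoted price premium
base_cu, extra_cu = 56, 4          # quoted CU counts

base_price = premium_usd / premium_pct                   # ~$333 implied base price
print(f"Implied base price: ${base_price:.0f}")
print(f"CU uplift: {extra_cu / base_cu:.1%}")            # ~7.1%, matches the quoted 7%
print(f"Price uplift: {premium_usd / base_price:.0%}")   # 27% by construction
```

That doesn't make the pricing any more sensible, it just means the leak's own numbers hang together.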

I tend to take whatever Red Gaming Tech says with a grain of salt; they have an AMD bias and will generally comment on each and every rumor that comes around, regardless of its absurdity.

Do they get some things right? Sure. But they get some things wrong as well. It is what it is, but sticking with sources that are a little more credible is probably the best way to go.

fatslob-:O said:

On PC? That might very well be true, but on consoles? Many ISVs don't see eye to eye with them and will just gladly use the features their competitor has to offer instead ...

Also you do not want to do ray tracing with just a depth buffer like in Crysis's case ...

In Crysis's case it's actually pretty awesome, considering it's just a mod for a game released in 2007. Are there better approaches? Sure.

But considering how amazing Crysis can look with path tracing via the depth buffer and a heap of graphics mods... The game can look jaw-droppingly gorgeous, despite being 12+ years old.
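For anyone wondering what "ray tracing against a depth buffer" actually boils down to, here's a minimal screen-space ray-march sketch. It's my own illustration of the general technique, not code from the Crysis mod, and the project() helper is assumed to map a view-space point to pixel coordinates plus depth.

```python
import numpy as np

def screen_space_trace(origin_vs, dir_vs, depth_buffer, project,
                       max_steps=64, step_size=0.05, thickness=0.1):
    """March a ray in view space and test it against the depth buffer.

    project(pos) is assumed to return (pixel_x, pixel_y, depth) for a
    view-space point. Returns the hit pixel or None.
    """
    pos = np.asarray(origin_vs, dtype=float).copy()
    step = np.asarray(dir_vs, dtype=float) * step_size
    height, width = depth_buffer.shape
    for _ in range(max_steps):
        pos += step
        px, py, ray_depth = project(pos)          # where the marched point lands on screen
        if not (0 <= px < width and 0 <= py < height):
            return None                           # ray left the screen: no scene data to hit
        scene_depth = depth_buffer[int(py), int(px)]
        # Hit if the ray has just passed behind the surface stored at that pixel.
        if scene_depth < ray_depth < scene_depth + thickness:
            return (int(px), int(py))
    return None
```

The only geometry the ray can ever "see" is what the camera already rendered, which is both the trick that makes it cheap and the reason it misses anything off-screen, i.e. the limitation being pointed at above.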

fatslob-:O said:

New features make the hardware revolutionary, not new architectures IMO ... (it makes nearly no sense for AMD to dump all the work that console developers have invested in coding for GCN when it's a long-term strategy for these investments to translate on PC as well which probably won't come into fruition until the release of next gen consoles) 

Sure. New features can make a difference.
But AMD has been bolting on new features to Graphics Core Next since the beginning and simply has not been able to keep pace with nVidia's efforts.

But to state that something may change and that AMD's long-term efforts might come to fruition in the next console hardware cycle is a little disingenuous; developers have had how many years with the current Graphics Core Next hardware? The fact of the matter is, we have no idea whether the state of things is going to change at all in AMD's favor or whether the status quo will continue.

fatslob-:O said:

AMD is trying to adopt an Intel strategy where developers DON'T have to move so much like they do on x86 because they believe that there'll come a point where the industry might realize it is better to value compatibility and more features rather than dealing with an entirely new architecture altogether since drivers or software are getting far more complex than ever. AMD doesn't share the same vision as Nvidia does in the GPU landscape emulating the wild west and aren't interested in the chaos that comes with it ... (Sony was punished before for believing that developers would arrogantly strive to get the most for their 'exotic' hardware and a similar situation occurred with AMD on Bulldozer for being so bullish that multithreaded apps would exonerate their single threaded bottleneck) 

It's actually extremely easy to develop for nVidia hardware, though... The Switch is a testament to that very fact: outside of Tegra's lack of pixel-pushing power, developers have been praising the Maxwell-derived hardware since the very beginning.

Obviously there are pros and cons to whichever path AMD and nVidia take. nVidia does tend to work with developers, publishers, and game engine vendors far more extensively than AMD historically has... Mostly that is due to a lack of resources on AMD's part.

fatslob-:O said:

AMD FAILING to deliver their obligations to Sony or Microsoft's desire for backwards compatibility would mean that they would automatically be branded as an enemy of the entire developer community, since the ISVs can't be very happy that they've LOST their biggest investments thus far when they figure out all their released software or tools need to be either reworked or gone for good. It is my belief that it is AMD's current strategy of striving for openness, collaboration, and goodwill between the developers that will lead them to the salvation they are looking for, so purely seeking out superior hardware designs would run counter to the strategy they've built up thus far. By ruining developer community relationships, AMD as they are now would be thrown out to the wolves to fend for themselves ... (where bigger players such as Intel or Nvidia would push them out of the market for good if AMD were all alone and isolated)

This is probably one of the biggest arguments for sticking with Graphics Core Next. And it's extremely valid.

fatslob-:O said:

Both AMD and Nvidia's strategies have their own merits and drawbacks. For the former, it's trying to elevate and cultivate a special relationship between them and the ISVs, but the latter is using its own incumbent position as leader of PC graphics hardware to maintain its dominance. We need to understand that in a long-term relationship that AMD forms with some of its customers, it requires a level of trust that goes above and beyond just being a corporate one, because bailing AMD out once in a while should also be in their interest as well; if each party seeks to be interdependent on one another then that risk should be levied between each other ... (as crazy as it might sound in the business world both need to serve as safety nets for each other!)

The pros and cons of AMD and nVidia are something I have been weighing for decades; often AMD's pros outweigh its cons for my own PC builds for various reasons (compute, price, and features like Eyefinity and so on).

Don't take me for someone who only favors nVidia hardware; that would be extremely far from the truth.

I am just at the point where AMD has been recycling the same architecture for an extremely long time... and trailing nVidia for a long while, so I just don't have any faith in AMD's hardware efforts until their next-gen architecture comes along, i.e. not Navi.

One thing is for sure... AMD's design wins in the console space are a good thing for the company; they are certainly the counterbalance to nVidia in the video game development community while nVidia dominates the PC landscape... And they have also helped AMD's bottom line significantly over the years, keeping the company in the game and viable. Competition is a good thing.

fatslob-:O said:

It's true that they haven't seen much of the pros yet and that they've mostly only seen the cons so far but know this that the industry has SOME STAKE to keep GCN. If not for being able to succeed in PC hardware market then surely they can still see success in providing console hardware and cloud gaming hardware ? 

Even if AMD can't take the performance crown in the PC space maybe one day mGPU could become ubiquitous solely for console platforms so that you can ONLY get the best gaming experience out of them compared to a generic high-end PC where developers likely won't ever touch mGPU over there ... (it seems like a radical idea for AMD to somehow exclusively dominate a feature such as mGPU for gaming)

The thing is... the console and PC landscapes aren't that different from a gamer's point of view anymore; there is significant overlap, and consoles are becoming more PC-like.
You can bet that nVidia is keeping a close eye on AMD as AMD takes design wins in the console and cloud spaces... nVidia has been very focused on the cloud for a very, very long time, hence Titan/Tesla... and has seen substantial growth in that sector.

The other issue is that mobile is one of the largest sectors in gaming... where AMD is non-existent and nVidia at least has a couple of wet toes; nVidia has leveraged its lessons learned in the mobile space and fed those ideas into Maxwell/Pascal for great strides in efficiency.

Sure... You have Adreno which is based upon AMD's older efforts, but it's certainly not equivalent to Graphics Core Next in features or capability, plus AMD doesn't own that design anymore anyway.

fatslob-:O said:

Honestly, I wouldn't be as surprised if Intel reserves a similar amount of die space on their highest end parts as AMD does! Intel has a VERY HIGH amount of flexibility in their GPU architecture so if AMD is 'wasting' a LOT of transistors then wait until you hear about what Intel does, haha. Those guys have 4(!) different SIMD modes such as SIMD 4x2/8/16/32 while AMD or Nvidia only have one with each being SIMD64 and SIMD32 respectively. SIMD4x2 is especially good for what is effectively a deprecated feature known as geometry shaders. That's not all the other goodies there is to Intel hardware though. They also support framebuffer/render target reads inside a shader (most powerful way to do programmable blending) and solve one fundamental issue with hardware tiled resources by being able to update the tile mappings with GPU commands instead of CPU commands! (not being able to update the mappings from the GPU was a big complaint from developers since it introduced a ton of latency)
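As an aside on the SIMD-width point before I get to Intel's track record: the narrower the wave, the less a partially filled dispatch wastes, which is why something like SIMD4x2 suits geometry shaders. A rough utilization sketch (the 6-lane task is an arbitrary example of mine; the widths are just the ones mentioned above):

```python
import math

def wave_utilization(active_lanes: int, simd_width: int) -> float:
    # Waves/warps are dispatched whole, so a partially filled wave still
    # occupies every lane: utilization = useful lanes / allocated lanes.
    waves = math.ceil(active_lanes / simd_width)
    return active_lanes / (waves * simd_width)

# A small task that only produces 6 work items, e.g. a geometry shader
# emitting a handful of vertices.
for width in (8, 16, 32, 64):
    print(f"SIMD{width}: {wave_utilization(6, width):.0%} of lanes doing useful work")
```

Roughly 75% of the lanes do useful work at SIMD8 versus under 10% at SIMD64 for that toy case; that flexibility is part of what all those extra transistors buy.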

Intel historically hasn't dedicated as much die space to its integrated graphics as AMD has been willing to... There are probably some good reasons for that: AMD markets its APUs as being "capable" of gaming, while Intel hasn't historically gone to similar lengths in its graphics marketing.

Intel's efforts in graphics have historically been the laughing stock of the industry as well. i740? Yuck. Larrabee? Failure.
Extreme Graphics? Eww. GMA? No thanks. Intel HD/Iris? Pass.

That doesn't mean Intel isn't capable of some good things; their eDRAM approach proved interesting and also benefited the CPU side of the equation in some tasks... But Intel and decent graphics is something I will need to "see to believe", because honestly... Intel has been promising things for decades and simply hasn't delivered. And that is before I even touch upon the topic of drivers...

I have done a lot of work in the past getting Intel parts like the Intel 940 running games like Oblivion/Fallout despite various missing hardware features, so Intel's deficiencies in the graphics space aren't lost on me. Heck, even their X3100 had to have a special driver "switch" to move TnL from being hardware accelerated to being performed on the CPU on a per-game basis, because Intel's hardware implementation of TnL performed extremely poorly.

So when it comes to Intel graphics and gaming... I will believe it when I see it... Plus, AMD and nVidia have invested far more man-hours and money into their graphics efforts than Intel has over the decades; that's not a small gap to jump across.

fatslob-:O said:

Nvidia currently strikes a very good balance between flexibility and efficiency. AMD is a harder sell on the PC side with their unused higher flexibility but Intel takes the word 'bloat' to a whole new level of meaning with their complex register file since it shares a lot more of GCN's capabilities with their amazing unicorn sauce that developers would only dream of exploiting ... (I wonder if Xe is going to cut out the fun stuff from Intel's GEN lineups to make it more efficient from a competitive standpoint)

Also, developers pay very little attention to Intel's graphics stack as well. They pay a lot more than AMD does for this flexibility but the sub-par graphics stack just scares developers away from even trying ... 

I am being cautious with Xe. Intel has promised big before and hasn't delivered, but some of the ideas being touted, like ray tracing, have piqued my interest.
I doubt AMD will let that go without an answer, though; nVidia is one thing, but integrated graphics has been one of AMD's biggest strengths for years, even during the Bulldozer days.

fatslob-:O said:

I guess we feel the opposite regarding whether dedicated fixed function units should be used for hardware accelerated ray tracing or not but Volta is vastly less efficient at 3DMark Port Royal when it turns into an RTX 2060 ... (I imagine Port Royal will be the gold standard target visuals for next gen consoles)

My hope is for consoles to double down on dedicated units for ray tracing by one upping Turing's ray tracing feature set because I'd care more about performance in this instance rather than worrying about flexibility since it has tons of very useful applications for real-time computer graphics ... (I wouldn't take tensor cores over FP16 support in the other way since the payoff is questionable as it is with current applications) 

Hardware design decisions like these are ultimately going to be about the payoff and I think it's worth it since it will significantly increase the visual quality at much lower performance impact ... 

Yeah. We definitely have different views on how ray tracing is supposed to be approached... And that is fine.
I am just looking at the past mistakes nVidia has made with the Geforce FX and, to an extent... Turing.

fatslob-:O said:
I don't believe so from other independent tests verifying the 1060's 4K ultra numbers. When I did a slight sanity check, guru3D got 28FPS at 4K ultra while DT got 25 FPS for the same settings so the numbers aren't too dissimilar. I maintain that DT were indeed testing the 6GB version of the 1060 so it's likely that the 1060 does badly at 4K on this game regardless but a massive win for the mighty X1X nonetheless ... 

Either way, the Xbox One X is punching around the same level as a 1060; even if the 1060 is a couple of frames under 30, the Xbox gets away with lower API and driver overheads.

fatslob-:O said:
It's at minimum a 1060 ...

Certainly not a 1070.

fatslob-:O said:

I don't know ? Maybe it is when we take a look at the newer games with DX12/Vulkan where it's getting slower to the comparative AMD parts ... (I know I wouldn't want to be stuck with a 1060 in the next 2 years because graphics code is starting to get more hostile against Maxwell/Pascal since a flood of DX12/Vulkan only titles are practically on the edge) 

Even when we drop down to 1440p the 1060 still CAN'T hit 60FPS like the X1X but what's more an RX 580 equipped with the same amount of memory as Vega tanks harder than you'd think in comparison to the X1X ... 

As has been established before... some games will perform better on AMD hardware than nVidia and vice versa; that has always been the case. Always.
But... in two years' time I would certainly prefer a Geforce 1060 6GB over a Radeon RX 470... The 1060 is in another league entirely, with performance almost 50% better in some titles.
https://www.anandtech.com/bench/product/1872?vs=1771

Modern id Tech powered games love their VRAM; it's been one of the biggest Achilles' heels of nVidia's hardware in recent years... Which is ironic, because if you go back to the Doom 3 days, id Tech ran best on nVidia hardware.

fatslob-:O said:
I don't believe it was a patch that helped improve performance dramatically for Forza 7, I think it was a driver update from Nvidia that did the trick but you'll still need a 1070 either way regardless to hit 60FPS in the 99th percentile to get a similar experience because a 1060 is still noticeably slower ... (580 wasn't all that competitive so massive kudos to Turn 10 for leveraging the true strength of console optimizations)

Forza 7's performance issues were notorious in its early days before they got patched out. (Which greatly improved the 99th-percentile benches.)
https://www.game-debate.com/news/23926/forza-motorsport-7s-stuttering-appears-to-be-fixed-by-windows-10-fall-creators-update

You are right, of course, that drivers improved things substantially as well.
https://www.hardocp.com/article/2017/10/16/forza_motorsport_7_video_card_performance_update/3

In short, a Geforce 1060 6GB can do Forza 7 at 4K with a similar experience to that of the Xbox One X.
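Since the 99th-percentile numbers keep coming up in this thread: that metric is just the frame rate implied by the slowest ~1% of frames. A quick sketch of how it's computed, using synthetic frame times rather than any real Forza capture:

```python
import numpy as np

def one_percent_low_fps(frame_times_ms):
    # The "99th percentile" is the frame time that only the worst 1% of frames
    # exceed; reporting it as fps makes it comparable to the average figure.
    worst_ms = np.percentile(frame_times_ms, 99)
    return 1000.0 / worst_ms

# Synthetic ~60fps trace with a sprinkling of stutter frames (not real data).
rng = np.random.default_rng(0)
trace = np.concatenate([rng.normal(16.7, 1.5, 9_900), rng.normal(33.0, 3.0, 100)])
print(f"Average: {1000.0 / trace.mean():.1f} fps, 1% low: {one_percent_low_fps(trace):.1f} fps")
```

That's why a stutter fix shows up far more dramatically in the 99th-percentile figure than in the average.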

fatslob-:O said:
Even when we turn down to MEDIUM quality which disables Hairworks entirely, a 980 is still very much getting eaten alive. Although guru3D's numbers are strangely on the higher side I don't think many will lose sleep over it since a 1060 is very like for like with the X1X ... 

*********

It 'depends' on the 'Fortnite' we're talking about. I am talking about its more demanding single-player "Save the World" campaign, while you are talking about its more pedestrian "Battle Royale" mode, which is inherently designed with different bottleneck characteristics ... (it's the reason why the Switch/Android/iOS is running 'Fortnite' at all when they strip away the campaign mode)

Since X1X ends up being roughly equivalent to the 1060 in both modes, you're getting a good experience regardless ... 

**********
Depends, and I don't deny these being the more uncommon cases, but what you may think of as 'pathological' is more common than you realize, so it's definitely going to skew things some in the X1X's favour ...

**********
But take a look at that 99th percentile (or 1% of lowest frames) which explains why Alex wasn't able to hold a steady 30FPS on 1060 like the X1X did with the same settings ...

We are pretty much just debating semantics now. Haha

I still stand by my previous assertion that the Xbox One X is more in line with a Geforce 1060 6GB in terms of overall capability.

fatslob-:O said:

If I had to give a distribution of X1X's performance in AAA games, here's how it would look like ... 

10% chance that its performance will be lower than a 1060 by 10%. 50% chance that it'll perform within +/- 5% of a 1060. 20% chance that it'll perform better than a 1060 by a margin of 10%, and a 20% chance that it'll perform as good as or better than a 1070 ...

It would have to be a very shit PC port for it to equal or better a Geforce 1070. No doubt about it.
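For what it's worth, even taking that quoted distribution at face value and putting a number on the 1070 bucket (the +30% is my own rough guess at a typical 1070-vs-1060 gap, not a figure from the quote), the expected outcome only averages out slightly ahead of a 1060:

```python
# Expected-value check of the quoted distribution. The +30% for "as good or
# better than a 1070" is my own rough assumption, not a figure from the post.
outcomes = [
    (0.10, -10),  # 10% chance: ~10% slower than a 1060
    (0.50,   0),  # 50% chance: within +/- 5% of a 1060 (treated as parity)
    (0.20, +10),  # 20% chance: ~10% faster than a 1060
    (0.20, +30),  # 20% chance: roughly 1070-class (assumed ~+30% over a 1060)
]
expected_margin = sum(p * m for p, m in outcomes)
print(f"Expected margin vs a 1060: {expected_margin:+.0f}%")  # about +7%
```

A 1060 with a single-digit edge on average is not 1070 territory.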

fatslob-:O said:

A 1060 will certainly give you a worse experience than an X1X by a fair margin, but when we take into account the more uncommon cases, an X1X isn't all that far behind a 1070. Maybe 10-15% slower on average?

No way. A 1070 at the end of the day is going to provide you with a far better experience, especially once you dial up the visual settings.

fatslob-:O said:

A 1060 is overrated anyways since it's not going to stand up much chance against either a 580 or an X1X in the next 2 years when new releases come out over time ... (a 1070 will be needed in case a 1060 starts tanking away)

A 1060 is overrated. But so is the Xbox One X.
The 1060, RX 580, and Xbox One X are all in the same rough ballpark in expected capability.
Of course, because the Xbox One X is a console, it does have the advantage of developers optimizing for its specific hardware and software base, but the fact that the Geforce 1060 is still able to turn in results competitive with the Xbox One X is a testament to that specific part.

And if I were in a position again to choose between a Radeon RX 580 and a Geforce 1060 6GB... It would be the RX 580 every day, which is the Xbox One X equivalent for the most part.



--::{PC Gaming Master Race}::--