Navi Made in Colab with sony, MS still Using it?


Pricing of Xbox VS PS5

Xbox +$150 > PS5 0 0.00%
 
Xbox +$100 > PS5 5 14.71%
 
Xbox +$50 > PS5 4 11.76%
 
PS5 = Xbox with slight performance boost 7 20.59%
 
PS5 = Xbox with no performance boost 2 5.88%
 
Xbox will not have the pe... 3 8.82%
 
Still too early, wait for MS PR 13 38.24%
 
Total: 34

@Pemalite

And just like that, Avalanche Studios' Apex engine made the jump!

AMD cards are doing very well in RAGE 2, better than their NVIDIA counterparts. For example, the Radeon VII is usually around 10% behind the GeForce RTX 2080, but in RAGE 2, it matches it almost exactly. The same is happening with Vega 64, which is usually 20% behind the RTX 2070, yet manages to trade blows with it in RAGE 2. For the lower end, the picture is similar: we'd expect the RX 590 to be a few percentage points behind the GTX 1660, but here, it delivers almost the same performance.

Not too long ago when they released Just Cause 4, AMD was significantly trailing behind Nvidia but with Avalanche's first release on Vulkan the tables are turning ... 

Only a couple of key engines left are holding out from this trend ... 



Trumpstyle said:

Guys a new REAL leak, videocardz.com is very reliable, as I have been predicting, Navi can reach 2ghz+

All those YouTube rumors are fake and this one is real. It's Navi 10, and the PS5 will have a cut-down Navi 10 = 36 CUs clocked at 1.8GHz, giving 8.3TF :) it's almost over now hehe.

Edit: Looked at PC gaming benchmarks to see where performance should land. The PS5 should have around GeForce 1080/Vega 64 performance if this information is correct.
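The 8.3TF figure follows from the usual GCN throughput formula (64 shaders per CU, 2 FLOPs per shader per clock via fused multiply-add). A quick sketch to check the arithmetic:

```python
# Peak FP32 throughput for a GCN-style GPU:
# CUs x 64 shaders/CU x 2 FLOPs/clock (FMA) x clock, expressed in TFLOPs.
def peak_tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000.0

# Rumored cut-down Navi 10 config for the PS5
print(round(peak_tflops(36, 1.8), 1))   # 8.3
# Full 40 CU part at the same clock, for comparison
print(round(peak_tflops(40, 1.8), 1))   # 9.2
```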

It's an anonymous source... We have no idea if it holds any credibility. Sub-40 CUs seems about what I would expect with high clockspeeds though, mainstream part and all that.
"Radeon Maxwell sauce" is bullshit though. Polaris and Vega already brought much of Maxwell's efficiency to the table anyway... Plus Maxwell is getting old, which shows how many years AMD is behind in the aspects that make nVidia's GPUs so efficient for gaming.

Also, there is a larger performance gap between Polaris and the 2070 than just 30%... And the jump between the RX 580 and Vega 64 is a good 45-65% depending on the application... Navi with 40 CUs will likely come up short against even Vega 56, and definitely against Vega 64 and the RTX 2070.

Either way, like all rumors, grain of salt and all that; don't take it as gospel until AMD does a reveal or we see benchmarks from legitimate outlets like AnandTech.

fatslob-:O said:

Maybe until the next generation arrives ? It'll be way more pronounced from then on ... 

Three console generations is a bit of a stretch... Either way, my stance is we can only base things on the information we have today, not future hypotheticals.
Thus the statement that consoles will drive AMD's GPU performance/efficiency on PC is a little optimistic when there are four consoles on the market today with AMD hardware and the needle hasn't really shifted in AMD's favor.

Not to say that AMD hasn't gained a slight boost out of it with development pipelines, but it just means sweet bugger all in the grand scheme of things... Which is certainly not a good thing.
I would like AMD to be toe-to-toe with nVidia; that is when innovation is at its best and prices are at their lowest.

fatslob-:O said:

No they really couldn't. The Wii's TEV was roughly the equivalent of shader model 1.1 for pixel shaders, so they were still missing programmable vertex shaders. The Flipper still used hardware-accelerated T&L for vertex and lighting transformations, so you couldn't just add the feature as easily since it required redesigning huge parts of the graphics pipeline. Both the 360 and the PS3 came with a bunch of their own goodies as well. For the former, you had a GPU that was completely 'bindless', and the 'memexport' instruction was a nice precursor to compute shaders. With the latter, you have SPUs, which are amazingly flexible like the Larrabee concept; you effectively had the equivalent of compute shaders, and it even supported hardware transactional memory(!) as featured on Intel's Skylake CPU architecture ... (X360 and PS3 had forward-looking hardware features that were far into the future) 

You can't really compare TEV to your standard shader model... TEV has more in common with nVidia's register combiners.

The Gamecube/Wii had vertex shaders... But the transform and lighting engine was semi-programmable too; I would even argue more programmable than AMD's Morpheus parts back in the day.

In saying that... It's not actually an ATI-designed part, it is an ArtX-designed part, so certain approaches will deviate.
For such a tiny chip, she could pull her weight, especially when you start doing multiple passes and leverage its texturing capabilities.

Sadly, due to its lack of overall performance relative to the Xbox 360, the effects did have to be pared back rather substantially.

And I am not denying that the Xbox 360 and PlayStation 3 came with their own bunch of goodies either... The fact that the Xbox 360, for example, has tessellation is one of them.

fatslob-:O said:

The Xbox 360 is definitely closer to an Adreno 2XX design than the R5XX design, which is not a bad thing. Xbox 360 emulator developers especially seek Adreno 2XX devices for reverse engineering purposes, since it helps further their goals ... 

Don't think I will ever be able to agree with you on this point, not with the work I did on R500/R600 hardware.

And like I said before... Adreno was based upon AMD's desktop GPU efforts, so it will obviously draw similarities with parallel GPU releases for other markets... But Xenos is certainly based upon R500 with features taken from R600... which would just mean Adreno was also based on R500.

fatslob-:O said:

That's not what I heard from the community. AMD's OpenGL drivers are already slow in content creation, professional visualization, and games, and I consistently hear about how broken they are in emulation ... (for Blender, AMD's GL drivers weren't even on the same level as the GeForce 200 series until GCN came along)

The last high-end games I remember using OpenGL were No Man's Sky and Doom, but until a Vulkan backend came out, AMD got smoked by their Nvidia counterparts ...

OpenGL on AMD's pre-GCN arch is a no go ... 

When I say "wasn't too bad" I didn't mean industry-leading or perfect. They just weren't absolutely shite.

Obviously AMD has always pushed its DirectX capabilities harder than OpenGL... Even going back to the Radeon 9000 vs GeForce FX war with Half-Life 2 (DirectX) vs Doom 3 (OpenGL).

OpenGL wasn't too bad on the later Terascale parts.

I mean... Take Wolfenstein via OpenGL on id Tech... AMD pretty much dominated nVidia in this title with its Terascale parts.
Granted, it's an engine/game that favored AMD hardware, but the fact that it was OpenGL is an interesting aspect.

https://www.anandtech.com/show/4061/amds-radeon-hd-6970-radeon-hd-6950/22

fatslob-:O said:

CUDA documentation seems to disagree that you could just group Kepler with Fermi together. Kepler has FCHK, IMADSP, SHF, SHFL, LDG, STSCUL, and a totally new set of surface memory instructions. The other things Kepler has deprecated from Fermi were LD_LDU, LDS_LDU, STUL, STSUL, LONGJMP, PLONGJMP, LEPC but all of this is just on the surface(!) since NVPTX assembly is not Nvidia's GPUs native ISA. PTX is just a wrapper that makes Nvidia GPUs true ISA hidden so there could be many other changes going on underneath there ... (people have to use envytools to reverse engineer Nvidia's blob so that they could make open source drivers)

Even with Turing, Nvidia does not group it together with Volta since Turing has specialized uniform operations ...

I think you are nitpicking a little too much.

Because even with successive updates to Graphics Core Next there are deviations in various instructions, features, and other aspects of the ISA.
They aren't 1:1 with each other.

Same holds true for nVidia.

But from an overall design principle, Fermi and Kepler are related, just like Maxwell and Pascal.

fatslob-:O said:

@Bold No developers are worried about new software breaking compatibility with old hardware, but just about everyone is worried about new hardware breaking compatibility with old software, and that is becoming unacceptable. AMD does NOT actively do this, in comparison to Nvidia ... 

These are the combinatorial configurations that Nvidia has to currently maintain ... 

APIs: CUDA, D3D11/12, Metal (Apple doesn't do this for them), OpenCL, OpenGL, OpenGL ES, and Vulkan 

Platforms: Android (still releasing quality drivers despite zero market share), Linux, MacOS, and Windows (all of them have different graphics kernel architecture)

Nvidia has to support ALL of that on at least 4 major instruction encodings (Kepler, Maxwell/Pascal, Volta, Turing), and they only MAKE ENDS MEET in terms of compatibility with more employees than AMD, which by comparison only has to maintain the following ... 

APIs: D3D11/12, OpenGL (half-assed effort over here), and Vulkan (Apple makes the Metal drivers for AMD's case plus AMD stopped developing OpenCL and OpenGL ES altogether) 

Platforms: Linux and Windows 

AMD has at most 2 major instruction encodings with GCN, which are GCN1/GCN2/Navi (Navi is practically a return to consoles judging from LLVM activity) and GCN3/Vega (GCN4 shares the exact same ISA as GCN3), but despite focusing on fewer APIs/platforms they STILL CAN'T match Nvidia's OpenGL driver quality ... 

End of the day... The little GeForce 1030 I have is still happily playing games from the DOS era... Compatibility is fine on nVidia hardware; developers don't tend to target specific hardware most of the time anyway on PC.

nVidia can afford more employees than AMD anyway, so the complaint on that aspect is moot.

As for OpenGL, that is being deprecated in favor of Vulkan anyway for next gen. (Should have happened this gen, but I digress.)

fatslob-:O said:

I don't see why not? A single-channel DDR4 module at 3.2GHz can deliver 25.6 GB/s. Octa-channel DDR5 clocked at 6.4GHz can bring a little under 410 GB/s, which is well above a 1080, and with 7nm you get to play around with more transistors in your design ...

Anandtech may not paint their outlook as bleak but Nvidia's latest numbers and attempt at acquisition doesn't bode well for them ... 

DDR3 1600MHz on a 64-bit bus = 12.8GB/s.
DDR4 3200MHz on a 64-bit bus = 25.6GB/s.
DDR5 6400MHz on a 64-bit bus = 51.2GB/s.

DDR3 1600MHz on a 128-bit bus = 25.6GB/s.
DDR4 3200MHz on a 128-bit bus = 51.2GB/s.
DDR5 6400MHz on a 128-bit bus = 102.4GB/s.

DDR3 1600MHz on a 256-bit bus = 51.2GB/s.
DDR4 3200MHz on a 256-bit bus = 102.4GB/s.
DDR5 6400MHz on a 256-bit bus = 204.8GB/s.

I wouldn't be so bold as to assume that motherboards will start coming out with 512-bit buses to drive 400GB/s+ of bandwidth. That would be prohibitively expensive.
The Xbox One X is running with a 384-bit bus... but that is a "premium" console... Even then, that would mean...

DDR3 1600MHz on a 384-bit bus = 76.8GB/s.
DDR4 3200MHz on a 384-bit bus = 153.6GB/s.
DDR5 6400MHz on a 384-bit bus = 307.2GB/s.

That's an expensive implementation for only 307GB/s of bandwidth, considering that ends up being less than the Xbox One X... And that is using GDDR5X... And no way is that approaching GeForce 1080 levels of performance.
AMD would need to make some dramatic leaps in efficiency to get there... And while we are still shackled to Graphics Core Next... that likely isn't happening anytime soon.

You would simply be better off using GDDR6 on a 256-bit bus.
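The bus-width arithmetic above is just the effective transfer rate times the bus width in bytes. A small sketch of that formula, using figures from this post:

```python
# Peak DDR bandwidth in GB/s: transfer rate (MT/s) x bus width (bytes).
# Note: "1600MHz DDR3" here means 1600 MT/s effective, as in the post.
def ddr_bandwidth_gbs(mt_per_s: int, bus_bits: int) -> float:
    return mt_per_s * (bus_bits // 8) / 1000.0

print(ddr_bandwidth_gbs(3200, 64))    # 25.6  (DDR4-3200, single 64-bit channel)
print(ddr_bandwidth_gbs(6400, 384))   # 307.2 (DDR5-6400 on a 384-bit bus)
```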

fatslob-:O said:
I doubt it. Intel aren't even committing to a HARD date for the launch of their 10nm products, so no way am I going to trust them anytime soon with 7nm unless they've got a contingency plan in place with either Samsung or TSMC ... 

They stuffed up with 10nm... And a lot of work has had to go into making that line usable.
However, the team working on 7nm at Intel hasn't had the setbacks that the 10nm team has... And for good reason.

I remain cautiously optimistic though.

fatslob-:O said:
We should not use old games because it's exactly as you said, "it wasn't designed for today's hardware", which is why we shouldn't skew a benchmark suite to be heavily weighted in favour of past workloads ...

We should if said games are the most popular games in history and are actively played by millions of gamers.

Those individuals who are upgrading their hardware probably want to know how it will handle their favorite game... And considering we aren't yet at a point where low-end parts or IGPs are capable of driving GTA5 at 1080P ultra... Well, there is still relevance to including it in a benchmark suite.

Plus it gives a good representation of how the hardware handles older APIs/workloads.

fatslob-:O said:

@Pemalite

And just like that, Avalanche Studios' Apex engine made the jump!

AMD cards are doing very well in RAGE 2, better than their NVIDIA counterparts. For example, the Radeon VII is usually around 10% behind the GeForce RTX 2080, but in RAGE 2, it matches it almost exactly. The same is happening with Vega 64, which is usually 20% behind the RTX 2070, yet manages to trade blows with it in RAGE 2. For the lower end, the picture is similar: we'd expect the RX 590 to be a few percentage points behind the GTX 1660, but here, it delivers almost the same performance.

Not too long ago when they released Just Cause 4, AMD was significantly trailing behind Nvidia but with Avalanche's first release on Vulkan the tables are turning ... 

Only a couple of key engines left are holding out from this trend ... 

Good to hear. Fully expected though. But doesn't really change the landscape much.

Last edited by Pemalite - on 14 May 2019

For those claiming an 8.3TF PS5: let me just ask you, would you even consider buying a next-gen console that is a 33% jump from the previous gen for $400?



It takes genuine talent to see greatness in yourself despite your absence of genuine talent.

Pemalite said:

Three console generations is a bit of a stretch... Either way, my stance is we can only base things on the information we have today, not future hypotheticals.

Thus the statement that consoles will drive AMD's GPU performance/efficiency on PC is a little optimistic when there are four consoles on the market today with AMD hardware and the needle hasn't really shifted in AMD's favor.

Not to say that AMD hasn't gained a slight boost out of it with development pipelines, but it just means sweet bugger all in the grand scheme of things... Which is certainly not a good thing.
I would like AMD to be toe-to-toe with nVidia; that is when innovation is at its best and prices are at their lowest.

Don't worry about it, changes are happening now ...

Pemalite said:

You can't really compare TEV to your standard shader model... TEV has more in common with nVidia's register combiners.

The Gamecube/Wii had vertex shaders... But the transform and lighting engine was semi-programmable too; I would even argue more programmable than AMD's Morpheus parts back in the day.

In saying that... It's not actually an ATI-designed part, it is an ArtX-designed part, so certain approaches will deviate.
For such a tiny chip, she could pull her weight, especially when you start doing multiple passes and leverage its texturing capabilities.

Sadly, due to its lack of overall performance relative to the Xbox 360, the effects did have to be pared back rather substantially.

And I am not denying that the Xbox 360 and PlayStation 3 came with their own bunch of goodies either... The fact that the Xbox 360, for example, has tessellation is one of them.

TEV (24 instructions) is comparable to the original Xbox's "pixel shaders" (12 instructions), which were shader model 1.1. There's no solid definition of a shader anyway. The ATI Flipper most certainly did not have vertex shaders according to emulator developers ... (the vertex pipeline was 100% fixed function)

Doing "multiple passes" is not something to be proud of and is actively frowned upon by many developers, since it cuts rendering performance by a big factor ... 

Performance was one issue, but the Flipper didn't have the feature set to cope with it either ... 

Pemalite said:

Don't think I will ever be able to agree with you on this point, not with the work I did on R500/R600 hardware.


And like I said before... Adreno was based upon AMD's desktop GPU efforts, so it will obviously draw similarities with parallel GPU releases for other markets... But Xenos is certainly based upon R500 with features taken from R600... which would just mean Adreno was also based on R500.

Well, he seems to want an Adreno 2XX GPU for reverse engineering the X360's alpha-to-coverage behaviour, and he's a developer of the Xbox 360 emulator specializing in the GPU ... (he seems to be convinced that the X360 is closely related to the Adreno 2XX) 

The one console part that's truly based on the R600 was the Wii U's 'Latte' graphics chip, in which case looking at open source drivers did actually help the Wii U's graphics emulation ...

Pemalite said:

When I say "wasn't too bad" I didn't mean industry-leading or perfect. They just weren't absolutely shite.

Obviously AMD has always pushed its DirectX capabilities harder than OpenGL... Even going back to the Radeon 9000 vs GeForce FX war with Half-Life 2 (DirectX) vs Doom 3 (OpenGL).

OpenGL wasn't too bad on the later Terascale parts.

I mean... Take Wolfenstein via OpenGL on id Tech... AMD pretty much dominated nVidia in this title with its Terascale parts.
Granted, it's an engine/game that favored AMD hardware, but the fact that it was OpenGL is an interesting aspect.

https://www.anandtech.com/show/4061/amds-radeon-hd-6970-radeon-hd-6950/22

By the time the benchmark was taken, it was a SIX(!) year old game. Let's try something a little newer, like Wolfenstein: The New Order ... 

An R9 290 was SLOWER than a GTX 760! (OpenGL was horrendous then for AMD, pre-GCN but even then OpenGL is still bad on GCN) 

Pemalite said:

I think you are nitpicking a little too much.

Because even with successive updates to Graphics Core Next there are deviations in various instructions, features, and other aspects of the ISA.
They aren't 1:1 with each other.

Same holds true for nVidia.

But from an overall design principle, Fermi and Kepler are related, just like Maxwell and Pascal.

On Kepler, they straight deprecated an ENTIRE SET of surface memory instructions compared to Fermi. Even on GCN, for example, from gen 1 to gen 2 they removed a total of 4 instructions at the LOWEST LEVEL, but since consoles are GCN gen 2, AMD doesn't have to worry about future software breaking compatibility with GCN gen 1 hardware. On the Vega ISA, they removed a grand total of 3 instructions ... 

Just consider this for a moment: PTX is just an intermediary, while the GCN docs are real low-level details. Despite GCN assembly being the real thing, Nvidia manages to somehow change more at the higher level than AMD does at the low level, so there's no telling what other sweeping changes Nvidia has applied at the low level ... 

I highly doubt Fermi and Kepler are related, at least to the degree each GCN generation is ... 

With Maxwell or Pascal that's a big maybe since reverse engineering a copy of Super Mario Odyssey revealed that there's a Pascal(!) codepath for NVN's compute engine so there may yet be an upgrade path for the Switch ... (no way in hell are they going to upgrade to either Volta or Turing though since Nvidia removed Maxwell specific instructions) 

Also, I forgot to note: the reason why Nvidia doesn't license ARM's designs is because they want to save money ... (all of Nvidia's CPU designs suck hard)

Pemalite said:

End of the day... The little GeForce 1030 I have is still happily playing games from the DOS era... Compatibility is fine on nVidia hardware; developers don't tend to target specific hardware most of the time anyway on PC.

nVidia can afford more employees than AMD anyway, so the complaint on that aspect is moot.

As for OpenGL, that is being deprecated in favor of Vulkan anyway for next gen. (Should have happened this gen, but I digress.)

@Bold Is it truly? AMD deprecated their Mantle API today, so what is stopping Nvidia from doing the same with GPU-accelerated PhysX, which has failed to be standardized? Eventually Nvidia will find it is not sensible to maintain, and then that becomes a feature that's lost FOREVER ... 

As for OpenGL being deprecated, I doubt it, because the other industries (content creation/professional/scientific) aren't moving fast enough in comparison to game development, so unless AMD offers technical assistance for them, they'll be crippled at the mercy of AMD's OpenGL stack ... 

Pemalite said:

DDR3 1600MHz on a 64-bit bus = 12.8GB/s.
DDR4 3200MHz on a 64-bit bus = 25.6GB/s.
DDR5 6400MHz on a 64-bit bus = 51.2GB/s.

DDR3 1600MHz on a 128-bit bus = 25.6GB/s.
DDR4 3200MHz on a 128-bit bus = 51.2GB/s.
DDR5 6400MHz on a 128-bit bus = 102.4GB/s.

DDR3 1600MHz on a 256-bit bus = 51.2GB/s.
DDR4 3200MHz on a 256-bit bus = 102.4GB/s.
DDR5 6400MHz on a 256-bit bus = 204.8GB/s.

I wouldn't be so bold as to assume that motherboards will start coming out with 512-bit buses to drive 400GB/s+ of bandwidth. That would be prohibitively expensive.
The Xbox One X is running with a 384-bit bus... but that is a "premium" console... Even then, that would mean...

DDR3 1600MHz on a 384-bit bus = 76.8GB/s.
DDR4 3200MHz on a 384-bit bus = 153.6GB/s.
DDR5 6400MHz on a 384-bit bus = 307.2GB/s.

That's an expensive implementation for only 307GB/s of bandwidth, considering that ends up being less than the Xbox One X... And that is using GDDR5X... And no way is that approaching GeForce 1080 levels of performance.
AMD would need to make some dramatic leaps in efficiency to get there... And while we are still shackled to Graphics Core Next... that likely isn't happening anytime soon.

You would simply be better off using GDDR6 on a 256-bit bus.

Seeing as how Threadripper was designed with an octa-channel memory controller, there's no reason to rule out a high-end APU either ...

If bandwidth is an issue, then AMD could opt to make special boards presoldered with APUs and GDDR5/6 memory modules, like the Subor-Z ...

Nothing is preventing AMD from getting 1080 levels of performance, as above, in a smaller, cheaper, and more efficient form factor ...
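The ~410 GB/s octa-channel figure is the same bus-width arithmetic scaled to eight 64-bit channels. A quick check of the claim, with the GTX 1080's 320 GB/s (256-bit GDDR5X at 10 Gbps) as the comparison point:

```python
# Aggregate bandwidth of a multi-channel DDR setup, in GB/s.
# Each DDR channel is 64 bits (8 bytes) wide.
def multichannel_gbs(channels: int, mt_per_s: int) -> float:
    return channels * (64 // 8) * mt_per_s / 1000.0

octa_ddr5 = multichannel_gbs(8, 6400)    # 8 x 51.2 GB/s per channel
gtx_1080 = 10000 * (256 // 8) / 1000.0   # 320.0 GB/s, for reference
print(octa_ddr5)                         # 409.6
print(octa_ddr5 > gtx_1080)              # True
```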

Pemalite said:

They stuffed up with 10nm... And a lot of work has had to go into making that line usable.
However, the team working on 7nm at Intel hasn't had the setbacks that the 10nm team has... And for good reason.

I remain cautiously optimistic though.

Every time Intel has delayed 10nm, it was also met with delays on 7nm as well, so I doubt Intel could just scrap their previous work and start anew ... 

I don't trust Intel to actually deliver on their manufacturing roadmap ... 

Pemalite said:

We should if said games are the most popular games in history and are actively played by millions of gamers.

Those individuals who are upgrading their hardware probably want to know how it will handle their favorite game... And considering we aren't yet at a point where low-end parts or IGPs are capable of driving GTA5 at 1080P ultra... Well, there is still relevance to including it in a benchmark suite.

Plus it gives a good representation of how the hardware handles older APIs/workloads.

Careful, Minecraft is the most popular PC game ever, but I doubt that'd be a benchmark ... 

What people are looking for from a current-day benchmark suite is not popularity; they expect reasonably modern workloads, with pathological cases making up less than 5% ... 

If GPU designers drop native support for older APIs (Glide) and testers had to use a translation layer (emulator), would that somehow be a good representation of how the hardware handles work at all?

Pemalite said:

Good to hear. Fully expected though. But doesn't really change the landscape much.

It poses quite a few ramifications, though, since many of the other pieces are falling into place for AMD's immediate future, and the Apex engine was a good candidate for a Vulkan renderer when high-end game franchises such as Just Cause, Mad Max, and Rage are featured on it ... 

Frostbite 3, Northlight, Nitrous, Asura, Snowdrop, Glacier 2, Dawn, Serious 3, Source 2, Foundation, Total War, 4A, RE and many other internal engines are changing the playing field (DX12/Vulkan) for AMD but now it's time to cut the wire (Creation/AnvilNEXT/Dunia/IW) and finally pull the plug (UE4/Unity) once and for all ... 

Only several offending engines are left, but they'll drop one by one soon enough ... (engines changing are going to show up in the benchmark suites)



Pemalite said:
Trumpstyle said:

It's an anonymous source... We have no idea if it holds any credibility. Sub-40 CUs seems about what I would expect with high clockspeeds though, mainstream part and all that.
"Radeon Maxwell sauce" is bullshit though. Polaris and Vega already brought much of Maxwell's efficiency to the table anyway... Plus Maxwell is getting old, which shows how many years AMD is behind in the aspects that make nVidia's GPUs so efficient for gaming.

Also, there is a larger performance gap between Polaris and the 2070 than just 30%... And the jump between the RX 580 and Vega 64 is a good 45-65% depending on the application... Navi with 40 CUs will likely come up short against even Vega 56, and definitely against Vega 64 and the RTX 2070.

Either way, like all rumors, grain of salt and all that; don't take it as gospel until AMD does a reveal or we see benchmarks from legitimate outlets like AnandTech.

I like this leak though. From what I understand, videocardz.com received it on or before March 12, well before any Gonzalo leak showing a PS5 at 1.8GHz. And I remember there was a leak on GFXBench showing a 20 CU Navi card with Radeon 590 performance; it didn't make any sense at the time, as the performance was just way too good, and we thought the benchmark was misreading the CUs on the card. But maybe AMD has improved the CUs and clocks by a lot on Navi.

You're misunderstanding the Maxwell sauce: it means better perf/teraflop and perf/mm². GCN hasn't received this yet; it's actually gone the opposite way, with Vega decreasing perf/mm². But yes, I think a 40 CU part should land a bit below the GeForce 2060.
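Perf/teraflop can be made concrete with a back-of-the-envelope comparison. The peak TFLOP values below are the published figures; the normalized gaming-performance index is a rough assumption purely for illustration:

```python
# "Maxwell sauce" = more delivered game performance per theoretical TFLOP.
# tflops: published peak FP32; perf: assumed gaming index (GTX 1080 = 100).
cards = {
    "GTX 1080 (Pascal)": {"tflops": 8.9,  "perf": 100},
    "Vega 64 (GCN)":     {"tflops": 12.7, "perf": 95},
}

for name, c in cards.items():
    # Higher perf/TF means the architecture converts more of its
    # theoretical throughput into actual frames.
    print(f"{name}: {c['perf'] / c['tflops']:.1f} perf/TF")
```

Under these assumed numbers, Pascal extracts roughly 50% more delivered performance per theoretical TFLOP than GCN, which is the gap "Maxwell sauce" refers to.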

eva01beserk said:
For those claiming an 8.3TF PS5: let me just ask you, would you even consider buying a next-gen console that is a 33% jump from the previous gen for $400?

I'm unsure if you mean me, but I will buy the PS5 on day one or close to it. I have a PS4 Pro, so the PS5 is a much bigger jump than 33% for me. The people who own an Xbox One X are a small group, probably about 2% of total console owners, and even those owners will have no choice but to get the PS5, as I suspect Sony will have improved versions of all their exclusive games giving them 4K/60fps. That gives them a big advantage against Microsoft.

You simply need to realize Moore's law is DEAD, and the jump from 16nm to 7nm is smaller than 28nm to 16nm; we're actually a bit lucky that Sony/Microsoft even managed to double the flop numbers, if I'm correct.



"Donald Trump is the greatest president that god has ever created" - Trumpstyle

6x master league achiever in starcraft2

Beaten Sigrun on God of war mode

Beaten DOOM ultra-nightmare with NO endless ammo-rune, 2x super shotgun and no decoys on ps4 pro.

1-0 against Grubby in Wc3 frozen throne ladder!!

Trumpstyle said:
eva01beserk said:
For those claiming an 8.3TF PS5: let me just ask you, would you even consider buying a next-gen console that is a 33% jump from the previous gen for $400?

I'm unsure if you mean me, but I will buy the PS5 on day one or close to it. I have a PS4 Pro, so the PS5 is a much bigger jump than 33% for me. The people who own an Xbox One X are a small group, probably about 2% of total console owners, and even those owners will have no choice but to get the PS5, as I suspect Sony will have improved versions of all their exclusive games giving them 4K/60fps. That gives them a big advantage against Microsoft.

You simply need to realize Moore's law is DEAD, and the jump from 16nm to 7nm is smaller than 28nm to 16nm; we're actually a bit lucky that Sony/Microsoft even managed to double the flop numbers, if I'm correct.

Would Sony possibly try a PS4 + Pro pricing arrangement right off the bat next gen?

PS5 - 8 core Ryzen - 8.3TF Navi - checkerboard 4k/60 - $399

PS5 Pro - 8 core Ryzen - 12.9TF Navi - native 4k/60 - $499



The Canadian National Anthem According To Justin Trudeau

 

Oh planet Earth! The home of native lands, 
True social law, in all of us demand.
With cattle farts, we view sea rise,
Our North sinking slowly.
From far and snide, oh planet Earth, 
Our healthcare is yours free!
Science save our land, harnessing the breeze,
Oh planet Earth, smoke weed and ferment yeast.
Oh planet Earth, ell gee bee queue and tee.

eva01beserk said:
For those claiming an 8.3TF PS5: let me just ask you, would you even consider buying a next-gen console that is a 33% jump from the previous gen for $400?

The extra performance the Playstation 4 Pro and Xbox One X bring to the table is mostly sunk into driving higher resolutions and framerates... Games aren't being designed with those consoles as a baseline.

The jump from the base Playstation 4 and Xbox One to next gen should be a rather sizable one.

fatslob-:O said:

Don't worry about it, changes are happening now ...

It's a bit of a stretch whether said changes will ever be relevant though. Because by the time they potentially are... AMD may have moved on to its next-gen architecture on the PC. (Graphics Core Next isn't sticking around forever!)

fatslob-:O said:

TEV (24 instructions) is comparable to the original Xbox's "pixel shaders" (12 instructions), which were shader model 1.1. There's no solid definition of a shader anyway. The ATI Flipper most certainly did not have vertex shaders according to emulator developers ... (the vertex pipeline was 100% fixed function)

Doing "multiple passes" is not something to be proud of and is actively frowned upon by many developers, since it cuts rendering performance by a big factor ... 

Performance was one issue, but the Flipper didn't have the feature set to cope with it either ... 

History has shown that the Gamecube and Wii were punching around the same level as the original Xbox in terms of visuals.
But anyone who worked with the TEV could actually pull off some interesting effects, many of which could rival the Xbox 360.

https://www.youtube.com/watch?v=RwhS76r0OqE

Take the GeForce 2 for example... Using the register combiners you could pull off some shader effects that the GeForce 4 Ti was doing... And the GeForce 2 was very much a fixed-function part.

Just because the hardware isn't a 1:1 match doesn't mean you cannot pull off similar effects, with obviously different performance impacts.

As for doing multiple passes... It depends on the architecture; not all architectures take a big performance impact.

fatslob-:O said:

Well, he seems to want an Adreno 2XX GPU for reverse engineering the X360's alpha-to-coverage behaviour, and he's a developer of the Xbox 360 emulator specializing in the GPU ... (he seems to be convinced that the X360 is closely related to the Adreno 2XX) 

The one console part that's truly based on the R600 was the Wii U's 'Latte' graphics chip, in which case looking at open source drivers did actually help the Wii U's graphics emulation ...

You aren't getting it.
Adreno is derived from Radeon technology.
Xenos is derived from Radeon technology.

Both are derived from the same technology base of the same era... Obviously there will be architectural similarities; you don't go about reinventing the wheel if certain design philosophies work.

Fact is... At the time, ATI used its desktop Radeon technology as the basis for all other market segments.

As for the Wii U... The general consensus is it's R700 derived with some differences.
https://www.techinsights.com/blog/nintendo-wii-u-teardown
https://forums.anandtech.com/threads/wii-u-gpu-scans-now-up.2299839/
https://www.neogaf.com/threads/wiiu-latte-gpu-die-photo-gpu-feature-set-and-power-analysis.511628/

fatslob-:O said:

By the time the benchmark was taken, it was a SIX(!) year old game. Let's try something a little newer like Wolfenstein: The New Order ... 

An R9 290 was SLOWER than a GTX 760! (OpenGL was horrendous then for AMD, pre-GCN but even then OpenGL is still bad on GCN) 

That was kinda' the point?

fatslob-:O said:

On Kepler, they straight up deprecated an ENTIRE SET of surface memory instructions compared to Fermi. Even on GCN, for example from gen 1 to gen 2, they removed a total of 4 instructions at the LOWEST LEVEL, but since consoles are GCN gen 2, AMD doesn't have to worry about future software breaking compatibility with GCN gen 1 hardware. On the Vega ISA, they removed a grand total of 3 instructions ... 

Just consider this for a moment: PTX is just an intermediary while the GCN docs are real low-level details. Despite PTX being higher level than GCN assembly, Nvidia manages to somehow change more at the high level than AMD does at the low level, so there's no telling what other sweeping changes Nvidia has applied at the low level ... 

In saying that, nVidia's approach is clearly paying off because nVidia's hardware has been superior to AMD's for gaming for generations.

fatslob-:O said:

I highly doubt Fermi or Kepler are related, at least to the degree that GCN generations are ... 

With Maxwell or Pascal that's a big maybe since reverse engineering a copy of Super Mario Odyssey revealed that there's a Pascal(!) codepath for NVN's compute engine so there may yet be an upgrade path for the Switch ... (no way in hell are they going to upgrade to either Volta or Turing though since Nvidia removed Maxwell specific instructions) 

Even Anandtech recognizes that Kepler has many of the same underpinnings as Fermi.
https://www.anandtech.com/show/5699/nvidia-geforce-gtx-680-review/2

fatslob-:O said:

Also I forgot to note but the reason why Nvidia doesn't license from ARM's designs is because they want to save money ... (all of Nvidia's CPU designs suck hard)

That isn't it at all... nVidia is a full ARM Architecture licensee...
https://en.wikipedia.org/wiki/Arm_Holdings#Licensees
https://www.anandtech.com/show/7112/the-arm-diaries-part-1-how-arms-business-model-works/3

fatslob-:O said:

@Bold Is it truly ? AMD deprecated their Mantle API today, so what is stopping Nvidia from doing the same with GPU-accelerated PhysX, which has failed to be standardized ? Eventually, Nvidia will find it is not sensible to maintain it, and that becomes a feature that's lost FOREVER ... 

As for OpenGL being deprecated, I doubt it because the other industries (content creation/professional/scientific) aren't moving fast enough in comparison to game development so unless AMD offers technical assistance for them, they'll be crippled at the mercy of AMD's OpenGL stack ... 

nVidia does have more cash and more profits than AMD, so of course.
And I honestly hope PhysX does get deprecated.

Vulkan is slowly replacing OpenGL pretty much across the entire gaming spectrum.
Content Creation/Scientific tasks obviously have different requirements.

fatslob-:O said:

Seeing as how threadripper was designed with an octa-channel memory controller, there's no reason to rule out a high-end APU either ...

If bandwidth is an issue then AMD could opt to make special boards that are presoldered with APUs and GDDR5/6 memory modules like the Subor-Z ...

Nothing preventing AMD from getting 1080 levels of performance like above in a smaller, cheaper, and more efficient form factor ...

Never going to happen.

fatslob-:O said:

Every time Intel has delayed 10nm, it was also met with delays on 7nm as well so I doubt Intel could just as easily scrap their previous work and just start anew ... 

I don't trust Intel to actually deliver on their manufacturing roadmap ... 

Has it really though? Because everything points to the 7nm team hitting its design goals.
https://www.anandtech.com/show/14312/intel-process-technology-roadmap-refined-nodes-specialized-technologies
https://www.anandtech.com/show/13683/intel-euvenabled-7nm-process-tech-is-on-track

2021 with EUVL, 7nm.

fatslob-:O said:

Careful, Minecraft is the most popular PC game ever but I doubt that'd be a benchmark ... 

What people are looking for from a current day benchmark suite is not popularity but they expect reasonably (modern) pathological (less than 5%) cases ... 

If GPU designers drop native support for older APIs (glide) and the testers had to use a translation layer (emulator) would that somehow be a good representation of how hardware handles work at all ?

Difference is... Minecraft will run perfectly fine on even the shittiest Intel Integrated Graphics today.

When I look at a benchmark today, it's because I want to find out how today's hardware runs today's games... And yes, some of the games I play are going to be a little older... And that is fine.
It's good to find out how newer architectures run older and newer titles... Which is why a benchmark suite usually includes a multitude of titles to cover all bases; it's an extra data point to provide consumers with a more comprehensive idea for their purchasing decisions.

The fact that a benchmark suite includes an older game or two really isn't a relevant complaining point, ignore those numbers if you must, but they are valuable pieces of information for other people.

Plus many games use older APIs. - I mean, you said yourself that you don't think Vulkan will replace OpenGL.

fatslob-:O said:

It poses quite a few ramifications though since many of the other pieces are falling into place for AMD's immediate future and the Apex engine was a good candidate for a Vulkan renderer when technically high-end game franchises such as Just Cause, Mad Max, and Rage are featured on it ... 

Frostbite 3, Northlight, Nitrous, Asura, Snowdrop, Glacier 2, Dawn, Serious 3, Source 2, Foundation, Total War, 4A, RE and many other internal engines are changing the playing field (DX12/Vulkan) for AMD but now it's time to cut the wire (Creation/AnvilNEXT/Dunia/IW) and finally pull the plug (UE4/Unity) once and for all ... 

Only several offending engines are left, but they'll drop one by one soon enough, slow and steady ... (the engines changing are going to show up in the benchmark suites)

In short, with all those titles today, nVidia still holds an overall advantage. Those are the undeniable facts presented by benchmarks from across the entire internet.

Trumpstyle said:

I like this leak though. From what I understand, videocardz.com received it at or before March 12, way before any Gonzalo leak showing a PS5 at 1.8GHz, and I remember there was a leak on gfxbench showing a 20CU Navi card with Radeon 590 performance. It didn't make any sense at the time as the performance was just way too good, and we thought the benchmark was misreading the CUs on the card, but maybe AMD has improved the CUs and clocks by a lot on Navi.

It's a "leak" for a reason. Aka. A rumor. Take it with a grain of salt until we have empirical evidence. Aka. Real hardware in our hands.

Remember, Navi is still Graphics Core Next; keep your expectations in line for a GPU architecture that is 7+ years old.
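For reference, the teraflop figures these leaks throw around fall straight out of GCN's shader arithmetic: 64 shaders per CU, each retiring one fused multiply-add (two FLOPs) per clock. A quick sketch of that math (the 36 CU / 1.8GHz inputs are just the rumoured numbers, not confirmed specs):

```python
def gcn_tflops(cus, clock_ghz, shaders_per_cu=64, flops_per_clock=2):
    """Peak single-precision TFLOPs for a GCN part.

    Each stream processor retires one FMA per clock,
    which counts as two floating-point operations.
    """
    return cus * shaders_per_cu * flops_per_clock * clock_ghz / 1000

# Rumoured cut-down Navi 10: 36 CUs at 1.8 GHz -> ~8.3 TF, matching the leak
print(round(gcn_tflops(36, 1.8), 1))    # 8.3
# PS4 Pro for scale: 36 CUs at 0.911 GHz -> ~4.2 TF
print(round(gcn_tflops(36, 0.911), 1))  # 4.2
```

Which also shows why the clockspeed claim carries all the weight here: the rumoured part has the same CU count as the PS4 Pro and gets its doubling purely from frequency.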

Trumpstyle said:

You're misunderstanding the Maxwell sauce; it means better perf/teraflop and perf/mm2, and GCN hasn't received this yet. It's actually gone the opposite way, where Vega decreased perf/mm2. But yes, I think 40 CUs should land a bit below the GeForce 2060.

I understand perfectly. You don't seem to understand that a large chunk of the features found in Maxwell's core architecture were adopted by Vega and Polaris already.
I think you should look at performance per clock of Vega and Fiji and see how things have changed.
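The "Maxwell sauce" debate is really about efficiency ratios, and those are easy to make concrete. A minimal sketch of the two metrics being argued over (the fps, TFLOP, and die-area numbers below are made-up placeholders for illustration, not measurements of any real card):

```python
def efficiency(fps, tflops, die_mm2):
    """Normalise one benchmark result into perf/TFLOP and perf/mm^2."""
    return {
        "fps_per_tflop": fps / tflops,  # how well theoretical throughput is used
        "fps_per_mm2": fps / die_mm2,   # how much performance each mm^2 buys
    }

# Hypothetical cards: B has fewer peak TFLOPs yet delivers more fps,
# i.e. better utilisation of its theoretical throughput -- the "sauce".
card_a = efficiency(fps=60, tflops=12.5, die_mm2=495)
card_b = efficiency(fps=65, tflops=10.0, die_mm2=445)
print(card_a["fps_per_tflop"], card_b["fps_per_tflop"])  # 4.8 6.5
```

Comparing Fiji against Vega (or any two parts) on these ratios, rather than raw teraflops, is what "perf/teraflop" and "perf/mm2" claims actually mean.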

Trumpstyle said:

I'm unsure if you mean me, but I will buy the PS5 on day one or close to that. I have a PS4 Pro so the PS5 is a much bigger jump than 33%. The people who own an Xbox One X are a small group, probably about 2% of total console owners, and even those owners will have no choice but to get the PS5, as I suspect that Sony will have improved versions of all their exclusive games giving them 4K/60fps. Giving them a big advantage against Microsoft.

While Microsoft and Nintendo have consoles on the market... There will always be a choice available not to get a Sony console.

Trumpstyle said:

You simply need to realize Moore's Law is DEAD and the jump from 16nm to 7nm is smaller than 28nm to 16nm; we're actually a bit lucky that Sony/Microsoft even managed to double the flop numbers, if I'm correct.

Citation needed.
I am sure Fatslob will like to jump on this claim with me too.

Last edited by Pemalite - on 15 May 2019

Pemalite said:

It's a bit of a stretch whether said changes will ever be relevant though. Because by the time they potentially do... AMD may have moved on to its next-gen architecture on the PC. (Graphics Core Next isn't sticking around forever!)

Not so, because AMD intends on keeping GCN for the foreseeable future! As long as Sony, Microsoft, Google, and potentially EA as well as other ISVs (Valve wants to keep developing open source drivers for AMD on Linux/DXVK and Ubisoft are developing their Vulkan backend exclusively for AMD hardware on Stadia) want to keep GCN, then GCN will stay alive ... 

AMD's customers are more than just PC gamers ... 

Pemalite said:

History has shown that the Gamecube and Wii were punching around the same level as the original Xbox in terms of visuals.
But anyone who worked with the TEV could actually pull off some interesting effects, many of which could rival the Xbox 360.

https://www.youtube.com/watch?v=RwhS76r0OqE

Take the GeForce 2 for example... Using the register combiners you could pull off some shader effects that the GeForce 4 Ti was doing... And the GeForce 2 was very much a highly fixed-function part.

Just because the hardware isn't a 1:1 match doesn't mean you cannot pull off similar effects, with obviously different performance impacts.

As for doing multiple passes... It depends on the architecture, not all architectures have a big performance impact.

Gamecube was pretty overrated IMO, since quite a few devs didn't find an improvement whenever they ported code from the PS2. Take for example Baldur's Gate Dark Alliance and the NFS Underground series, and one of the lead programmers proclaimed that the GC couldn't handle Burnout 3 because it was inferior in comparison to PS2 hardware ... (I'd be interested in other opinions from programmers within the industry who worked during that generation on how the GC actually fared against the PS2)

I think ERP from Beyond3D might be Brian Fehdrau, but judging by his testing, Flipper's clipping performance was a massive showstopper in its hardware design ... (now I'm definitely convinced that the GC is just overrated after actually hearing from some developers)

I'm starting to get the impression that the GC was only ever good at texturing. It couldn't handle high poly counts, alpha effects, physics, AI like the PS2 could ...

Metroid Prime looks great but you can't really use exclusives as a measuring stick for comparison ... (no way would effects rival the 360 at all, since that'd probably blow the doors off the GC, when even the original Xbox by comparison ran nearly all of the same code faster than the GC)

The GeForce 2 at most could do something similar to pixel shaders, but from the GeForce 3 onwards there was a truly programmable vertex pipeline ... 

The only hardware where it was acceptable to do multiple passes was the PS2 with its stupidly fast eDRAM; anywhere else it'll be a massive performance impact. Doing multiple passes on MGS2 brought even the Xbox down to its knees by comparison to the PS2. Multiple passes were especially a no-go from 7th gen onwards, even when the 360 had eDRAM, because programmers started realizing that poor cache locality resulted in bad performance on parallel processing ... 

Pemalite said:

You aren't getting it.
Adreno is derived from Radeon technology.
Xenos is derived from Radeon technology.

Both are derived from the same technology base of the same era... Obviously there will be architectural similarities; you don't go about reinventing the wheel if certain design philosophies work.

Fact is... At the time, ATI used its desktop Radeon technology as the basis for all other market segments.

As for the Wii U... The general consensus is it's R700 derived with some differences.
https://www.techinsights.com/blog/nintendo-wii-u-teardown
https://forums.anandtech.com/threads/wii-u-gpu-scans-now-up.2299839/
https://www.neogaf.com/threads/wiiu-latte-gpu-die-photo-gpu-feature-set-and-power-analysis.511628/

The WII U is a mix between the R600 and the R700 according to the leading WII U emulator developer ... 

Pemalite said:

In saying that, nVidia's approach is clearly paying off because nVidia's hardware has been superior to AMD's for gaming for generations.

Maybe so market wise but Nvidia won't be able to as easily maintain support for older hardware in the future ... 

I don't deny that Nvidia has a sound approach, since quite a few developers are resisting the push for more explicit APIs like DX12 or Vulkan, but it'll come at the cost of bad experiences later on as hardware ages, in comparison to AMD ... 

Pemalite said:

Even Anandtech recognizes that Kepler has many of the same underpinnings as Fermi.
https://www.anandtech.com/show/5699/nvidia-geforce-gtx-680-review/2

But that's just the high level view. If we take a look at the lower level, like say Nvidia's PTX, which is their intermediate representation for their GPUs (the AMD equivalent is AMDIL IIRC), then the changes are really profound. Fermi only supports binding up to 8 RWTexture2Ds in any given shader stage while Kepler ups this to 64, Fermi doesn't support bindless handles for textures/images, and that's not to mention the changes behind PTX ... 

Pemalite said:

That isn't it at all... nVidia is a full ARM Architecture licensee...
https://en.wikipedia.org/wiki/Arm_Holdings#Licensees
https://www.anandtech.com/show/7112/the-arm-diaries-part-1-how-arms-business-model-works/3

Nvidia licenses the ISA, not ARM's in-house designs ... 

It's probably one of the biggest reasons why Nvidia's CPUs are still trash in comparison to what ARM offers, but since they're mainly a GPU chip design company, they'll obviously cheap out whenever they can ... 

Pemalite said:

nVidia does have more cash and more profits than AMD, so of course.
And I honestly hope PhysX does get deprecated.

Vulkan is slowly replacing OpenGL pretty much across the entire gaming spectrum.
Content Creation/Scientific tasks obviously have different requirements.

It's not good for preservation purposes especially since Nvidia dropped 3D vision support ... (it sucks especially if a feature is dropped because the future won't be able to experience the same things we did)

I don't believe it'll be sustainable in the near future to just deprecate things especially when investment in software is rising ... 

Pemalite said:

Has it really though? Because everything points to the 7nm team hitting its design goals.
https://www.anandtech.com/show/14312/intel-process-technology-roadmap-refined-nodes-specialized-technologies
https://www.anandtech.com/show/13683/intel-euvenabled-7nm-process-tech-is-on-track

2021 with EUVL, 7nm.

They should launch 10nm first before talking about 7nm ... 

I still don't trust Intel's manufacturing group ... 

Pemalite said:

Difference is... Minecraft will run perfectly fine on even the shittiest Intel Integrated Graphics today.

When I look at a benchmark today, it's because I want to find out how today's hardware runs today's games... And yes, some of the games I play are going to be a little older... And that is fine.
It's good to find out how newer architectures run older and newer titles... Which is why a benchmark suite usually includes a multitude of titles to cover all bases; it's an extra data point to provide consumers with a more comprehensive idea for their purchasing decisions.

The fact that a benchmark suite includes an older game or two really isn't a relevant complaining point, ignore those numbers if you must, but they are valuable pieces of information for other people.

Plus many games use older APIs. - I mean, you said yourself that you don't think Vulkan will replace OpenGL.

Older games aren't worth benchmarking because their code becomes unmaintained, so they're more of a liability to include than a source of any real useful data ... 

It's not good to use unmaintained software to collect performance data of new hardware since bottlenecks could be due to code rather than the hardware itself ... (older code just isn't going to look so great running on a new piece of hardware) 

This is why I strongly advocate against using older games but I guess we'll agree to disagree ...

Pemalite said:

In short, with all those titles today, nVidia still holds an overall advantage. Those are the undeniable facts presented by benchmarks from across the entire internet.

It's true Nvidia holds the performance crown, but the tables are turning and the playing field is changing against them, so they're going to be in hostile territory soon ... 



fatslob-:O said:

Not so, because AMD intends on keeping GCN for the foreseeable future! As long as Sony, Microsoft, Google, and potentially EA as well as other ISVs (Valve wants to keep developing open source drivers for AMD on Linux/DXVK and Ubisoft are developing their Vulkan backend exclusively for AMD hardware on Stadia) want to keep GCN, then GCN will stay alive ...

AMD's roadmaps and Anandtech seem to agree that after Navi, AMD will have a next-gen architecture though.
https://www.anandtech.com/show/12233/amd-tech-day-at-ces-2018-roadmap-revealed-with-ryzen-apus-zen-on-12nm-vega-on-7nm

And I honestly can't wait... Just like Terascale got long in the tooth in its twilight years, the same is happening with Graphics Core Next.

Unless you have information that I do not?

fatslob-:O said:

Gamecube was pretty overrated IMO, since quite a few devs didn't find an improvement whenever they ported code from the PS2. Take for example Baldur's Gate Dark Alliance and the NFS Underground series, and one of the lead programmers proclaimed that the GC couldn't handle Burnout 3 because it was inferior in comparison to PS2 hardware ... (I'd be interested in other opinions from programmers within the industry who worked during that generation on how the GC actually fared against the PS2)

I think ERP from Beyond3D might be Brian Fehdrau, but judging by his testing, Flipper's clipping performance was a massive showstopper in its hardware design ... (now I'm definitely convinced that the GC is just overrated after actually hearing from some developers)

I'm starting to get the impression that the GC was only ever good at texturing. It couldn't handle high poly counts, alpha effects, physics, AI like the PS2 could ...

Metroid Prime looks great but you can't really use exclusives as a measuring stick for comparison ... (no way would effects rival the 360 at all, since that'd probably blow the doors off the GC, when even the original Xbox by comparison ran nearly all of the same code faster than the GC)

Anyone who asserts that the Gamecube was inferior to the Playstation 2 has an opinion not worth its weight... It's pretty much that.

At the end of the day... The proof is in the pudding, Gamecube games in general were a big step up over the Playstation 2 and would trend closer to the Original Xbox in terms of visuals than the Playstation 2.

Metroid on Gamecube showed that the console could handle high poly counts... They even had 3D mesh water ripples... Watch the digital foundry video I linked to prior.
But you are right, the Gamecube was a texturing powerhouse.


fatslob-:O said:

The GeForce 2 at most could do something similar to pixel shaders, but from the GeForce 3 onwards there was a truly programmable vertex pipeline ... 

The only hardware where it was acceptable to do multiple passes was the PS2 with its stupidly fast eDRAM; anywhere else it'll be a massive performance impact. Doing multiple passes on MGS2 brought even the Xbox down to its knees by comparison to the PS2. Multiple passes were especially a no-go from 7th gen onwards, even when the 360 had eDRAM, because programmers started realizing that poor cache locality resulted in bad performance on parallel processing ... 

Don't forget the Gamecube and Wii had 1T-SRAM, the Xbox 360 had eDRAM, and the Xbox One had eSRAM.
It's not a prerequisite for doing multiple passes. In fact, deferred renderers tend to rely on it more so.

The Original Xbox wasn't built for that kind of workload though; its architecture was traditional PC for the time.

The GeForce 2 didn't have fully programmable pixel shader pipelines... The point I am trying to convey though is that certain pieces of hardware, even if they lack the intrinsic functionality of a more modern part, can still be capable of similar effects using a few tricks of the trade.

That doesn't mean said effects will be employed however due to lack of horse power or how said effects will blow out the render time budget.

fatslob-:O said:
The WII U is a mix between the R600 and the R700 according to the leading WII U emulator developer ... 

The die-shots show it to have a lot in common with R700. But R700 has a lot in common with R600 anyway... R700 is based on the R600 design; it's all Terascale.
So I wouldn't be surprised. In saying that, it doesn't really matter at the end of the day.
WiiU was a bust.

fatslob-:O said:

Maybe so market wise but Nvidia won't be able to as easily maintain support for older hardware in the future ... 

I don't deny that Nvidia has a sound approach since quite a few developers are resisting the push for more explicit APIs like DX12 or Vulkan but it'll come at the cost of bad experiences later on with as hardware ages in comparison to AMD ... 

nVidia won't maintain support for older hardware; they will eventually relegate older parts to life support.
In fact, it has already started to happen to Fermi. Kepler is next.

https://www.anandtech.com/show/12624/nvidia-moves-fermi-to-legacy-ends-32bit-os-support

fatslob-:O said:
But that's just the high level view. If we take a look at the lower level, like say Nvidia's PTX, which is their intermediate representation for their GPUs (the AMD equivalent is AMDIL IIRC), then the changes are really profound. Fermi only supports binding up to 8 RWTexture2Ds in any given shader stage while Kepler ups this to 64, Fermi doesn't support bindless handles for textures/images, and that's not to mention the changes behind PTX ... 

Same can be said for Graphics Core Next... There are always smaller tweaks between Graphics Core Next revisions... It's expected: whenever a GPU lineup is refreshed, new features and capabilities are added.

I mean, AMD doubled the ACE units in Bonaire for example, or introduced primitive discard in Polaris; there is always some deviation.
Hence... The entire point of why I look at it from a high-level perspective: certain GPU families can certainly be grouped together... Sure, you can nit-pick individual points of contention and go from there, but at the end of the day you won't get far.

fatslob-:O said:

Nvidia licenses the ISA, not ARM's in-house designs ... 

It's probably one of the biggest reasons why Nvidia's CPUs are still trash in comparison to what ARM offers, but since they're mainly a GPU chip design company, they'll obviously cheap out whenever they can ... 

nVidia's license actually includes access to everything. Around 15 companies have such a license.

Whether they will use it is another matter entirely.

fatslob-:O said:

It's not good for preservation purposes especially since Nvidia dropped 3D vision support ... (it sucks especially if a feature is dropped because the future won't be able to experience the same things we did)

I don't believe it'll be sustainable in the near future to just deprecate things especially when investment in software is rising ... 

And AMD dropped Mantle support... And made the entire situation of TrueAudio confusing, as some parts didn't have the block in hardware, and then they abolished the feature from their drivers.

It happens. That's life. Neither company has a perfect track record.

fatslob-:O said:

They should launch 10nm first before talking about 7nm ... 

I still don't trust Intel's manufacturing group ... 

7nm will be less ambitious than their 10nm process and the team handling 7nm has hit every milestone on time.

fatslob-:O said:

Older games aren't worth benchmarking because their code becomes unmaintained, so they're more of a liability to include than a source of any real useful data ... 

It's not good to use unmaintained software to collect performance data of new hardware since bottlenecks could be due to code rather than the hardware itself ... (older code just isn't going to look so great running on a new piece of hardware) 

This is why I strongly advocate against using older games but I guess we'll agree to disagree ...

Grand Theft Auto 5 is certainly maintained.

World of WarCraft is certainly maintained.

A benchmark suite should present data that shows how the hardware performs in every single scenario. Not just a nitpick to show the hardware in the best possible light.

fatslob-:O said:
It's true Nvidia holds the performance crown, but the tables are turning and the playing field is changing against them, so they're going to be in hostile territory soon ...

To be frank... I will believe it when I see it. People have been proclaiming AMD's return to being competitive in the GPU landscape for years, but other than price... They haven't really put up much of a fight.

In saying that, I honestly hope it happens... I really do. The PC is at its best when AMD is at its best and taking the fight to nVidia and Intel; it brings prices down, innovation happens rapidly... Consumer wins.



Pemalite said:

AMD's roadmaps and Anandtech seem to agree that after Navi, AMD will have a next-gen architecture though.
https://www.anandtech.com/show/12233/amd-tech-day-at-ces-2018-roadmap-revealed-with-ryzen-apus-zen-on-12nm-vega-on-7nm

And I honestly can't wait... Just like Terascale got long in the tooth in its twilight years, the same is happening with Graphics Core Next.

Unless you have information that I do not?

That's just speculation on Anandtech's part ... 

AMD never explicitly proclaimed that they'll bring an entirely new GPU architecture or leave GCN. I can see that happening for, say, high-performance compute (GCN3/Vega have a specialized GCN ISA implementation for this) but I don't think AMD intends to get rid of GCN for gaming ... 

Pemalite said:

Anyone who asserts that the Gamecube was inferior to the Playstation 2 has an opinion not worth its weight... It's pretty much that.

At the end of the day... The proof is in the pudding, Gamecube games in general were a big step up over the Playstation 2 and would trend closer to the Original Xbox in terms of visuals than the Playstation 2.

Metroid on Gamecube showed that the console could handle high poly counts... They even had 3D mesh water ripples... Watch the digital foundry video I linked to prior.
But you are right, the Gamecube was a texturing powerhouse.

@Bold Really ? Despite real programmers having worked on both systems saying otherwise ?

I don't deny that the GC has its own advantages, like its memory sub-system or texturing performance, but it sounds like it had some pretty serious design flaws with plenty of its own bottlenecks, so it sounded like nothing more than a texturing machine ... 

Proof is not in the pudding, because quite a few times multiplats came out inferior on the GC in comparison to the PS2. Just because visuals in some exclusives trended like for like with the Xbox did not mean the other things did, like game logic, physics, AI, and alpha effects. A game's technical prowess is MORE than just its textures, and for that the GC was resoundingly weaker than the PS2 in those aspects ... (most of the time games on GC were of low geometric complexity because of crappy clipping performance)

Metroid looked impressive as an exclusive but we can't use it for hardware comparisons so we're just gonna have to deal with Baldur's Gate Dark Alliance (where GC very clearly had inferior water physics simulation) or NFS Underground (pared back alpha effects) ... 

GC was probably only ever good at texturing so I can imagine why it didn't get many multiplats later on when game logic got more complex ... (Xbox had vertex shaders and the PS2's VU is the modern day equivalent of Turing's mesh shaders) 

GC is just overrated hardware IMO ... (can make good looking pixels but pretty shit in other departments)

Pemalite said:

Don't forget the Gamecube and Wii had 1T-SRAM, the Xbox 360 had eDRAM, and the Xbox One had eSRAM.
It's not a prerequisite for doing multiple passes. In fact, deferred renderers tend to rely on it more so.

The Original Xbox wasn't built for that kind of workload though; its architecture was traditional PC for the time.

The GeForce 2 didn't have fully programmable pixel shader pipelines... The point I am trying to convey though is that certain pieces of hardware, even if they lack the intrinsic functionality of a more modern part, can still be capable of similar effects using a few tricks of the trade.

That doesn't mean said effects will be employed however due to lack of horse power or how said effects will blow out the render time budget.

Deferred rendering has other reasons for using on-chip memory buffers, like its fat G-buffer, but still, don't try multipass on anything other than a PS2 ...

Original Xbox was a beast. That thing ran the VAST majority of the code meant for either the GC or PS2 better! Original Xbox was undeniably better because of the fact that it just ran the code and the multiplats faster, but most of all it stood to gain more from custom programming than either the GC or PS2 ... (Xbox's bottlenecks were mostly caused by software rather than its hardware but it still rocked in multiplats)

GC on the other hand was NEVER more powerful than the PS2, and it didn't even match its competitors in terms of feature set at the time, so no way was it ever going to be comparable to the sub-HD twins of 7th gen. At best, the GC had some spotlights like RE4, but more often than not developers didn't like it very much because it just wasn't up to the task of running multiplats that were on the PS2 ... (it's probably why developers found out the hard way when they tried running microcode optimized for the VUs on the GC; performance tanked hard)

I'm starting to think that if Nintendo wanted a technical redo during 6th gen, they probably would've included capabilities similar to a vertex shader along with a DVD drive ... (the multiplats, or lack thereof, on the GC were pretty painful)

Pemalite said:

The die-shots show it to have a lot in common with R700. But R700 has a lot in common with R600 anyway... R700 is based on the R600 design; it's all Terascale.
So I wouldn't be surprised. In saying that, it doesn't really matter at the end of the day.
WiiU was a bust.

Looking at open source drivers for both the R600 and R700 came in handy for WII U emulation regardless ... 

Pemalite said:

nVidia won't maintain support for older hardware, they will eventually relegate older parts to life-support.
In-fact it has already started to happen to Fermi. Kepler is next.

https://www.anandtech.com/show/12624/nvidia-moves-fermi-to-legacy-ends-32bit-os-support

I feel like, in an era where Moore's Law is coming to an end, there should be better support for older hardware ...

Apple, for instance, does right by its customers by updating devices for longer over their lifetime, compared to Android, which compulsively drops anything older than two years ...

Pemalite said:

Same can be said for Graphics Core Next... There are always smaller tweaks between Graphics Core Next revisions... It's actually expected whenever a GPU lineup is refreshed, new features and capabilities are expected to be added.


I mean, AMD doubled the ACE units in Bonaire for example or introduced primitive discard in Polaris, there is always some deviation.
Hence... The entire point of why I look at it from a high-level perspective, certain GPU families can certainly be grouped together... Sure you can nit-pick individualistic points of contention and go from there, but at the end of the day you won't get far.

No, it really can't. For the most part, GCN ISAs are compatible with each other; the same isn't true of most of Nvidia's GPU architectures, which likely keep introducing different or modified instruction encodings from generation to generation ...

Pemalite said:

nVidia's license actually includes access to everything. -Around 15 companies have such a license.

Whether they will use it is another matter entirely.

No it doesn't ... 

It makes NO sense for Nvidia to purposefully hamstring themselves by not using a superior design if they really had access to it ...

I'll just assume that Nvidia doesn't have access to ARM's in-house designs. A big corporation with lots of money, yet they can't even hire enough CPU designers to salvage their own designs, which is why their CPU team is garbage. If AMD's and Intel's worst were Bulldozer and NetBurst, then Nvidia consistently keeps turning out designs at that level, and THAT'S saying something ... (I'm not even kidding about how bad NV's CPU designs are)

Pemalite said:

7nm will be less ambitious than their 10nm process and the team handling 7nm has hit every milestone on time.

Let's hope so, after 4 years of delays with 10nm ...

Pemalite said:

Grand Theft Auto 5 is certainly maintained.

World of WarCraft is certainly maintained.

A benchmark suite should present data that shows how the hardware performs in every single scenario. Not just a nitpick to show the hardware in the best possible light.

GTA V's graphics code in particular is NOT maintained, while World of Warcraft's graphics code is ... (WoW now has a D3D12 backend, for instance)

GTA V doesn't deserve to keep being benchmarked, unlike WoW, IMO. It's a bad idea to keep using outdated benchmarks; a benchmark should FIT the hardware. Even on Nvidia, old benchmarks are a bad idea ... (doing things the old way actively works AGAINST new hardware, which isn't fair)

Would we take a CPU benchmark seriously if it leaned on MMX instructions, despite those instructions being effectively deprecated? (Intel and AMD have made their MMX implementations slower with each new CPU generation, and compilers now effectively EMULATE it)

Pemalite said:

To be frank... I will believe it when I see it. People have been proclaiming AMD's return to being competitive in the GPU landscape for years, but other than price... They haven't really put up much of a fight.

In saying that, I honestly hope it happens... I really do, the PC is at it's best when AMD is at it's best and taking the fight to nVidia and Intel, it brings prices down, innovation happens rapidly... Consumer wins.

I understand your skepticism ...