
Forums - General Discussion - Navi Made in Collab with Sony, MS Still Using It?

 

Pricing of Xbox VS PS5

Xbox +$150 > PS5: 0 (0%)

Xbox +$100 > PS5: 5 (14.71%)

Xbox +$50 > PS5: 4 (11.76%)

PS5 = Xbox with slight performance boost: 7 (20.59%)

PS5 = Xbox with no performance boost: 2 (5.88%)

Xbox will not have the pe...: 3 (8.82%)

Still too early, wait for MS PR: 13 (38.24%)

Total: 34

For those claiming an 8.3TF PS5, let me just ask you: would you even consider buying a next-gen console that is a 33% jump from the previous gen for $400?



It takes genuine talent to see greatness in yourself despite your absence of genuine talent.

Pemalite said:

Three console generations is a bit of a stretch... Either way my stance is we can only base things on the information we have today, not future hypotheticals.

Thus the statement that consoles will drive AMD's GPU performance/efficiency on PC is a little optimistic when there are four consoles on the market today with AMD hardware and the needle hasn't really shifted in AMD's favor.

Not to say that AMD hasn't gained a slight boost out of it with development pipelines, but it just means sweet bugger all in the grand scheme of things... Which is certainly not a good thing.
I would like AMD to be toe-to-toe with nVidia; that is when innovation is at its best and prices are at their lowest.

Don't worry about it, changes are happening now ...

Pemalite said:

You can't really compare TEV to your standard shader model... TEV is more in line with nVidia's register combiners.

The Gamecube/Wii had vertex shaders... But the transform and lighting engine was semi-programmable too; I would even argue more programmable than AMD's Morpheus parts back in the day.

In saying that... It's not actually an ATI-designed part, it is an ArtX-designed part, so certain approaches will deviate.
For such a tiny chip, she could pull her weight, especially when you start doing multiple passes and leverage its texturing capabilities.

Sadly, due to its lack of overall performance relative to the Xbox 360, the effects did have to be pared back rather substantially.

And I am not denying that the Xbox 360 and Playstation 3 came with their own bunch of goodies either... The fact that the Xbox 360, for example, has Tessellation is one of them.

TEV (24 instructions) is comparable to the original Xbox's "pixel shaders" (12 instructions) which were shader model 1.1. There's no solid definition of a shader anyways. The ATI Flipper most certainly did not have vertex shaders according to emulator developers ... (vertex pipeline was 100% fixed function)

Doing "multiple passes" is not something to be proud of and is actively frowned upon by many developers since it cuts rendering performance by a big factor ... 

Performance was one issue, but the Flipper didn't have the feature set to cope either ...

Pemalite said:

Don't think I will ever be able to agree with you on this point, not with the work I did on R500/R600 hardware.


And like I said before... Adreno was based upon AMD's desktop GPU efforts, so it will obviously draw similarities with parallel GPU releases for other markets... But Xenos is certainly based upon R500 with features taken from R600... Which just means that Adreno was also based on the R500.

Well he seems to want an Adreno 2XX GPU for reverse engineering the X360's alpha-to-coverage behaviour, and he's a developer of the Xbox 360 emulator specializing in the GPU ... (he seems to be convinced that the X360 is closely related to the Adreno 2XX)

The one console part that's truly based on the R600 was the Wii U's 'Latte' graphics chip, in which case looking at open source drivers did actually help the Wii U's graphics emulation ...

Pemalite said:

When I say "wasn't too bad" I didn't mean industry-leading or perfect. They just weren't absolutely shite.

Obviously AMD has always pushed its DirectX capabilities harder than OpenGL... Even going back to the Radeon 9000 vs Geforce FX war with Half-Life 2 (DirectX) vs Doom 3 (OpenGL).

OpenGL wasn't too bad on the later Terascale parts.

I mean... Take Wolfenstein via OpenGL on id Tech... AMD pretty much dominated nVidia on this title with its Terascale parts.
Granted, it's an engine/game that favored AMD hardware, but the fact that it was OpenGL is an interesting aspect.

https://www.anandtech.com/show/4061/amds-radeon-hd-6970-radeon-hd-6950/22

By the time the benchmark was taken, it was a SIX(!) year old game. Let's try something a little newer like Wolfenstein: The New Order ... 

An R9 290 was SLOWER than a GTX 760! (OpenGL was horrendous for AMD back then, pre-GCN, but even now OpenGL is still bad on GCN)

Pemalite said:

I think you are nitpicking a little too much.

Because even with successive updates to Graphics Core Next there are some deviations in various instructions, features and other aspects related to the ISA.
They aren't 1:1 with each other.

Same holds true for nVidia.

But from an overall design principle, Fermi and Kepler are related, just like Maxwell and Pascal.

On Kepler, they straight-up deprecated an ENTIRE SET of surface memory instructions compared to Fermi. On GCN, by contrast, going from gen 1 to gen 2 they removed a total of 4 instructions at the LOWEST LEVEL, and since the consoles are GCN gen 2, AMD doesn't have to worry about future software breaking compatibility with GCN gen 1 hardware. On the Vega ISA, they removed a grand total of 3 instructions ...

Just consider this for a moment: PTX is just an intermediary, while the GCN docs are real low-level details. Despite the GCN docs being actual assembly, Nvidia manages to somehow change more at the higher level than AMD does at the low level, so there's no telling what other sweeping changes Nvidia has applied at the low level ...

I highly doubt Fermi and Kepler are related, at least to the degree each GCN generation is ...

With Maxwell and Pascal that's a big maybe, since reverse engineering a copy of Super Mario Odyssey revealed that there's a Pascal(!) codepath for NVN's compute engine, so there may yet be an upgrade path for the Switch ... (no way in hell are they going to upgrade to either Volta or Turing though, since Nvidia removed Maxwell-specific instructions)

Also, I forgot to note it, but the reason why Nvidia doesn't license ARM's designs is that they want to save money ... (all of Nvidia's CPU designs suck hard)

Pemalite said:

End of the day... The little Geforce 1030 I have is still happily playing games from the DOS era... Compatibility is fine on nVidia hardware; developers don't tend to target specific hardware most of the time on PC anyway.

nVidia can afford more employees than AMD anyway, so the complaint on that aspect is moot.

As for OpenGL, that is being deprecated in favor of Vulkan anyway for next gen. (Should have happened this gen, but I digress.)

@Bold Is it truly? AMD deprecated their Mantle API today, so what is stopping Nvidia from doing the same with GPU-accelerated PhysX, which has failed to be standardized? Eventually, Nvidia will find it isn't sensible to maintain it, and then that becomes a feature that's lost FOREVER ...

As for OpenGL being deprecated, I doubt it, because the other industries (content creation/professional/scientific) aren't moving fast enough in comparison to game development, so unless AMD offers technical assistance for them, they'll be crippled at the mercy of AMD's OpenGL stack ...

Pemalite said:

DDR3 1600MHz on a 64-bit bus = 12.8GB/s.
DDR4 3200MHz on a 64-bit bus = 25.6GB/s.
DDR5 6400MHz on a 64-bit bus = 51.2GB/s.

DDR3 1600MHz on a 128-bit bus = 25.6GB/s.
DDR4 3200MHz on a 128-bit bus = 51.2GB/s.
DDR5 6400MHz on a 128-bit bus = 102.4GB/s.

DDR3 1600MHz on a 256-bit bus = 51.2GB/s.
DDR4 3200MHz on a 256-bit bus = 102.4GB/s.
DDR5 6400MHz on a 256-bit bus = 204.8GB/s.

I wouldn't be so bold as to assume that motherboards will start coming out with 512-bit buses to drive 400GB/s+ of bandwidth. That would be prohibitively expensive.
The Xbox One X is running with a 384-bit bus... but that is a "premium" console... Even then that would mean...

DDR3 1600MHz on a 384-bit bus = 76.8GB/s.
DDR4 3200MHz on a 384-bit bus = 153.6GB/s.
DDR5 6400MHz on a 384-bit bus = 307.2GB/s.

That's an expensive implementation for only 307GB/s of bandwidth, considering that ends up being less than the Xbox One X... And that is using GDDR5X... And no way is that approaching Geforce 1080 levels of performance.
AMD would need to make some dramatic leaps in efficiency to get there... And while we are still shackled to Graphics Core Next... that likely isn't happening anytime soon.

You would simply be better off using GDDR6 on a 256-bit bus.
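For reference, a minimal sketch of the arithmetic behind those figures. The function name and the 14 Gbps GDDR6 speed grade in the last call are my own illustration, not anything from the post or the leak:

```python
# Rough sketch of the bandwidth arithmetic above (illustrative only).
# Peak bandwidth (GB/s) = effective transfer rate (MT/s) x bus width (bits) / 8 bits-per-byte / 1000.

def peak_bandwidth_gbs(transfer_rate_mts: float, bus_width_bits: int) -> float:
    """Theoretical peak bandwidth in GB/s."""
    return transfer_rate_mts * bus_width_bits / 8 / 1000

print(peak_bandwidth_gbs(3200, 128))    # 51.2  -> DDR4-3200 on a 128-bit bus, as in the table
print(peak_bandwidth_gbs(6400, 384))    # 307.2 -> the DDR5 384-bit case above
print(peak_bandwidth_gbs(14000, 256))   # 448.0 -> hypothetical 14 Gbps GDDR6 on a 256-bit bus
```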

Seeing as how Threadripper was designed with an octa-channel memory controller, there's no reason to rule out a high-end APU either ...

If bandwidth is an issue then AMD could opt to make special boards that are pre-soldered with APUs and GDDR5/6 memory modules, like the Subor-Z ...

There's nothing preventing AMD from getting GTX 1080 levels of performance, as above, in a smaller, cheaper, and more efficient form factor ...

Pemalite said:

They stuffed up with 10nm... And a lot of work has had to go into making that line usable.
However, the team that is working on 7nm at Intel hasn't had the setbacks that the 10nm team has... And for good reason.

I remain cautiously optimistic though.

Every time Intel has delayed 10nm, it was met with delays on 7nm as well, so I doubt Intel could just scrap their previous work and start anew ...

I don't trust Intel to actually deliver on their manufacturing roadmap ... 

Pemalite said:

We should if said games are the most popular games in history that are actively played by millions of gamers.

Those individuals who are upgrading their hardware probably want to know how it will handle their favorite game... And considering we aren't yet at a point where low-end parts or IGPs are capable of driving GTA5 at 1080P ultra... Well. There is still relevance to including it in a benchmark suite.

Plus it gives a good representation of how the hardware handles older APIs/workloads.

Careful, Minecraft is the most popular PC game ever but I doubt that'd be a benchmark ... 

What people are looking for from a current-day benchmark suite is not popularity; they expect reasonably modern workloads, with pathological cases kept to a minimum (less than 5%) ...

If GPU designers drop native support for older APIs (Glide) and testers had to use a translation layer (emulator), would that somehow be a good representation of how the hardware handles work at all?

Pemalite said:

Good to hear. Fully expected though. But doesn't really change the landscape much.

It has quite a few ramifications though, since many of the other pieces are falling into place for AMD's immediate future, and the Apex engine was a good candidate for a Vulkan renderer when technically high-end game franchises such as Just Cause, Mad Max, and Rage are featured on it ...

Frostbite 3, Northlight, Nitrous, Asura, Snowdrop, Glacier 2, Dawn, Serious 3, Source 2, Foundation, Total War, 4A, RE and many other internal engines are changing the playing field (DX12/Vulkan) for AMD but now it's time to cut the wire (Creation/AnvilNEXT/Dunia/IW) and finally pull the plug (UE4/Unity) once and for all ... 

Only a few offending engines are left, but they'll drop one by one soon enough ... (the engine changes are going to show up in the benchmark suites)



Pemalite said:
Trumpstyle said:

It's an anonymous source... We have no idea if it holds any credibility. Sub-40 CUs seems about what I would expect with high clockspeeds though, mainstream part and all that.
"Radeon Maxwell sauce" is bullshit though. Polaris and Vega already brought many of the efficiency features driving Maxwell to the table anyway... Plus Maxwell is getting old, which shows how many years AMD is behind in the aspects that make nVidia GPUs so efficient for gaming.

Also there is a larger performance gap between Polaris and the 2070 than just 30%... And the jump between the RX 580 and Vega 64 is a good 45-65% depending on application... Navi with 40 CUs will likely come up short against even Vega 56, and definitely Vega 64 and the RTX 2070.

Either way, like all rumors, grain of salt and all of that; don't take it as gospel until AMD does a reveal or we can see benchmarks from legitimate outlets like Anandtech.

I like this leak though. From what I understand, videocardz.com received it at or before March 12, well before any Gonzalo leak showing a PS5 at 1.8GHz. I also remember there was a leak on GFXBench showing a 20 CU Navi card with Radeon RX 590 performance; it didn't make any sense at the time as the performance was just way too good, and we thought the benchmark was misreading the CUs on the card, but maybe AMD has improved the CUs and clocks by a lot on Navi.

You're misunderstanding the Maxwell sauce; it means better perf/teraflop and perf/mm2, which GCN hasn't received yet. It's actually gone the opposite way, where Vega decreased perf/mm2. But yes, I think 40 CUs should land a bit below the GeForce RTX 2060.
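As an aside, a minimal sketch of where teraflop figures like the rumored 8.3TF come from, assuming Navi keeps GCN's layout of 64 shaders per CU and 2 FLOPs per shader per clock (FMA); the function name and the 36 CU count are purely hypothetical:

```python
# Illustrative GCN-style teraflop arithmetic (assumes 64 shaders per CU, 2 FLOPs per clock via FMA).
def gcn_style_tflops(cu_count: int, clock_ghz: float) -> float:
    return cu_count * 64 * 2 * clock_ghz / 1000  # result in TFLOPS

print(gcn_style_tflops(36, 1.8))  # ~8.3 TF (hypothetical 36 CUs at the rumored 1.8GHz clock)
print(gcn_style_tflops(40, 1.8))  # ~9.2 TF
```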

eva01beserk said:
For those claiming an 8.3TF PS5, let me just ask you: would you even consider buying a next-gen console that is a 33% jump from the previous gen for $400?

I'm unsure if you mean me, but I will buy the PS5 on day one or close to that. I have a PS4 Pro, so the PS5 is a much bigger jump than 33% for me. The people who own an Xbox One X are a small group, probably about 2% of total console owners, and even those owners will have no choice but to get the PS5, as I suspect that Sony will have improved versions of all their exclusive games giving them 4K/60fps. That gives them a big advantage against Microsoft.
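To put those relative jumps in rough numbers, a minimal sketch using the commonly cited FP32 figures for the current consoles; the 8.3 TF PS5 value is just the rumor being debated here, not a confirmed spec:

```python
# Illustrative comparison only; 8.3 TF is the rumored figure under discussion, not a confirmed spec.
current_consoles_tf = {"PS4": 1.84, "PS4 Pro": 4.2, "Xbox One X": 6.0}  # widely cited FP32 numbers
rumored_ps5_tf = 8.3

for name, tf in current_consoles_tf.items():
    print(f"{name} -> rumored PS5: +{(rumored_ps5_tf / tf - 1) * 100:.0f}%")
# PS4 -> rumored PS5: +351%
# PS4 Pro -> rumored PS5: +98%
# Xbox One X -> rumored PS5: +38%
```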

You simply need to realize Moore's law is DEAD and the jump from 16nm to 7nm is smaller than 28nm to 16nm; we are actually a bit lucky that Sony/Microsoft even managed to double the flop numbers, if I'm correct.



6x master league achiever in starcraft2

Beaten Sigrun on God of war mode

Beaten DOOM ultra-nightmare with NO endless ammo-rune, 2x super shotgun and no decoys on ps4 pro.

1-0 against Grubby in Wc3 frozen throne ladder!!

Trumpstyle said:
eva01beserk said:
For those claiming an 8.3TF PS5, let me just ask you: would you even consider buying a next-gen console that is a 33% jump from the previous gen for $400?

I'm unsure if you mean me, but I will buy the PS5 on day one or close to that. I have a PS4 Pro, so the PS5 is a much bigger jump than 33% for me. The people who own an Xbox One X are a small group, probably about 2% of total console owners, and even those owners will have no choice but to get the PS5, as I suspect that Sony will have improved versions of all their exclusive games giving them 4K/60fps. That gives them a big advantage against Microsoft.

You simply need to realize Moore's law is DEAD and the jump from 16nm to 7nm is smaller than 28nm to 16nm; we are actually a bit lucky that Sony/Microsoft even managed to double the flop numbers, if I'm correct.

Would PS possibly try a PS4 + Pro price arrangement right off the bat next gen?

PS5 - 8 core Ryzen - 8.3TF Navi - checkerboard 4k/60 - $399

PS5 Pro - 8 core Ryzen - 12.9TF Navi - native 4k/60 - $499



eva01beserk said:
For those claiming an 8.3TF PS5, let me just ask you: would you even consider buying a next-gen console that is a 33% jump from the previous gen for $400?

The extra performance the Playstation 4 Pro and Xbox One X bring to the table is mostly sunk into driving higher resolutions and framerates... Games aren't being designed with those consoles as a baseline.

The jump from the base Playstation 4 and Xbox One to next gen should be a rather sizable one.

fatslob-:O said:

Don't worry about it, changes are happening now ...

It's a bit of a stretch whether said changes will ever be relevant though. Because by the time they potentially are... AMD may have moved onto its next-gen architecture on the PC. (Graphics Core Next isn't sticking around forever!)

fatslob-:O said:

TEV (24 instructions) is comparable to the original Xbox's "pixel shaders" (12 instructions) which were shader model 1.1. There's no solid definition of a shader anyways. The ATI Flipper most certainly did not have vertex shaders according to emulator developers ... (vertex pipeline was 100% fixed function)

Doing "multiple passes" is not something to be proud of and is actively frowned upon by many developers since it cuts rendering performance by a big factor ... 

Performance was one issue, but the Flipper didn't have the feature set to cope either ...

History has shown that the Gamecube and Wii were punching around the same level as the original Xbox in terms of visuals.
But anyone who worked with the TEV could actually pull off some interesting effects, many of which could rival the Xbox 360.

https://www.youtube.com/watch?v=RwhS76r0OqE

Take the Geforce 2 for example... Using the register combiners you could pull off some shader effects that the Geforce 4 Ti was doing... And the Geforce 2 was very much a highly fixed-function part.

Just because the hardware isn't a 1:1 match doesn't mean you cannot pull off similar effects, with obviously different performance impacts.

As for doing multiple passes... It depends on the architecture, not all architectures have a big performance impact.

fatslob-:O said:

Well he seems to want an Adreno 2XX GPU for reverse engineering the X360's alpha-to-coverage behaviour, and he's a developer of the Xbox 360 emulator specializing in the GPU ... (he seems to be convinced that the X360 is closely related to the Adreno 2XX)

The one console part that's truly based on the R600 was the Wii U's 'Latte' graphics chip, in which case looking at open source drivers did actually help the Wii U's graphics emulation ...

You aren't getting it.
Adreno is derived from Radeon technology.
Xenos is derived from Radeon technology.

Both are derived from the same technology base of the same era... Obviously there will be architectural similarities; you don't go about reinventing the wheel if certain design philosophies work.

Fact is... At the time, ATI used its desktop Radeon technology as the basis for all other market segments.

As for the Wii U... The general consensus is it's R700 derived with some differences.
https://www.techinsights.com/blog/nintendo-wii-u-teardown
https://forums.anandtech.com/threads/wii-u-gpu-scans-now-up.2299839/
https://www.neogaf.com/threads/wiiu-latte-gpu-die-photo-gpu-feature-set-and-power-analysis.511628/

fatslob-:O said:

By the time the benchmark was taken, it was a SIX(!) year old game. Let's try something a little newer like Wolfenstein: The New Order ... 

An R9 290 was SLOWER than a GTX 760! (OpenGL was horrendous for AMD back then, pre-GCN, but even now OpenGL is still bad on GCN)

That was kinda' the point?

fatslob-:O said:

On Kepler, they straight-up deprecated an ENTIRE SET of surface memory instructions compared to Fermi. On GCN, by contrast, going from gen 1 to gen 2 they removed a total of 4 instructions at the LOWEST LEVEL, and since the consoles are GCN gen 2, AMD doesn't have to worry about future software breaking compatibility with GCN gen 1 hardware. On the Vega ISA, they removed a grand total of 3 instructions ...

Just consider this for a moment: PTX is just an intermediary, while the GCN docs are real low-level details. Despite the GCN docs being actual assembly, Nvidia manages to somehow change more at the higher level than AMD does at the low level, so there's no telling what other sweeping changes Nvidia has applied at the low level ...

In saying that, nVidia's approach is clearly paying off because nVidia's hardware has been superior to AMD's for gaming for generations.

fatslob-:O said:

I highly doubt Fermi and Kepler are related, at least to the degree each GCN generation is ...

With Maxwell and Pascal that's a big maybe, since reverse engineering a copy of Super Mario Odyssey revealed that there's a Pascal(!) codepath for NVN's compute engine, so there may yet be an upgrade path for the Switch ... (no way in hell are they going to upgrade to either Volta or Turing though, since Nvidia removed Maxwell-specific instructions)

Even Anandtech recognizes that Kepler has many of the same underpinnings as Fermi.
https://www.anandtech.com/show/5699/nvidia-geforce-gtx-680-review/2

fatslob-:O said:

Also, I forgot to note it, but the reason why Nvidia doesn't license ARM's designs is that they want to save money ... (all of Nvidia's CPU designs suck hard)

That isn't it at all... nVidia is a full ARM Architecture licensee...
https://en.wikipedia.org/wiki/Arm_Holdings#Licensees
https://www.anandtech.com/show/7112/the-arm-diaries-part-1-how-arms-business-model-works/3

fatslob-:O said:

@Bold Is it truly? AMD deprecated their Mantle API today, so what is stopping Nvidia from doing the same with GPU-accelerated PhysX, which has failed to be standardized? Eventually, Nvidia will find it isn't sensible to maintain it, and then that becomes a feature that's lost FOREVER ...

As for OpenGL being deprecated, I doubt it, because the other industries (content creation/professional/scientific) aren't moving fast enough in comparison to game development, so unless AMD offers technical assistance for them, they'll be crippled at the mercy of AMD's OpenGL stack ...

nVidia does have more cash and more profits than AMD, so of course.
And I honestly hope PhysX does get deprecated.

Vulkan is slowly replacing OpenGL pretty much across the entire gaming spectrum.
Content Creation/Scientific tasks obviously have different requirements.

fatslob-:O said:

Seeing as how Threadripper was designed with an octa-channel memory controller, there's no reason to rule out a high-end APU either ...

If bandwidth is an issue then AMD could opt to make special boards that are pre-soldered with APUs and GDDR5/6 memory modules, like the Subor-Z ...

There's nothing preventing AMD from getting GTX 1080 levels of performance, as above, in a smaller, cheaper, and more efficient form factor ...

Never going to happen.

fatslob-:O said:

Every time Intel has delayed 10nm, it was met with delays on 7nm as well, so I doubt Intel could just scrap their previous work and start anew ...

I don't trust Intel to actually deliver on their manufacturing roadmap ... 

Has it really though? Because everything points to the 7nm team hitting its design goals.
https://www.anandtech.com/show/14312/intel-process-technology-roadmap-refined-nodes-specialized-technologies
https://www.anandtech.com/show/13683/intel-euvenabled-7nm-process-tech-is-on-track

2021 with EUVL, 7nm.

fatslob-:O said:

Careful, Minecraft is the most popular PC game ever but I doubt that'd be a benchmark ... 

What people are looking for from a current-day benchmark suite is not popularity; they expect reasonably modern workloads, with pathological cases kept to a minimum (less than 5%) ...

If GPU designers drop native support for older APIs (Glide) and testers had to use a translation layer (emulator), would that somehow be a good representation of how the hardware handles work at all?

Difference is... Minecraft will run perfectly fine on even the shittiest Intel Integrated Graphics today.

When I look at a benchmark today, it's due to wanting to find out how today's hardware runs today's games... And yes, some of the games I play are going to be a little older... And that is fine.
It's good to find out how newer architectures run older and newer titles... Which is why a benchmark suite usually includes a multitude of titles to cover all bases; it's an extra data point to provide consumers with a more comprehensive idea for their purchasing decisions.

The fact that a benchmark suite includes an older game or two really isn't a relevant complaining point; ignore those numbers if you must, but they are valuable pieces of information for other people.

Plus many games use older APIs. - I mean, you said yourself that you don't think Vulkan will replace OpenGL.

fatslob-:O said:

It has quite a few ramifications though, since many of the other pieces are falling into place for AMD's immediate future, and the Apex engine was a good candidate for a Vulkan renderer when technically high-end game franchises such as Just Cause, Mad Max, and Rage are featured on it ...

Frostbite 3, Northlight, Nitrous, Asura, Snowdrop, Glacier 2, Dawn, Serious 3, Source 2, Foundation, Total War, 4A, RE and many other internal engines are changing the playing field (DX12/Vulkan) for AMD but now it's time to cut the wire (Creation/AnvilNEXT/Dunia/IW) and finally pull the plug (UE4/Unity) once and for all ... 

Only a few offending engines are left, but they'll drop one by one soon enough ... (the engine changes are going to show up in the benchmark suites)

In short, with all those titles today, nVidia still holds an overall advantage. Those are the undeniable facts presented by benchmarks from across the entire internet.

Trumpstyle said:

I like this leak though. From what I understand, videocardz.com received it at or before March 12, well before any Gonzalo leak showing a PS5 at 1.8GHz. I also remember there was a leak on GFXBench showing a 20 CU Navi card with Radeon RX 590 performance; it didn't make any sense at the time as the performance was just way too good, and we thought the benchmark was misreading the CUs on the card, but maybe AMD has improved the CUs and clocks by a lot on Navi.

It's a "leak" for a reason. Aka. A rumor. Take it with a grain of salt until we have empirical evidence. Aka. Real hardware in our hands.

Remember, Navi is still Graphics Core Next; keep your expectations in line for a GPU architecture that is 7+ years old.

Trumpstyle said:

You're misunderstanding the Maxwell sauce; it means better perf/teraflop and perf/mm2, which GCN hasn't received yet. It's actually gone the opposite way, where Vega decreased perf/mm2. But yes, I think 40 CUs should land a bit below the GeForce RTX 2060.

I understand perfectly. You don't seem to understand that a large chunk of the features found in Maxwell's core architecture were adopted by Vega and Polaris already.
I think you should look at performance per clock of Vega and Fiji and see how things have changed.

Trumpstyle said:

I'm unsure if you mean me, but I will buy the PS5 on day one or close to that. I have a PS4 Pro, so the PS5 is a much bigger jump than 33% for me. The people who own an Xbox One X are a small group, probably about 2% of total console owners, and even those owners will have no choice but to get the PS5, as I suspect that Sony will have improved versions of all their exclusive games giving them 4K/60fps. That gives them a big advantage against Microsoft.

While Microsoft and Nintendo have consoles on the market... There will always be a choice available not to get a Sony console.

Trumpstyle said:

You simply need to realize Moore's law is DEAD and the jump from 16nm to 7nm is smaller than 28nm to 16nm; we are actually a bit lucky that Sony/Microsoft even managed to double the flop numbers, if I'm correct.

Citation needed.
I am sure Fatslob will like to jump on this claim with me too.

Last edited by Pemalite - on 15 May 2019

--::{PC Gaming Master Race}::--

Pemalite said:

It's a bit of a stretch whether said changes will ever be relevant though. Because by the time they potentially are... AMD may have moved onto its next-gen architecture on the PC. (Graphics Core Next isn't sticking around forever!)

Not so, because AMD intends on keeping GCN for the foreseeable future! As long as Sony, Microsoft, Google, and potentially EA as well as other ISVs (Valve wants to keep developing open source drivers for AMD on Linux/DXVK, and Ubisoft is developing their Vulkan backend exclusively for AMD hardware on Stadia) want to keep GCN, then GCN will stay alive ...

AMD's customers are more than just PC gamers ... 

Pemalite said:

History has shown that the Gamecube and Wii were punching around the same level as the original Xbox in terms of visuals.
But anyone who worked with the TEV could actually pull off some interesting effects, many of which could rival the Xbox 360.

https://www.youtube.com/watch?v=RwhS76r0OqE

Take the Geforce 2 for example... Using the register combiners you could pull off some shader effects that the Geforce 4 Ti was doing... And the Geforce 2 was very much a highly fixed-function part.

Just because the hardware isn't a 1:1 match doesn't mean you cannot pull off similar effects, with obviously different performance impacts.

As for doing multiple passes... It depends on the architecture, not all architectures have a big performance impact.

Gamecube was pretty overrated IMO, since quite a few devs didn't find an improvement whenever they ported code from the PS2. Take for example Baldur's Gate: Dark Alliance and the NFS Underground series, and one of the lead programmers proclaimed that the GC couldn't handle Burnout 3 because it was inferior to the PS2 hardware ... (I'd be interested in other opinions from programmers within the industry who worked during that generation on how the GC actually fared against the PS2)

I think ERP from Beyond3D might be Brian Fehdrau, but judging by his testing, Flipper's clipping performance was a massive showstopper in its hardware design ... (now I'm definitely convinced that the GC is just overrated after actually hearing from some developers)

I'm starting to get the impression that the GC was only ever good at texturing. It couldn't handle high poly counts, alpha effects, physics, or AI like the PS2 could ...

Metroid Prime looks great, but you can't really use exclusives as a measuring stick for comparison ... (no way would its effects rival the 360 at all, since that would blow the doors off the GC, when the original Xbox by comparison already ran nearly all code faster than the GC)

The GeForce 2 at most could potentially do something similar to pixel shaders, but from the GeForce 3 onwards there was a truly programmable vertex pipeline ...

The only hardware where it was acceptable to do multiple passes was the PS2 with its stupidly fast eDRAM; anywhere else it'll be a massive performance impact. Doing multiple passes on MGS2 brought even the Xbox down to its knees by comparison to the PS2. Multiple passes were especially a no-go from the 7th gen onwards, even when the 360 had eDRAM, because programmers started realizing that poor cache locality resulted in bad performance on parallel processing ...

Pemalite said:

You aren't getting it.
Adreno is derived from Radeon technology.
Xenos is derived from Radeon technology.

Both are derived from the same technology base of the same era... Obviously there will be architectural similarities; you don't go about reinventing the wheel if certain design philosophies work.

Fact is... At the time, ATI used its desktop Radeon technology as the basis for all other market segments.

As for the Wii U... The general consensus is it's R700 derived with some differences.
https://www.techinsights.com/blog/nintendo-wii-u-teardown
https://forums.anandtech.com/threads/wii-u-gpu-scans-now-up.2299839/
https://www.neogaf.com/threads/wiiu-latte-gpu-die-photo-gpu-feature-set-and-power-analysis.511628/

The Wii U is a mix between the R600 and the R700 according to the leading Wii U emulator developer ...

Pemalite said:

In saying that, nVidia's approach is clearly paying off because nVidia's hardware has been superior to AMD's for gaming for generations.

Maybe so market-wise, but Nvidia won't be able to maintain support for older hardware as easily in the future ...

I don't deny that Nvidia has a sound approach, since quite a few developers are resisting the push for more explicit APIs like DX12 or Vulkan, but it'll come at the cost of worse experiences later on as the hardware ages, in comparison to AMD ...

Pemalite said:

Even Anandtech recognizes that Kepler has many of the same underpinnings as Fermi.
https://www.anandtech.com/show/5699/nvidia-geforce-gtx-680-review/2

But that's just the high-level view. If we take a look at a lower level, like say Nvidia's PTX, which is their intermediate representation for their GPUs (the AMD equivalent is AMDIL IIRC), then the changes are really profound. Fermi only supports binding up to 8 RWTexture2Ds in any given shader stage while Kepler ups this to 64, Fermi doesn't support bindless handles for textures/images, and that's not to mention the changes behind PTX ...

Pemalite said:

That isn't it at all... nVidia is a full ARM Architecture licensee...
https://en.wikipedia.org/wiki/Arm_Holdings#Licensees
https://www.anandtech.com/show/7112/the-arm-diaries-part-1-how-arms-business-model-works/3

Nvidia licenses the ISA, not ARM's in-house designs ... 

It's probably one of the biggest reasons why Nvidia's CPUs are still trash in comparison to what ARM offers, but since they're mainly a GPU chip design company, they'll obviously cheap out whenever they can ...

Pemalite said:

nVidia does have more cash and more profits than AMD, so of course.
And I honestly hope PhysX does get deprecated.

Vulkan is slowly replacing OpenGL pretty much across the entire gaming spectrum.
Content Creation/Scientific tasks obviously have different requirements.

It's not good for preservation purposes, especially since Nvidia dropped 3D Vision support ... (it especially sucks when a feature is dropped, because future players won't be able to experience the same things we did)

I don't believe it'll be sustainable in the near future to just deprecate things, especially when investment in software is rising ...

Pemalite said:

Has it really though? Because everything points to the 7nm team hitting its design goals.
https://www.anandtech.com/show/14312/intel-process-technology-roadmap-refined-nodes-specialized-technologies
https://www.anandtech.com/show/13683/intel-euvenabled-7nm-process-tech-is-on-track

2021 with EUVL, 7nm.

They should launch 10nm first before talking about 7nm ... 

I still don't trust Intel's manufacturing group ... 

Pemalite said:

Difference is... Minecraft will run perfectly fine on even the shittiest Intel Integrated Graphics today.

When I look at a benchmark today, it's due to wanting to find out how today's hardware runs today's games... And yes, some of the games I play are going to be a little older... And that is fine.
It's good to find out how newer architectures run older and newer titles... Which is why a benchmark suite usually includes a multitude of titles to cover all bases; it's an extra data point to provide consumers with a more comprehensive idea for their purchasing decisions.

The fact that a benchmark suite includes an older game or two really isn't a relevant complaining point; ignore those numbers if you must, but they are valuable pieces of information for other people.

Plus many games use older APIs. - I mean, you said yourself that you don't think Vulkan will replace OpenGL.

Older games aren't worth benchmarking because their code becomes unmaintained, so they're more of a liability to include than a source of any real useful data point ...

It's not good to use unmaintained software to collect performance data on new hardware, since bottlenecks could be due to the code rather than the hardware itself ... (older code just isn't going to look so great running on a new piece of hardware)

This is why I strongly advocate against using older games but I guess we'll agree to disagree ...

Pemalite said:

In short, with all those titles today, nVidia still holds an overall advantage. Those are the undeniable facts presented by benchmarks from across the entire internet.

It's true Nvidia holds the performance crown, but the tables are turning and the playing field is changing against them, so they're going to be in hostile territory soon ...



fatslob-:O said:

Not so, because AMD intends on keeping GCN for the foreseeable future! As long as Sony, Microsoft, Google, and potentially EA as well as other ISVs (Valve wants to keep developing open source drivers for AMD on Linux/DXVK, and Ubisoft is developing their Vulkan backend exclusively for AMD hardware on Stadia) want to keep GCN, then GCN will stay alive ...

AMD's roadmaps and Anandtech seem to agree that after Navi, AMD will have a next-gen architecture though.
https://www.anandtech.com/show/12233/amd-tech-day-at-ces-2018-roadmap-revealed-with-ryzen-apus-zen-on-12nm-vega-on-7nm

And I honestly can't wait... Just like Terascale got long in the tooth in its twilight years, the same is happening with Graphics Core Next.

Unless you have information that I do not?

fatslob-:O said:

Gamecube was pretty overrated IMO, since quite a few devs didn't find an improvement whenever they ported code from the PS2. Take for example Baldur's Gate: Dark Alliance and the NFS Underground series, and one of the lead programmers proclaimed that the GC couldn't handle Burnout 3 because it was inferior to the PS2 hardware ... (I'd be interested in other opinions from programmers within the industry who worked during that generation on how the GC actually fared against the PS2)

I think ERP from Beyond3D might be Brian Fehdrau, but judging by his testing, Flipper's clipping performance was a massive showstopper in its hardware design ... (now I'm definitely convinced that the GC is just overrated after actually hearing from some developers)

I'm starting to get the impression that the GC was only ever good at texturing. It couldn't handle high poly counts, alpha effects, physics, or AI like the PS2 could ...

Metroid Prime looks great, but you can't really use exclusives as a measuring stick for comparison ... (no way would its effects rival the 360 at all, since that would blow the doors off the GC, when the original Xbox by comparison already ran nearly all code faster than the GC)

Anyone who asserts that the Gamecube was inferior to the Playstation 2 is offering an opinion not worth its weight... It's pretty much that.

At the end of the day... The proof is in the pudding, Gamecube games in general were a big step up over the Playstation 2 and would trend closer to the Original Xbox in terms of visuals than the Playstation 2.

Metroid on Gamecube showed that the console could handle high poly counts... They even had 3D mesh water ripples... Watch the digital foundry video I linked to prior.
But you are right, the Gamecube was a texturing powerhouse.


fatslob-:O said:

The GeForce 2 at most could potentially do something similar to pixel shaders, but from the GeForce 3 onwards there was a truly programmable vertex pipeline ...

The only hardware where it was acceptable to do multiple passes was the PS2 with its stupidly fast eDRAM; anywhere else it'll be a massive performance impact. Doing multiple passes on MGS2 brought even the Xbox down to its knees by comparison to the PS2. Multiple passes were especially a no-go from the 7th gen onwards, even when the 360 had eDRAM, because programmers started realizing that poor cache locality resulted in bad performance on parallel processing ...

Don't forget the Gamecube and Wii had 1T-SRAM, the Xbox 360 had eDRAM and the Xbox One had eSRAM.
It's not a pre-requisite for doing multiple passes. In fact, deferred renderers tend to rely on it more so.
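As a rough illustration of why deferred renderers lean on that on-chip memory, here is a minimal sketch with my own assumed numbers (a hypothetical 1080p G-buffer layout, not anything from the posts above):

```python
# Hypothetical G-buffer sizing: four 32-bit colour targets plus a 32-bit depth buffer at 1080p.
width, height = 1920, 1080
bytes_per_pixel_per_target = 4        # e.g. RGBA8 or another packed 32-bit format (assumed)
target_count = 4 + 1                  # 4 colour targets + depth (assumed layout)

gbuffer_mib = width * height * bytes_per_pixel_per_target * target_count / (1024 ** 2)
print(f"{gbuffer_mib:.1f} MiB")       # ~39.6 MiB: already past the Xbox One's 32 MB eSRAM,
                                      # and nowhere near fitting in the Xbox 360's 10 MB eDRAM
```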

The Original Xbox wasn't built for that kind of workload though; its architecture was traditional PC at the time.

The Geforce 2 didn't have fully programmable pixel shader pipelines... The point I am trying to convey though is that certain pieces of hardware, even if they lack the intrinsic functionality of a more modern piece of hardware, can still actually be capable of similar effects in hardware using a few tricks of the trade.

That doesn't mean said effects will be employed however due to lack of horse power or how said effects will blow out the render time budget.

fatslob-:O said:
The Wii U is a mix between the R600 and the R700 according to the leading Wii U emulator developer ...

The die shots show it to have a lot in common with R700. But R700 has a lot in common with R600 anyway... R700 is based on the R600 design, it's all Terascale.
So I wouldn't be surprised. In saying that, it doesn't really matter at the end of the day.
Wii U was a bust.

fatslob-:O said:

Maybe so market-wise, but Nvidia won't be able to maintain support for older hardware as easily in the future ...

I don't deny that Nvidia has a sound approach, since quite a few developers are resisting the push for more explicit APIs like DX12 or Vulkan, but it'll come at the cost of worse experiences later on as the hardware ages, in comparison to AMD ...

nVidia won't maintain support for older hardware; they will eventually relegate older parts to life support.
In fact, it has already started to happen to Fermi. Kepler is next.

https://www.anandtech.com/show/12624/nvidia-moves-fermi-to-legacy-ends-32bit-os-support

fatslob-:O said:
But that's just the high-level view. If we take a look at a lower level, like say Nvidia's PTX, which is their intermediate representation for their GPUs (the AMD equivalent is AMDIL IIRC), then the changes are really profound. Fermi only supports binding up to 8 RWTexture2Ds in any given shader stage while Kepler ups this to 64, Fermi doesn't support bindless handles for textures/images, and that's not to mention the changes behind PTX ...

The same can be said for Graphics Core Next... There are always smaller tweaks between Graphics Core Next revisions... It's expected: whenever a GPU lineup is refreshed, new features and capabilities are added.

I mean, AMD doubled the ACE units in Bonaire for example, or introduced primitive discard in Polaris; there is always some deviation.
Hence... The entire point of why I look at it from a high-level perspective: certain GPU families can certainly be grouped together... Sure, you can nitpick individual points of contention and go from there, but at the end of the day you won't get far.

fatslob-:O said:

Nvidia licenses the ISA, not ARM's in-house designs ... 

It's probably one of the biggest reasons why Nvidia's CPUs are still trash in comparison to what ARM offers, but since they're mainly a GPU chip design company, they'll obviously cheap out whenever they can ...

nVidia's license actually includes access to everything. -Around 15 companies have such a license.

Whether they will use it is another matter entirely.

fatslob-:O said:

It's not good for preservation purposes, especially since Nvidia dropped 3D Vision support ... (it especially sucks when a feature is dropped, because future players won't be able to experience the same things we did)

I don't believe it'll be sustainable in the near future to just deprecate things, especially when investment in software is rising ...

And AMD dropped Mantle support... And made the entire TrueAudio situation confusing, as some parts didn't have the block in hardware, and then they abolished the feature from their drivers.

It happens. That's life. Neither company has a perfect track record.

fatslob-:O said:

They should launch 10nm first before talking about 7nm ... 

I still don't trust Intel's manufacturing group ... 

7nm will be less ambitious than their 10nm process and the team handling 7nm has hit every milestone on time.

fatslob-:O said:

Older games aren't worth benchmarking because their code becomes unmaintained, so they're more of a liability to include than a source of any real useful data point ...

It's not good to use unmaintained software to collect performance data on new hardware, since bottlenecks could be due to the code rather than the hardware itself ... (older code just isn't going to look so great running on a new piece of hardware)

This is why I strongly advocate against using older games but I guess we'll agree to disagree ...

Grand Theft Auto 5 is certainly maintained.

World of WarCraft is certainly maintained.

A benchmark suite should present data that shows how the hardware performs in every single scenario, not just cherry-picked cases that show the hardware in the best possible light.

fatslob-:O said:
It's true Nvidia holds the performance crown, but the tables are turning and the playing field is changing against them, so they're going to be in hostile territory soon ...

To be frank... I will believe it when I see it. People have been proclaiming AMD's return to being competitive in the GPU landscape for years, but other than price... They haven't really put up much of a fight.

In saying that, I honestly hope it happens... I really do. The PC is at its best when AMD is at its best and taking the fight to nVidia and Intel; it brings prices down, innovation happens rapidly... The consumer wins.



--::{PC Gaming Master Race}::--

Pemalite said:

AMD's roadmaps and Anandtech seem to agree that after Navi, AMD will have a next-gen architecture though.
https://www.anandtech.com/show/12233/amd-tech-day-at-ces-2018-roadmap-revealed-with-ryzen-apus-zen-on-12nm-vega-on-7nm

And I honestly can't wait... Just like Terascale got long in the tooth in its twilight years, the same is happening with Graphics Core Next.

Unless you have information that I do not?

That's just speculation on Anandtech's part ... 

AMD never explicitly proclaimed that they'll bring an entirely new GPU architecture or leave GCN. I can see that happening for, say, high-performance compute (GCN3/Vega has a specialized GCN ISA implementation for this), but I don't think AMD intends to get rid of GCN for gaming ...

Pemalite said:

Anyone who asserts that the Gamecube was inferior to the Playstation 2 is offering an opinion not worth its weight... It's pretty much that.

At the end of the day... The proof is in the pudding, Gamecube games in general were a big step up over the Playstation 2 and would trend closer to the Original Xbox in terms of visuals than the Playstation 2.

Metroid on Gamecube showed that the console could handle high poly counts... They even had 3D mesh water ripples... Watch the digital foundry video I linked to prior.
But you are right, the Gamecube was a texturing powerhouse.

@Bold Really? Despite real programmers who have worked on both systems saying otherwise?

I don't deny that the GC has its own advantages, like its memory sub-system or texturing performance, but it sounds like it had some pretty serious design flaws with plenty of its own bottlenecks, so it comes across as nothing more than a texturing machine ...

Proof is not in the pudding, because quite a few times multiplats came out inferior on the GC in comparison to the PS2. Just because visuals in some exclusives trended like for like with the Xbox did not mean the other things did, like game logic, physics, AI, and alpha effects. A game's technical prowess is MORE than just its textures, and in those aspects the GC was resoundingly weaker than the PS2 ... (most of the time games on the GC were of low geometric complexity because of crappy clipping performance)

Metroid looked impressive as an exclusive, but we can't use it for hardware comparisons, so we're just gonna have to deal with Baldur's Gate: Dark Alliance (where the GC very clearly had inferior water physics simulation) or NFS Underground (pared-back alpha effects) ...

The GC was probably only ever good at texturing, so I can imagine why it didn't get many multiplats later on when game logic got more complex ... (the Xbox had vertex shaders and the PS2's VUs are the modern-day equivalent of Turing's mesh shaders)

The GC is just overrated hardware IMO ... (it can make good-looking pixels but it's pretty shit in other departments)

Pemalite said:

Don't forget the Gamecube and Wii had 1T-SRAM, the Xbox 360 had eDRAM and the Xbox One had eSRAM.
It's not a pre-requisite for doing multiple passes. In fact, deferred renderers tend to rely on it more so.

The Original Xbox wasn't built for that kind of workload though; its architecture was traditional PC at the time.

The Geforce 2 didn't have fully programmable pixel shader pipelines... The point I am trying to convey though is that certain pieces of hardware, even if they lack the intrinsic functionality of a more modern piece of hardware, can still actually be capable of similar effects in hardware using a few tricks of the trade.

That doesn't mean said effects will be employed however due to lack of horse power or how said effects will blow out the render time budget.

Deferred rendering has other reasons for using on-chip memory buffers, like its fat G-buffer, but still, don't try multipass on anything other than a PS2 ...

The Original Xbox was a beast. That thing ran the VAST majority of the code meant for either the GC or PS2 better! The Original Xbox was undeniably better because it just ran the code and the multiplats faster, but most of all it stood to gain more from custom programming than either the GC or PS2 ... (the Xbox's bottlenecks were mostly caused by software rather than its hardware, but it still rocked in multiplats)

The GC on the other hand was NEVER more powerful than the PS2, and it didn't even match its competitors in terms of feature set at the time, so no way was it ever going to be comparable to the sub-HD twins of the 7th gen. At best, the GC had some spotlights like RE4, but more often than not developers didn't like it very much because it just wasn't up to the task of running multiplats that were on the PS2 ... (it's probably why developers found out the hard way, when they tried running microcode optimized for the VUs on the GC, that performance tanked hard)

I'm starting to think that if Nintendo had wanted a technical redo during the 6th gen they probably would've included capabilities similar to a vertex shader along with a DVD drive ... (multiplats, or the lack thereof, on the GC were pretty painful)

Pemalite said:

The die shots show it to have a lot in common with R700. But R700 has a lot in common with R600 anyway... R700 is based on the R600 design, it's all Terascale.
So I wouldn't be surprised. In saying that, it doesn't really matter at the end of the day.
Wii U was a bust.

Looking at open source drivers for both the R600 and R700 came in handy for Wii U emulation regardless ...

Pemalite said:

nVidia won't maintain support for older hardware; they will eventually relegate older parts to life support.
In fact, it has already started to happen to Fermi. Kepler is next.

https://www.anandtech.com/show/12624/nvidia-moves-fermi-to-legacy-ends-32bit-os-support

I feel like in an era where Moore's Law is coming to an end, there should be better support for older hardware ... 

Apple, for instance, is doing their customers right by giving their devices more updates over their lifetime, in comparison to Android, which just compulsively drops anything older than 2 years ...

Pemalite said:

The same can be said for Graphics Core Next... There are always smaller tweaks between Graphics Core Next revisions... It's expected: whenever a GPU lineup is refreshed, new features and capabilities are added.


I mean, AMD doubled the ACE units in Bonaire for example, or introduced primitive discard in Polaris; there is always some deviation.
Hence... The entire point of why I look at it from a high-level perspective: certain GPU families can certainly be grouped together... Sure, you can nitpick individual points of contention and go from there, but at the end of the day you won't get far.

No, it really can't. For the most part, GCN ISAs are compatible with each other, and the same isn't true of most of Nvidia's GPU architectures, where they likely just keep making different or modified instruction encodings ...

Pemalite said:

nVidia's license actually includes access to everything. -Around 15 companies have such a license.

Whether they will use it is another matter entirely.

No it doesn't ... 

It makes NO sense for Nvidia to just purposefully hamstring themselves to not include a superior design if they have access to it ... 

I'll just assume that Nvidia doesn't have access to ARM's in-house designs. A big corporation with lots of money, but they can't even hire some CPU designers to salvage their designs, and that's why their CPU team is garbage. If AMD/Intel's worst was Bulldozer/NetBurst, then Nvidia consistently keeps pulling out designs at that level, and THAT'S saying something ... (I'm not even kidding you how bad NV's CPU designs are)

Pemalite said:

7nm will be less ambitious than their 10nm process and the team handling 7nm has hit every milestone on time.

Let's hope so after 4 years of delays with 10nm ... 

Pemalite said:

Grand Theft Auto 5 is certainly maintained.

World of WarCraft is certainly maintained.

A benchmark suite should present data that shows how the hardware performs in every single scenario, not just cherry-picked cases that show the hardware in the best possible light.

GTA V's graphics code in particular is NOT maintained, while World of WarCraft's graphics code is maintained ... (a la WoW now having a D3D12 backend)

GTA V is not deserving of being benchmarked again, in comparison to WoW, IMO. It's a bad idea to keep using outdated benchmarks, so it's important for benchmarks to FIT the hardware. Even on Nvidia it's a bad idea to use old benchmarks ... (doing things the old way, which really works AGAINST the new hardware, isn't fair)

Do we take CPU benchmarks seriously if they include MMX instructions, despite the fact that said MMX instructions are effectively deprecated? (Intel and AMD make their MMX implementations slower with each new CPU generation, and in fact compilers now EMULATE IT)

Pemalite said:

To be frank... I will believe it when I see it. People have been proclaiming AMD's return to being competitive in the GPU landscape for years, but other than price... They haven't really put up much of a fight.

In saying that, I honestly hope it happens... I really do. The PC is at its best when AMD is at its best and taking the fight to nVidia and Intel; it brings prices down, innovation happens rapidly... The consumer wins.

I understand your skepticism ... 



Good news, the wait is almost over. AMD is going to reveal all the details for Navi at E3. I'm guessing MS will have no choice but to announce something on their day, and Sony will have to do some kind of show around that date. So, less than a month until we have the juicy details.



It takes genuine talent to see greatness in yourself despite your absence of genuine talent.

EricHiggin said:
Trumpstyle said:

I'm unsure if you mean me, but I will buy the PS5 on day one or close to that. I have a PS4 Pro, so the PS5 is a much bigger jump than 33% for me. The people who own an Xbox One X are a small group, probably about 2% of total console owners, and even those owners will have no choice but to get the PS5, as I suspect that Sony will have improved versions of all their exclusive games giving them 4K/60fps. That gives them a big advantage against Microsoft.

You simply need to realize Moore's law is DEAD and the jump from 16nm to 7nm is smaller than 28nm to 16nm; we are actually a bit lucky that Sony/Microsoft even managed to double the flop numbers, if I'm correct.

Would PS possibly try a PS4 + Pro price arrangement right off the bat next gen?

PS5 - 8 core Ryzen - 8.3TF Navi - checkerboard 4k/60 - $399

PS5 Pro - 8 core Ryzen - 12.9TF Navi - native 4k/60 - $499

Nah, I think there will only be one PS5, no PS5 Pro. Microsoft probably just assumed that Sony will release a $400 console and thought the best strategy against it is a $300 + a $500 console.

Need to clarify the 4K/60fps I wrote: what I meant is that current Sony exclusive games will be patched to run at their current 4K implementation at 60fps, so God of War and Horizon Zero Dawn will be 4K checkerboard and Spider-Man 1500p upscaled to 4K. This is on PS5.



6x master league achiever in starcraft2

Beaten Sigrun on God of war mode

Beaten DOOM ultra-nightmare with NO endless ammo-rune, 2x super shotgun and no decoys on ps4 pro.

1-0 against Grubby in Wc3 frozen throne ladder!!