Pemalite said:

AMD's roadmaps and Anandtech seem to agree that after Navi, AMD will have a next-gen architecture though.
https://www.anandtech.com/show/12233/amd-tech-day-at-ces-2018-roadmap-revealed-with-ryzen-apus-zen-on-12nm-vega-on-7nm

And I honestly can't wait... Just like Terascale got long in the tooth in its twilight years, the same is happening with Graphics Core Next.

Unless you have information that I do not?

That's just speculation on Anandtech's part ... 

AMD never explicitly proclaimed that they'll bring an entirely new GPU architecture or leave GCN. I can see that happening for, say, high-performance compute (GCN3/Vega already have a specialized GCN ISA implementation for that), but I don't think AMD intends to get rid of GCN for gaming ... 

Pemalite said:

Anyone who asserts that the Gamecube was inferior to the Playstation 2 is voicing an opinion not worth its weight... It's pretty much that.

At the end of the day... The proof is in the pudding, Gamecube games in general were a big step up over the Playstation 2 and would trend closer to the Original Xbox in terms of visuals than the Playstation 2.

Metroid on Gamecube showed that the console could handle high poly counts... They even had 3D mesh water ripples... Watch the Digital Foundry video I linked to earlier.
But you are right, the Gamecube was a texturing powerhouse.

@Bold Really? Despite real programmers who have worked on both systems saying otherwise?

I don't deny that the GC has its own advantages, like its memory sub-system or texturing performance, but it sounds like it had some pretty serious design flaws and plenty of its own bottlenecks, so it comes across as nothing more than a texturing machine ... 

Proof is not in the pudding, because quite a few times multiplats came out inferior on the GC in comparison to the PS2. Just because visuals in some exclusives trended toward the Xbox's didn't mean the other things did, like game logic, physics, AI, and alpha effects. A game's technical prowess is MORE than just its textures, and in those aspects the GC was resoundingly weaker than the PS2 ... (most of the time games on GC were of low geometric complexity because of crappy clipping performance)

Metroid looked impressive as an exclusive, but we can't use it for hardware comparisons, so we're just gonna have to deal with Baldur's Gate: Dark Alliance (where the GC very clearly had inferior water physics simulation) or NFS Underground (pared-back alpha effects) ... 

GC was probably only ever good at texturing, so I can imagine why it didn't get many multiplats later on when game logic got more complex ... (Xbox had vertex shaders, and the PS2's VUs are arguably the closest ancestor to something like Turing's mesh shaders)

GC is just overrated hardware IMO ... (can make good looking pixels but pretty shit in other departments)

Pemalite said:

Don't forget the Gamecube and Wii had 1T-SRAM, the Xbox 360 had eDRAM, and the Xbox One had eSRAM.
It's not a prerequisite for doing multiple passes. In fact, deferred renderers tend to rely on it more so.

The Original Xbox wasn't built for that kind of workload though; its architecture was traditional PC at the time.

The GeForce 2 didn't have fully programmable pixel shader pipelines... The point I am trying to convey though is that certain pieces of hardware, even if they lack the intrinsic functionality of a more modern piece of hardware, can still be capable of similar effects using a few tricks of the trade.

That doesn't mean said effects will be employed, however, due to a lack of horsepower or because said effects would blow out the render time budget.
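To be fair, the "tricks of the trade" point is real enough. The textbook example from that era is faking light-mapping by drawing the geometry twice and letting the blend unit multiply the second pass into the framebuffer, no pixel shaders involved. A minimal GL 1.x-style sketch of the idea (the texture handles and the draw_scene_geometry callback are hypothetical, not pulled from any specific game):

```c
/* Classic two-pass modulate trick on fixed-function hardware (GL 1.x style).
   base_tex, lightmap_tex and draw_scene_geometry() are hypothetical. */
#include <GL/gl.h>

void draw_lightmapped_scene(GLuint base_tex, GLuint lightmap_tex,
                            void (*draw_scene_geometry)(void))
{
    /* Pass 1: lay down the base texture and fill the depth buffer. */
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, base_tex);
    glDisable(GL_BLEND);
    glDepthFunc(GL_LESS);
    glDepthMask(GL_TRUE);
    draw_scene_geometry();

    /* Pass 2: redraw the same geometry with the lightmap and multiply it
       into the framebuffer (dst = dst * src). No pixel shader required. */
    glBindTexture(GL_TEXTURE_2D, lightmap_tex);
    glEnable(GL_BLEND);
    glBlendFunc(GL_ZERO, GL_SRC_COLOR);   /* framebuffer *= lightmap */
    glDepthFunc(GL_EQUAL);                /* only touch the visible surface */
    glDepthMask(GL_FALSE);
    draw_scene_geometry();

    /* Restore the state we changed. */
    glDepthMask(GL_TRUE);
    glDepthFunc(GL_LESS);
    glDisable(GL_BLEND);
}
```

The obvious catch is the one in the quote: every extra pass re-runs vertex work and eats fillrate, so whether the trick fits the frame budget is a different question entirely.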

Deferred rendering has other reasons for using on-chip memory buffers, like its fat G-buffer, but still, don't try multipass on anything other than a PS2 ...
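To put a rough number on the "fat" part, here's a back-of-the-envelope sketch in C. The layout below (albedo, normals, material params, depth) is a made-up but fairly typical one, not taken from any particular engine:

```c
/* Back-of-the-envelope G-buffer sizing. The layout is illustrative only. */
#include <stdio.h>

int main(void)
{
    const int width  = 1920;
    const int height = 1080;

    /* Bytes per pixel for each render target in this hypothetical layout. */
    const int albedo_rgba8   = 4;  /* RGBA8:   albedo + AO              */
    const int normal_rgb10a2 = 4;  /* RGB10A2: packed normals           */
    const int material_rgba8 = 4;  /* RGBA8:   roughness/metalness/etc. */
    const int depth_d24s8    = 4;  /* D24S8:   depth + stencil          */

    const int bytes_per_pixel = albedo_rgba8 + normal_rgb10a2
                              + material_rgba8 + depth_d24s8;

    const double mib = (double)width * height * bytes_per_pixel
                     / (1024.0 * 1024.0);

    printf("%d bytes/pixel -> %.1f MiB at 1080p\n", bytes_per_pixel, mib);
    /* Prints: 16 bytes/pixel -> 31.6 MiB at 1080p, i.e. roughly the entire
       32 MB of Xbox One ESRAM before a lighting target is even counted. */
    return 0;
}
```

That's the whole point about "fat" G-buffers: the working set barely fits (or doesn't fit) in the on-chip pool, so deferred engines end up juggling which targets actually live there.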

Original Xbox was a beast. That thing ran the VAST majority of the code meant for either the GC or PS2 better! Original Xbox was undeniably better because it just ran the code and the multiplats faster, but most of all it stood to gain more from custom programming than either the GC or PS2 ... (Xbox's bottlenecks were mostly caused by software rather than its hardware, but it still rocked in multiplats)

GC, on the other hand, was NEVER more powerful than the PS2 and it didn't even match its competitors in terms of feature set at the time, so no way was it ever going to be comparable to the sub-HD twins of the 7th gen. At best, GC had some spotlights like RE4, but more often than not developers didn't like it very much because it just wasn't up to the task of running multiplats that were on the PS2 ... (it's probably why developers found out the hard way that when they tried running code optimized for the VUs on the GC, performance tanked hard)

I'm starting to think that if Nintendo wanted a technical redo of the 6th gen, they probably would've included capabilities similar to a vertex shader along with a DVD drive ... (multiplats, or the lack thereof, on the GC were pretty painful)

Pemalite said:

The die-shots show it to have a lot in common with R700. But R700 has a lot in common with R600 anyway... R700 is based on the R600 design; it's all Terascale.
So I wouldn't be surprised. In saying that, it doesn't really matter at the end of the day.
WiiU was a bust.

Looking at the open source drivers for both the R600 and R700 came in handy for Wii U emulation regardless ... 

Pemalite said:

nVidia won't maintain support for older hardware; they will eventually relegate older parts to life-support.
In fact, it has already started to happen to Fermi. Kepler is next.

https://www.anandtech.com/show/12624/nvidia-moves-fermi-to-legacy-ends-32bit-os-support

I feel like in an era where Moore's Law is coming to an end, there should be better support for older hardware ... 

Apple, for instance, is doing their customers right by giving their devices more updates over their lifetime, in comparison to Android, which just compulsively drops anything older than 2 years ... 

Pemalite said:

Same can be said for Graphics Core Next... There are always smaller tweaks between Graphics Core Next revisions... It's actually expected that whenever a GPU lineup is refreshed, new features and capabilities will be added.


I mean, AMD doubled the ACE units in Bonaire, for example, and introduced primitive discard in Polaris; there is always some deviation.
Hence... the entire point of why I look at it from a high-level perspective: certain GPU families can certainly be grouped together... Sure, you can nit-pick individual points of contention and go from there, but at the end of the day you won't get far.

No, it really can't ... For the most part, GCN ISAs are compatible with each other, and the same isn't true of most of Nvidia's GPU architectures, which seem to keep using different or modified instruction encodings from one generation to the next ...

Pemalite said:

nVidia's license actually includes access to everything. Around 15 companies have such a license.

Whether they will use it is another matter entirely.

No, it doesn't ... 

It makes NO sense for Nvidia to purposefully hamstring themselves by not using a superior design if they had access to it ... 

I'll just assume that Nvidia doesn't have access to ARM's in-house designs. They're a big corporation with lots of money, yet they can't even hire some CPU designers to salvage their designs, which is why their CPU team is garbage. If AMD's and Intel's worst were Bulldozer and NetBurst, then Nvidia consistently keeps putting out designs at that level, and THAT'S saying something ... (I'm not even kidding about how bad NV's CPU designs are)

Pemalite said:

7nm will be less ambitious than their 10nm process and the team handling 7nm has hit every milestone on time.

Let's hope so after 4 years of delays with 10nm ... 

Pemalite said:

Grand Theft Auto 5 is certainly maintained.

World of WarCraft is certainly maintained.

A benchmark suite should present data that shows how the hardware performs in every single scenario. Not just a nitpick to show the hardware in the best possible light.

GTA V's graphics code in particular is NOT maintained, while World of WarCraft's graphics code is maintained ... (as in, WoW now has a D3D12 backend)

GTA V is not deserving of being benchmarked anymore in comparison to WoW, IMO. It's a bad idea to keep using outdated benchmarks, so it's important for benchmarks to FIT the hardware. Even on Nvidia it's a bad idea to use old benchmarks ... (doing things the old way, which actively works AGAINST the new hardware, isn't fair)

Do we take a CPU benchmark seriously if it includes MMX instructions, despite the fact that said MMX instructions are effectively deprecated? (Intel and AMD make their MMX implementations slower with each new CPU generation, and in fact compilers now EMULATE it.)
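For what it's worth, here's a minimal sketch of what that looks like from the intrinsics side - my own toy example, not taken from any real benchmark - with the legacy MMX path next to the SSE2 equivalent that compilers would rather generate, as described above:

```c
/* Toy example: the same packed 16-bit add via legacy MMX and via SSE2.
   Assumes an x86 compiler shipping <mmintrin.h>/<emmintrin.h>. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <mmintrin.h>   /* MMX intrinsics (legacy) */
#include <emmintrin.h>  /* SSE2 intrinsics         */

int main(void)
{
    int16_t out_mmx[4], out_sse[8];

    /* Legacy MMX path: 64-bit packed add, followed by the mandatory
       _mm_empty() to clear the shared x87/MMX register state. */
    __m64 msum = _mm_add_pi16(_mm_set_pi16(4, 3, 2, 1),
                              _mm_set_pi16(40, 30, 20, 10));
    memcpy(out_mmx, &msum, sizeof(out_mmx));
    _mm_empty();

    /* SSE2 path: twice the width, no register-state juggling; this is
       what a modern code generator prefers to emit. */
    __m128i vsum = _mm_add_epi16(_mm_set_epi16(8, 7, 6, 5, 4, 3, 2, 1),
                                 _mm_set_epi16(80, 70, 60, 50, 40, 30, 20, 10));
    _mm_storeu_si128((__m128i *)out_sse, vsum);

    printf("MMX: %d  SSE2: %d\n", out_mmx[0], out_sse[0]); /* 11 and 11 */
    return 0;
}
```

The MMX side drags in the shared x87/MMX register state (hence the _mm_empty()), which is exactly the kind of baggage a benchmark built around it keeps forcing onto modern cores.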

Pemalite said:

To be frank... I will believe it when I see it. People have been proclaiming AMD's return to being competitive in the GPU landscape for years, but other than price... They haven't really put up much of a fight.

In saying that, I honestly hope it happens... I really do. The PC is at its best when AMD is at its best and taking the fight to nVidia and Intel; it brings prices down, innovation happens rapidly... The consumer wins.

I understand your skepticism ...