Pemalite said:

It's a bit of a stretch whether said changes will ever be relevant though. Because by the time they potentially do... AMD may have moved onto its next-gen architecture on the PC. (Graphics Core Next isn't sticking around forever!)

Not so, because AMD intends to keep GCN for the foreseeable future! As long as Sony, Microsoft, Google, and potentially EA as well as other ISVs (Valve wants to keep developing open source drivers for AMD on Linux/DXVK, and Ubisoft is developing its Vulkan backend exclusively for AMD hardware on Stadia) want to keep GCN, then GCN will stay alive ...

AMD's customers are more than just PC gamers ... 

Pemalite said:

History has shown that the Gamecube and Wii were punching around the same level as the original Xbox in terms of visuals.
But anyone who worked with the TEV could actually pull off some interesting effects, many of which could rival the Xbox 360.

https://www.youtube.com/watch?v=RwhS76r0OqE

Take the Geforce 2 for example... Using the register combiners you could pull off some shader effects that the Geforce 4 Ti was doing... And the Geforce 2 was very much a highly fixed-function part.

Just because the hardware isn't a 1:1 match doesn't mean you cannot pull off similar effects, with obviously different performance impacts.

As for doing multiple passes... It depends on the architecture, not all architectures have a big performance impact.

The Gamecube was pretty overrated IMO, since quite a few devs didn't find an improvement whenever they ported code from the PS2. Take for example Baldur's Gate: Dark Alliance and the NFS Underground series, and one of Burnout 3's lead programmers proclaimed that the GC couldn't handle the game because its hardware was inferior to the PS2's ... (I'd be interested in other opinions from programmers within the industry who worked during that generation on how the GC actually fared against the PS2)

I think ERP from Beyond3D might be Brian Fehdrau, but judging by his testing, Flipper's clipping performance was a massive showstopper in its hardware design ... (now I'm definitely convinced that the GC is just overrated after actually hearing from some developers)

I'm starting to get the impression that the GC was only ever good at texturing. It couldn't handle high poly counts, alpha effects, physics, or AI the way the PS2 could ...

Metroid Prime looks great, but you can't really use exclusives as a measuring stick for comparison ... (no way its effects would rival the 360 at all, since that would blow the doors off the GC when even the original Xbox ran nearly all of the same code faster than the GC)

The GeForce 2 at most could approximate pixel shaders, but from the GeForce 3 onwards there was a truly programmable vertex pipeline ...
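The register-combiner point above can be made concrete with a toy model. Each general combiner stage on GeForce 2-class hardware computes, roughly, A·B + C·D per colour channel with a scale and clamp, which is enough to fake per-pixel lighting tricks without real pixel shaders. This Python sketch is purely illustrative (the input values are invented, and this is not real driver code):

```python
# Toy model of one NV_register_combiners general stage (GeForce 2 era).
# Each stage computes A*B + C*D per channel, applies an optional scale
# and clamps to [0, 1] -- enough for effects like per-pixel diffuse
# lighting that would otherwise need a pixel shader.

def combiner_stage(a, b, c, d, scale=1.0):
    """Evaluate (A*B + C*D) * scale, clamped, per RGB channel."""
    return tuple(
        min(max((ai * bi + ci * di) * scale, 0.0), 1.0)
        for ai, bi, ci, di in zip(a, b, c, d)
    )

# Example: modulate a texel by a light colour, then add an ambient term.
texel   = (0.8, 0.6, 0.4)   # sampled diffuse texture (assumed values)
light   = (1.0, 0.9, 0.8)   # interpolated light colour
ambient = (0.1, 0.1, 0.1)
one     = (1.0, 1.0, 1.0)

print(combiner_stage(texel, light, ambient, one))
```

Chaining a couple of stages like this is how NV1x hardware approximated effects that later GPUs expressed as short pixel shader programs.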

The only hardware where it was acceptable to do multiple passes was the PS2, with its stupidly fast eDRAM; elsewhere it'll be a massive performance impact. Doing multiple passes in MGS2 brought even the Xbox down to its knees by comparison to the PS2. Multiple passes were especially a no-go from the 7th gen onwards, even when the 360 had eDRAM, because programmers started realizing that poor cache locality resulted in bad performance on parallel processors ...
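The multipass argument is really just bandwidth arithmetic: each extra pass rereads and rewrites the framebuffer, so tolerance for multipass rendering scales with framebuffer bandwidth. A back-of-envelope sketch (the resolution/pass counts are illustrative; the PS2 GS's eDRAM is commonly quoted at around 48 GB/s):

```python
# Back-of-envelope: framebuffer traffic grows linearly with pass count.
# All figures below are illustrative, not measured.

def multipass_traffic_gb(width, height, bytes_per_pixel, passes, fps):
    """Read+write framebuffer traffic per second, in GB (no overdraw)."""
    pixels = width * height
    # each pass reads and writes every covered pixel once
    per_frame = pixels * bytes_per_pixel * 2 * passes
    return per_frame * fps / 1e9

# 640x448 @ 60 fps, 4 bytes/pixel, 8 layered passes (MGS2-style)
traffic = multipass_traffic_gb(640, 448, 4, 8, 60)
print(f"{traffic:.2f} GB/s of framebuffer traffic")
# Against ~48 GB/s of dedicated eDRAM that's a small fraction; against a
# shared main-memory bus it competes with textures, geometry and the CPU.
```

Which is why the same layering technique that was nearly free on the GS hurt so much on architectures where the framebuffer lived in shared memory.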

Pemalite said:

You aren't getting it.
Adreno is derived from Radeon technology.
Xenos is derived from Radeon technology.

Both are derived from the same technology base of the same era... Obviously there will be architectural similarities; you don't go about reinventing the wheel if certain design philosophies work.

Fact is... At the time, ATI used its desktop Radeon technology as the basis for all other market segments.

As for the Wii U... The general consensus is it's R700 derived with some differences.
https://www.techinsights.com/blog/nintendo-wii-u-teardown
https://forums.anandtech.com/threads/wii-u-gpu-scans-now-up.2299839/
https://www.neogaf.com/threads/wiiu-latte-gpu-die-photo-gpu-feature-set-and-power-analysis.511628/

The Wii U is a mix between the R600 and the R700 according to the leading Wii U emulator developer ...

Pemalite said:

In saying that, nVidia's approach is clearly paying off because nVidia's hardware has been superior to AMD's for gaming for generations.

Maybe so market-wise, but Nvidia won't be able to maintain support for older hardware as easily in the future ...

I don't deny that Nvidia has a sound approach, since quite a few developers are resisting the push for more explicit APIs like DX12 or Vulkan, but it'll come at the cost of bad experiences later on as hardware ages, in comparison to AMD ...

Pemalite said:

Even Anandtech recognizes that Kepler has many of the same underpinnings as Fermi.
https://www.anandtech.com/show/5699/nvidia-geforce-gtx-680-review/2

But that's just the high-level view. If we take a look at a lower level, like say Nvidia's PTX, which is their intermediate representation for their GPUs (AMD's equivalent is AMDIL, IIRC), then the changes are really profound. Fermi only supports binding up to 8 RWTexture2Ds in any given shader stage while Kepler ups this to 64, Fermi doesn't support bindless handles for textures/images, and that's not to mention the changes behind PTX ...
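The binding-model difference can be sketched with a toy resource table (purely illustrative, not real driver behaviour): a Fermi-style shader stage has a small fixed number of slots, while a bindless model just dereferences arbitrary handles. The 8-vs-64 limits mirror the RWTexture2D figures above; everything else here is invented:

```python
# Toy contrast between slot-based binding (Fermi-style: a small fixed
# table per shader stage) and bindless access (Kepler+: the shader
# dereferences an arbitrary handle).

class SlotBinder:
    """A fixed-size per-stage binding table."""
    def __init__(self, max_slots):
        self.max_slots = max_slots
        self.slots = {}

    def bind(self, slot, resource):
        if not 0 <= slot < self.max_slots:
            raise ValueError(f"slot {slot} exceeds limit of {self.max_slots}")
        self.slots[slot] = resource

fermi_stage  = SlotBinder(max_slots=8)
kepler_stage = SlotBinder(max_slots=64)

fermi_stage.bind(7, "uav_texture_a")      # last legal slot on Fermi
kepler_stage.bind(63, "uav_texture_b")    # fine on Kepler
try:
    fermi_stage.bind(8, "uav_texture_c")  # a ninth UAV: impossible on Fermi
except ValueError as e:
    print(e)

# Bindless: no table at all -- a resource is just a handle the shader can
# load from anywhere, so "how many can I bind" stops being a question.
bindless_heap = {0x1001: "uav_texture_a", 0x1002: "uav_texture_b"}
print(bindless_heap[0x1002])
```

That shift from fixed tables to handles is the kind of low-level change that doesn't show up in a block-diagram comparison of Fermi and Kepler.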

Pemalite said:

That isn't it at all... nVidia is a full ARM Architecture licensee...
https://en.wikipedia.org/wiki/Arm_Holdings#Licensees
https://www.anandtech.com/show/7112/the-arm-diaries-part-1-how-arms-business-model-works/3

Nvidia licenses the ISA, not ARM's in-house designs ... 

It's probably one of the biggest reasons why Nvidia's CPUs are still trash in comparison to what ARM offers, but since they're mainly a GPU chip design company, they'll obviously cheap out whenever they can ...

Pemalite said:

nVidia does have more cash and more profits than AMD, so of course.
And I honestly hope PhysX does get deprecated.

Vulkan is slowly replacing OpenGL pretty much across the entire gaming spectrum.
Content Creation/Scientific tasks obviously have different requirements.

It's not good for preservation purposes, especially since Nvidia dropped 3D Vision support ... (it sucks especially when a feature is dropped, because future players won't be able to experience the same things we did)

I don't believe it'll be sustainable in the near future to just deprecate things, especially when investment in software is rising ...

Pemalite said:

Has it really though? Because everything points to the 7nm team hitting its design goals.
https://www.anandtech.com/show/14312/intel-process-technology-roadmap-refined-nodes-specialized-technologies
https://www.anandtech.com/show/13683/intel-euvenabled-7nm-process-tech-is-on-track

2021 with EUVL, 7nm.

They should launch 10nm first before talking about 7nm ... 

I still don't trust Intel's manufacturing group ... 

Pemalite said:

Difference is... Minecraft will run perfectly fine on even the shittiest Intel Integrated Graphics today.

When I look at a benchmark today, it's due to wanting to find out how today's hardware runs today's games... And yes, some of the games I play are going to be a little older... And that is fine.
It's good to find out how newer architectures run older and newer titles... Which is why a benchmark suite usually includes a multitude of titles to cover all bases; it's an extra data point to provide consumers with a more comprehensive idea for their purchasing decisions.

The fact that a benchmark suite includes an older game or two really isn't a relevant complaining point, ignore those numbers if you must, but they are valuable pieces of information for other people.

Plus many games use older APIs. - I mean, you said yourself that you don't think Vulkan will replace OpenGL.

Older games aren't worth benchmarking because their code becomes unmaintained, so they're more of a liability to include than any real useful data point ...

It's not good to use unmaintained software to collect performance data on new hardware, since bottlenecks could be due to the code rather than the hardware itself ... (older code just isn't going to look so great running on a new piece of hardware)
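The code-vs-hardware bottleneck point is really Amdahl-style reasoning: if an old title is limited by a fixed single-threaded CPU cost, a faster GPU barely moves its frame rate. A toy frame-time model (every millisecond figure here is invented for illustration):

```python
# Toy frame-time model: frame time is bounded by whichever of the CPU
# (driver/game code, often single-threaded in older titles) or the GPU
# finishes last. All figures are invented for illustration.

def fps(cpu_ms, gpu_ms):
    return 1000.0 / max(cpu_ms, gpu_ms)

old_game_cpu_ms = 12.0  # unmaintained engine: fixed CPU-side cost

# Doubling GPU speed on a CPU-limited old title changes nothing:
print(fps(old_game_cpu_ms, gpu_ms=8.0))   # GPU A
print(fps(old_game_cpu_ms, gpu_ms=4.0))   # GPU B, twice as fast

# The same two GPUs on a GPU-limited modern title show the real gap:
print(fps(2.0, gpu_ms=8.0))
print(fps(2.0, gpu_ms=4.0))
```

In this model the old game reports identical numbers for both GPUs, which says something about the game's engine, not about the hardware being compared.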

This is why I strongly advocate against using older games but I guess we'll agree to disagree ...

Pemalite said:

In short, with all those titles today, nVidia still holds an overall advantage. Those are the undeniable facts presented by benchmarks from across the entire internet.

It's true Nvidia holds the performance crown, but the tables are turning and the playing field is changing against them, so they're going to be in hostile territory soon ...