Pemalite said:

Three console generations is a bit of a stretch... Either way my stance is we can only base things on the information we have today, not future hypotheticals.

Thus the statement that consoles will drive AMD's GPU performance/efficiency on PC is a little optimistic when there are four consoles on the market today with AMD hardware and the needle hasn't really shifted in AMD's favor.

Not to say that AMD hasn't gained a slight boost out of it with development pipelines, but it just means sweet bugger all in the grand scheme of things... Which is certainly not a good thing.
I would like AMD to be toe-to-toe with nVidia; that is when innovation is at its best and prices are at their lowest.

Don't worry about it, changes are happening now ...

Pemalite said:

You can't really compare TEV to your standard shader model... TEV is more in line with nVidia's register combiners.

The Gamecube/Wii had vertex shaders... But the transform and lighting engine was semi-programmable too; I would even argue more programmable than AMD's Morpheus parts back in the day.

In saying that... It's not actually an ATI designed part, it is an ArtX designed part so certain approaches will deviate.
For such a tiny chip, she could pull her weight, especially when you start doing multiple passes and leverage its texturing capabilities.

Sadly, due to its lack of overall performance relative to the Xbox 360, the effects did have to be pared back rather substantially.

And I am not denying that the Xbox 360 and Playstation 3 came with their own bunch of goodies... The fact that the Xbox 360, for example, has tessellation is one of them.

TEV (24 instructions) is comparable to the original Xbox's "pixel shaders" (12 instructions), which were shader model 1.1. There's no solid definition of a shader anyway. The ATI Flipper most certainly did not have vertex shaders according to emulator developers ... (the vertex pipeline was 100% fixed function)

Doing "multiple passes" is not something to be proud of and is actively frowned upon by many developers since it cuts rendering performance by a big factor ... 
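To put rough numbers on that multipass cost, here's a toy Python model. The resolution is the GameCube's 640x528 EFB; the three-pass count is hypothetical, and the model assumes purely fill-rate-bound rendering:

```python
# Toy model: each full-scene pass re-shades every covered pixel, so fill
# cost scales roughly linearly with pass count (assumes fill-rate-bound work).
def pixels_shaded(width: int, height: int, passes: int) -> int:
    """Total pixels pushed per frame for the given number of passes."""
    return width * height * passes

single = pixels_shaded(640, 528, 1)  # one pass over the framebuffer
triple = pixels_shaded(640, 528, 3)  # same scene drawn three times

print(triple // single)  # -> 3, i.e. triple the pixels pushed per frame
```

Real workloads aren't perfectly fill-bound, but it shows why extra passes eat into the frame budget so quickly.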

Performance was one issue, but the Flipper didn't have the feature set to cope either ...

Pemalite said:

Don't think I will ever be able to agree with you on this point, not with the work I did on R500/R600 hardware.


And like I said before... Adreno was based upon AMD's desktop GPU efforts, so it will obviously draw similarities with parallel GPU releases for other markets... But Xenos is certainly based upon R500 with features taken from R600... Which just means the Adreno was based on the R500 as well.

Well, he seems to want an Adreno 2XX GPU for reverse engineering the X360's alpha-to-coverage behaviour, and he's a developer of the Xbox 360 emulator specializing in the GPU ... (he seems to be convinced that the X360 is closely related to the Adreno 2XX) 

The one console part that's truly based on the R600 is the Wii U's 'Latte' graphics chip, in which case looking at open source drivers did actually help Wii U graphics emulation ...

Pemalite said:

When I say "wasn't too bad" I didn't mean industry leading or perfect. They just weren't absolutely shite.

Obviously AMD has always pushed its DirectX capabilities harder than OpenGL... Even going back to the Radeon 9000 vs Geforce FX war with Half Life 2 (DirectX) vs Doom 3 (OpenGL).

OpenGL wasn't too bad on the later Terascale parts.

I mean... Take Wolfenstein via OpenGL on id Tech... AMD pretty much dominated nVidia on this title with its Terascale parts.
Granted, it's an engine/game that favored AMD hardware, but the fact that it was OpenGL is an interesting aspect.

https://www.anandtech.com/show/4061/amds-radeon-hd-6970-radeon-hd-6950/22

By the time the benchmark was taken, it was a SIX(!) year old game. Let's try something a little newer like Wolfenstein: The New Order ... 

An R9 290 was SLOWER than a GTX 760! (OpenGL was horrendous for AMD pre-GCN, but even on GCN OpenGL is still bad) 

Pemalite said:

I think you are nitpicking a little too much.

Because even with successive updates to Graphics Core Next there are some deviations in various instructions, features and other aspects related to the ISA.
They aren't 1:1 with each other.

Same holds true for nVidia.

But from an overall design principle, Fermi and Kepler are related, just like Maxwell and Pascal.

On Kepler, they outright deprecated an ENTIRE SET of surface memory instructions compared to Fermi. Even on GCN, from gen 1 to gen 2, they removed a total of 4 instructions at the LOWEST LEVEL, but since the consoles are GCN gen 2, AMD doesn't have to worry about future software breaking compatibility with GCN gen 1 hardware. On the Vega ISA, they removed a grand total of 3 instructions ... 

Just consider this for a moment: PTX is just an intermediary, while the GCN docs are real low-level details. Despite GCN docs being actual assembly, Nvidia manages to somehow change more at the higher level than AMD does at the low level, so there's no telling what other sweeping changes Nvidia has applied at the low level ... 

I highly doubt Fermi and Kepler are related, at least to the degree the GCN generations are ... 

With Maxwell or Pascal that's a big maybe since reverse engineering a copy of Super Mario Odyssey revealed that there's a Pascal(!) codepath for NVN's compute engine so there may yet be an upgrade path for the Switch ... (no way in hell are they going to upgrade to either Volta or Turing though since Nvidia removed Maxwell specific instructions) 

Also, I forgot to note, but the reason Nvidia doesn't license ARM's designs is that they want to save money ... (all of Nvidia's CPU designs suck hard)

Pemalite said:

End of the day... The little Geforce 1030 I have is still happily playing games from the DOS era... Compatibility is fine on nVidia hardware, developers don't tend to target for specific hardware most of the time anyway on PC.

nVidia can afford more employees than AMD anyway, so the complaint on that aspect is moot.

As for OpenGL, that is being deprecated in favor of Vulkan anyway for next gen. (Should have happened this gen, but I digress.)

@Bold Is it truly ? AMD deprecated their Mantle API today, so what is stopping Nvidia from doing the same with GPU-accelerated PhysX, which has failed to be standardized ? Eventually, Nvidia will find it is not sensible to maintain it, so that becomes a feature that's lost FOREVER ... 

As for OpenGL being deprecated, I doubt it because the other industries (content creation/professional/scientific) aren't moving fast enough in comparison to game development so unless AMD offers technical assistance for them, they'll be crippled at the mercy of AMD's OpenGL stack ... 

Pemalite said:

DDR3-1600 on a 64-bit bus = 12.8GB/s.
DDR4-3200 on a 64-bit bus = 25.6GB/s.
DDR5-6400 on a 64-bit bus = 51.2GB/s.

DDR3-1600 on a 128-bit bus = 25.6GB/s.
DDR4-3200 on a 128-bit bus = 51.2GB/s.
DDR5-6400 on a 128-bit bus = 102.4GB/s.

DDR3-1600 on a 256-bit bus = 51.2GB/s.
DDR4-3200 on a 256-bit bus = 102.4GB/s.
DDR5-6400 on a 256-bit bus = 204.8GB/s.

I wouldn't be so bold as to assume that motherboards will start coming out with 512-bit buses to drive 400GB/s+ of bandwidth. That would be prohibitively expensive.
The Xbox One X is running with a 384-bit bus... but that is a "premium" console... Even then that would mean...

DDR3-1600 on a 384-bit bus = 76.8GB/s.
DDR4-3200 on a 384-bit bus = 153.6GB/s.
DDR5-6400 on a 384-bit bus = 307.2GB/s.

That's an expensive implementation for only 307GB/s of bandwidth, considering that ends up being less than the Xbox One X... And no way is that approaching Geforce 1080 levels of performance.
AMD would need to make some dramatic leaps in efficiency to get there... And while we are still shackled to Graphics Core Next... it likely isn't happening anytime soon.

You would simply be better off using GDDR6 on a 256-bit bus.
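For reference, every figure in the tables above comes from the same formula: peak bandwidth = transfer rate (MT/s) x bus width (bits) / 8. A quick Python sketch reproducing them (nominal peak numbers only, not sustained bandwidth):

```python
# Peak theoretical DRAM bandwidth: transfers/sec * bus width in bits / 8 bits per byte.
def peak_bandwidth_gbs(mt_per_s: int, bus_width_bits: int) -> float:
    """Nominal peak bandwidth in GB/s (1 GB = 10^9 bytes)."""
    return mt_per_s * 1e6 * bus_width_bits / 8 / 1e9

for name, rate in [("DDR3-1600", 1600), ("DDR4-3200", 3200), ("DDR5-6400", 6400)]:
    for bus in (64, 128, 256, 384):
        print(f"{name} on a {bus}-bit bus = {peak_bandwidth_gbs(rate, bus):.1f} GB/s")
```

The same formula covers the graphics memory cases too, e.g. GDDR6 at 14 Gbps per pin on a 256-bit bus works out to 448 GB/s.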

Seeing as how Threadripper's platform was designed around a quad-channel memory controller (and EPYC around octa-channel), there's no reason to rule out a high-end APU either ...

If bandwidth is an issue then AMD could opt to make special boards that are presoldered with APUs and GDDR5/6 memory modules like the Subor-Z ...

There's nothing preventing AMD from getting 1080 levels of performance, as above, in a smaller, cheaper, and more efficient form factor ...

Pemalite said:

They stuffed up with 10nm... And a lot of work has had to go into making that line usable.
However, the team working on 7nm at Intel hasn't had the setbacks that the 10nm team has... And for good reason.

I remain cautiously optimistic though.

Every time Intel has delayed 10nm, it has been met with delays on 7nm as well, so I doubt Intel can simply scrap their previous work and start anew ... 

I don't trust Intel to actually deliver on their manufacturing roadmap ... 

Pemalite said:

We should if said games are the most popular games in history that are actively played by millions of gamers.

Those individuals who are upgrading their hardware probably want to know how it will handle their favorite game... And considering we aren't at a point yet where low-end parts or IGPs are capable of driving GTA5 at 1080P ultra... Well. There is still relevance to including it in a benchmark suite.

Plus it gives a good representation of how the hardware handles older APIs/workloads.

Careful, Minecraft is the most popular PC game ever but I doubt that'd be a benchmark ... 

What people are looking for from a current-day benchmark suite is not popularity; they expect reasonably modern titles, with pathological cases making up less than 5% ... 

If GPU designers drop native support for older APIs (Glide) and testers had to use a translation layer (emulator), would that somehow be a good representation of how the hardware handles work at all ?

Pemalite said:

Good to hear. Fully expected though. But doesn't really change the landscape much.

It poses quite a few ramifications though, since many of the other pieces are falling into place for AMD's immediate future, and the Apex engine was a good candidate for a Vulkan renderer when technically high-end game franchises such as Just Cause, Mad Max, and Rage are featured on it ... 

Frostbite 3, Northlight, Nitrous, Asura, Snowdrop, Glacier 2, Dawn, Serious 3, Source 2, Foundation, Total War, 4A, RE and many other internal engines are changing the playing field (DX12/Vulkan) for AMD but now it's time to cut the wire (Creation/AnvilNEXT/Dunia/IW) and finally pull the plug (UE4/Unity) once and for all ... 

Only a few offending engines are left, but they'll drop one by one soon enough ... (engines changing are going to show up in the benchmark suites)