Pemalite said: Three console generations is a bit of a stretch... Either way my stance is we can only base things on the information we have today, not future hypotheticals. Thus the statement that consoles will drive AMD's GPU performance/efficiency on PC is a little optimistic when there are four consoles on the market today with AMD hardware and the needle hasn't really shifted in AMD's favor. |
Don't worry about it, changes are happening now ...
Pemalite said:
You can't really compare TEV to your standard shader model... TEV has more in common with nVidia's register combiners. |
TEV (24 instructions) is comparable to the original Xbox's "pixel shaders" (12 instructions), which were Shader Model 1.1. There's no solid definition of a shader anyway. The ATI Flipper most certainly did not have vertex shaders according to emulator developers ... (the vertex pipeline was 100% fixed function)
Doing "multiple passes" is not something to be proud of and is actively frowned upon by many developers since it cuts rendering performance by a big factor ...
Performance was one issue but the Flipper didn't have the feature set either to cope with ...
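As a back-of-the-envelope sketch of why multipass hurts (the millisecond figures here are hypothetical, purely to illustrate the scaling, not measured numbers):

```python
# Toy cost model: each extra pass re-submits the geometry and re-touches
# the framebuffer, so frame cost grows roughly linearly with pass count.
def frame_cost_ms(passes, geometry_ms, fill_ms):
    return passes * (geometry_ms + fill_ms)

print(frame_cost_ms(1, 4.0, 6.0))  # single pass: 10.0 ms (hypothetical timings)
print(frame_cost_ms(3, 4.0, 6.0))  # same effect in 3 passes: 30.0 ms, ~3x slower
```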
Pemalite said:
Don't think I will ever be able to agree with you on this point, not with the work I did on R500/R600 hardware. |
Well, he seems to want an Adreno 2XX GPU for reverse engineering the X360's alpha-to-coverage behaviour, and he's a developer of an Xbox 360 emulator specializing in the GPU ... (he seems convinced that the X360 is closely related to the Adreno 2XX)
The one console part that's truly based on R600 was the Wii U's 'Latte' graphics chip, in which case looking at open-source drivers actually did help the Wii U's graphics emulation ...
Pemalite said:
When I say "wasn't to bad" I didn't mean industry leading or perfect. They just weren't absolutely shite. OpenGL wasn't to bad on the later Terascale parts. |
By the time the benchmark was taken, it was a SIX(!)-year-old game. Let's try something a little newer, like Wolfenstein: The New Order ...
An R9 290 was SLOWER than a GTX 760! (OpenGL was horrendous for AMD on pre-GCN hardware, but even on GCN it's still bad)
Pemalite said:
I think you are nitpicking a little too much. Because even with successive updates to Graphics Core Next there are some deviations in various instructions, features and other aspects related to the ISA. |
On Kepler, Nvidia outright deprecated an ENTIRE SET of surface memory instructions compared to Fermi. Even on GCN, going from gen 1 to gen 2, AMD removed a total of 4 instructions at the LOWEST LEVEL, but since the consoles are GCN gen 2, AMD doesn't have to worry about future software breaking compatibility with GCN gen 1 hardware. On the Vega ISA, they removed a grand total of 3 instructions ...
Just consider this for a moment: PTX is only an intermediate representation, while the GCN docs describe real low-level details. Even though the GCN docs are actual assembly, Nvidia somehow manages to change more at that higher level than AMD does at the lowest level, so there's no telling what other sweeping changes Nvidia has applied down below ...
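To make that compatibility argument concrete, here's a minimal sketch; the mnemonics are hypothetical placeholders, not the actual instructions AMD removed:

```python
# Hypothetical ISA snapshots (placeholder mnemonics, not real GCN opcodes).
GCN_GEN1 = {"V_MUL_F32", "V_ADD_F32", "V_LEGACY_OP_A", "V_LEGACY_OP_B"}
GCN_GEN2 = {"V_MUL_F32", "V_ADD_F32"}  # legacy ops dropped between generations

def binary_runs_on(instructions_used, target_isa):
    # A precompiled binary only runs if every instruction it uses still exists.
    return instructions_used <= target_isa

console_binary = {"V_MUL_F32", "V_ADD_F32"}     # targets the gen 2 baseline
legacy_binary = {"V_MUL_F32", "V_LEGACY_OP_A"}  # relies on a removed op

print(binary_runs_on(console_binary, GCN_GEN2))  # True  - safe going forward
print(binary_runs_on(legacy_binary, GCN_GEN2))   # False - breaks on gen 2
```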
I highly doubt Fermi and Kepler are related, at least not to the degree that the GCN generations are ...
With Maxwell or Pascal that's a big maybe, since reverse engineering a copy of Super Mario Odyssey revealed that there's a Pascal(!) codepath for NVN's compute engine, so there may yet be an upgrade path for the Switch ... (no way in hell are they going to upgrade to either Volta or Turing though, since Nvidia removed Maxwell-specific instructions)
Also, I forgot to note this, but the reason Nvidia doesn't license ARM's designs is that they want to save money ... (all of Nvidia's in-house CPU designs suck hard)
Pemalite said:
End of the day... The little GeForce 1030 I have is still happily playing games from the DOS era... Compatibility is fine on nVidia hardware; developers don't tend to target specific hardware most of the time on PC anyway. nVidia can afford more employees than AMD, so the complaint on that aspect is moot. |
@Bold Is it truly? AMD deprecated their Mantle API today, so what is stopping Nvidia from doing the same with GPU-accelerated PhysX, which has failed to be standardized? Eventually, Nvidia will find it isn't sensible to maintain it, and that becomes a feature that's lost FOREVER ...
As for OpenGL being deprecated, I doubt it, because the other industries (content creation/professional/scientific) aren't moving as fast as game development, so unless AMD offers them technical assistance, they'll be left crippled at the mercy of AMD's OpenGL stack ...
Pemalite said:
DDR3 1600MHz on a 64-bit bus = 12.8GB/s. DDR3 1600MHz on a 256-bit bus = 51.2GB/s. |
Seeing as how Threadripper sits on a platform designed around an octa-channel memory controller, there's no reason to rule out a high-end APU either ...
If bandwidth is an issue, then AMD could opt to make special boards with presoldered APUs and GDDR5/6 memory modules, like the Subor-Z ... (a rough bandwidth calculation below)
There's nothing preventing AMD from delivering GTX 1080 levels of performance, as above, in a smaller, cheaper, and more efficient form factor ...
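As a sanity check of the numbers quoted above, a minimal sketch of the peak-bandwidth arithmetic (the GDDR5 line is a hypothetical Subor-Z-like configuration, not a confirmed spec):

```python
# Peak bandwidth (GB/s) = transfer rate (MT/s) x bus width (bytes) / 1000
def peak_bandwidth_gbs(transfer_rate_mts, bus_width_bits):
    return transfer_rate_mts * (bus_width_bits / 8) / 1000

print(peak_bandwidth_gbs(1600, 64))   # DDR3-1600,  64-bit bus ->  12.8 GB/s
print(peak_bandwidth_gbs(1600, 256))  # DDR3-1600, 256-bit bus ->  51.2 GB/s
print(peak_bandwidth_gbs(7000, 256))  # hypothetical GDDR5 @ 7 Gbps, 256-bit -> 224.0 GB/s
```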
Pemalite said:
They stuffed up with 10nm... And a lot of work has had to go into making that line usable. |
Every time Intel has delayed 10nm, 7nm slipped as well, so I doubt Intel could just scrap their previous work and start anew ...
I don't trust Intel to actually deliver on their manufacturing roadmap ...
Pemalite said:
We should if said games are the most popular games in history that are actively played by millions of gamers. |
Careful, Minecraft is the most popular PC game ever, but I doubt it'd make for a good benchmark ...
What people are looking for in a current-day benchmark suite isn't popularity; they expect reasonably modern titles, with pathological cases kept to less than 5% of the suite ...
If GPU designers dropped native support for older APIs (Glide) and testers had to use a translation layer (an emulator), would that somehow be a good representation of how the hardware handles work at all?
Pemalite said:
Good to hear. Fully expected though. But doesn't really change the landscape much. |
It does pose quite a few ramifications though, since many of the other pieces are falling into place for AMD's immediate future, and the Apex engine was a good candidate for a Vulkan renderer given that technically high-end game franchises such as Just Cause, Mad Max, and Rage are featured on it ...
Frostbite 3, Northlight, Nitrous, Asura, Snowdrop, Glacier 2, Dawn, Serious 3, Source 2, Foundation, Total War, 4A, RE, and many other internal engines are changing the playing field (DX12/Vulkan) for AMD, but now it's time to cut the wire (Creation/AnvilNEXT/Dunia/IW) and finally pull the plug (UE4/Unity) once and for all ...
Only several offending engines are left, but they'll drop one by one soon enough ... (the engine changes are going to show up in the benchmark suites)