Pemalite said: Bandwidth is insanely important... Especially for Graphics Core Next and especially at higher resolutions. |
Radeon VII had far better sustained boost clocks than the Vega 64 did. A Radeon VII could reach a maximum of 2 GHz while Vega 64 topped out around 1.7 GHz when both were OC'd, so I imagine there was at least a 20% uplift in compute performance over the Vega 64. The Radeon VII probably doesn't need 1 TB/s since it's a gaming GPU. The only way I can rationalize why the Radeon VII has as much bandwidth as it does is that it's meant to be competitive in machine learning applications against top-end hardware, nearly all of which sports HBM memory modules one way or another ... (also, the Radeon VII was closer to 20-30% faster than the Vega 64 rather than 30-40%, because by the time the Radeon VII released, the Vega 64 was already marginally ahead of the 1080)
Vega 64 was an increase in performance over the Fury X despite regressing in memory bandwidth (484 GB/s vs 512 GB/s), so I don't think the Radeon VII needs 1 TB/s when just 640 GB/s could probably do the job just as effectively and give the chip nearly the same performance uplift ...
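The bandwidth-per-FLOP argument can be sketched with back-of-the-envelope arithmetic from the published board specs (nominal boost clocks; real sustained clocks vary, and the 640 GB/s configuration is purely hypothetical):

```python
# Rough illustration: bytes of memory bandwidth available per FP32 FLOP,
# using published board specs (nominal boost clocks; sustained clocks vary).
def bytes_per_flop(shaders, boost_ghz, bw_gbs):
    # Peak FP32 throughput: shaders * 2 ops per clock (FMA) * clock, in GFLOP/s
    gflops = shaders * 2 * boost_ghz
    return bw_gbs / gflops  # GB/s per GFLOP/s == bytes per FLOP

vega64       = bytes_per_flop(4096, 1.546, 484)   # ~0.038 B/FLOP
radeon_vii   = bytes_per_flop(3840, 1.750, 1024)  # ~0.076 B/FLOP
hypothetical = bytes_per_flop(3840, 1.750, 640)   # ~0.048 B/FLOP (the 640 GB/s case)

print(f"Vega 64:              {vega64:.3f} B/FLOP")
print(f"Radeon VII:           {radeon_vii:.3f} B/FLOP")
print(f"Radeon VII @ 640GB/s: {hypothetical:.3f} B/FLOP")
```

By this crude measure the Radeon VII has roughly twice the bandwidth per FLOP of the Vega 64, and even the hypothetical 640 GB/s configuration would still be better fed than Vega 64 was, which is consistent with the argument above.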
In fact, I don't think I've ever seen a benchmark where the Radeon VII ended up being 2x faster than the Vega 64 ...
Pemalite said:
Not to mention rolling out a version of Direct X 12 for Windows 7.
EA has proven to be pretty flexible though. They worked with AMD to introduce Mantle... Which was a white elephant... AMD eventually gave up on it... And then Khronos used it for Vulkan for better or worse. |
They didn't take advantage of this during the last generation. The Wii used an overclocked Flipper GPU, which was arguably a DX7/8 feature-set design, and according to an emulator developer the X360's GPU is exactly like the Adreno 2XX(!) rather than either of ATI's R500/R600 ...
AMD only really started taking advantage of low level GPU optimizations during this generation ...
Pemalite said:
nVidia can also afford to spend more time and effort on upkeep. |
Far more so for Nvidia than AMD, because AMD very quickly stops updating architectures that are extremely dissimilar from their current ones ... (this is why OpenGL support sucks for pre-GCN GPUs like the HD 5000/6000 series)
To this day, Nvidia still managed to release WDDM 2.x/DX12 drivers for Fermi ...
Pemalite said:
Actually I do! But it's not as extensive as you portray it to be. |
As for Pascal and Maxwell, just the other day I heard from Switch emulator developers that shaders were NOT compatible between the two and that emulating half-precision instructions on Pascal broke things. I VERY much doubt you can group Kepler with Fermi either, because Fermi doesn't even have bindless texture handles or support for subgroup operations ...
Things were worse on the CUDA side, where Nvidia publicly decided to deprecate a feature known as "warp-synchronous programming" on Volta, and this led to real-world breakage in applications that relied on previous hardware behaviour. Even with their OWN APIs and their humongous intermediate representation (PTX intrinsics) backend, Nvidia CAN'T promise that their sample code or features will actually stay compatible with future versions of the CUDA SDK!
At least with AMD and their GCN iterations, developers won't have to worry about application breakage no matter how tiny AMD's driver teams may be ...
Pemalite said:
Back before this re-badging... Performance used to increase at a frantically rapid rate even on the same node. |
Think about it this way ...
If AMD weren't burdened by maintaining their software stack, such as their drivers, they could instead pour those resources SOLELY into improving their GCN implementations, much like how Intel has been evolving x86 for over 40 years!
Pemalite said:
nVidia is an ARM licensee. They can use ARM's design instead of Denver... From there they really aren't going to be that different from any other ARM manufacturer that uses vanilla ARM cores. |
They could, but there's no point since ARM's designs are much too low in power/performance for Nvidia's tastes, so they practically have to design their own "high performance" ARM cores just like every other such licensee, especially if they want to compete in home consoles. And Nvidia's CPU designs are trash that compiler backend writers have to work around ...
I doubt Nvidia will be able to offer backwards compatibility as well which is another hard requirement ...
Pemalite said:
Your claim doesn't hold water. nVidia increased margins by only 4.9%, but revenues still shot up far more. |
Nvidia's newer report seems to paint a much darker picture than it did over 6 months ago, so their growth is hardly organic ...
Plus Nvidia spent nearly $7B to defend Mellanox from an Intel takeover just to protect their own cloud/data center business LOL ...
Nvidia's acquisition of Mellanox is at the mercy of Chinese regulators as well, just like Qualcomm's attempted acquisition of NXP. If the deal falls apart (likely because of China), what other 'friends' does Nvidia have to fall back on? What happens if AMD or Intel get more ambitious with their APUs and start targeting GTX 1080 levels of graphics performance? (possible with DDR5 and 7nm EUV)
Pemalite said:
I don't think even good drivers could actually solve the issues some of their IGP's have had... Especially parts like the x3000/x3100 from old. |
Their Haswell-and-up lineup is fine. Certainly, older Intel parts had tons of hardware issues, but that's well in the past, so all they need now is good drivers ...
Pemalite said:
Xe has me excited. Legit. But I am remaining optimistically cautious... Because just like with all their other claims to fame in regards to Graphics and Gaming... Has always resulted in a product that was stupidly underwhelming or ended up cancelled. |
Meh, I'm not as optimistic as you are unless they use another foundry to manufacture Xe, because I don't trust that they'll actually launch 10nm ...
Pemalite said:
Well... It was a game built for 7th gen hardware first and foremost. |
It being specifically built for last generation is exactly why we should dump it ...
Crysis is relatively demanding even for today's hardware, but no sane benchmark suite will include it because of its flaw of relying heavily on single-threaded performance ...
"Demanding" is not a sign of technical excellence, as we saw with ARK: Survival Evolved. A benchmark suite should be designed to represent the workload demands of current-generation AAA game graphics, not last-generation AAA game graphics ...