Pemalite said:

Bandwidth is insanely important... Especially for Graphics Core Next and especially at higher resolutions.
Graphics Core Next being a highly compute orientated architecture generally cannot get enough bandwidth.

In saying that... There is a point of diminishing returns... Despite the fact that Vega 7 increased bandwidth by 112% and compute by 9%... Performance only jumped by a modest 30-40% depending on game... So the "sweet spot" in terms of bandwidth is likely between Vega 64 and Vega 7. Maybe 768GB/s?

Vega 7's inherent architectural limitations tend to stem not from Compute or Bandwidth though... So when you overclock the RAM by an additional 20% (1.2TB/s!) you might only get a couple of percentage points of performance... But bolstering core clock will net an almost linear increase, so it's not bandwidth starved by any measure.

Radeon VII had far better sustained boost clocks than the Vega 64 did. A Radeon VII could reach a maximum of 2GHz while a Vega 64 topped out at around 1.7GHz when both were OC'd, so I imagine there was at least a 20% uplift in compute performance compared to the Vega 64. The Radeon VII probably doesn't need 1 TB/s since it's a gaming GPU. The only way I can rationalize why the Radeon VII has as much bandwidth as it does is that it's meant to be competitive in machine learning applications against top-end hardware, nearly all of which sports HBM memory one way or another ... (also, the Radeon VII was closer to 20-30% faster than the Vega 64 rather than 30-40%, because by the time the Radeon VII released the Vega 64 was already marginally ahead of the 1080)

The Vega 64 was an increase in performance over the Fury X despite regressing in memory bandwidth, so I don't think the Radeon VII needs 1 TB/s when around 640 GB/s could probably do the job just as effectively and give the chip nearly the same performance uplift ...
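
As a rough sanity check, here's a tiny back-of-the-envelope sketch (host-only code, no kernels; the board specs are approximate numbers from memory and the 640 GB/s Radeon VII is purely hypothetical) comparing how much bandwidth each card has per unit of FP32 compute. Even cut down to 640 GB/s, the Radeon VII would still carry more bytes per FLOP than the Vega 64 that already outperformed the Fury X:

```cuda
// Back-of-the-envelope bytes-per-FLOP comparison (host-only, compiles as plain C++).
// Specs are approximate reference-board numbers quoted from memory; treat them as
// assumptions, not official figures.
#include <cstdio>

struct Gpu {
    const char* name;
    double shaders;    // stream processors
    double clock_ghz;  // typical boost clock
    double bw_gbs;     // memory bandwidth in GB/s
};

int main() {
    const Gpu gpus[] = {
        {"Fury X",                4096, 1.05,  512.0},
        {"Vega 64",               4096, 1.55,  484.0},
        {"Radeon VII",            3840, 1.75, 1024.0},
        {"Radeon VII @ 640 GB/s", 3840, 1.75,  640.0},  // hypothetical cut-down card
    };
    for (const Gpu& g : gpus) {
        double tflops = g.shaders * 2.0 * g.clock_ghz / 1000.0;  // FMA = 2 FLOPs per clock per lane
        double bytes_per_flop = g.bw_gbs / (tflops * 1000.0);    // GB/s divided by GFLOP/s
        std::printf("%-24s %5.1f TFLOPS  %6.1f GB/s  %.3f bytes/FLOP\n",
                    g.name, tflops, g.bw_gbs, bytes_per_flop);
    }
    return 0;
}
```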

In fact, I don't think I've ever seen a benchmark where the Radeon VII ended up being 2x faster than the Vega 64 ... 

Pemalite said:

Not to mention rolling out a version of DirectX 12 for Windows 7.


EA has proven to be pretty flexible though. They worked with AMD to introduce Mantle... Which was a white elephant... AMD eventually gave up on it... And then Khronos used it for Vulkan for better or worse.

In short though, without a doubt nVidia does get more support in engines on the PC side of the equation over AMD... Despite the fact AMD has had its hardware in the majority of consoles over the last few generations. (Wii, Wii U, Xbox 360, Xbox One, PlayStation 4.)

Part of that is nVidia's collaboration with developers... Which has been a thing for decades.

ATI did start meeting nVidia head-on back in the R300 days though... Hence the battle lines between Doom 3 and Half-Life 2, but nothing of that level of competitiveness has been seen since.

They didn't take advantage of this during the last generation though. The Wii used an overclocked Flipper GPU, which was arguably a DX7/8-feature-set design, and according to an emulator developer the X360's GPU is exactly like the Adreno 2XX(!) rather than either ATI's R500 or R600 ...

AMD only really started taking advantage of low-level GPU optimizations during this generation ...

Pemalite said:

nVidia can also afford to spend more time and effort on upkeep.

Both AMD and nVidia's drivers are more complex than some older Windows/Linux Kernels.

Far more so for Nvidia than for AMD, because AMD very quickly stops updating architectures that are too dissimilar from its current ones ... (this is why OpenGL support sucks for pre-GCN GPUs like the HD 5000/6000 series)

Nvidia, on the other hand, even managed to release WDDM 2.x/DX12 drivers for Fermi ...

Pemalite said:

Actually I do! But it's not as extensive as you portray it to be.
I.E. Pascal and Maxwell share a significant amount of similarities from top to bottom... Kepler and Fermi could be grouped together also. Turing is a significant deviation from prior architectures, but shares a few similarities with Volta.

Even then AMD isn't as clean cut either... They have GCN 1.0, 2.0, 3.0, 4.0, 5.0 and soon 6.0.

With Pascal and Maxwell, just the other day I heard from Switch emulator developers that their shaders were NOT compatible between the two and that emulating half-precision instructions on Pascal broke things. I VERY much doubt you can group Kepler with Fermi either, because Fermi doesn't even have bindless texture handles or support for subgroup operations ...
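
To give a concrete feel for that Fermi/Kepler gap on the CUDA side (a minimal sketch of the general feature, not any emulator's actual code): bindless texture objects (cudaTextureObject_t) only exist from Kepler (compute capability 3.0) onward, while Fermi is stuck with the old globally-bound texture reference model, so something as basic as this simply can't target sm_2x:

```cuda
// Minimal bindless-texture sketch: cudaTextureObject_t is a Kepler-era (sm_30+)
// feature, so this file cannot even be built for Fermi (sm_2x), which only has
// the old globally-bound texture reference model.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void sample(cudaTextureObject_t tex, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = tex1Dfetch<float>(tex, i);  // read through the bindless handle
}

int main() {
    const int n = 8;
    float h[n];
    for (int i = 0; i < n; ++i) h[i] = float(i);

    float *d_in = nullptr, *d_out = nullptr;
    cudaMalloc(&d_in, n * sizeof(float));
    cudaMalloc(&d_out, n * sizeof(float));
    cudaMemcpy(d_in, h, n * sizeof(float), cudaMemcpyHostToDevice);

    // Describe the memory backing the texture: a plain linear buffer of floats.
    cudaResourceDesc res{};
    res.resType = cudaResourceTypeLinear;
    res.res.linear.devPtr = d_in;
    res.res.linear.desc = cudaCreateChannelDesc<float>();
    res.res.linear.sizeInBytes = n * sizeof(float);

    cudaTextureDesc td{};
    td.readMode = cudaReadModeElementType;

    cudaTextureObject_t tex = 0;                      // the "bindless handle" itself
    cudaCreateTextureObject(&tex, &res, &td, nullptr);

    sample<<<1, 32>>>(tex, d_out, n);
    cudaMemcpy(h, d_out, n * sizeof(float), cudaMemcpyDeviceToHost);
    for (int i = 0; i < n; ++i) std::printf("%.1f ", h[i]);
    std::printf("\n");

    cudaDestroyTextureObject(tex);
    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```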

Things were worse on the CUDA side, where Nvidia publicly decided to deprecate the practice known as "warp-synchronous programming" starting with Volta, and this led to real-world breakage in applications that relied on previous hardware behaviour. Even with their OWN APIs and their humongous intermediate representation (PTX intrinsics) backend, Nvidia CAN'T even promise that their sample code or features will actually be compatible with future versions of the CUDA SDK!
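
For anyone unfamiliar, this is roughly the idiom that broke (a generic sketch of the pattern, not any specific vendor sample): pre-Volta code commonly assumed the 32 threads of a warp ran in lockstep and skipped synchronization inside intra-warp reductions, and Volta's independent thread scheduling turned that assumption into a data race, forcing a move to the explicit *_sync intrinsics:

```cuda
// Sketch of the "warp-synchronous programming" breakage described above.
// Pre-Volta code often assumed the 32 threads of a warp execute in lockstep and
// skipped synchronization inside intra-warp reductions; Volta's independent
// thread scheduling broke that assumption, and the fix is the *_sync intrinsics.
#include <cstdio>
#include <cuda_runtime.h>

// Legacy pattern (shown for contrast only, not called below): no barriers at all,
// relying purely on implicit lockstep execution within the warp. This is a data
// race on Volta and later.
__device__ float warp_reduce_legacy(volatile float* s, int lane) {
    if (lane < 16) s[lane] += s[lane + 16];
    if (lane <  8) s[lane] += s[lane +  8];
    if (lane <  4) s[lane] += s[lane +  4];
    if (lane <  2) s[lane] += s[lane +  2];
    if (lane <  1) s[lane] += s[lane +  1];
    return s[0];
}

// Volta-safe pattern: every step names its participating lanes explicitly.
__device__ float warp_reduce_sync(float v) {
    for (int offset = 16; offset > 0; offset >>= 1)
        v += __shfl_down_sync(0xffffffffu, v, offset);
    return v;  // lane 0 ends up holding the full warp sum
}

__global__ void sum32(const float* in, float* out) {
    int lane = threadIdx.x;  // one warp per block in this sketch
    float total = warp_reduce_sync(in[lane]);
    if (lane == 0) *out = total;
}

int main() {
    float h[32], result = 0.0f;
    for (int i = 0; i < 32; ++i) h[i] = 1.0f;

    float *d_in = nullptr, *d_out = nullptr;
    cudaMalloc(&d_in, sizeof(h));
    cudaMalloc(&d_out, sizeof(float));
    cudaMemcpy(d_in, h, sizeof(h), cudaMemcpyHostToDevice);

    sum32<<<1, 32>>>(d_in, d_out);
    cudaMemcpy(&result, d_out, sizeof(float), cudaMemcpyDeviceToHost);
    std::printf("warp sum = %.1f (expected 32.0)\n", result);

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```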

At least with AMD and their GCN iterations, developers won't have to worry about application breakage no matter how tiny AMD's driver teams may be ... 

Pemalite said:

Back before this re-badging... Performance used to increase at a frantically rapid rate even on the same node.

Think about it this way ... 

If AMD weren't burdened with maintaining their software stack, such as their drivers, they could instead be using those resources SOLELY to improve their GCN implementations, much like how Intel has been evolving x86 for over 40 years!

Pemalite said:

nVidia is an ARM licensee. They can use ARM's design instead of Denver... From there they really aren't going to be that different from any other ARM manufacturer that uses vanilla ARM cores.

For mobile your point about power is relevant, but for a fixed console... Not so much. You have orders of magnitude more TDP to play with.
An 8-core ARM SoC with a Geforce 1060 would give an Xbox One X with its 8-core Jaguars a run for its money.

They could, but there's no point since ARM's own designs are far too low-power/low-performance for Nvidia's tastes, so they practically have to design their own "high performance" ARM cores just like every other licensee, especially if they want to compete in home consoles. And Nvidia's CPU designs are trash that compiler backend writers have to work around ...

I doubt Nvidia would be able to offer backwards compatibility either, which is another hard requirement ...

Pemalite said:

Your claim doesn't hold water. nVidia increased margins by only 4.9%, but revenues still shot up far more.

nVidia is diversifying... Which you alluded to... Their Console and PC gaming customer base isn't really growing, hence that's where they are seeing the bulk of their gains.
nVidia certainly does have a future, they aren't going anywhere soon... They have Billions in their war chest.

Nvidia's newest report seems to paint a much darker picture than the one from over 6 months ago, so their growth is hardly organic ...

Plus Nvidia spent nearly $7B to defend Mellanox from an Intel takeover just to protect their own cloud/data center business LOL ... 

Nvidia's acquisition of Mellanox is at the mercy of Chinese regulators as well, just like Qualcomm's attempted acquisition of NXP. If the deal falls apart (likely because of China), what other 'friends' does Nvidia have to fall back on? And what happens if AMD or Intel get more ambitious with their APUs and start targeting GTX 1080 levels of graphics performance? (possible with DDR5 and 7nm EUV)

Pemalite said:

I don't think even good drivers could actually solve the issues some of their IGPs have had... Especially parts like the x3000/x3100 of old.

Their Haswell-and-up lineup is fine. Certainly, older Intel parts had tons of hardware issues, but that's well in the past, so all they need now is good drivers ...

Pemalite said:

Xe has me excited. Legit. But I am remaining optimistically cautious... Because all their other claims to fame in regards to Graphics and Gaming... Have always resulted in a product that was stupidly underwhelming or ended up cancelled.


But like I said... If any company has the potential, it's certainly Intel.

Meh, I'm not as optimistic as you are unless they use another foundry to manufacture Xe, because I don't trust that they'll actually launch 10nm ...

Pemalite said:

Well... It was a game built for 7th gen hardware first and foremost.
However... Considering it's one of the best-selling games in history... Is played by millions of gamers around the world... And is actually still pretty demanding even at 4K, it's a relevant game to add to any benchmark in my opinion.

It's one data point though; you do need others in a benchmark "suite" so you can get a comprehensive idea of how a part performs in newer and older titles, for better or worse.

It being specifically built for last-generation hardware is exactly why we should dump it ...

Crysis is relatively demanding even for today's hardware, but no sane benchmark suite will include it because of its flaw of relying heavily on single-threaded performance ...
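
A toy illustration of why that matters for benchmarking (host-only sketch with made-up numbers, assuming the CPU and GPU portions of a frame overlap): once a single serial CPU thread dominates the frame, even a GPU that's twice as fast barely moves the result, so the numbers stop telling you much about the GPU at all:

```cuda
// Host-only toy model (made-up numbers, compiles as plain C++): frame time when a
// fixed serial CPU cost overlaps with the GPU's work. Once the CPU thread dominates,
// a faster GPU stops changing the result.
#include <cstdio>

int main() {
    const double cpu_ms = 12.0;  // assumed serial game/driver thread cost per frame
    const double gpu_ms = 16.0;  // assumed GPU cost per frame on the baseline card
    for (double speedup = 1.0; speedup <= 2.01; speedup += 0.5) {
        double gpu = gpu_ms / speedup;
        double frame = (cpu_ms > gpu) ? cpu_ms : gpu;  // pipeline is limited by the slower side
        std::printf("GPU %.1fx faster -> %.1f ms/frame (%.0f fps)\n",
                    speedup, frame, 1000.0 / frame);
    }
    return 0;
}
```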

"Demanding" is not a sign of technical excellence like we saw with ARK Survival Evolved. A benchmark suite should be designed to represent the workload demands of current generation AAA game graphics, not last generation AAA game graphics ...