
Forums - PC Discussion - AMD’s Vega 10 High-Bandwidth Cache Controller aims to improve performance by almost 100% in minimum FPS

Pemalite said:

God dammit. Why can't they explain how they achieved what they did with this cache? I want details, not claims.
What kind of cache is it? Is it using existing DRAM on the GPU? A proper high-speed cache on the GPU itself? Is it powered by combustible kittens from outer space?

***

Its use, however, is going to be extremely limited.

Low-end GPUs already have more memory than they could ever possibly hope to use...
The mid-range could see some benefit, especially when manufacturers sell a version of a GPU with less memory to save on costs. (Like the Radeon RX 480.)

The high end typically has enough memory to make such a feature worthless anyway, with a few edge-case exceptions like Fury and Fury X, but that was because they used such a new memory technology. (HBM/stacked memory connected via an interposer.)

If this "cache" takes up transistors and thus drives up costs, and its benefit is only in low-memory situations, then I would rather they didn't bother and just threw more GCN pipelines in.

Thanks for the clarification, that's about what I thought.

What about tasks with high RAM requirements and less processing like video editing and stuff? I'm assuming a high end card with lots of RAM but an application that needs even more RAM. Could that be a benefit?



If you demand respect or gratitude for your volunteer work, you're doing volunteering wrong.

vivster said:

If they had to limit the VRAM to show the effect I'm gonna assume it will have little to no effect with a good amount of RAM.

edit: yeah it already says it in the text.

So in what kind of fringe cases will this work? GPUs should run into the computing limit way faster than the memory limit in games. So it's probably good for video editing and stuff?

Actually, it is a feature that will (I hope) be good for most GPU tasks, 4K gaming especially.



Pemalite said:

God dammit. Why can't they explain how they achieved what they did with this cache? I want details, not claims.
What kind of cache is it? Is it using existing DRAM on the GPU? A proper high-speed cache on the GPU itself? Is it powered by combustible kittens from outer space?

***

Its use, however, is going to be extremely limited.

Low-end GPUs already have more memory than they could ever possibly hope to use...
The mid-range could see some benefit, especially when manufacturers sell a version of a GPU with less memory to save on costs. (Like the Radeon RX 480.)

The high end typically has enough memory to make such a feature worthless anyway, with a few edge-case exceptions like Fury and Fury X, but that was because they used such a new memory technology. (HBM/stacked memory connected via an interposer.)

If this "cache" takes up transistors and thus drives up costs, and its benefit is only in low-memory situations, then I would rather they didn't bother and just threw more GCN pipelines in.

If they bothered doing it, they're probably planning a base-model Vega aimed at those who want more than Polaris but not the true high end, and they'll probably use it in high-end Ryzen-Vega APUs too.
About its cost in resources and money: MMUs used to be separate chips before the 486 and the 68040. Then they became small enough, compared to other units like ALUs and FPUs, to be included with them on a single chip, and over time their share of the total die became smaller and smaller, particularly in high-end CPUs with L1 and large L2 on-chip caches and high-end GPUs with many compute units.



Stwike him, Centuwion. Stwike him vewy wuffly! (Pontius Pilate, "Life of Brian")
A fart without stink is like a sky without stars.
TGS, Third Grade Shooter: brand new genre invented by Kevin Butler exclusively for Natal WiiToo Kinect. PEW! PEW-PEW-PEW! 


vivster said:

Thanks for the clarification, that's about what I thought.

What about tasks with high RAM requirements and less processing like video editing and stuff? I'm assuming a high end card with lots of RAM but an application that needs even more RAM. Could that be a benefit?

For professionals, sure... I could see some massive benefits. There are scenarios where you can never have enough RAM.
Just like how AMD pairing an SSD with their Radeons has massive benefits for some professional users.

But for gaming, I don't see much benefit.

I still wish to know more and know exactly how AMD has achieved this though, but that's the enthusiast in me talking.

Alby_da_Wolf said:

Probably if they bothered doing it, they're considering to make a base model Vega aimed to those that want more than Polaris, but not the true highest end, and probably they'll use it in high-end Ryzen-Vega APUs too.
About its cost in resources and money, MMUs used to be separated chips before the 486 and the 68040, then they became small enough compared to other units like ALUs and FPUs to be included with them in a single chip, and with time their share of the total chip size became smaller and smaller, particularly in the high-end CPUs with L1 and large L2 on-chip caches and high-end GPUs with many computing units.

APUs aren't really the target audience, I don't think... The same issue applies there as with low-end GPUs anyway... Their performance is such garbage that just 1GB-2GB of RAM is enough.
Plus, with APUs you can change the amount of memory you allocate to them at the push of a button anyway.

The same thing you mention with Memory Management Units applies to other components... L2 cache used to be a separate chip, the north bridge on motherboards had a lot of components moved onto the CPU... x86-64 support used to cost a significant amount of die space when it was first introduced on the Athlon 64; today it's almost negligible to even mention.

Cache can take advantage of economies of scale in fabrication... But if you are adding a cache into a GPU that is only useful in edge-case scenarios when memory starts to get low... then it's probably not a very good cache to have unless it's extremely cost-effective to implement; otherwise you would be better off spending those transistors on something else that benefits the entire GPU.
But... That also doesn't mean anything at this stage anyway, we don't actually have any idea how AMD is achieving this... We are just postulating.
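None of us know how AMD's controller actually works, but the "only matters when memory runs low" point can be made concrete with a toy model (plain Python; every name here is made up for illustration and has nothing to do with the real hardware): treat VRAM as a fixed-size LRU cache of pages over a larger working set. When the working set fits in VRAM, every access after the first pass is a hit and the cache machinery sits idle; once it doesn't fit, hit rate rather than raw compute is what drives performance.

```python
from collections import OrderedDict

class VramPageCache:
    """Toy model: VRAM as an LRU cache of fixed-size pages backed by system RAM."""
    def __init__(self, vram_pages):
        self.vram_pages = vram_pages
        self.resident = OrderedDict()  # page id -> True, kept in LRU order
        self.hits = 0
        self.faults = 0

    def access(self, page):
        if page in self.resident:
            self.hits += 1
            self.resident.move_to_end(page)  # mark as most recently used
        else:
            self.faults += 1  # would trigger a page transfer over PCIe
            if len(self.resident) >= self.vram_pages:
                self.resident.popitem(last=False)  # evict least recently used
            self.resident[page] = True

# Working set smaller than VRAM: only cold misses, then the cache is idle.
small = VramPageCache(vram_pages=8)
for _ in range(10):
    for page in range(4):
        small.access(page)
# small.faults == 4, small.hits == 36

# Working set larger than VRAM: sequential sweeps defeat LRU entirely.
big = VramPageCache(vram_pages=8)
for _ in range(10):
    for page in range(16):
        big.access(page)
# big.faults == 160, big.hits == 0 -- paging, not compute, limits throughput
```

The second case is the pathological one: a 16-page sweep through an 8-page cache misses on every single access, which is why a smarter-than-LRU controller could plausibly help in oversubscribed scenarios and do essentially nothing otherwise.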



--::{PC Gaming Master Race}::--

Chazore said:

[...] According to the red team, this new memory architecture allows Vega GPUs to do a number of exciting new things that its predecessors can’t. [...]

Like competing with Nvidia?

On a more serious note, glad AMD seems to be back on track, some competition will put the prices down for sure...




Looks like an 1800X with an RX Vega is on the cards, folks.



pedromr said:
Chazore said:

[...] According to the red team, this new memory architecture allows Vega GPUs to do a number of exciting new things that its predecessors can’t. [...]

Like competing with Nvidia?

On a more serious note, glad AMD seems to be back on track, some competition will put the prices down for sure...

Not sure if much will change with Vega. I don't think the flagship Vega can reach the 1080 Ti. And I'm sure the Volta Titan will hit long before we hear about a Vega successor. So it seems everything will stay the same as it has been in the past few years.




Well, I have an i7-4790K in my PC, with an R9 390X. I was planning to switch to a Ryzen 7 1800X with a GTX 1080... Looks like I'll be holding onto my 390X a bit longer. Which is fine. My biggest issue was that CPU. I can already run everything at near-max settings at 60 FPS, but when I'm streaming or doing other things, my framerate takes a serious hit, and I cannot stand that.



Pemalite said:

[...]

Alby_da_Wolf said:

If they bothered doing it, they're probably planning a base-model Vega aimed at those who want more than Polaris but not the true high end, and they'll probably use it in high-end Ryzen-Vega APUs too.
About its cost in resources and money: MMUs used to be separate chips before the 486 and the 68040. Then they became small enough, compared to other units like ALUs and FPUs, to be included with them on a single chip, and over time their share of the total die became smaller and smaller, particularly in high-end CPUs with L1 and large L2 on-chip caches and high-end GPUs with many compute units.

APUs aren't really the target audience, I don't think... The same issue applies there as with low-end GPUs anyway... Their performance is such garbage that just 1GB-2GB of RAM is enough.
Plus, with APUs you can change the amount of memory you allocate to them at the push of a button anyway.

The same thing you mention with Memory Management Units applies to other components... L2 cache used to be a separate chip, the north bridge on motherboards had a lot of components moved onto the CPU... x86-64 support used to cost a significant amount of die space when it was first introduced on the Athlon 64; today it's almost negligible to even mention.

Cache can take advantage of economies of scale in fabrication... But if you are adding a cache into a GPU that is only useful in edge-case scenarios when memory starts to get low... then it's probably not a very good cache to have unless it's extremely cost-effective to implement; otherwise you would be better off spending those transistors on something else that benefits the entire GPU.
But... That also doesn't mean anything at this stage anyway, we don't actually have any idea how AMD is achieving this... We are just postulating.

Ooops! Yes, I was just considering the additional functions in the MMU, and not the cache itself, which should use much more silicon.
Anyway, even if they designed the new functions to deliver their best in certain situations, and even if general-purpose caches give their biggest benefits on CPUs, this cache should give some performance improvement even in less ideal cases where its special purposes aren't relevant.


