vivster said:
Thanks for the clarification, that's about what I thought.
What about tasks with high RAM requirements and less processing like video editing and stuff? I'm assuming a high end card with lots of RAM but an application that needs even more RAM. Could that be a benefit?
|
For professionals, sure... I could see some massive benefits. There are scenarios where you can never have enough RAM.
Just like how AMD combining an SSD with their Radeons has massive benefits for some professional users.
But for gaming, I don't see much benefit.
I still want to know exactly how AMD has achieved this, though, but that's the enthusiast in me talking.
Alby_da_Wolf said:
If they bothered doing it, they're probably considering making a base-model Vega aimed at those who want more than Polaris but not the true highest end, and they'll probably use it in high-end Ryzen-Vega APUs too. About its cost in resources and money: MMUs used to be separate chips before the 486 and the 68040, then they became small enough compared to other units like ALUs and FPUs to be included with them in a single chip, and over time their share of the total chip size became smaller and smaller, particularly in high-end CPUs with L1 and large L2 on-chip caches and high-end GPUs with many compute units.
|
APUs aren't really the target audience, I don't think... The same issue applies there as with low-end GPUs anyway... Their performance is such garbage that just 1GB-2GB of RAM is enough.
Plus, with APUs you can change the amount of memory allocated to them with the push of a button anyway.
The same thing you mention with Memory Management Units applies to other components too... L2 cache used to be a separate chip, a lot of the North Bridge components on motherboards got moved onto the CPU... x86-64 support cost a significant amount of die space when it was first introduced on the Athlon 64; today it's almost negligible to even mention.
Cache can take advantage of fabrication scaling... But if you are adding a cache into a GPU that is only useful in edge-case scenarios when memory starts to run low... Then it's probably not a very good cache to have unless it's extremely cost-effective to implement; otherwise you would be better off spending those transistors on something else that benefits the entire GPU.
But... That also doesn't mean anything at this stage anyway; we don't actually have any idea how AMD is achieving this... We are just speculating.