Pemalite said: AMD will be required to sacrifice some of the "density" in order to reduce leakage and noise so they can dial up clock rates. |
Yes, and a 3x density improvement should accommodate that.
There's a big difference between designing Navi to be built on the 7nm process node and shrinking Vega 7 to 7nm. Designing an SoC for the 7nm node is very expensive. Vega 7 is a low-volume niche product, and the shrink to 7nm was most likely a bare-minimum effort.
Pemalite said: Graphics Core Next already has a multitude of bottlenecks inherent in its design, which is why it's a compute monster but falters in gaming. What happens if we add more Shader Engines? It reduces the utilization across all the CUs. - There is only so much screen space that you can dynamically allocate with usable work to keep the entire chip busy. - One of Graphics Core Next's largest issues is effective utilization... Which was a much larger issue with the prior Terascale designs, hence why AMD introduced VLIW4 to increase utilization. |
I don't believe AMD needs to make a departure from GCN. None of the current bottlenecks are fundamental flaws in the architecture; they can be overcome by evolutionary updates to the various units.
Load balancing is not a major issue yet. The number of pixels per CU with 16 CUs at 1080p is the same as with 64 CUs at 4K, since 4K has four times the pixels of 1080p and 64 CUs is four times 16. There are no significant inefficiencies when crossing the 16-CU boundary at 1080p, and the same applies when crossing 64 CUs at 4K.
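As a quick sanity check of that arithmetic (a minimal sketch; the resolutions and CU counts are the ones stated above, and pixels_per_cu is just an illustrative helper, not anything from a real driver or API):

```python
# Sanity check of the pixels-per-CU claim above.
def pixels_per_cu(width: int, height: int, cus: int) -> float:
    """Average number of screen pixels each CU is responsible for."""
    return (width * height) / cus

print(pixels_per_cu(1920, 1080, 16))  # 1080p on 16 CUs -> 129600.0
print(pixels_per_cu(3840, 2160, 64))  # 4K on 64 CUs    -> 129600.0
```

Both cases come out to exactly 129,600 pixels per CU, which is why the per-CU workload at the two configurations is identical.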