Pemalite said: There is a massive need. The fact that AMD is a generation or two behind nVidia is a testament to that very fact.
No, it's not. It only reflects the fact that AMD has devoted very few engineering resources to the last two generational GCN updates. It doesn't necessarily mean there are major inherent flaws in the GCN architecture. What do you propose? A switch back to a VLIW architecture?
Out of curiosity, what is the "multitude of bottlenecks" in GCN that needs a revolutionary architecture and can't be overcome by evolutionary, generational updates to GCN?
Pemalite said: It's always an issue.
No, it's not. Evidently, neither AMD's nor nVidia's architectures exhibit any significant inefficiency from being unable to feed the CUs with rasterized pixels in a 16-CU configuration at 1080p, thanks to parallelization. Why would that suddenly become a problem for 64 CUs at 4K? Sure, go wide enough and eventually you run out of work to parallelize, but there is no empirical evidence we've reached that point due to screen-space issues.
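To put a number on the scaling argument, here's a quick back-of-the-envelope check (my own arithmetic, not from the thread): 4K has exactly four times the pixels of 1080p, and 64 CUs is four times 16, so the amount of screen-space work per CU is unchanged between the two configurations.

```python
# Pixels of screen-space work per CU for the two configurations
# discussed above. Resolutions are standard 1080p and 4K UHD.
configs = {
    "16 CUs @ 1080p": (1920 * 1080, 16),
    "64 CUs @ 4K":    (3840 * 2160, 64),
}

for name, (pixels, cus) in configs.items():
    print(f"{name}: {pixels // cus} pixels per CU")
```

Both lines print the same per-CU pixel count, which is why scaling CUs in proportion to resolution shouldn't, by itself, create a new screen-space feeding problem.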
Last edited by Straffaren666 - on 16 March 2019