| Pemalite said: One of the largest Achilles heels of Graphics Core Next is its need for bandwidth... Hence why AMD outfitted Vega 7 with 1 TB/s of HBM. |
Meh, I wouldn't worry about the architecture a whole lot. In the long run games are becoming more optimized for GCN than ever before, and even Nvidia, by their own admission with Turing, can probably attribute a good amount of the bloat in its silicon to features that GCN already had. There are lots of things Nvidia changed with Turing to be more on par with GCN's feature set, such as its more flexible memory model (not even Volta has this), access to barycentric coordinates within pixel shaders, async compute, and the scalar unit, so it's no big loss that next-gen consoles are going with GCN again when the indirect competition (Nvidia) is taking a similar route ...
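For anyone who hasn't run into async compute, here's a minimal sketch of the concept using CUDA streams as a stand-in (GCN exposes this through multiple hardware queues instead; the kernel names and workloads below are hypothetical, just to show the overlap):

```cuda
// Minimal sketch of the async compute idea, assuming a CUDA-capable GPU:
// two independent kernels are issued on separate streams so the scheduler
// is free to overlap them instead of running them back to back. The
// kernel names and workloads are made up purely for illustration.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void shade(float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = out[i] * 0.5f + 1.0f;   // stand-in for graphics work
}

__global__ void simulate(float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = out[i] * out[i] + 1.0f; // stand-in for compute work
}

int main() {
    const int n = 1 << 20;
    float *a, *b;
    cudaMalloc(&a, n * sizeof(float));
    cudaMalloc(&b, n * sizeof(float));
    cudaMemset(a, 0, n * sizeof(float));
    cudaMemset(b, 0, n * sizeof(float));

    cudaStream_t gfx, compute;
    cudaStreamCreate(&gfx);
    cudaStreamCreate(&compute);

    // Different streams = no implied ordering between the two launches,
    // so they may run concurrently if the GPU has resources to spare.
    shade<<<(n + 255) / 256, 256, 0, gfx>>>(a, n);
    simulate<<<(n + 255) / 256, 256, 0, compute>>>(b, n);

    cudaStreamSynchronize(gfx);
    cudaStreamSynchronize(compute);

    cudaStreamDestroy(gfx);
    cudaStreamDestroy(compute);
    cudaFree(a);
    cudaFree(b);
    printf("both kernels finished\n");
    return 0;
}
```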
There are still some things Turing doesn't have compared to GCN, like rapid packed math or shader-specified stencil values ...
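Rapid packed math is essentially two FP16 operations squeezed into each 32-bit lane per instruction. Here's a rough sketch of that idea using CUDA's __half2 intrinsics as an analogue (compile for sm_53 or newer; the kernel is purely illustrative, not GCN's actual encoding):

```cuda
// Rough sketch of the packed-math concept: two FP16 values share one
// 32-bit register and a single FMA instruction operates on both lanes.
#include <cuda_fp16.h>
#include <cuda_runtime.h>
#include <cstdio>

__global__ void fma_packed(const __half2* a, const __half2* b,
                           const __half2* c, __half2* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = __hfma2(a[i], b[i], c[i]); // two FP16 FMAs per op
}

int main() {
    const int n = 1024; // n packed pairs = 2n FP16 values
    __half2 *a, *b, *c, *out;
    cudaMallocManaged(&a, n * sizeof(__half2));
    cudaMallocManaged(&b, n * sizeof(__half2));
    cudaMallocManaged(&c, n * sizeof(__half2));
    cudaMallocManaged(&out, n * sizeof(__half2));
    for (int i = 0; i < n; ++i) {
        a[i] = __floats2half2_rn(1.5f, 2.5f); // (low, high) lanes
        b[i] = __floats2half2_rn(2.0f, 2.0f);
        c[i] = __floats2half2_rn(0.5f, 0.5f);
    }

    fma_packed<<<(n + 255) / 256, 256>>>(a, b, c, out, n);
    cudaDeviceSynchronize();

    // Expect (3.5, 5.5): both lanes were computed by the same instruction.
    printf("out[0] = (%f, %f)\n", __low2float(out[0]), __high2float(out[0]));

    cudaFree(a); cudaFree(b); cudaFree(c); cudaFree(out);
    return 0;
}
```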
Don't worry about the consoles, since developers on those platforms seem to have an easier time matching Nvidia's equivalent theoretical performance. I don't think performance will be a concern, because it's the developer's job to figure out the fast/slow paths of the hardware they're working on. Sony could very well do some amazing things with a Radeon VII at hand, because they have better tools than what's available on PC ...
Reaching RTX 2080 Ti levels of performance isn't all that far-fetched, depending on the fast/slow paths each piece of hardware is hitting relative to the other ...
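Just to put numbers on the theoretical side (napkin math only; the shader counts are the public specs, the boost clocks are approximate):

```cuda
// Host-side napkin math only (no GPU needed): peak FP32 throughput is
// ALU count x 2 ops per FMA x clock. Boost clocks are approximate, so
// treat the output as a rough paper comparison, nothing more.
#include <cstdio>

int main() {
    double tflops_2080ti = 4352 * 2 * 1.545e9 / 1e12; // ~13.4 TFLOPS
    double tflops_vii    = 3840 * 2 * 1.750e9 / 1e12; // ~13.4 TFLOPS
    printf("RTX 2080 Ti ~ %.2f TFLOPS, Radeon VII ~ %.2f TFLOPS\n",
           tflops_2080ti, tflops_vii);
    return 0;
}
```

On paper the two land within a rounding error of each other, so whether a console part actually gets there comes down to those fast/slow paths and the tools, not raw throughput.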