Pemalite said:
7nm is expensive because it can't be used for large complex chips, it's reserved for NAND until they improve the situation.
Nope? You need to conform to an increasingly complex set of design rules when targeting smaller nodes, along with more advanced tools, processes, and IP. There is no way around the roughly 500 man-years needed to design a mid-range SoC on 7nm, no matter how cheap the foundries make the silicon itself. I'm not making this up, by the way; you can search for sources on anything I'm saying. So it cannot and will not change, since we can mathematically project where we will end up on more advanced nodes. And those projections are best-case scenarios in themselves, since the foundries don't want to look bad on their own roadmaps, do they?
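To see why the design cost dominates regardless of wafer pricing, here's a back-of-envelope sketch. The 500 man-years figure is from the post above; the fully loaded cost per engineer-year is an assumed placeholder, not a sourced number.

```python
# Rough NRE (non-recurring engineering) estimate for a 7nm mid-range SoC.
# man_years comes from the claim above; cost_per_man_year is an
# assumed illustrative figure, not real industry data.

man_years = 500
cost_per_man_year = 300_000  # USD, assumed placeholder

design_nre = man_years * cost_per_man_year
print(f"Design NRE: ${design_nre:,}")  # $150,000,000

# Even with cheap wafers, that fixed cost has to be amortized over volume:
for units in (1_000_000, 10_000_000):
    print(f"NRE per chip at {units:,} units: ${design_nre / units:,.2f}")
```

The point of the sketch: the per-chip design overhead only becomes small at very high volumes, which is why cheap-per-mm² silicon doesn't make 7nm cheap for mid-range designs.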
Now, on to architectural considerations. AMD didn't quite manage to double performance per watt. Fiji is about 50% more efficient than the Tahiti chips, and that's factoring in the HBM. Maybe if you compare the worst-reviewed first batch of Tahiti chips against an R9 Nano (which was itself an act of desperation to look good on power efficiency, selling an underclocked 8.9B-transistor chip to take on mini GTX 970s) you can sort of claim that power efficiency has doubled. The 28nm GCN chips were themselves terrible on power efficiency, beating the 40nm Evergreen chips by less than 50% in most instances. In fact, 28nm only did what it was supposed to do and doubled 40nm efficiency with the more recent architectures (GCN 1.2 and Maxwell), a statement on how early GCN and Kepler sort of sucked. Again, I'm not making anything up; the search engines are your friend here.
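The distinction between "50% more efficient" and "doubled efficiency" is just a ratio of perf-per-watt figures. A minimal sketch, using made-up placeholder numbers (not measured figures for any real card):

```python
# Illustrative perf-per-watt comparison. The performance and power
# values below are assumed placeholders, not benchmark data.

def perf_per_watt(perf: float, watts: float) -> float:
    """Performance (arbitrary units) divided by board power in watts."""
    return perf / watts

# Hypothetical older chip vs. newer chip at the same board power.
old_eff = perf_per_watt(100.0, 250.0)  # 0.40 units/W
new_eff = perf_per_watt(150.0, 250.0)  # 0.60 units/W

improvement = new_eff / old_eff - 1.0
print(f"Efficiency improvement: {improvement:.0%}")  # 50%, not a doubling
```

A doubling would require the ratio to hit 2.0×; a 1.5× ratio is only a 50% gain, which is the gap the post is pointing at.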