Pemalite said:
haxxiy said:

The upcoming 7nm nodes are "only" about 3 times better than 14-16nm, and they are very, very expensive. Anyone who can't afford to pay literally hundreds of millions of dollars to design a chip is going to stay put. Do not expect more than one or two consoles, and one or two PC GPUs, five years from now or so.

Consoles could be 40-60% better than a GTX 1080 even at their current power budgets. But yeah, if they are as big and power-hungry as the fat PS3/X360 SKUs, they could maybe double a GTX 1080.

Of course, everybody could play it safe and stay at 10nm. You never know.

7nm is expensive because it can't yet be used for large, complex chips; it's reserved for NAND until they improve the situation.
Thus that "3 times better" figure can and will change.

However... nVidia and AMD consistently do well, throwing out faster and faster GPUs every year, even on the same node. E.g. AMD managed to more than double performance between the Radeon 7970 and Fury; both were 28nm, and both were considered monolithic, low-yield, expensive chips.

Thus... even if we were to be stuck on 16/14nm for the next 4 years, expect more than double the performance, as AMD and nVidia are both entering this feature size conservatively with their initial batch of processors.

Nope? You need to conform to an increasingly complex set of rules when designing smaller chips, and use more advanced tools, processes, and IP. There is no way around the roughly 500 man-years needed to design a mid-range SoC on 7nm, no matter how cheap the foundries make these pieces of silicon out to be. I'm not making this up, by the way; you can search for the sources of anything I'm saying. So it cannot and will not change, since we can mathematically predict where we will end up on more advanced nodes. And those predictions are best-case scenarios in themselves, since the foundries don't want to look bad against their own roadmaps, do they?
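To put that 500 man-year figure in rough dollar terms, here's a minimal back-of-envelope sketch; the fully loaded cost per engineer-year is my own assumed placeholder, not a sourced number:

```python
# Back-of-envelope design-cost estimate for a mid-range 7nm SoC.
# The 500 man-year figure is from the post above; the per-engineer-year
# cost is an assumed, fully loaded placeholder (salary + tools + overhead).
man_years = 500
cost_per_engineer_year = 300_000  # USD, assumed

design_cost = man_years * cost_per_engineer_year
print(f"Estimated design cost: ${design_cost / 1e6:.0f}M")  # ~$150M
```

Which lands in the same "hundreds of millions of dollars" ballpark quoted earlier, and that's before masks, tape-outs, and IP licensing.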

Now, on to considerations of architecture. AMD didn't quite manage to double the performance. Fiji is about 50% more efficient than the Tahiti chips, and that's factoring in the HBM. Maybe if you compare the worst-reviewed first batch of Tahiti chips with an R9 Nano (which itself was an effort of desperation to look good on power efficiency, selling an underclocked 8.9B-transistor chip to take on mini GTX 970s) you can sort of claim the power efficiency has doubled. The 28nm GCN chips were themselves terrible on power efficiency, beating the 40nm Evergreen chips by less than 50% in most instances. In fact, 28nm only did what it was supposed to do and doubled 40nm on efficiency with the more recent architectures (GCN 1.2 and Maxwell), a statement on how early GCN and Kepler sort of sucked. Again, I'm not making any of this up; the search engines are your friend here.
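If you want to sanity-check the efficiency claim yourself, here's a minimal sketch of the perf-per-watt arithmetic. The relative gaming performance and board power numbers are illustrative assumptions roughly in line with launch-era reviews, not sourced benchmarks; swap in your own figures:

```python
# Rough perf-per-watt comparison: Tahiti (HD 7970) vs Fiji (Fury X / R9 Nano).
# Relative gaming performance and board power are assumed, illustrative values.
cards = {
    # name: (relative gaming performance vs HD 7970, board power in watts)
    "HD 7970 (Tahiti)": (1.00, 250),
    "Fury X (Fiji)":    (1.65, 275),  # assumed ~65% faster in games
    "R9 Nano (Fiji)":   (1.50, 175),  # assumed underclocked, lower power
}

base_perf, base_power = cards["HD 7970 (Tahiti)"]
base_efficiency = base_perf / base_power

for name, (perf, power) in cards.items():
    efficiency = perf / power
    print(f"{name}: {efficiency / base_efficiency:.2f}x the perf/W of the HD 7970")
```

With those assumed inputs the Fury X comes out around 1.5x the efficiency of a 7970 and only the Nano approaches 2x, which is the point being made above: the doubling only appears when you cherry-pick the underclocked part.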