fatslob-:O said:
Pemalite said:
Well, technically, the design is already "done" with AMD's 28nm Oland core; unfortunately it still doesn't get as cheap as the 40nm Cedar and Caicos chips, but that will change eventually... A respin with the GDDR5 memory controller removed might be a possibility? Then again, Cedar and Caicos have DDR2, DDR3 and GDDR5 controllers, so it can't be that expensive transistor-wise.
The Xbox One though... has a relatively monolithic ~5 billion transistor chip, so they would probably save cash by doing process shrinks aggressively. It would be a completely different story if the chip were only ~200 million transistors.
AMD tends to only shop at Global Foundries and TSMC. - Should be interesting to see the ramifications of Apple buying up capacity from TSMC in regards to nVidia's and AMD's 20nm GPU lines.
|
Although the design may be finished, you still need to port it to a newer process node... (That costs money.)
Actually, I don't believe that cost reductions will come from manufacturing the chips with smaller feature sizes, as we're forced to use double patterning, which sharply increases the wafer cost. The only way I can see a cost reduction for an XB1 unit is that a smaller chip comes with lower power consumption, so a cheaper power supply and fan can drive down costs. But those gains are marginal at best, and could be counteracted by the fact that it can potentially be more expensive to produce the chip on a newer process node...
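The trade-off above (smaller die per wafer vs. pricier double-patterned wafers, plus yield) can be sketched with back-of-the-envelope arithmetic. All numbers below are made up purely for illustration; real wafer prices and yields are confidential.

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Classic gross-dies approximation for square-ish dies on a round wafer."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def cost_per_die(wafer_cost, wafer_diameter_mm, die_area_mm2, yield_rate):
    """Wafer cost divided over the dies that actually work."""
    good_dies = dies_per_wafer(wafer_diameter_mm, die_area_mm2) * yield_rate
    return wafer_cost / good_dies

# Hypothetical 28nm baseline: $5,000 wafer, ~360 mm^2 console-class SoC, 80% yield.
old_node = cost_per_die(5000, 300, 360, 0.80)

# Hypothetical shrink with double patterning: wafer ~1.6x pricier,
# die area shrinks to ~0.6x, early yield lower at 70%.
new_node = cost_per_die(8000, 300, 360 * 0.6, 0.70)

print(f"old node: ${old_node:.0f}/die, new node: ${new_node:.0f}/die")
```

With these made-up inputs the shrunk die comes out slightly *more* expensive per good die, which is exactly the point: extra mask steps and immature yield can eat the area savings.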
I think Apple will eventually have to switch to Samsung, because Samsung's 14nm FinFETs offer better chip scaling than TSMC's 16nm FinFETs. So I believe Nvidia can pick up a lot of TSMC's fab capacity easily, and the same goes for AMD, though AMD might consider Global Foundries for their GPUs in the future, since those guys licensed Samsung's 14nm node, which could give AMD leverage over Nvidia in the chip-scaling department...
|
Double patterning was only supposed to be an interim solution. Intel has used it in the past for parts of its 65nm and 45nm chips, and TSMC used it for parts of its 32nm lithography too.
Intel also used it lightly at 22nm and 14nm.
Intel started investigating the use of EUV back in 2003, which could have driven things down to 10-7nm without the need for double or quad patterning. Unfortunately I haven't heard anything more on that front in years, so I assume it's been abandoned for any number of possible reasons.
There are also different patterning types: for example, spacer-based double patterning, which is often used in NAND; double, triple, quadruple and beyond patterning; dual-tone; self-aligned; double expose; double etch; and more. All have different pros and cons, as well as costs.
But as things stand right now, double patterning is still relatively economically feasible; the foundries have done extensive research into it for years, and consumer products have been using it.
However, it's probably only economically feasible up to a point. (I.E. not for a tiny, low-profit chip.)
I agree that Apple will probably have to go back to Samsung eventually, or take advantage of some spare capacity from Intel, who are looking to expand into producing chips for other companies.
However, TSMC still has a few tricks up its sleeve, and Apple can easily buy some I.P. from IBM to improve its chips. (Resonant clock mesh, anyone? Or just buy IBM outright...)
To be honest, Apple probably has more important things on its hands, like upgrading the pathetic amount of RAM in its phones, which affects performance.
However, we probably won't see a shift during the next GPU cycle, but the one after; too much effort has probably gone into getting designs working with TSMC's fabs. Or things could stay status quo, who knows, it's all speculation at this point. :P