| Intrinsic said: For consoles, going to a smaller fab process will ALWAYS be cheaper for them. This is simply because, unlike everything else, when a console die shrinks they don't cram in more transistors to make them more powerful. The chip remains identical, just smaller. That simply means that they can make more of those chips per wafer. So where MS/Sony may have been spending $800M for 10M chips before, now they would be spending $800M for 16M or so chips. |
Please keep up.
Shrinking is only cheaper up to a point.
AMD, for instance, could have shrunk its 40nm Caicos (TeraScale 2 based) GPUs down to 28nm without changing a thing, but it didn't. Why? Cost. That part has stayed on the same node for half a decade.
This generation Microsoft seems to be a little more aggressive than the last, so that ~5 billion transistor chip must be stupidly costly. To put that into perspective, the Xbox One and PS4 chips are equivalent to the brand-new GeForce GTX 980 in terms of transistor count, and Nvidia flogs that GPU off for $700-$800 AUD.
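The dies-per-wafer arithmetic behind all of this is easy to sketch. Here's a rough back-of-envelope in Python with made-up illustrative numbers (the wafer cost, die sizes, and the first-order edge-loss correction are my assumptions, not actual MS/Sony or foundry figures); the point is just that a straight shrink yields more gross dies per wafer, so per-die cost falls only if wafer cost doesn't rise to eat the gain:

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Crude gross-die estimate: wafer area / die area, minus a
    common first-order correction for partial dies at the wafer edge."""
    radius = wafer_diameter_mm / 2
    die_side = math.sqrt(die_area_mm2)
    return int(math.pi * radius**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / (math.sqrt(2) * die_side))

WAFER_COST = 5000.0                   # assumed price of one 300mm wafer, USD
old_dpw = dies_per_wafer(300, 350)    # ~350 mm^2 console APU at the old node
new_dpw = dies_per_wafer(300, 220)    # same design shrunk to ~220 mm^2

print("gross dies/wafer:", old_dpw, "->", new_dpw)
print("cost per gross die:", WAFER_COST / old_dpw, "->", WAFER_COST / new_dpw)
```

Of course this holds the wafer price constant, which is exactly the part that stopped being true past 28nm: if the smaller node's wafers cost proportionally more, the per-die saving evaporates.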
| Intrinsic said: The other thing to consider, and this is why console manufacturers may have to wait a little, is that when the fab process shrinks, for the first 2-3 months yields are lower. What that simply means is that at this point the chips cost more to make than their bigger counterparts. And only people willing to pay a "defect" premium will still order and build chips at the new size, usually GPUs. There is sort of a pecking order for who gets to use the new fab process first. By the time the GPU guys, Apple and a couple of other smartphone makers get their own chips, another 8-9 months would have passed and by that time the fab process is usually perfected and yields are at a maximum. Then the console guys can come in and make a block order for 16M chips. Rinse and repeat. |
Correct. There is a "ramp up" period before yields reach an acceptable level. You are wrong on the timeframe, however; how long it takes depends on each successive fabrication node's particular characteristics. Hint: it's not always 2-3 months, it can take significantly longer or shorter. I can provide some examples if need be.
As for paying for defective parts... again, not always true. AMD had a deal in place with GlobalFoundries where it only paid for functioning chips; unfortunately, that deal has expired.
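The yield-ramp point can be made concrete with the classic Poisson yield model, where yield = exp(-D0 * A) for defect density D0 and die area A. This sketch uses illustrative numbers of my own (the defect densities, wafer prices, and die areas are assumptions): even though the shrunk die gets more candidates per wafer, early-ramp defect density can make each *good* die cost more than on the mature node, until D0 comes down:

```python
import math

def cost_per_good_die(wafer_cost, dies_per_wafer, defect_density, die_area_cm2):
    """Wafer cost spread over functioning dies, using the Poisson
    yield model: yield = exp(-D0 * A)."""
    yield_frac = math.exp(-defect_density * die_area_cm2)
    return wafer_cost / (dies_per_wafer * yield_frac)

# Mature old node: 166 dies/wafer, D0 = 0.2 defects/cm^2, 3.5 cm^2 die
old = cost_per_good_die(5000, 166, 0.2, 3.5)
# New node during early ramp: more dies (276) but pricier wafer and D0 = 0.8
new_early = cost_per_good_die(6000, 276, 0.8, 2.2)
# Same new node once yields mature (D0 back down to 0.2)
new_mature = cost_per_good_die(6000, 276, 0.2, 2.2)

print("old node:", old)
print("new node, early ramp:", new_early)
print("new node, mature:", new_mature)
```

Which is the whole pecking-order argument in three numbers: during the ramp only high-margin parts can absorb the good-die premium, and the console block orders make sense once the curve flattens.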
And for the record, fatslob and I don't really "argue". We more or less have casual debates; yes, sometimes we throw overly large camels at each other to make a point.
| fatslob-:O said: And multiple patterning will continue to be the solution for the future even when EUV gets rolled out ... Actually, EUV is alive and well but what you were talking about is Intel abandoning 157nm wavelength immersion lithography ... Semiconductor foundries may still think it's economically feasible, however they should not expect cost reductions in comparison to 28/22nm and multiple exposure is the most commonly used method ... I don't think Apple will look to Intel since they ask for fairly high margins ... |
Ah, cheers for clarifying that.
I'll take a wait-and-see approach as we drop to 14nm and below. I know Intel was disappointed with 14nm versus 22nm in multiple aspects, but things improved.
You never know with Apple! Besides, they can afford the higher costs; they have stupidly fat profit margins. (Mostly due to the pitiful amounts of RAM and low-resolution screens.)

www.youtube.com/@Pemalite