fatslob-:O said:
Pemalite said:


No, it applies to every fabrication process that has ever been used and ever will be used.

There was a reason why Intel was slow to move Atom to cutting-edge lithography, and why it used to release its motherboard chipsets a lithography node or two behind its processors: costs and profit margins. It was simply cheaper to use the older lithography.

The reason why Intel was slow to move Atom to cutting-edge lithography is because they didn't want to compromise on higher-margin parts, but if they'd had some extra fab capacity then they would, without doubt, have moved Atom onto more cutting-edge lithography instantly ...

The "older technology is cheaper" mentality doesn't apply to chip manufacturing, since newer process nodes have ALWAYS provided a cost reduction, aside from 28/22nm and below ...

It does apply, again, up to a point.
Maturity, die size, and fab capacity all play a role; it's not as black and white as you think it is.
Not to mention that additional R&D is required to shift anything to a new node, which also costs.
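
To put rough numbers on the die-size point, here is a back-of-envelope sketch. Every figure in it (wafer prices, die areas, yields) is invented purely for illustration, not real foundry pricing; the point is just to show how a small die on a cheap, mature node can still undercut the same design shrunk to a newer, pricier node, before the porting R&D is even counted.

import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    # Crude estimate: wafer area / die area, ignoring edge losses and scribe lines.
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area / die_area_mm2)

def cost_per_good_die(wafer_cost, die_area_mm2, yield_rate):
    # Wafer cost spread over the dies that actually work.
    return wafer_cost / (dies_per_wafer(die_area_mm2) * yield_rate)

# Hypothetical: a tiny ~60 mm^2 GPU on a cheap, mature 40nm wafer versus
# the same design shrunk to ~40 mm^2 on a pricier, lower-yielding 28nm wafer.
old_node = cost_per_good_die(wafer_cost=3000, die_area_mm2=60, yield_rate=0.95)
new_node = cost_per_good_die(wafer_cost=5500, die_area_mm2=40, yield_rate=0.85)
print(f"40nm: ${old_node:.2f} per good die   28nm: ${new_node:.2f} per good die")

With those made-up inputs the mature node still comes out ahead per die, which is the whole reason tiny budget parts linger on old processes.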

AMD, for instance, has been getting Caicos and Cedar built at 40nm for 4-5 years now, and nVidia is pushing out the Geforce 705 and 730 on 40nm, a process that stems back to the Geforce 400 series.
Atom and Intel chipsets historically used older nodes for years.

They all have one thing in common: they are all small, cheap dies even on older process nodes.
Atom only recently started using the latest and greatest lithography because Intel wanted to balloon its transistor counts to be competitive against ARM.

You also only have finite fab capacity; it doesn't make sense to cut into that capacity with low-margin parts unless there is a damn good reason.
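
As a rough illustration of that opportunity cost (again, every number here is invented for the example), compare the profit from a leading-edge wafer filled with big high-margin parts versus the same wafer filled with tiny low-margin ones:

def wafer_profit(dies_per_wafer, yield_rate, sell_price, unit_cost):
    # Profit contributed by one wafer's worth of good dies.
    return dies_per_wafer * yield_rate * (sell_price - unit_cost)

# Hypothetical leading-edge wafer: big high-margin CPUs vs. tiny low-margin SoCs.
high_margin = wafer_profit(dies_per_wafer=250, yield_rate=0.85, sell_price=300, unit_cost=60)
low_margin = wafer_profit(dies_per_wafer=1200, yield_rate=0.90, sell_price=30, unit_cost=12)
print(f"high-margin wafer: ${high_margin:,.0f}   low-margin wafer: ${low_margin:,.0f}")

Every wafer start given to the low-margin part is a wafer of high-margin profit forgone, which is exactly why scarce leading-edge capacity goes to the big parts first.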



--::{PC Gaming Master Race}::--