Zkuq said:
The die cost is probably not the only cost though. There's probably research and development costs involved too, because any time you create anything new, someone has to make sure it supports all the legacy stuff too, and that the legacy stuff also works. And someone has to design how it fits into the big picture. There's probably documentation and other things involved too. It's probably not a terrible cost altogether, but it probably adds up, and I can see why Intel would like to get rid of all the legacy stuff.
They don't really go back and update the entire chip by hand anymore; instead they focus on a few libraries, which avoids most of that.
In AMD's case... They will update parts of the CCD but leave the large IOD die the same.
However... If we go back 20 years to when AMD extended the x86 ISA to x64, Fred Weber, AMD's then-CTO, claimed that the cost of x86 support was negligible: the x86 decoder was less than 10% of the chip. That was with AMD's Hammer... And with every transistor-budget boost and die shrink, that share decreases further in percentage terms.
AMD Hammer was based on 130nm and had 106~ million transistors.
So x86 compatibility would have been around 10~ million transistors.
Current Ryzen chips have upwards of 16,630 million transistors. (CCD and IOD in the 9950X) - Not including the 3D cache which has about 5~ billion transistors.
And you start to see that x86 itself isn't the issue.
The issue starts to creep in when you wish to build tiny, ultra-low-power cores like Intel Atom... Which are chips that are still around the 150~ million transistor mark.
E.g. Intel's quad-core Cherry Trail Z8750 is still 176~ million transistors.
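To put those numbers side by side, here's a rough back-of-the-envelope sketch (assuming the decoder stays at the ~10 million transistors implied by Weber's Hammer figure, which is an assumption; real decoders have grown too):

```python
# Share of the die spent on x86 decode, assuming a constant
# ~10 million transistor decoder (assumption from the Hammer-era figure).
DECODER_TRANSISTORS = 10e6

chips = {
    "AMD Hammer (130nm)": 106e6,
    "Ryzen 9950X (CCD + IOD)": 16_630e6,
    "Atom Cherry Trail Z8750": 176e6,
}

for name, total in chips.items():
    share = DECODER_TRANSISTORS / total * 100
    print(f"{name}: {share:.2f}% of the die")
```

Under that assumption the decoder is ~9.4% of Hammer, ~0.06% of a 9950X, but still ~5.7% of a Cherry Trail Atom, which is exactly where the legacy baggage starts to bite.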
And this is where another approach *could* be taken to cut the fat and reduce the baggage to compete with ARM, but for desktops, laptops and servers... It's a non-issue.
--::{PC Gaming Master Race}::--