
Smaller, cheaper, cooler Xbox One processor in development

daredevil.shark said:
Carl2291 said:
I can't wait for the Slim. 'Tis when I'm finally going to jump in.


I like it when I find people who think like me.

I didn't think talk of a slim version would happen this fast, but now I can wait for that. Competition is good.

Do you think there is any chance they'll integrate the power brick? Did they ever with the 360s? I still have one of the last remaining 20GB phat models, a real trooper; it survived two PS3s and a Wii! (It was an early RRoD replacement though, the first model with HDMI.)



fatslob-:O said:

My purpose is to focus on transistor/cost ...

 

That will change very soon, since almost every foundry can offer 28nm: Global Foundries, Samsung, STMicroelectronics, UMC, and TSMC!

Another reason why the chip makers wouldn't want to transition is that the cost of designing a low-margin part isn't worth it ...

Well, technically, the design is already "done" with AMD's 28nm Oland core; unfortunately it still doesn't get as cheap as the 40nm Cedar and Caicos chips, but that will change eventually... A respin that removes the GDDR5 memory controller might be a possibility? Then again, Cedar and Caicos have DDR2, DDR3 and GDDR5 controllers, so it can't be that expensive transistor-wise.

The Xbox One, though... has a relatively monolithic ~5 billion transistor chip; they would probably save cash by doing process shrinks aggressively.
It would be a completely different story if the chip was only ~200 million transistors.

AMD tends to shop only at Global Foundries and TSMC. It should be interesting to see the ramifications of Apple buying up capacity at TSMC for nVidia's and AMD's 20nm GPU lines.




www.youtube.com/@Pemalite

Pemalite said:

Well, technically, the design is already "done" with AMD's 28nm Oland core; unfortunately it still doesn't get as cheap as the 40nm Cedar and Caicos chips, but that will change eventually... A respin that removes the GDDR5 memory controller might be a possibility? Then again, Cedar and Caicos have DDR2, DDR3 and GDDR5 controllers, so it can't be that expensive transistor-wise.

The Xbox One, though... has a relatively monolithic ~5 billion transistor chip; they would probably save cash by doing process shrinks aggressively.
It would be a completely different story if the chip was only ~200 million transistors.

AMD tends to shop only at Global Foundries and TSMC. It should be interesting to see the ramifications of Apple buying up capacity at TSMC for nVidia's and AMD's 20nm GPU lines.

Although the design may be finished, you still need to port it to a newer process node... (That costs money.)

Actually, I don't believe cost reductions will come from manufacturing the chips at smaller feature sizes, since we're forced to use double patterning, which drives the wafer cost up sharply. The only way I can see a cost reduction for an XB1 unit is that a smaller chip draws less power, so a cheaper power supply and fan can be used to drive down costs, but those gains are marginal at best and could be counteracted by the chip potentially being more expensive to produce on a newer process node ...
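A rough back-of-the-envelope makes that trade-off concrete (a Python sketch; the die areas and wafer prices below are made-up illustrative numbers, not real foundry figures):

```python
import math

# Purely illustrative assumptions -- real die areas and wafer prices are not public.
TRANSISTORS = 5e9  # ballpark transistor count for the XB1 SoC, per this thread

nodes = {
    # node: (assumed die area in mm^2, assumed cost per processed 300 mm wafer)
    "28nm":                   (360.0, 4500.0),
    "20nm, double patterned": (230.0, 7000.0),  # smaller die, pricier wafer
}

for node, (die_area, wafer_cost) in nodes.items():
    wafer_area = math.pi * 150.0 ** 2       # 300 mm wafer
    gross_dies = wafer_area / die_area      # ignores edge loss for simplicity
    cost_per_die = wafer_cost / gross_dies
    print(f"{node}: ~${cost_per_die:.0f} per die, "
          f"~${cost_per_die / (TRANSISTORS / 1e9):.2f} per billion transistors")
```

With those assumed numbers the cost per transistor barely moves despite the shrink, which is exactly the problem with using sub-28nm nodes as a cost-reduction play.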

I think Apple will eventually have to switch to Samsung, because Samsung's 14nm FinFETs scale better than TSMC's 16nm FinFETs, so I believe Nvidia can easily get a lot of TSMC's fab capacity, and the same goes for AMD. But I think AMD might consider Global Foundries for their GPUs in the future, since they licensed Samsung's 14nm node, which could give AMD some leverage over Nvidia in the chip-scaling department ...



fatslob-:O said:
Pemalite said:

Well, technically, the design is already "done" with AMD's 28nm Oland core; unfortunately it still doesn't get as cheap as the 40nm Cedar and Caicos chips, but that will change eventually... A respin that removes the GDDR5 memory controller might be a possibility? Then again, Cedar and Caicos have DDR2, DDR3 and GDDR5 controllers, so it can't be that expensive transistor-wise.

The Xbox One, though... has a relatively monolithic ~5 billion transistor chip; they would probably save cash by doing process shrinks aggressively.
It would be a completely different story if the chip was only ~200 million transistors.

AMD tends to shop only at Global Foundries and TSMC. It should be interesting to see the ramifications of Apple buying up capacity at TSMC for nVidia's and AMD's 20nm GPU lines.

Although the design may be finished, you still need to port it to a newer process node... (That costs money.)

Actually, I don't believe cost reductions will come from manufacturing the chips at smaller feature sizes, since we're forced to use double patterning, which drives the wafer cost up sharply. The only way I can see a cost reduction for an XB1 unit is that a smaller chip draws less power, so a cheaper power supply and fan can be used to drive down costs, but those gains are marginal at best and could be counteracted by the chip potentially being more expensive to produce on a newer process node ...

I think Apple will eventually have to switch to Samsung, because Samsung's 14nm FinFETs scale better than TSMC's 16nm FinFETs, so I believe Nvidia can easily get a lot of TSMC's fab capacity, and the same goes for AMD. But I think AMD might consider Global Foundries for their GPUs in the future, since they licensed Samsung's 14nm node, which could give AMD some leverage over Nvidia in the chip-scaling department ...


Double patterning was only supposed to be an interim solution. Intel has used it in the past for parts of its 65nm and 45nm chips, and TSMC used it for parts of its 32nm lithography too.
Intel also used it lightly at 22nm and 14nm.

Intel started investigating the use of EUV back in 2003, which could have driven things down to 10-7nm without the need for double or quad patterning; unfortunately I haven't heard any more on that front in years, so I assume it's been abandoned for any number of possible reasons.

There are also different patterning types: for example, spacer-based double patterning, which is often used in NAND, plus double, triple, quadruple and beyond patterning, dual-tone, self-aligned, double-expose, double-etch and more, all with different pros, cons and costs.
But as things stand right now, double patterning is still relatively economically feasible; the foundries have done extensive research into it for years and consumer products have been using it.
However, it's probably only economically feasible up to a point. (I.e. not for a tiny, low-profit chip.)

I agree that Apple will probably have to go back to Samsung eventually, or take advantage of some spare capacity from Intel, who are looking to expand into producing chips for other companies.
However, TSMC still has a few tricks up its sleeve, and Apple could easily buy some IP from IBM to improve its chips. (Resonant clock mesh, anyone? Or just buy IBM outright...)
To be honest, Apple probably has more important things on its hands, like upgrading the pathetic amount of RAM in its phones, which affects performance.

However, we probably won't see a shift during the next GPU cycle, but rather the one after; too much effort has already gone into getting designs working on TSMC's fabs. Or things could stay status quo, who knows; it's all speculation at this point. :P




www.youtube.com/@Pemalite

As a fan of the Ratchet & Clank games I was tempted by the Sunset Overdrive XBO bundle, but I have decided to wait for the XBO Slim... a white console wouldn't have fit with my other equipment anyway (PS3, PS4, Wii U, PC, AV receiver, Blu-ray player, and TV are all black).

An XBO Slim bundle with Quantum Break next year would be great!



Pemalite said:


Double patterning was only supposed to be an interim solution. Intel has used it in the past for parts of its 65nm and 45nm chips, and TSMC used it for parts of its 32nm lithography too.
Intel also used it lightly at 22nm and 14nm.

Intel started investigating the use of EUV back in 2003, which could have driven things down to 10-7nm without the need for double or quad patterning; unfortunately I haven't heard any more on that front in years, so I assume it's been abandoned for any number of possible reasons.

There are also different patterning types: for example, spacer-based double patterning, which is often used in NAND, plus double, triple, quadruple and beyond patterning, dual-tone, self-aligned, double-expose, double-etch and more, all with different pros, cons and costs.
But as things stand right now, double patterning is still relatively economically feasible; the foundries have done extensive research into it for years and consumer products have been using it.
However, it's probably only economically feasible up to a point. (I.e. not for a tiny, low-profit chip.)

I agree that Apple will probably have to go back to Samsung eventually, or take advantage of some spare capacity from Intel, who are looking to expand into producing chips for other companies.
However, TSMC still has a few tricks up its sleeve, and Apple could easily buy some IP from IBM to improve its chips. (Resonant clock mesh, anyone? Or just buy IBM outright...)
To be honest, Apple probably has more important things on its hands, like upgrading the pathetic amount of RAM in its phones, which affects performance.

However, we probably won't see a shift during the next GPU cycle, but rather the one after; too much effort has already gone into getting designs working on TSMC's fabs. Or things could stay status quo, who knows; it's all speculation at this point. :P

And multiple patterning will continue to be the solution for the future even when EUV gets rolled out ...

Actually, EUV is alive and well; what you were thinking of is Intel abandoning 157nm-wavelength immersion lithography ...

Semiconductor foundries may still think it's economically feasible; however, they should not expect cost reductions compared to 28/22nm, and multiple exposure is the most commonly used method ...

I don't think Apple will look to Intel since they ask for fairly high margins ... 



Pemalite said:
fatslob-:O said:
Pemalite said:


No, it applies to every fabrication process ever used and that ever will be used.

There was a reason why Intel was slow to move Atom to a cutting-edge lithography, and why they used to release their motherboard chipsets a lithography or two behind their processors: costs and profit margins. It was simply cheaper to use the older lithography.

The reason Intel was slow to move Atom to cutting-edge lithography is that they didn't want to compromise on higher-margin parts, but if they had some extra fab capacity they would no doubt have moved Atom onto more cutting-edge lithography instantly ...

The "older technology is cheaper" mentality doesn't apply to chip manufacturing, since newer process nodes have ALWAYS provided cost reductions, aside from 28/22nm and below ...

It does apply, again, up to a point.
Maturity, die size and fab capacity all play a role; it's not as black and white as you think it is.
Not to mention the additional R&D required to shift anything to a new node, which also costs.

AMD, for instance, has been getting Caicos and Cedar built at 40nm for 4-5 years now, and nVidia is pushing out the GeForce 705 and 730 on 40nm, designs that stem back to the GeForce 400 series.
Atom and Intel chipsets historically used older nodes for years.

They all have one thing in common: they are all small, cheap dies, even on older process nodes.
Atom only recently started using the latest and greatest lithography because Intel wanted to balloon its transistor counts to be competitive against ARM.

You also only have finite fab capacity; it doesn't make sense to cut into that capacity with low-margin parts unless there is a damn good reason.

The funny thing about you guys' argument is that you are both right. But you are both just not looking at it in the way it applies to consoles.

For consoles, going to a smaller fab process will ALWAYS be cheaper. This is simply because, unlike everything else, when a console die shrinks they don't cram in more transistors to make it more powerful. The chip remains identical, just smaller. That simply means they can make more of those chips per wafer. So where MS/Sony may have been spending $800M for 10M chips before, now they would be spending $800M for 16M or so chips.
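To put rough numbers on the dies-per-wafer point, here's a quick Python sketch; the die areas, wafer price, and budget split are illustrative assumptions (only the ratio really matters):

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Gross (pre-yield) dies per wafer, using the common edge-loss approximation."""
    wafer_area = math.pi * (wafer_diameter_mm / 2.0) ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

CHIP_BUDGET = 800e6   # the "$800M" block order from the post above
WAFER_COST = 6000.0   # assumed price per 300 mm wafer, illustrative only
wafers = CHIP_BUDGET / WAFER_COST

for label, die_area_mm2 in [("original die", 360.0),
                            ("shrunk die, same transistors", 230.0)]:
    dpw = dies_per_wafer(300.0, die_area_mm2)
    print(f"{label}: {dpw} dies/wafer -> "
          f"~{wafers * dpw / 1e6:.0f}M chips for the same spend")
```

Under those assumptions the shrunk die gives roughly 1.6x as many chips per wafer, the same ratio as the 10M-vs-16M example, provided the wafer price itself doesn't climb.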

The other thing to consider, and this is why console manufacturers may have to wait a little, is that when the fab process shrinks, yields are lower for the first 2-3 months. What that simply means is that at this point the chips cost more to make than their bigger counterparts, and only people willing to pay a "defect" premium will still order and build chips at the new size, usually the GPU makers. There is sort of a pecking order for who gets to use the new fab process first.
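For example, with the textbook Poisson yield model Y = exp(-D0 * A), a higher early-ramp defect density translates directly into a much higher cost per good die (the defect densities, die size and wafer cost below are illustrative assumptions):

```python
import math

def poisson_yield(defects_per_cm2: float, die_area_mm2: float) -> float:
    """Textbook Poisson yield model: Y = exp(-D0 * A)."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)  # mm^2 -> cm^2

DIE_AREA_MM2 = 360.0   # illustrative console-sized SoC
WAFER_COST = 6000.0    # assumed price per 300 mm wafer
GROSS_DIES = 161       # rough dies per wafer at that die size

# Assumed defect densities: high early in the ramp, much lower once mature.
for phase, d0 in [("early ramp", 0.5), ("mature process", 0.1)]:
    y = poisson_yield(d0, DIE_AREA_MM2)
    good_dies = GROSS_DIES * y
    print(f"{phase}: yield ~{y:.0%}, ~${WAFER_COST / good_dies:.0f} per good die")
```

That gap between the early-ramp and mature cost per good die is the "defect premium" that only high-margin chips can absorb.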

By the time the GPU guys, Apple and a couple of other smartphone makers get their own chips, another 8-9 months will have passed, and by that time the fab process is usually perfected and yields are at a maximum. Then the console guys can come in and make a block order for 16M chips.

Rinse and repeat.



Intrinsic said:

The funny thing about you guys' argument is that you are both right. But you are both just not looking at it in the way it applies to consoles.

For consoles, going to a smaller fab process will ALWAYS be cheaper. This is simply because, unlike everything else, when a console die shrinks they don't cram in more transistors to make it more powerful. The chip remains identical, just smaller. That simply means they can make more of those chips per wafer. So where MS/Sony may have been spending $800M for 10M chips before, now they would be spending $800M for 16M or so chips.

The other thing to consider, and this is why console manufacturers may have to wait a little, is that when the fab process shrinks, yields are lower for the first 2-3 months. What that simply means is that at this point the chips cost more to make than their bigger counterparts, and only people willing to pay a "defect" premium will still order and build chips at the new size, usually the GPU makers. There is sort of a pecking order for who gets to use the new fab process first.

By the time the GPU guys, Apple and a couple of other smartphone makers get their own chips, another 8-9 months will have passed, and by that time the fab process is usually perfected and yields are at a maximum. Then the console guys can come in and make a block order for 16M chips.

Rinse and repeat.

How am I not looking at it in a way that applies to consoles?

How is that possible when the cost/transistor ratio doesn't improve for process nodes past 28nm?

Even with improved yields, it still likely won't solve the issue of cost reduction. Yields can improve the cost/transistor ratio, but it won't surpass what's achievable at 28nm ...

You forget that 28nm is the sweet spot ... 



Intrinsic said:

For consoles, going to a smaller fab process will ALWAYS be cheaper. This is simply because, unlike everything else, when a console die shrinks they don't cram in more transistors to make it more powerful. The chip remains identical, just smaller. That simply means they can make more of those chips per wafer. So where MS/Sony may have been spending $800M for 10M chips before, now they would be spending $800M for 16M or so chips.


Please keep up.
When shrinking, it's only cheaper up to a point.
AMD, for instance, could have shrunk its 40nm Caicos (TeraScale 2 based) GPUs down to 28nm and not changed a thing, but they didn't. Why? Cost. It's stayed on the same node for half a decade.

This generation Microsoft seems to be a little more aggressive than last, so that ~5 billion transistor chip must be stupidly costly. To put that into perspective, the Xbox One and PS4 chips are roughly equivalent to the brand new GeForce GTX 980 in terms of transistor count, and nVidia flogs that GPU off for $700-$800 AUD.

Intrinsic said:

The other thing to consider, and this is why console manufacturers may have to wait a little, is that when the fab process shrinks, yields are lower for the first 2-3 months. What that simply means is that at this point the chips cost more to make than their bigger counterparts, and only people willing to pay a "defect" premium will still order and build chips at the new size, usually the GPU makers. There is sort of a pecking order for who gets to use the new fab process first.

By the time the GPU guys, Apple and a couple of other smartphone makers get their own chips, another 8-9 months will have passed, and by that time the fab process is usually perfected and yields are at a maximum. Then the console guys can come in and make a block order for 16M chips.

Rinse and repeat.


Correct. There is a "ramp up" time before yields reach an acceptable level. You are wrong on the timeframe, however; how long it takes to reach acceptable yields really depends on each successive fabrication node's particular characteristics. Hint: it's not always 2-3 months, it can take significantly longer or shorter. I can provide some examples if need be.

As for paying for defective parts... again, not always true. AMD had a deal in place with Global Foundries where they only paid for functioning chips; unfortunately that deal has expired.

And for the record, fatslob and I don't really "argue". We more or less have casual debates; yes, sometimes we throw overly large camels at each other to make a point.

 

fatslob-:O said:

And multiple patterning will continue to be the solution for the future even when EUV gets rolled out ...

Actually, EUV is alive and well; what you were thinking of is Intel abandoning 157nm-wavelength immersion lithography ...

Semiconductor foundries may still think it's economically feasible; however, they should not expect cost reductions compared to 28/22nm, and multiple exposure is the most commonly used method ...

I don't think Apple will look to Intel since they ask for fairly high margins ... 

Ah, cheers for clarifying that.


I'll take a wait-and-see approach as we drop to 14nm and below. I know Intel was disappointed with 14nm versus 22nm in multiple respects, but things improved.

You never know with Apple! Besides, they can afford the higher costs; they have stupidly fat profit margins. (Mostly due to the pitiful amounts of RAM and low-resolution screens.)




www.youtube.com/@Pemalite