
(Update) Rumor: PlayStation 5 will be using Navi 9 (more powerful than Navi 10); update: Jason Schreier says Sony is aiming for more than 10.7 teraflops

 

Poll: How accurate is this rumor compared to reality?

Naah: 26 votes (35.62%)
It's 90% close: 14 votes (19.18%)
It's 80% close: 8 votes (10.96%)
It's 70% close: 5 votes (6.85%)
It's 50% close: 13 votes (17.81%)
It's 30% close: 7 votes (9.59%)

Total: 73
Bofferbrauer2 said:
Intrinsic said:

What are you saying bruh? Like do you understand how chip pricing works at all??? 

AMD getting $100 for each chip they give to Sony isn't them selling them at bargain prices at all. That's them selling them at "bulk/OEM" pricing, which is totally normal when any company puts in orders in the region of millions.

Take the 3600G for instance: say AMD sells that at retail for $220. That pans out like this... the actual cost of making each of those chips (what AMD pays to the foundry) is like $30/$40. Then AMD adds their markup to account for things like yields, profits, packaging, shipping, etc. At this point the chip comes to around $170. Then they put their MSRP sticker price of $220 on it so the retailers make their own cut too.

If that chip was going into a console, first off the console manufacturer will pay a sizeable sum to "customize" their chip. This reduces how much AMD spends on R&D for that chip, and nothing stops them from taking elements of that chip's design into their general product line. Then AMD is not worrying about costs like packaging, shipping and marketing, and there isn't a retailer cut either. AMD also isn't worrying about yields, as that is something Sony/MS absorbs.

So selling each chip for $100 will still be them making a good amount of money.

I don't even get how any of this is relevant..... are you saying that AMD is somehow not going to be selling chips at those prices anymore because they are doing well now? Well, if that is what you are saying then you are just wrong. There is a reason why even Apple only puts AMD GPUs in their computers. And Nvidia just doesn't make sense with regards to the kind of hardware that works for consoles. Not only are they resistant to dropping prices, they also just don't make APUs (that aren't ARM-based). So Sony/MS using them would mean they must build a discrete CPU/GPU system.

@bolded: We don't even know if that's a real chip (and at 20 CU, I really doubt it, especially considering it would be totally bandwidth-starved even with DDR4 4000). But I digress.

The actual cost depends on how much AMD has to pay per wafer, divided by how many chips on that wafer are salvageable for the purpose. So let's say a wafer costs $1,000 (I'm just making up a price here), 20 such chips would fit on it, but only 10 would be fully functional; the others would have to be sold as 3400Gs or binned entirely due to defects. In this case AMD would certainly charge at least $100 for the 3600G just to cover the costs, and use the 3400G for profit.

However, on a console that kind of salvage isn't possible, which is why the PS4 has 2 deactivated CUs to improve the yield rate.
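A minimal sketch of that yield arithmetic, using the made-up numbers above (the $1,000 wafer price and the 20/10 die split are illustrative, not real figures):

```python
# Illustrative only: these are the made-up numbers from the post, not real figures.
wafer_cost = 1000.0      # hypothetical wafer price in $
candidate_dies = 20      # dies that physically fit on the wafer
fully_working = 10       # good enough to sell as the top bin (the "3600G" here)

cost_per_candidate = wafer_cost / candidate_dies   # $50 if every die were good
cost_per_good_die = wafer_cost / fully_working     # $100 if only top bins pay the bill

print(f"cost per candidate die: ${cost_per_candidate:.0f}")
print(f"cost per fully working die: ${cost_per_good_die:.0f}")
# Whatever the salvaged dies (the "3400G" bin here) bring in is then profit on top.
```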

@italic: These costs are not always covered up front; I can remember the cost of some chips actually being worked into the yearly contracts instead of AMD receiving a sum early on. And considering AMD didn't seem to get any lump sum (if they did, it doesn't show up in the financial reports at least), I do think they have to cover those expenses with the chip sales.

@underlined: Well, no, I'm not saying that they won't do it anymore, but rather that they are not obliged to do so anymore to have any sizable income at all.

When the PS4/XBO came out, AMD CPUs were doing very badly and were the laughingstock of the industry. They had just released Hawaii, but had a lot of trouble keeping up with Nvidia's updated Kepler (GeForce 700 series), so earnings were breaking away left and right and they could only really compete on price. As a result their profit margin plummeted, and it is still awfully low for the sector (under 50%, while Intel and Nvidia are close to or above 70%; at the time it even dropped below 30%, which is bad in any sector). All this meant AMD was desperate for some stable income, which let Sony and Microsoft hold all the cards during the price negotiations. But that won't be the case this time, and AMD will squeeze some profit out of the chips.

Also, as a side note, you put the costs at $30-40. Tell me how that works if about half of the sales came from console chips (which was true in 2016) yet the profit margin was only 24%? Do you think AMD sold all their other chips below production cost? And how could that be, considering most chips cost much more than the one in the PS4? Or do you think they had R&D expenses so large that they covered half the outlays before wages and taxes? I'm just saying your price is off; it may well be below $100 by then, but I don't think it's anywhere close to the numbers you're putting there, more like $60-80. Don't forget that 350mm² isn't exactly a small chip (a 10-core Skylake-X is only 322mm², for instance) and that such a big chip normally sells at considerably higher prices, for the reasons detailed above.
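As a back-of-the-envelope check of that margin argument: the console price, cost and revenue share below are assumptions taken from this exchange, not reported figures.

```python
# All inputs are assumptions pulled from this exchange, not reported figures.
console_share = 0.5            # claimed share of revenue from console chips (2016)
console_price = 100.0          # assumed $ received per console chip
console_cost = 35.0            # midpoint of the $30-40 production cost claimed above
reported_gross_margin = 0.24   # the ~24% figure quoted above

console_margin = (console_price - console_cost) / console_price   # ~65%

# If consoles were half of revenue at ~65% gross margin, the rest of the
# business would need this margin for the blend to land at 24%:
other_margin = (reported_gross_margin - console_share * console_margin) / (1 - console_share)

print(f"implied console gross margin: {console_margin:.0%}")
print(f"required margin on everything else: {other_margin:.0%}")  # comes out deeply negative
```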

Your Apple example is a bit special: they use AMD because of OpenGL and OpenCL capabilities, where Nvidia is weaker than AMD and generally has been. Being cheaper than Nvidia is only icing on the cake. But that's going to change soon anyway, considering that Apple wants to design all its chips in-house and is migrating everything to ARM.

https://www.macrotrends.net/stocks/charts/AMD/amd/profit-margins

https://ycharts.com/companies/NVDA/profit_margin

https://ycharts.com/companies/INTC/gross_profit_margin

70% profit margin isn't something common at all.

 

Mummelmann said:
DonFerrari said:

except 60fps is hardly something console gaming requires or cares about for most genres.

Perhaps, but developers do, and aiming for higher fps means better margins. As it stands, a lot of games that aim for 30 fps dip into slideshow territory; games like AC: Odyssey are a good example, and I don't think I've ever seen worse frame rates in a modern AAA title. With more effects and higher resolutions, severe drops in frame rate become even more jarring; a more advanced and crisp render stands out all the more when it slows down.

The Xbox One X doesn't provide proper 4K; it uses checkerboard rendering and rather few effects, and frame rates on many titles are very low; Destiny 2 runs at a somewhat unstable 30 fps.

Half-assed resolutions with poor performance are not what developers want to work with; it's better to find an entry point where they can get both visual fidelity and performance and not be forced to choose. And if the hardware ends up too costly, developers and publishers have a smaller market to sell their software to. I'd much rather the standard be a stable 1440p with full effects and shading than stripped-down or faked 4K, and then perhaps release a more expensive version of the console that does more or less what the Pro and One X do for the line-up right now.

The fact that 30 fps is still more or less industry standard in console gaming in 2019 is downright shameful, especially since games often dip well below.

Which console developers are aiming for 60fps outside of FPS, racing, fighting and competitive multiplayer games?

Most console games are 30fps most of the time, and that doesn't seem likely to change.

There is nothing shameful about a 30fps standard. Most console gamers have accepted/expected/preferred 30fps with higher image quality over 60fps that would require sacrificing everything else by about half.



duduspace11 "Well, since we are estimating costs, Pokemon Red/Blue did cost Nintendo about $50m to make back in 1996"

http://gamrconnect.vgchartz.com/post.php?id=8808363

Mr Puggsly: "Hehe, I said good profit. You said big profit. Frankly, not losing money is what I meant by good. Don't get hung up on semantics"

http://gamrconnect.vgchartz.com/post.php?id=9008994

Azzanation: "PS5 wouldn't sold out at launch without scalpers."

DonFerrari said:

https://www.macrotrends.net/stocks/charts/AMD/amd/profit-margins

https://ycharts.com/companies/NVDA/profit_margin

https://ycharts.com/companies/INTC/gross_profit_margin

70% profit margin isn't something common at all.

 

I said before taxes; you're giving me after-tax figures. Though I understand I only explained that far down in the text, so the error is on me, mea culpa.

With that knowledge, you'll see that the profit rate AMD had before taxes and operating costs (the gross profit margin; I couldn't find the term earlier) is about what Nvidia manages percentage-wise after all operating costs have been taken into account. And that one dropped to 54% in the last quarter, having been above that for many quarters before. Same for Intel, who mostly stay above 60% even then.

AMD, on the other hand, is mostly between 20 and 40%, with one quarter even dropping to just 4%. But the general trend is upwards; I don't doubt they can break into the 40% range this year. For that, though, they need to make some money on their chips.
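For reference, a minimal illustration of the gross-versus-net-margin distinction being argued here, with placeholder figures rather than anyone's actual financials:

```python
# Placeholder figures only, to illustrate the gross-vs-net distinction.
revenue = 100.0
cost_of_goods = 60.0     # what the chips cost to make, i.e. 40% gross margin
operating_costs = 30.0   # R&D, wages, marketing, etc.
taxes = 3.0

gross_margin = (revenue - cost_of_goods) / revenue
net_margin = (revenue - cost_of_goods - operating_costs - taxes) / revenue
print(f"gross margin: {gross_margin:.0%}, net margin: {net_margin:.0%}")
# Comparing one company's gross margin with another's net margin makes the
# gap between them look much smaller than it really is.
```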



Bofferbrauer2 said:

@bolded: We don't even know if that's a real chip (and at 20 CU, I really doubt it, especially considering it would be totally bandwidth-starved even with DDR4 4000). But I digress.

But we know the Vega-based 2400G exists. We also know that AMD always makes a few APUs with every product series. And those things aren't designed to be graphical powerhouses anyway, so the RAM bottleneck is a moot point.

Bofferbrauer2 said: 

The actual cost depends on how much AMD has to pay per wafer, divided by how many chips on that wafer are salvageable for the purpose. So let's say a wafer costs $1,000 (I'm just making up a price here), 20 such chips would fit on it, but only 10 would be fully functional; the others would have to be sold as 3400Gs or binned entirely due to defects. In this case AMD would certainly charge at least $100 for the 3600G just to cover the costs, and use the 3400G for profit.

However, on a console that kind of salvage isn't possible, which is why the PS4 has 2 deactivated CUs to improve the yield rate.

OK, while I know you are just making examples, let's try and make it more accurate. The wafer size commonly used for AMD, and in turn Sony/MS, is a 300mm-diameter wafer. If each die is around 350mm², you can get around 200 chips per wafer. Now each wafer costs anywhere between $300 and $10,000 on the foundry side of things, depending on the number of processing steps and complexity.

AMD will pay the agreed amount for every wafer..... regardless of what is or isn't working on it, as long as an agreed-upon minimum chip yield per wafer is met. So say the cost of this wafer is $10,000 (and this is not how much a console APU wafer will cost); AMD has at this point spent $50/chip, assuming every single one of the chips works. Now they are about to sell them to Sony. They know Sony wants to be able to hit x and y clock speeds, so 20 chips are off the table. Then they find out that of the 180 left, 20 are defective. They are now left with 160, so the cost of each chip for them is about $62. Then they sell it to Sony/MS for $100+.

As for yield rates, Sony/MS, after having agreed on their processor design, will know that that processor will cost them a fortune if they want everything to be perfect. Opting to deactivate CUs is part of the pricing process.
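A quick sketch of that wafer math, using the same illustrative inputs as the post (300mm wafer, ~350mm² die, a $10,000 wafer price, and 20+20 dies lost to binning and defects):

```python
import math

# Illustrative inputs from the post above, not real contract numbers.
wafer_diameter_mm = 300.0
die_area_mm2 = 350.0
wafer_cost = 10_000.0
binned_for_clocks = 20   # dies that miss the target clock speeds
defective = 20           # dies lost to defects

wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2   # ~70,700 mm^2
gross_dies = wafer_area / die_area_mm2                # ~200, ignoring edge losses
usable = gross_dies - binned_for_clocks - defective   # ~160

print(f"gross dies per wafer: {gross_dies:.0f}")
print(f"cost per usable die: ${wafer_cost / usable:.0f}")   # ~$62, as in the post
```

The per-die cost is very sensitive to the assumed wafer price and yield, which is exactly why the real contract terms matter so much.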

Bofferbrauer2 said: 

@underlined: Well, no, I'm not saying that they won't do it anymore, but rather that they are not obliged to do so anymore to have any sizable income at all.

When the PS4/XBO came out, AMD CPUs were doing very badly and were the laughingstock of the industry. They had just released Hawaii, but had a lot of trouble keeping up with Nvidia's updated Kepler (GeForce 700 series), so earnings were breaking away left and right and they could only really compete on price. As a result their profit margin plummeted, and it is still awfully low for the sector (under 50%, while Intel and Nvidia are close to or above 70%; at the time it even dropped below 30%, which is bad in any sector). All this meant AMD was desperate for some stable income, which let Sony and Microsoft hold all the cards during the price negotiations. But that won't be the case this time, and AMD will squeeze some profit out of the chips.

It's not an "obligation"..... it's just simple business practice. Your issue here is that you seem to think that the price the console manufacturers are paying for their chips is some sort of giveaway price...... it's not. It's the kind of price you get when you are dealing with a company that promises to buy upwards of 50M of something from day one. Console OEM pricing is not remotely indicative of what retail pricing will be, or what profit margins for direct sales or sales to lower-tier OEMs (smaller volumes) will be. Not just for chips, but for every single component that goes into the console. A good way to look at it is that whatever a component costs at retail, the console OEM will be paying less than half that amount.

And you have got this backwards: AMD needs their money even less now than it ever has. So it's more likely to work with them on what is more or less a licensed chip than to try and milk them for anything.

Bofferbrauer2 said: 

Also, as a side note, you put the costs at $30-40. Tell me how that works if about half of the sales came from console chips (which was true in 2016) yet the profit margin was only 24%? Do you think AMD sold all their other chips below production cost? And how could that be, considering most chips cost much more than the one in the PS4? Or do you think they had R&D expenses so large that they covered half the outlays before wages and taxes? I'm just saying your price is off; it may well be below $100 by then, but I don't think it's anywhere close to the numbers you're putting there, more like $60-80. Don't forget that 350mm² isn't exactly a small chip (a 10-core Skylake-X is only 322mm², for instance) and that such a big chip normally sells at considerably higher prices, for the reasons detailed above.

Again, you are going about this wrong...... yes, back in 2016 revenue from the semi-custom sector (which is probably 90% consoles) equated to about half of AMD's quarterly revenue in certain quarters. That's total revenue, not profit margin. There is a very big difference. E.g., in a particular quarter, revenue in that division was $590M (a real number, from their 2nd quarter of 2016). Now if in that quarter alone they took in orders of, say, 5M chips and got around $100 for each one, what does that give you? Yup.... around $500M.

Still hard to understand?
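A tiny sketch of that revenue arithmetic; the $590M figure is the one quoted above, and the $100 per chip is the assumed OEM price:

```python
# Segment revenue only pins down units x average selling price, not profit.
semi_custom_revenue = 590e6     # the Q2 2016 semi-custom revenue quoted above
assumed_price_per_chip = 100.0  # assumed OEM price per console APU

implied_units = semi_custom_revenue / assumed_price_per_chip
print(f"implied chips that quarter: {implied_units / 1e6:.1f} million")
# Roughly 6 million chips at $100 each, consistent with "orders of say 5M chips",
# but it says nothing by itself about the margin on each chip.
```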

Last edited by Intrinsic - on 18 March 2019

Bofferbrauer2 said:

 

I said before taxes; you're giving me after-tax figures. Though I understand I only explained that far down in the text, so the error is on me, mea culpa.

With that knowledge, you'll see that the profit rate AMD had before taxes and operating costs (the gross profit margin; I couldn't find the term earlier) is about what Nvidia manages percentage-wise after all operating costs have been taken into account. And that one dropped to 54% in the last quarter, having been above that for many quarters before. Same for Intel, who mostly stay above 60% even then.

AMD, on the other hand, is mostly between 20 and 40%, with one quarter even dropping to just 4%. But the general trend is upwards; I don't doubt they can break into the 40% range this year. For that, though, they need to make some money on their chips.

Understand your point.

Still, even though profit margins are quite important, having a steady, assured money influx is quite reassuring. Even more so when projections suggest the pace of GPU purchases is slowing down.

But I do agree that this time AMD is under less pressure to accept thin margins than they were at the start of last gen, so that may increase the MSRP of the consoles for the same expected performance (or put pressure on Sony/MS to eat more of the cost).



duduspace11 "Well, since we are estimating costs, Pokemon Red/Blue did cost Nintendo about $50m to make back in 1996"

http://gamrconnect.vgchartz.com/post.php?id=8808363

Mr Puggsly: "Hehe, I said good profit. You said big profit. Frankly, not losing money is what I meant by good. Don't get hung up on semantics"

http://gamrconnect.vgchartz.com/post.php?id=9008994

Azzanation: "PS5 wouldn't sold out at launch without scalpers."

Cannot wait. Hopefully the reveal time frame is true. I do not care about the rumored specs at this point. I simply care that we get to know sooner rather than later, because my excitement is sky high!



01000110 01101111 01110010 00100000 01001001 01111001 01101111 01101100 01100001 01101000 00100001 00100000 01000110 01101111 01110010 00100000 01000101 01110100 01100101 01110010 01101110 01101001 01110100 01111001 00100001 00100000

Intrinsic said:

Now each wafer costs anywhere between $300 and $10,000 on the foundry side of things, depending on the number of processing steps and complexity.

AMD will pay the agreed amount for every wafer..... regardless of what is or isn't working on it, as long as an agreed-upon minimum chip yield per wafer is met.

Wafer costs jumped from around $4k for 12/14nm to around $6k for 7nm (source: IC Knowledge LLC). What AMD agrees to pay we do not and will never know; these are very complex deals.
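For a sense of direction only, here is what that wafer-price jump does to per-die cost if you assume the ~160 usable dies per wafer used earlier in the thread; the actual contract pricing is unknown, as noted above.

```python
# Direction-of-travel only: assumes the ~160 usable dies per wafer used
# earlier in the thread; real wafer agreements are not public.
usable_dies_per_wafer = 160
for node, wafer_cost in [("12/14nm", 4000), ("7nm", 6000)]:
    print(f"{node}: ~${wafer_cost / usable_dies_per_wafer:.0f} per usable die")
# ~$25 vs ~$38: the jump matters, but yield and die-size changes at 7nm can
# move the number at least as much as the wafer price does.
```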



Intrinsic said:
Bofferbrauer2 said:

@bolded: We don't even know if that's a real chip (and at 20 CU, I really doubt it, especially considering it would be totally bandwidth-starved even with DDR4 4000). But I digress.

But we know the Vega-based 2400G exists. We also know that AMD always makes a few APUs with every product series. And those things aren't designed to be graphical powerhouses anyway, so the RAM bottleneck is a moot point.



And you have got this backwards: AMD needs their money even less now than it ever has. So it's more likely to work with them on what is more or less a licensed chip than to try and milk them for anything.

Bofferbrauer2 said: 

Also, as a side note, you put the costs at $30-40. Tell me how that works if about half of the sales came from console chips (which was true in 2016) yet the profit margin was only 24%? Do you think AMD sold all their other chips below production cost? And how could that be, considering most chips cost much more than the one in the PS4? Or do you think they had R&D expenses so large that they covered half the outlays before wages and taxes? I'm just saying your price is off; it may well be below $100 by then, but I don't think it's anywhere close to the numbers you're putting there, more like $60-80. Don't forget that 350mm² isn't exactly a small chip (a 10-core Skylake-X is only 322mm², for instance) and that such a big chip normally sells at considerably higher prices, for the reasons detailed above.

Again, you are going about this wrong...... yes, back in 2016 revenue from the semi-custom sector (which is probably 90% consoles) equated to about half of AMD's quarterly revenue in certain quarters. That's total revenue, not profit margin. There is a very big difference. E.g., in a particular quarter, revenue in that division was $590M (a real number, from their 2nd quarter of 2016). Now if in that quarter alone they took in orders of, say, 5M chips and got around $100 for each one, what does that give you? Yup.... around $500M.

Still hard to understand?

1. The 2400G has only 11 CU because the RAM bandwidth can't support more than that; they'd choke any additional CUs to death. That's the reason why the number of compute units has only slightly increased: from 5 in Llano in 2010, to 6 in Trinity in 2012, 8 in Kaveri in 2014 and now 11 with Raven Ridge. Each of these increases also came with more bandwidth: Llano had DDR3 1600, Trinity DDR3 1866, Kaveri DDR3 2133 and Raven Ridge DDR4 2666. Hence why I'm saying that 20 CU is unrealistic with DDR4 memory (which is technically only specified up to DDR4 3200; anything above is overclocked). I could see 12-15 CU with DDR4 3200, but without any drastic changes 20 CU just can't be fed with data (see the sketch after this list for the rough numbers).

2. I spent 2 posts detailing why AMD was in no position to milk them for money; I won't try it again. I'm just saying that this time around they are in a better position and won't accept just scraps.

3. I know they made half of their revenue with the semi-custom chips, and that that's not profit margin. But you argued that those chips have an over-100% profit margin. So where is all that profit if even the gross profit margin is only 24%? That's why I made the other examples: to show you that your profit margin is just unrealistically large. AMD would have had to sell everything else at a loss to reach such a low gross profit margin with the console deal bringing in 50% of the revenue at such a large profit margin. Is that really so hard to understand?
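Here are the rough bandwidth-per-CU numbers behind point 1. The CU counts and memory speeds are the ones listed above; the bandwidth assumes a standard 128-bit (dual-channel) interface, and the 20 CU row is hypothetical.

```python
# CU counts and memory speeds as listed in point 1; bandwidth assumes a
# standard 128-bit (dual-channel) interface. The last row is hypothetical.
apus = [
    ("Llano",        5, 1600),   # DDR3-1600
    ("Trinity",      6, 1866),   # DDR3-1866
    ("Kaveri",       8, 2133),   # DDR3-2133
    ("Raven Ridge", 11, 2666),   # DDR4-2666
    ("20 CU part?", 20, 3200),   # top of the official DDR4 spec
]
for name, cus, mt_s in apus:
    gbps = 16 * mt_s / 1000      # 128-bit bus = 16 bytes per transfer
    print(f"{name:12s} {cus:2d} CU  {gbps:5.1f} GB/s  {gbps / cus:4.2f} GB/s per CU")
# The hypothetical 20 CU row drops to ~2.6 GB/s per CU, well below the
# ~4-5 GB/s per CU of the earlier APUs, which is the bandwidth-starvation point.
```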



Intrinsic said:

AMD getting $100 for each chip they give to Sony isn't them selling them at bargain prices at all. That's them selling them at "bulk/OEM" pricing, which is totally normal when any company puts in orders in the region of millions.

Take the 3600G for instance: say AMD sells that at retail for $220. That pans out like this... the actual cost of making each of those chips (what AMD pays to the foundry) is like $30/$40. Then AMD adds their markup to account for things like yields, profits, packaging, shipping, etc. At this point the chip comes to around $170. Then they put their MSRP sticker price of $220 on it so the retailers make their own cut too.

Pretty sure AMD's profit margins are on average 61% for PC chips.
So a $220 CPU is likely costing AMD $85.8 in manufacturing and other logistics.

Consoles are actually a significant revenue driver for AMD though, which is good... Not nearly as lucrative as PC chip sales, but it helped keep AMD afloat when it needed it most.
https://www.anandtech.com/show/8913/amd-reports-q4-fy-2014-and-full-year-results
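The arithmetic behind that $85.8 figure, treating the ~61% average margin as an assumption rather than a known number:

```python
# Treating the ~61% average margin as an assumption rather than a known figure.
price = 220.0
assumed_gross_margin = 0.61

implied_cost = price * (1 - assumed_gross_margin)
print(f"implied manufacturing/logistics cost: ${implied_cost:.2f}")   # $85.80
# Strictly the margin applies to AMD's selling price to distributors rather
# than the shelf price, so treat this as a rough upper-bound style estimate.
```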

Bofferbrauer2 said:

@bolded: We don't even know if that's a real chip (and at 20 CU, I really doubt it, especially considering it would be totally bandwidth-starved even with DDR4 4000). But I digress.

DDR4 4000 can offer more bandwidth than HBM2; it is entirely about how wide you wish to take things... But before that you reach a point where it's more economical to choose another technology anyway.

However... considering that current Ryzen APUs with ~38GB/s of bandwidth are certainly bandwidth-starved with 11 CUs... I doubt that is going to change with 20 CU APUs that have ~68GB/s of bandwidth.

But if you were to run that DDR4 4000 DRAM on a 512-bit bus, suddenly we are talking 256GB/s of bandwidth, which is more than sufficient for even a 40 CU count.
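Those figures all come from the same bus-width times data-rate arithmetic; a minimal sketch (the DDR4-4266 speed used for the ~68GB/s case is an assumption):

```python
# bandwidth (GB/s) = bus width in bytes x data rate in GT/s
def bandwidth_gb_s(bus_bits: int, mt_per_s: int) -> float:
    return bus_bits / 8 * mt_per_s / 1000

print(bandwidth_gb_s(128, 2400))   # 38.4 GB/s: dual-channel DDR4-2400
print(bandwidth_gb_s(128, 4266))   # ~68 GB/s: the 20 CU APU case above (DDR4-4266 assumed)
print(bandwidth_gb_s(512, 4000))   # 256.0 GB/s: DDR4-4000 on a 512-bit bus
```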

Straffaren666 said:

Specifying some of AMD's improvements is irrelevant as long as you don't also specify what nVidia has achieved. A lot of the engineering work goes into improving performance and power efficiency by switching from third-party cell libraries to custom IC designs for a particular process node. That's something nVidia has obviously spent a lot more resources on than AMD, and it doesn't show up as a new feature in marketing material.

I am aware. Not my first Rodeo.

Straffaren666 said:

I spend much of my working time analyzing GPU frame traces, identifying bottlenecks and how to work around them. Every GPU architecture has bottlenecks, that's nothing new; it's just a matter of what kind of workload you throw at them. I have full access to all the performance counters of the GCN architecture, both in numerical and visual form. For instance, I can see the number of wavefronts executing on each individual SIMD of each CU at any given time during the trace, the issue rate of VALU, SALU, VMEM, EXP and branch instructions, wait cycles due to accessing the K$ cache, exporting pixels or fetching instructions, stalls due to texture rate or texture memory accesses, the number of read/write accesses to the color or depth caches, the number of drawn quads, the number of context rolls, the number of processed primitives and the percentage of culled primitives, stalls in the rasterizer due to the SPI (Shader Processor Input) or the PA (Primitive Assembly), the number of indices processed and reused by the VGT (Vertex Geometry Tessellator), the number of commands parsed/processed by the CPG/CPC, stalls in the CPG/CPC, the number of L2 reads/writes, and the L2 hit/miss rate. That's just a few of the available performance counters I've access to. In addition to that I have full documentation for the GCN architecture and I've developed several released games targeting it. Based on that I've a pretty good picture of the strengths/weaknesses of the architecture, and I'm interested in hearing if you perhaps have some insight that I lack.

I am unable to verify any of that, nor does it take precedence over my own knowledge or qualifications. In short, it's irrelevant.

Straffaren666 said:

The geometry rate isn't really a bottleneck for GCN. Even if it were, geometry processing parallelizes quite well and could be addressed by increasing the number of VGTs. It won't be a problem in the future either, for two reasons: 1) the pixel rate will always be the limiting factor, and 2) primitive/mesh shaders give the graphics programmer the option to use the CUs' compute power to process geometry.

It's always been a bottleneck in AMD's hardware even going back to Terascale.

Straffaren666 said:

I asked you to specify the inherent flaws and bottlenecks in the GCN architecture that you claim prevents the PS5 from using more than 64CUs, not AMD's marketing material about their GPUs. So again, can you please specify the "multitude of bottlenecks".

Bottlenecks (Like Geometry) have always been an Achilles heel of AMD GPU architectures even back in the Terascale days.

nVidia was always on the ball once they introduced their Polymorph engines.

But feel free to enlighten me on why AMD's GPUs fall short despite their overwhelming advantage in single-precision floating-point throughput relative to their nVidia counterparts.

Straffaren666 said:

I'm not sure what you mean. It clearly says a 70% area reduction and a 60% reduction in power consumption. That's pretty much in line with what I wrote. An area reduction of 70% would yield a density increase of 3.3x. Probably just a rounding issue.

Here are the links to TSMC's own numbers.

https://www.tsmc.com/english/dedicatedFoundry/technology/10nm.htm

https://www.tsmc.com/english/dedicatedFoundry/technology/7nm.htm
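For what it's worth, the 3.3x density figure follows directly from the quoted 70% area reduction:

```python
# The 3.3x figure is just the reciprocal of the quoted 70% area reduction.
area_reduction = 0.70
density_increase = 1 / (1 - area_reduction)
print(f"density increase: {density_increase:.1f}x")   # ~3.3x
```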

I was neither disagreeing nor agreeing with your claims; I just wanted evidence, for my own curiosity, to take your claim seriously.

Density at any given node is always changing. Intel is on what... its 3rd or 4th iteration of 14nm? And each time the density has changed. Hence why it's important to do apples-to-apples comparisons.

As for TSMC's 10nm and 7nm comparisons... I would not be surprised if TSMC's 10nm process actually leveraged a 14nm BEOL... TSMC, Samsung, GlobalFoundries etc. don't tend to do full-node (FEOL+BEOL) shrinks at the same time like Intel does.
The 7nm process likely leverages 10nm design rules...

But even then, TSMC's information on their 7nm process is likely optimized for SRAM at the moment, whereas their 10nm process is not in the links you provided, which ultimately skews things in 7nm's favour, as you are likely to need less patterning... And you can optimize for the SRAM cells' relatively simple structures compared to more complex logic.



--::{PC Gaming Master Race}::--

Pemalite said:

 

Bofferbrauer2 said:

@bolded: We don't even know if that's a real chip (and at 20 CU, I really doubt it, especially considering it would be totally bandwidth-starved even with DDR4 4000). But I digress.

DDR4 4000 can offer more bandwidth than HBM2; it is entirely about how wide you wish to take things... But before that you reach a point where it's more economical to choose another technology anyway.

However... considering that current Ryzen APUs with ~38GB/s of bandwidth are certainly bandwidth-starved with 11 CUs... I doubt that is going to change with 20 CU APUs that have ~68GB/s of bandwidth.

But if you were to run that DDR4 4000 DRAM on a 512-bit bus, suddenly we are talking 256GB/s of bandwidth, which is more than sufficient for even a 40 CU count.

Knowing you, I'm sure you are aware that connecting the RAM with a 512-bit bus would need many more layers on the motherboard, making it hugely expensive, so not exactly an economical solution. It's, after all, also the reason why we're still stuck with just dual channel; quad channel would help iGPUs/APUs a lot but would make the boards much more expensive.

However, what I could see as possible would be reintroducing sideport memory, in this case as a 2-4GB HBM stack functioning as an LLC. But for that I would have expected special boards for APUs, like a 540G and 560GX, similar to the 760G/790GX during the HD 4000 series. Just with a very fast sideport this time, please.



Bofferbrauer2 said:

Knowing you, I'm sure you are aware that connecting the RAM with a 512-bit bus would need many more layers on the motherboard, making it hugely expensive, so not exactly an economical solution. It's, after all, also the reason why we're still stuck with just dual channel; quad channel would help iGPUs/APUs a lot but would make the boards much more expensive.

Hence why I said it would be more economical to choose another technology anyway. :P

Bofferbrauer2 said:

However, what I could see as possible would be reintroducing sideport memory, in this case as a 2-4GB HBM stack functioning as an LLC. But for that I would have expected special boards for APUs, like a 540G and 560GX, similar to the 760G/790GX during the HD 4000 series. Just with a very fast sideport this time, please.

Ah, sideport. I had the Asrock M3A790GHX at one point with 128MB of DDR3 sideport memory. But because it was clocked at only 1200MHz, it only offered 4.8GB/s of bandwidth versus the system memory's 25.6GB/s... so the increase in performance was marginal at best (i.e. a couple of percentage points).

On older boards that ran DDR2 memory topping out at 800MHz (12.8GB/s of bandwidth), the difference was certainly more pronounced.

On that Asrock board I got more of a performance kick from simply overclocking the IGP to 950MHz than from turning on the sideport memory, but that is entirely down to the implementation.

That said, GPU performance has certainly outstripped the rate of system memory bandwidth increases... I mean, heck... the latest Ryzen notebooks are often still running dual-channel DDR4 @ 2400MHz, which is 38.4GB/s of bandwidth; not really a big step up over the Asrock's 25.6GB/s, is it? Yet the GPU is likely 50x more capable overall.

Sideport is great if implemented well and not on a narrow 16-bit bus. And you wouldn't even need to use expensive HBM memory to get some big gains.
GDDR5 is cheap and plentiful, and on a 32-bit bus could offer 50GB/s, which combined with system memory (I assume by a striping method) would offer some tangible gains.
GDDR6 would be a step up again, where 75GB/s should be easy enough to hit... Ideally you would want around 100-150GB/s for decent 1080p gaming.

If they threw it onto a 64-bit bus, that would double all of those rates, but I would imagine trace routing would become an issue, especially on ITX/mATX boards, forcing more PCB layers.
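The same bus-width times data-rate arithmetic applied to the sideport figures; the bus-width/data-rate split for the old DDR3 sideport is assumed here, since only the 4.8GB/s total is stated above.

```python
# Same bus-width x data-rate arithmetic as above.
def bw_gb_s(bus_bits: int, mt_per_s: int) -> float:
    return bus_bits / 8 * mt_per_s / 1000

print(bw_gb_s(16, 2400))    # 4.8 GB/s: the old DDR3 sideport (16-bit at 2400 MT/s assumed)
print(bw_gb_s(128, 1600))   # 25.6 GB/s: the dual-channel DDR3 system memory next to it
print(bw_gb_s(128, 2400))   # 38.4 GB/s: dual-channel DDR4-2400 in current Ryzen notebooks
# Hitting 50-75 GB/s on a 32-bit sideport would need per-pin rates of roughly
# 12.5-19 GT/s, which in practice points at GDDR6 or a wider link.
```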



--::{PC Gaming Master Race}::--