
(Update) Rumor: PlayStation 5 will be using Navi 9 (more powerful than Navi 10); in a new update, Jason Schreier says Sony is aiming for more than 10.7 teraflops

 

How accurate is this rumor compared to the reality?

Naah: 26 (35.62%)
It's 90% close: 14 (19.18%)
It's 80% close: 8 (10.96%)
It's 70% close: 5 (6.85%)
It's 50% close: 13 (17.81%)
It's 30% close: 7 (9.59%)

Total: 73
Screenshot said:
OdinHades said:

I don't think so. On PC, video cards with 8 GB can already be a limiting factor at full HD. Those new consoles will want to push 4K textures, and we're talking about shared memory here. So I think 24 GB is the minimum for a console to be somewhat relevant in the next 6 years or so. 32 GB would be better. Anything below that will be a serious bottleneck in the future. Not today and not tomorrow, but soon enough.

16 GB is more than enough for 4K, even on PC.

16 GB system memory, yes. But if we're talking about shared memory that would be something like 8 GB system memory + 8 GB VRAM. That wouldn't be enough for gaming in 4K for the next 6 years or so.
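As a rough illustration of why 4K texture sets eat into a shared pool so quickly, here is a back-of-the-envelope sketch (my own numbers, not from the thread); the texture size and compression format are assumptions:

def texture_mb(width, height, bytes_per_pixel, mip_chain=True):
    # A full mip chain adds roughly one third on top of the base level.
    base = width * height * bytes_per_pixel
    return base * (4 / 3 if mip_chain else 1) / (1024 ** 2)

# Uncompressed RGBA8 4096x4096 texture vs. BC7 block compression (~1 byte/pixel).
print(f"RGBA8 4K texture: {texture_mb(4096, 4096, 4):.1f} MB")  # ~85 MB
print(f"BC7 4K texture:   {texture_mb(4096, 4096, 1):.1f} MB")  # ~21 MB
# A few hundred such textures resident at once quickly climbs into multiple GB.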



Official member of VGC's Nintendo family, approved by the one and only RolStoppable. I feel honored.

Intrinsic said:
Bofferbrauer2 said:

I know that fully well.

That being said, AMD was forced to pay upfront in 2015 for a specific number of wafers from GlobalFoundries, whether they'd actually need them or not (and even pay extra if they produce chips at other foundries). Thus, AMD sold their chips to Sony and Microsoft at the lowest possible price to ensure they wouldn't need to pay for unused wafers. However, since GF doesn't have a 7nm process, the deal got eased a lot, and the Ryzen/Epyc chips already fulfill it, so there's no need to go as low on price as they did this gen.

That's not how that happened...

AMD had contracted a certain number of chips with GF. This is normal with these chip contracts for everyone. The issue was that TSMC moved onto a newer process earlier than GF, and AMD and its customers needed to be on that new process. So even though AMD and their customers had shifted onto a new process, AMD was now in breach of contract with GF since they now needed far fewer wafers from them.

They were never selling their chips to Sony and MS at some bargain-bin price. I really don't know where you heard that.

AMD was forced to buy a specific number of wafers from GF per year; the reasons for that are irrelevant. They had to pay even if they didn't need the wafers or couldn't sell any more chips from them.

And yeah, they got bargain prices. AMD got around $100 for the OG PS4 chip (the article writer must think AMD produces them out of thin air at no cost), and that's also roughly what the chip cost to produce. So no, AMD did not gain much from it, but it was good enough to stay afloat.

Besides, NVidia said themselves they weren't interested because the margins were way too small. Like the article details, NVidia made about $10 per PS4, which isn't very much and barely worth the work. The X1 was readily available and in surplus, so they had nothing to lose from the Nintendo deal as they needed to do zero work for it. But they stated themselves they won't do custom chips anymore, which also limits any upgrade to the Switch to an X2 unless they change their mind on it.



OdinHades said:
Screenshot said:

16 GB is more than enough for 4K, even on PC.

16 GB system memory, yes. But if we're talking about shared memory that would be something like 8 GB system memory + 8 GB VRAM. That wouldn't be enough for gaming in 4K for the next 6 years or so.

Errrrr..... no. Just no. What in hell are you doing with 8GB of RAM for the CPU? CPU code just doesn't require that much RAM. The reason the GPU ends up taking as much RAM as it does is because of textures and stuff like that. Basically images.

Bofferbrauer2 said:

AMD was forced to buy a specific number of wafers from GF per year; the reasons for that are irrelevant. They had to pay even if they didn't need the wafers or couldn't sell any more chips from them.

And yeah, they got bargain prices. AMD got around $100 for the OG PS4 chip (the article writer must think AMD produces them out of thin air at no cost), and that's also roughly what the chip cost to produce. So no, AMD did not gain much from it, but it was good enough to stay afloat.

Besides, NVidia said themselves they weren't interested because the margins were way too small. Like the article details, NVidia made about $10 per PS4, which isn't very much and barely worth the work. The X1 was readily available and in surplus, so they had nothing to lose from the Nintendo deal as they needed to do zero work for it. But they stated themselves they won't do custom chips anymore, which also limits any upgrade to the Switch to an X2 unless they change their mind on it.

What are you saying bruh? Like do you understand how chip pricing works at all??? 

AMD getting $100 for each chip they give to Sony isn't them selling at bargain prices at all. That's them selling at "bulk/OEM" pricing, which is totally normal when any company puts in orders in the region of millions.

Take the 3600G, for instance. Say AMD sells that at retail for $220; that pans out like this... the actual cost of making each of those chips (what AMD pays to the foundry) is like $30-$40. Then AMD adds their markup to account for things like yields, profits, packaging, shipping, etc. At this point the chip comes to around $170. Then they put their MSRP sticker price of $220 on it so the retailers make their own cut too.

If that chip were going into a console, first off the console manufacturer will pay a sizeable sum to "customize" their chip. This reduces how much AMD spends on R&D for that chip, and nothing stops them from taking elements of that chip's design into their general product line. Then AMD isn't worrying about costs like packaging, shipping and marketing, and there isn't a retailer cut either. AMD also isn't worrying about yields, as that's something Sony/MS absorbs.

So selling each chip for $100 will still be them making a good amount of money.

I don't even get how any of this is relevant... are you saying that AMD is somehow not going to be selling chips at those prices anymore because they are doing well now? Well, if that is what you are saying, then you are just wrong. There is a reason why even Apple only puts AMD GPUs in their computers. And Nvidia just doesn't make sense for the kind of hardware that works for consoles. Not only are they resistant to dropping prices, they also just don't make APUs (that aren't ARM-based). So Sony/MS using them would mean they "must" build a discrete CPU/GPU system.
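To make the argument concrete, here is a minimal arithmetic sketch using the hypothetical figures from the post above; none of them are confirmed AMD pricing:

foundry_cost = 35        # assumed $30-40 paid to the foundry per chip
retail_price = 170       # assumed price to retailers after AMD's markup
retail_msrp = 220        # assumed sticker price; the retailer keeps the difference
console_price = 100      # assumed per-chip price paid by the console maker

print(f"Retail channel:  AMD keeps ~${retail_price - foundry_cost} per chip")   # ~$135
print(f"Console channel: AMD keeps ~${console_price - foundry_cost} per chip")  # ~$65
# The console margin is thinner, but there is no packaging, marketing or retailer
# cut, the customer co-funds the R&D, and yield risk is absorbed by Sony/MS,
# so at tens of millions of units it is still solid income.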



Intrinsic said:
OdinHades said:

16 GB system memory, yes. But if we're talking about shared memory that would be something like 8 GB system memory + 8 GB VRAM. That wouldn't be enough for gaming in 4K for the next 6 years or so.

Errrrr..... no. Just no. What in hell are you doing with 8GB of RAM for the CPU? CPU code just doesn't require that much RAM. The reason the GPU ends up taking as much RAM as it does is because of textures and stuff like that. Basically images.

Bofferbrauer2 said:

AMD was forced to buy a specific number of wafers from GF per year; the reasons for that are irrelevant. They had to pay even if they didn't need the wafers or couldn't sell any more chips from them.

And yeah, they got bargain prices. AMD got around $100 for the OG PS4 chip (the article writer must think AMD produces them out of thin air at no cost), and that's also roughly what the chip cost to produce. So no, AMD did not gain much from it, but it was good enough to stay afloat.

Besides, NVidia said themselves they weren't interested because the margins were way too small. Like the article details, NVidia made about $10 per PS4, which isn't very much and barely worth the work. The X1 was readily available and in surplus, so they had nothing to lose from the Nintendo deal as they needed to do zero work for it. But they stated themselves they won't do custom chips anymore, which also limits any upgrade to the Switch to an X2 unless they change their mind on it.

What are you saying bruh? Like do you understand how chip pricing works at all??? 

AMD getting $100 for each chip they give to Sony isn't them selling at bargain prices at all. That's them selling at "bulk/OEM" pricing, which is totally normal when any company puts in orders in the region of millions.

Take the 3600G, for instance. Say AMD sells that at retail for $220; that pans out like this... the actual cost of making each of those chips (what AMD pays to the foundry) is like $30-$40. Then AMD adds their markup to account for things like yields, profits, packaging, shipping, etc. At this point the chip comes to around $170. Then they put their MSRP sticker price of $220 on it so the retailers make their own cut too.

If that chip were going into a console, first off the console manufacturer will pay a sizeable sum to "customize" their chip. This reduces how much AMD spends on R&D for that chip, and nothing stops them from taking elements of that chip's design into their general product line. Then AMD isn't worrying about costs like packaging, shipping and marketing, and there isn't a retailer cut either. AMD also isn't worrying about yields, as that's something Sony/MS absorbs.

So selling each chip for $100 will still be them making a good amount of money.

I don't even get how any of this is relevant... are you saying that AMD is somehow not going to be selling chips at those prices anymore because they are doing well now? Well, if that is what you are saying, then you are just wrong. There is a reason why even Apple only puts AMD GPUs in their computers. And Nvidia just doesn't make sense for the kind of hardware that works for consoles. Not only are they resistant to dropping prices, they also just don't make APUs (that aren't ARM-based). So Sony/MS using them would mean they "must" build a discrete CPU/GPU system.

Not to forget, on PC the 8GB "for the CPU" has a lot to do with the OS.

So if a console comes with, let's say, 16GB for CPU+GPU (maybe 4GB for the CPU and 12GB for the GPU) plus 4GB for the OS, it will have all that it needs.
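A minimal sketch of the split being described, purely to show the arithmetic; the 4/12/4 GB figures are the poster's guess, not a spec:

game_pool = 16                      # GB shared between CPU and GPU for games
cpu_share = 4                       # GB assumed for game code and CPU-side data
gpu_share = game_pool - cpu_share   # 12 GB left for textures, buffers, render targets
os_reserved = 4                     # GB assumed set aside for the OS
print(f"CPU {cpu_share} GB + GPU {gpu_share} GB + OS {os_reserved} GB "
      f"= {game_pool + os_reserved} GB installed")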



duduspace11 "Well, since we are estimating costs, Pokemon Red/Blue did cost Nintendo about $50m to make back in 1996"

http://gamrconnect.vgchartz.com/post.php?id=8808363

Mr Puggsly: "Hehe, I said good profit. You said big profit. Frankly, not losing money is what I meant by good. Don't get hung up on semantics"

http://gamrconnect.vgchartz.com/post.php?id=9008994

Azzanation: "PS5 wouldn't sold out at launch without scalpers."

OdinHades said:
Screenshot said:

16 GB is more than enough for 4K, even on PC.

16 GB system memory, yes. But if we're talking about shared memory that would be something like 8 GB system memory + 8 GB VRAM. That wouldn't be enough for gaming in 4K for the next 6 years or so.

My point was never what was required for 4K gaming; it was about mainstream-priced hardware being able to pull it off. Proper 4K gaming on mainstream devices, with decent frame rates and effects, is still a long way off. 20GB of GDDR6 will be expensive as hell in and of itself, especially given that the final build will most likely need to be ready within 9-12 months, as it's unlikely that Sony will wait all that long before releasing a new PS.

And even if, by some miracle, one managed to pack a mainstream $500 device with a heap of GDDR6 memory, shared or otherwise, the true bottlenecks would be the rest of the build, where they'd inevitably need to save a ton with cheaper solutions, especially if rumors of BC are to be believed. An RTX 2080 with 8 GB of memory currently sits at around $750-850 alone; it can do 4K at good frame rates in most games (65-80 and above is good in my opinion). To get performance at nearly that level in a mainstream box for $500 within a year or so is quite simply not happening.

Consoles aren't future-proof, that's the whole point in all of this. Any console released today will be hopelessly outdated long before it's replaced. Seeing the state of streamed 4K content on TV right now, one can imagine the time it will take before games with rendered assets reach an acceptable point on any mainstream device. Proper, stable 4K gaming has only been possible on high-end PCs for a couple of years as it is. The 1080 Ti struggled to creep past 60 fps in most titles.

Again, my point was never what will be required in the future, rather that what will be required in the future will not be met by upcoming consoles, not even close, if they want any sort of approachable price point. As for limits on video memory, it's hard to find ways to strain a modern high-end GPU with 11GB of memory or more; even my aging 980 Ti with only 6GB is still doing okay, albeit at 1440p and not 4K. And, one last time: if high-end GPUs right now are getting on nicely with their allotted memory, there's no way that any affordable machine in 2020 or so will release with similar or better specs.



Mummelmann said:
OdinHades said:

16 GB system memory, yes. But if we're talking about shared memory that would be something like 8 GB system memory + 8 GB VRAM. That wouldn't be enough for gaming in 4K for the next 6 years or so.

My point was never what was required for 4K gaming; it was about mainstream-priced hardware being able to pull it off. Proper 4K gaming on mainstream devices, with decent frame rates and effects, is still a long way off. 20GB of GDDR6 will be expensive as hell in and of itself, especially given that the final build will most likely need to be ready within 9-12 months, as it's unlikely that Sony will wait all that long before releasing a new PS.

And even if, by some miracle, one managed to pack a mainstream $500 device with a heap of GDDR6 memory, shared or otherwise, the true bottlenecks would be the rest of the build, where they'd inevitably need to save a ton with cheaper solutions, especially if rumors of BC are to be believed. An RTX 2080 with 8 GB of memory currently sits at around $750-850 alone; it can do 4K at good frame rates in most games (65-80 and above is good in my opinion). To get performance at nearly that level in a mainstream box for $500 within a year or so is quite simply not happening.

Consoles aren't future-proof, that's the whole point in all of this. Any console released today will be hopelessly outdated long before it's replaced. Seeing the state of streamed 4K content on TV right now, one can imagine the time it will take before games with rendered assets reach an acceptable point on any mainstream device. Proper, stable 4K gaming has only been possible on high-end PCs for a couple of years as it is. The 1080 Ti struggled to creep past 60 fps in most titles.

Again, my point was never what will be required in the future, rather that what will be required in the future will not be met by upcoming consoles, not even close, if they want any sort of approachable price point. As for limits on video memory, it's hard to find ways to strain a modern high-end GPU with 11GB of memory or more; even my aging 980 Ti with only 6GB is still doing okay, albeit at 1440p and not 4K. And, one last time: if high-end GPUs right now are getting on nicely with their allotted memory, there's no way that any affordable machine in 2020 or so will release with similar or better specs.

Except 60 fps is hardly something console gaming requires or cares about, for most genres.




Intrinsic said:

Bofferbrauer2 said:

AMD was forced to buy a specific number of wafers from GF per year; the reasons for that are irrelevant. They had to pay even if they didn't need the wafers or couldn't sell any more chips from them.

And yeah, they got bargain prices. AMD got around $100 for the OG PS4 chip (the article writer must think AMD produces them out of thin air at no cost), and that's also roughly what the chip cost to produce. So no, AMD did not gain much from it, but it was good enough to stay afloat.

Besides, NVidia said themselves they weren't interested because the margins were way too small. Like the article details, NVidia made about $10 per PS4, which isn't very much and barely worth the work. The X1 was readily available and in surplus, so they had nothing to lose from the Nintendo deal as they needed to do zero work for it. But they stated themselves they won't do custom chips anymore, which also limits any upgrade to the Switch to an X2 unless they change their mind on it.

What are you saying bruh? Like do you understand how chip pricing works at all??? 

AMD getting $100 for each chip they give to Sony isn't them selling at bargain prices at all. That's them selling at "bulk/OEM" pricing, which is totally normal when any company puts in orders in the region of millions.

Take the 3600G, for instance. Say AMD sells that at retail for $220; that pans out like this... the actual cost of making each of those chips (what AMD pays to the foundry) is like $30-$40. Then AMD adds their markup to account for things like yields, profits, packaging, shipping, etc. At this point the chip comes to around $170. Then they put their MSRP sticker price of $220 on it so the retailers make their own cut too.

If that chip were going into a console, first off the console manufacturer will pay a sizeable sum to "customize" their chip. This reduces how much AMD spends on R&D for that chip, and nothing stops them from taking elements of that chip's design into their general product line. Then AMD isn't worrying about costs like packaging, shipping and marketing, and there isn't a retailer cut either. AMD also isn't worrying about yields, as that's something Sony/MS absorbs.

So selling each chip for $100 will still be them making a good amount of money.

I don't even get how any of this is relevant... are you saying that AMD is somehow not going to be selling chips at those prices anymore because they are doing well now? Well, if that is what you are saying, then you are just wrong. There is a reason why even Apple only puts AMD GPUs in their computers. And Nvidia just doesn't make sense for the kind of hardware that works for consoles. Not only are they resistant to dropping prices, they also just don't make APUs (that aren't ARM-based). So Sony/MS using them would mean they "must" build a discrete CPU/GPU system.

@bolded: We don't even know if that's a real chip (and at 20 CUs, I really doubt it, especially considering it would be totally bandwidth-starved even with DDR4-4000). But I digress.

The actual cost depends on how much AMD has to pay per wafer, divided by how many chips on that wafer are salvageable for that purpose. So let's say a wafer costs $1000 (I'm just making up a price here) and 20 such chips would fit on it, but only 10 would be fully functional; the others would have to be sold as 3400Gs or binned entirely due to defects. In this case AMD would certainly charge at least $100 for the 3600G just to cover the costs, and use the 3400G for the profit.

However, on a console that's not possible, hence why the PS4 has 2 deactivated CUs to improve the yield rate.
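The yield arithmetic in the made-up example above works out like this (a sketch only, with the poster's invented numbers):

wafer_cost = 1000      # assumed $ per wafer (made-up figure from the post)
dies_per_wafer = 20    # assumed candidate dies per wafer
fully_working = 10     # assumed dies good enough for the full-spec part

print(f"Cost per fully working die: ${wafer_cost / fully_working:.0f}")  # $100
# Selling the full part at $100 only recovers the wafer cost; the partially
# defective dies, salvaged as a cut-down SKU, would carry the actual profit.
# A console APU has no such cut-down desktop SKU, which is why the PS4 ships
# with 2 of its CUs disabled: salvaging imperfect dies raises the usable yield.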

@italic: These costs are not always covered; I recall that the cost of some chips was actually worked into the yearly contracts instead of AMD receiving a sum up front. And considering AMD didn't seem to have gotten any lump sum (if they did, it doesn't show up in the financial reports at least), I do think they have to cover those expenses with the chip sales.

@underlined: Well, no, I'm not saying that they won't do it anymore, but rather that they are no longer obliged to do so to have any sizable income at all.

At the time when the PS4/XBO came out, AMD CPUs were doing very badly and were the laughingstock of the industry. They had just released Hawaii but had big problems keeping up with NVidia's updated Kepler (GeForce 700 series), so earnings were breaking away left and right and AMD could only really compete on price. As a result their profit margin plummeted, and it's still awfully low for the sector (it's under 50% while Intel and NVidia are close to or above 70%; at the time it even dropped below 30%, which is bad in any sector). All this meant AMD was desperate for some stable income, which left Sony and Microsoft holding all the cards during the price negotiations. But that won't be the case this time, and AMD will squeeze some profit out of the chips.

Also, as a side note, you put the costs at $30-40. Tell me how that works if about half of the sales are from console chips (which was true in 2016), yet the profit margin is only 24%? Do you think AMD sold all their other chips below production price? And how could that be, considering most chips cost much more than the one in the PS4? Or do you think their R&D expenses covered half of the costs before wages and taxes? I'm just saying your price is off; it may well be below $100 by now, but I don't think it's anywhere close to the numbers you're putting there, more like $60-80. Don't forget that 350mm² ain't exactly a small chip (a 10-core Skylake-X is only 322mm², for instance) and that such a big chip normally sells at quite a bit higher prices for the reasons detailed above.
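For context on the $30-40 vs $60-80 disagreement, here is a rough dies-per-wafer estimate for a ~350mm² die; the wafer price and defect density are pure assumptions for illustration:

import math

die_area = 350.0          # mm^2, as stated above
wafer_diameter = 300.0    # mm
wafer_price = 6000.0      # assumed $ per wafer, illustrative only
defect_density = 0.1      # assumed defects per cm^2

wafer_area = math.pi * (wafer_diameter / 2) ** 2
# Standard approximation that subtracts dies lost at the wafer edge.
gross_dies = wafer_area / die_area - math.pi * wafer_diameter / math.sqrt(2 * die_area)
yield_rate = math.exp(-defect_density * die_area / 100)  # simple Poisson yield model
cost_per_good_die = wafer_price / (gross_dies * yield_rate)
print(f"~{gross_dies:.0f} gross dies, ~{gross_dies * yield_rate:.0f} good dies, "
      f"~${cost_per_good_die:.0f} silicon cost per good die")
# With these made-up inputs the raw silicon lands around $50, before packaging and test.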

Your Apple example is a bit special; they use AMD due to OpenGL and OpenCL capabilities, where NVidia is weaker than AMD and generally has been. Them being cheaper than NVidia is only icing on the cake. But that's going to change soon anyway, considering that Apple wants to design all their chips in-house and is migrating everything to ARM.



DonFerrari said:
Mummelmann said:

My point was never what was required for 4K gaming; it was about mainstream-priced hardware being able to pull it off. Proper 4K gaming on mainstream devices, with decent frame rates and effects, is still a long way off. 20GB of GDDR6 will be expensive as hell in and of itself, especially given that the final build will most likely need to be ready within 9-12 months, as it's unlikely that Sony will wait all that long before releasing a new PS.

And even if, by some miracle, one managed to pack a mainstream $500 device with a heap of GDDR6 memory, shared or otherwise, the true bottlenecks would be the rest of the build, where they'd inevitably need to save a ton with cheaper solutions, especially if rumors of BC are to be believed. An RTX 2080 with 8 GB of memory currently sits at around $750-850 alone; it can do 4K at good frame rates in most games (65-80 and above is good in my opinion). To get performance at nearly that level in a mainstream box for $500 within a year or so is quite simply not happening.

Consoles aren't future-proof, that's the whole point in all of this. Any console released today will be hopelessly outdated long before it's replaced. Seeing the state of streamed 4K content on TV right now, one can imagine the time it will take before games with rendered assets reach an acceptable point on any mainstream device. Proper, stable 4K gaming has only been possible on high-end PCs for a couple of years as it is. The 1080 Ti struggled to creep past 60 fps in most titles.

Again, my point was never what will be required in the future, rather that what will be required in the future will not be met by upcoming consoles, not even close, if they want any sort of approachable price point. As for limits on video memory, it's hard to find ways to strain a modern high-end GPU with 11GB of memory or more; even my aging 980 Ti with only 6GB is still doing okay, albeit at 1440p and not 4K. And, one last time: if high-end GPUs right now are getting on nicely with their allotted memory, there's no way that any affordable machine in 2020 or so will release with similar or better specs.

Except 60 fps is hardly something console gaming requires or cares about, for most genres.

Perhaps, but developers do, and aiming for higher fps means better margins. As it stands, a lot of games with a 30 fps target dip into slideshow territory; games like AC: Odyssey are a good example, I don't think I've ever seen worse frame rates in a modern AAA title. With more effects and higher resolutions, severe drops in frame rate become even more jarring; a more advanced and crisp render stands out all the more when it slows down.

The Xbox One X doesn't provide proper 4K; it uses checkerboard rendering and rather few effects, and frame rates on many titles are very low. Destiny 2 runs at a somewhat unstable 30 fps.

Half-assed resolutions with poor performance are not what developers want to work with; it's better to find an entry point where they can get both visual fidelity and performance and not be forced to choose. And if the hardware ends up too costly, developers and publishers have a smaller market to sell their software to. I'd much rather the standard be a stable 1440p with full effects and shading than stripped-down or faked 4K, and then perhaps release a more expensive version of the console that does more or less what the Pro and One X do to the line-up right now.

The fact that 30 fps is still more or less the industry standard in console gaming in 2019 is downright shameful, especially since games often dip well below it.
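The frame-time arithmetic behind the 30 vs 60 fps argument, as a quick sketch:

for fps in (30, 60):
    print(f"{fps} fps target -> {1000 / fps:.1f} ms per frame")
# 30 fps leaves 33.3 ms per frame, 60 fps only 16.7 ms. On a 60 Hz display with
# vsync, a frame that misses its 33.3 ms slot waits for the next refresh and the
# game momentarily runs at 20 fps, which is why dips below a 30 fps target read
# as a slideshow.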



 

Pemalite said: 

AMD has spent a ton of engineering resources on Vega.
It implemented all of Polaris's improvements, like instruction prefetching and a larger instruction buffer, which increased the IPC of each pipe as there are fewer wave stalls.

But one of Graphics Core Next's largest bottlenecks is... geometry. Which is ironic, considering AMD was pushing tessellation even back in 2001 when the PlayStation 2 was flaunting its stuff.
To that end... AMD introduced the Primitive Discard Accelerator, which throws away triangles that are too small and pointless to render. We also saw the introduction of an index cache, which stores instanced geometry next to the caches.

Graphics Core Next also tends to be ROP-limited, which is why AMD reworked them in Polaris, which saw a boost to Delta Colour Compression, larger L2 caches and so on.

And then with Vega AMD kicked it up again by introducing Draw Stream Binning Rasterization... which is where Vega gains the ability to bin polygons on a tiled basis. That, in conjunction with the Primitive Discard Accelerator, means a significant reduction in the amount of geometry work that needs to be done, boosting geometry throughput substantially.

On the ROP side of the equation... AMD made the ROPs a client of the L2 cache rather than the memory controller, which means that as L2 caches increase in size, the ROPs can better leverage them to bolster overall performance... It also enables render-to-texture instead of rendering to a frame buffer, which is a boon for deferred engines.

And then we have the primitive shader too.

In short... just during the Polaris/Vega introductions, a ton of engineering has been done on the geometry side of the equation; it's always been a sore point with AMD's hardware, even going back to TeraScale.

Specifying some of AMD's improvements is irrelevant as long as you don't also specify what nVidia has achieved. A lot of the engineering work goes into improving performance and power efficiency by switching from third-party cell libraries to custom IC designs for a particular process node. That's something nVidia has obviously spent a lot more resources on than AMD, and it doesn't show up as a new feature in marketing material.

I spend much of my working time analyzing GPU frame traces, identifying bottlenecks and how to work around them. Every GPU architecture has bottlenecks, that's nothing new; it's just a matter of what kind of workload you throw at them. I have full access to all the performance counters of the GCN architecture, both in numerical and visual form. For instance, I can see the number of wavefronts executing on each individual SIMD of each CU at any given time during the trace, the issue rate of VALU, SALU, VMEM, EXP and branch instructions, wait cycles due to accessing the K$ cache, exporting pixels or fetching instructions, stalls due to texture rate or texture memory accesses, the number of read/write accesses to the color or depth caches, the number of drawn quads, the number of context rolls, the number of processed primitives and the percentage of culled primitives, stalls in the rasterizer due to the SPI (Shader Processor Input) or the PA (Primitive Assembly), the number of indices processed and reused by the VGT (Vertex Geometry Tessellator), the number of commands parsed/processed by the CPG/CPC, stalls in the CPG/CPC, the number of L2 reads/writes, and the L2 hit/miss rate. Those are just a few of the available performance counters I have access to. In addition to that, I have full documentation for the GCN architecture and I've developed several released games targeting it. Based on that I have a pretty good picture of the strengths/weaknesses of the architecture, and I'm interested in hearing if you perhaps have some insight that I lack.

The geometry rate isn't really a bottleneck for GCN. Even if it were, geometry processing parallelizes quite well and could be addressed by increasing the number of VGTs. It won't be a problem in the future either, for two reasons: 1) the pixel rate will always be the limiting factor, and 2) primitive/mesh shaders give the graphics programmer the option to use the CUs' compute power to process geometry.
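To illustrate that second point, here is a conceptual sketch (in Python rather than real shader code) of the kind of geometry work a primitive/mesh-shader or compute pre-pass can take on, namely discarding triangles whose screen-space footprint is too small to cover a pixel. It is an illustration of the idea only, not AMD's actual implementation:

def screen_area(p0, p1, p2):
    # Area of a triangle from 2D screen-space vertex positions (in pixels).
    return abs((p1[0] - p0[0]) * (p2[1] - p0[1]) -
               (p2[0] - p0[0]) * (p1[1] - p0[1])) / 2.0

def cull_small_triangles(triangles, min_area_px=0.5):
    # Keep only triangles likely to produce any pixel coverage at all.
    return [tri for tri in triangles if screen_area(*tri) >= min_area_px]

tris = [((0, 0), (10, 0), (0, 10)),        # 50 px^2, kept
        ((5, 5), (5.2, 5.1), (5.1, 5.3))]  # ~0.03 px^2, discarded
print(len(cull_small_triangles(tris)), "of", len(tris), "triangles survive")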

I asked you to specify the inherent flaws and bottlenecks in the GCN architecture that you claim prevent the PS5 from using more than 64 CUs, not AMD's marketing material about their GPUs. So again, can you please specify the "multitude of bottlenecks"?

Pemalite said: 

Yes it is. The entire reason why TeraScale 3 ever existed was because load balancing for VLIW5 was starting to get meddlesome, as there were often parts of the array being underutilized...
The solution? Reduce it down to VLIW4.

It is also why AMD hasn't pushed out past 64 CUs. They potentially could... but that would require a significant overhaul of various parts of Graphics Core Next in order to balance the load and get more efficient utilization.

It's not always about going big or going home... Graphics Core Next already tends to be substantially larger, slower and hotter than the nVidia equivalent anyway.

That's irrelevant to your claim about running out of parallelizable work due to screen-space issues when scaling past 64 CUs.

The PS5 has probably been in development for over 5 years already. It's Sony's single most important upcoming product by far. They have spent vast amounts of money and HR on it, and AMD has dedicated a large share of its RTG engineers to it. Is it reasonable to believe the PS5 will essentially be a PS4 Pro with 64 CUs and 64 ROPs shrunk down to 7nm? If so, it'll be the most expensive and inefficient die shrink ever. The Pro is designed to run 4K with checkerboarding. Obviously a true 4K console needs a rasterizer with at least twice the rate of the Pro's 128 pixels/cycle, so it goes without saying that AMD needs to scale up parts other than the number of CUs anyway, and I don't believe they will make a bare-minimum upscale on those parts, since the lifecycle of a console is about 5-6 years. IMO, they will most likely scale the number of ROPs above 64 as well, but that's less certain. That said, I think there is merit to your claim that there won't be more than 64 CUs in the PS5. I might even agree it's the most plausible configuration. However, I don't agree with your claims about inherent flaws in the GCN architecture preventing the PS5 from having more than 64 CUs. IMO, it's more a question of which price point the PS5 will have and how big of an initial financial hit Sony is prepared to take than technical hurdles.
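Some rough pixel-rate arithmetic behind the rasterizer/ROP scaling point; the ROP count and clock here are assumptions for illustration, not PS5 specs:

width, height, fps = 3840, 2160, 60
displayed = width * height * fps          # ~0.50 Gpixels/s actually shown on screen
rops, clock_ghz = 64, 1.8                 # assumed ROP count and GPU clock
peak_fill = rops * clock_ghz              # ~115 Gpixels/s theoretical peak fill
print(f"Displayed: {displayed / 1e9:.2f} Gpix/s, theoretical peak: {peak_fill:.0f} Gpix/s")
# The theoretical peak dwarfs the displayed pixel count, but a real frame writes
# each pixel many times (overdraw, shadow maps, G-buffer and post passes,
# blending), so sustained pixel throughput, not raw peak, is what a native-4K
# target actually stresses.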

Pemalite said: 

https://www.anandtech.com/show/12677/tsmc-kicks-off-volume-production-of-7nm-chips

Apparently there isn't a 3x density improvement? Got a link to substantiate your claims?

I'm not sure what you mean. It clearly says the area reduction is 70% and there's a 60% reduction in power consumption. That's pretty much in line with what I wrote. An area reduction of 70% would yield a density increase of 3.3x. Probably just a rounding issue.

Here are the links to TSMC's own numbers.

https://www.tsmc.com/english/dedicatedFoundry/technology/10nm.htm

https://www.tsmc.com/english/dedicatedFoundry/technology/7nm.htm
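The 3.3x figure follows directly from the quoted 70% area reduction; a one-line check:

area_reduction = 0.70
print(f"{1 / (1 - area_reduction):.2f}x density")  # ~3.33x, i.e. "roughly 3x" after rounding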



With Crytek showing that real-time ray tracing apparently works just fine on Vega 56 GPUs when using GPU-agnostic APIs, I wouldn't be surprised if Navi could simply brute-force hybrid ray tracing on next-gen consoles.
Kind of makes NVIDIA look silly with all their specialised cores (which aren't doing them any favors for power consumption).