Bofferbrauer2 said:

The problem is, if you calculate the area of the 40 CUs in the One X and compare it with the 64 CUs in Vega 64, there's almost no difference. The Jaguar CPU part in the One X is tiny: even at 28nm it only took 25mm2 (3.1mm2 per core times 8) plus the space of the 4MiB cache, and at 14nm it would be even smaller, so the difference would be very small. If we take 300mm2 for the GPU part alone, then 64 CUs would be 480mm2, almost exactly the size of Vega 64, which comes in at 486mm2.

So no, the different memory controller will not magically shrink the chip by a large amount

If we're realistic, this leak is a hoax, pure and simple. 64 CUs is the technical limit of GCN, and there's no known way around it, so on your reading 64 CUs at 1500MHz would be the only option. Even on 7nm, a Navi like that would consume around 250W without even counting the CPU, RAM or any other part of the console.

In other words, the console would run way too hot and consume too much power, never mind the fact that the chip would be way too large.

Your math is off......
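Here's the calculation you're running, spelled out — a quick sketch using only the numbers from your own post. Note it silently assumes the same process node and linear area per CU:

```python
# Rough sketch of the quoted same-node scaling argument.
# All figures come from the quote above; nothing here is measured.
jaguar_core_mm2 = 3.1           # per core at 28nm, per the quote
cpu_area = jaguar_core_mm2 * 8  # ~25mm2, before the 4MiB cache
gpu_area_40cu = 300.0           # assumed GPU-only area of the One X APU
mm2_per_cu = gpu_area_40cu / 40
print(mm2_per_cu * 64)          # -> 480.0, vs ~486mm2 for Vega 64
```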

Ok.... at 28nm the PS4 had a die size of 348mm2. This shrank to 321mm2 in the 16nm PS4 Pro, yet they were able to double the number of CUs in the Pro compared to the base PS4.
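A quick sketch of what that shrink implies for area per CU — crude, since the whole-die figures include the CPU and uncore, and I'm plugging in the well-known 18/36 active CU counts:

```python
# Crude per-CU density comparison across the 28nm -> 16nm shrink.
# Die sizes are from the post; 18 and 36 are the active CU counts
# of the base PS4 and PS4 Pro. Whole-die math, so only a rough gauge.
ps4     = {"die_mm2": 348, "cus": 18}  # 28nm
ps4_pro = {"die_mm2": 321, "cus": 36}  # 16nm

mm2_per_cu_28 = ps4["die_mm2"] / ps4["cus"]          # ~19.3
mm2_per_cu_16 = ps4_pro["die_mm2"] / ps4_pro["cus"]  # ~8.9
print(mm2_per_cu_28 / mm2_per_cu_16)  # ~2.2x effective density gain
```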

But let's keep it simple..... because reading all you're saying, it's like you're claiming there will be no difference between a 16nm/14nm chip and a 7nm one. So how about you just spell it out so I'm not confused (though I feel I already am).

What do you believe they will be able to fit into a 7nm APU that is anywhere from 350mm2 to 380mm2?
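Purely for scale — and the density factor here is my assumption, not a claim — if 16nm-to-7nm buys roughly 2x effective density for a mixed APU (logic scales better than SRAM and analog, so I'm discounting the headline logic-only numbers), the PS4 Pro's figures extrapolate like this:

```python
# Back-of-the-envelope CU budget at 7nm, extrapolated from the
# PS4 Pro's whole-die numbers. The 2.0x density factor is an
# assumption for illustration, NOT a measured 16nm -> 7nm figure.
pro_mm2_per_cu_16nm = 321 / 36  # ~8.9mm2/CU, whole-die basis
density_gain_7nm = 2.0          # assumed effective APU scaling

mm2_per_cu_7nm = pro_mm2_per_cu_16nm / density_gain_7nm  # ~4.5
for die in (350, 380):
    print(die, "->", int(die / mm2_per_cu_7nm), "CUs (crude)")
# -> roughly 78 and 85 CUs by this crude whole-die measure
```

Crude as it is, it at least shows the question isn't settled by 16nm-era math.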

As for technical limits of GCN.... that's just wrong. The problem isn't the CU count, the problem is the shader engine count. GCN5 has only 4 of them, and the maximum connected to each one is 16 CUs (Vega 64). The last time the number of shader engines was increased was, I think, GCN3 or GCN4 (can't recall), but it has been increased before. And AMD even addressed this recently in an interview, where Raja mentioned that with Vega they considered increasing the number of SEs but didn't have enough time. So it's not like they don't know what to do about it, or that it's some impossible hurdle to overcome.
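To make that ceiling concrete — the 6-SE configuration below is hypothetical, just to show where the limit actually lives:

```python
# The GCN CU ceiling falls out of shader engines x CUs per engine,
# not from the CU count itself. Vega's 4 x 16 gives the famous 64.
def max_cus(shader_engines, cus_per_se=16):
    return shader_engines * cus_per_se

print(max_cus(4))  # 64 -> Vega 64, the current GCN5 layout
print(max_cus(6))  # 96 -> hypothetical: lift the SE count instead
```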

And this power draw thing...... you do know that's only a problem when you're trying to clock (already inefficient) chips as high as possible, right? The best solution is to just have more CUs and not have to clock them as high, though that could add complexity and affect yields. Nothing stops them from going with an 80 CU APU with the GPU clocked at 1.1GHz - 1.2GHz, while the desktop iterations of the same chip could be clocked at 1.5GHz - 1.8GHz.
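A first-order sketch of why wide-and-slow wins: dynamic power goes roughly with f x V^2, and since voltage tracks frequency in the upper part of the curve, I'm assuming V scales linearly with f here (an approximation, not a measurement):

```python
# First-order dynamic power comparison: wide-and-slow vs narrow-and-fast.
# P ~ N_CU * f * V^2, with the simplifying assumption V ~ f, so P ~ N * f^3.
def relative_power(n_cu, ghz, base_ghz=1.5):
    return n_cu * (ghz / base_ghz) ** 3

narrow = relative_power(64, 1.5)   # 64 CUs at 1.5GHz
wide   = relative_power(80, 1.15)  # 80 CUs at 1.15GHz

print(64 * 1.5, 80 * 1.15)  # 96.0 vs 92.0 -> similar throughput
print(wide / narrow)        # ~0.56 -> roughly 44% less power
```

Similar ballpark throughput, a big chunk less power — which is exactly the console-vs-desktop split described above.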

I have said this multiple times already....... until we actually see working Navi-based hardware, no one (myself included) can say with any certainty what is or isn't possible; it's all assumptions made from an older microarch.....

Oh, just to add..... I don't believe this rumor (or any other one so far, for that matter).

Last edited by Intrinsic - on 14 March 2019