Intrinsic said:
Bofferbrauer2 said:

The problem is, if you calculate the area of the 40 CU in the One X and compare it with the 64 CU in Vega 64, there's almost no difference. The Jaguar CPU part in the One X is tiny: even at 28nm it only took 25mm2 (3.1mm2 per core times 8) plus the area of the 4MiB cache, and at 14nm it would be even smaller, so the difference would be very small. If we take 300mm2 for the GPU part alone, then 64 CU would come to 480mm2, almost exactly the size of Vega 64, which measures 486mm2.
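As a quick sketch, the arithmetic behind that comparison (the 300mm2 GPU share and the 3.1mm2 per-core figure are the rough numbers quoted here, not official die measurements):

```python
# Back-of-envelope die-area check for the One X vs Vega 64 comparison.
# All inputs are the rough figures quoted in this post, not official data.

JAGUAR_CORE_MM2_28NM = 3.1   # per-core area at 28nm (quoted above)
CORES = 8
cpu_area = JAGUAR_CORE_MM2_28NM * CORES   # ~24.8 mm2, cache excluded

GPU_AREA_40CU = 300.0                     # assumed GPU share of the One X APU
scaled_64cu = GPU_AREA_40CU * 64 / 40     # naive linear scaling per CU

print(f"CPU cores: {cpu_area:.1f} mm2")
print(f"64 CU estimate: {scaled_64cu:.0f} mm2 (Vega 64 die: 486 mm2)")
```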

So no, the different memory controller will not magically shrink the chip by a large amount

If we are realistic, then this leak is a hoax, pure and simple. 64 CU is the technical limit of GCN, and there's no known way around it, so even in your scenario 64 CU at 1500MHz would be the only option. Even on 7nm, a Navi like that would consume around 250W without even counting the CPU, RAM or any other part of the console.
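For context, that clock follows from GCN's peak-FP32 formula (CUs × 64 shaders × 2 ops per clock × clock). A quick check; the ~12 TFlops target is my reading of what the rumour implies, not a confirmed number:

```python
# GCN theoretical FP32 throughput: CUs * 64 shaders * 2 ops/clock * clock.
# The ~12 TFlops target is an assumption based on the rumour, not a spec.
cus, ghz = 64, 1.5
tflops = cus * 64 * 2 * ghz / 1000
print(f"{cus} CU @ {ghz} GHz = {tflops:.2f} TFlops")  # 12.29 TFlops
```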

In other words, the console would run way too hot and consume too much, never mind the fact that the chip would be way too large.

Your math is off......

Ok.... at 28nm the PS4 had a die size of 348mm2. This shrunk to 321mm2 in the 16nm PS4 Pro. Yet they were able to double the number of CUs in the Pro compared to the base PS4.
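A rough density check on those numbers (18 active CUs in the PS4, 36 in the Pro; both dies also carry the CPU, I/O etc., so this is only illustrative):

```python
# Illustrative only: whole-die area per active CU across the shrink.
# (Neither die is pure GPU, so this overstates the area per CU.)
ps4 = {"node": "28nm", "area_mm2": 348, "cus": 18}
pro = {"node": "16nm", "area_mm2": 321, "cus": 36}
for chip in (ps4, pro):
    print(f'{chip["node"]}: {chip["area_mm2"] / chip["cus"]:.1f} mm2 per CU')
# 28nm: 19.3 mm2/CU -> 16nm: 8.9 mm2/CU, i.e. better than 2x the density
```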

But let's keep it simple..... 'cause reading all you are saying, it's like you're claiming there will be no difference between a 16nm/14nm chip and a 7nm one. So how about you just spell it out so I am not confused (though I feel I already am).

What do you believe they will be able to fit into a 7nm APU that is anywhere from 350mm2 to 380mm2?

As for technical limits of GCN.... that's just wrong. The problem isn't the CU count, the problem is the shader engine count. GCN5 has only 4 of them, and the maximum connected to each one is 16 CU (Vega 64), as spelled out below. The last time the number of shader engines was increased was, I think, GCN3 or 4 (can't recall), but it has been increased before. And AMD even addressed this recently in an interview where Raja mentioned that with Vega they considered increasing the number of SEs but didn't have enough time. So it's not like they don't know what to do about it, or that it's some sort of impossible hurdle to overcome.
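Spelled out, that ceiling is just the topology:

```python
# The 64 CU figure is a product of the layout, not of the CUs themselves.
shader_engines = 4    # GCN5 / Vega
cus_per_engine = 16   # maximum per engine (Vega 64)
print(shader_engines * cus_per_engine)  # 64 -> the familiar GCN ceiling
```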

And this power draw thing...... you do know that's only a problem when you are trying to clock (already inefficient) chips as high as possible, right? And the best solution is to just have more CUs and not have to clock them as high, though that could add complexity and affect yields. Nothing stops them from going with an 80 CU APU with the GPU clocked at 1.1GHz - 1.2GHz, while the desktop iterations of the same chips could be clocked at 1.5GHz - 1.8GHz.
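To put numbers on the wide-and-slow argument: the two configurations below have identical theoretical throughput, but dynamic power scales roughly with N_CU × V² × f, and the V² term dominates past the sweet spot. The voltages are hypothetical illustrations, not measured figures for any real chip:

```python
# Same theoretical TFlops, very different power.
def tflops(cus, ghz):
    return cus * 64 * 2 * ghz / 1000          # GCN FP32 peak

def rel_power(cus, volts, ghz):
    return cus * volts**2 * ghz               # arbitrary units

wide   = (80, 0.95, 1.2)   # 80 CU near the sweet spot (hypothetical V)
narrow = (64, 1.20, 1.5)   # 64 CU pushed past it (hypothetical V)

print(tflops(80, 1.2), tflops(64, 1.5))       # 12.29 vs 12.29 TFlops
print(rel_power(*narrow) / rel_power(*wide))  # ~1.6x the power draw
```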

I have said this multiple times already....... until we actually see working Navi-based hardware, no one (myself included) can say with any certainty what is or isn't possible; it's all assumptions made from an older microarch.....

Oh, just to add..... I don't believe this rumor (or any other so far, for that matter).

@bolded: I said that before already. However, the way they are organised only allows for 64 CU. And those are compute engines, not shader engines, which is something different entirely (basically a rebrand of the GCA, the Graphics and Core Array, introduced with GCN2). GCN5 only has 4 of them because it can only feed 4 of them reliably with instructions. Technically they could go past 64 with more compute engines, but it wouldn't actually increase performance, as the CUs would be idling half the time because they wouldn't get any instructions. Hence why it's agreed that 64 CU is the limit.

I know Vega is quite efficient around 1150MHz (I love undervolting my hardware). But to reach those 12-14 TFlops, and with the practical limitation of 64 CU as discussed above, the only way is to clock the chip way past its sweet spot.
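The required clocks, assuming the 64 CU cap and GCN's peak-FLOPs formula, land well past that ~1150MHz sweet spot:

```python
# Clock needed for a 64 CU GCN part to hit a given FP32 target:
# f (GHz) = TFlops * 1000 / (64 CU * 64 shaders * 2 ops/clock)
for target in (12.0, 14.0):
    ghz = target * 1000 / (64 * 64 * 2)
    print(f"{target} TFlops -> {ghz:.2f} GHz")  # ~1.46 and ~1.71 GHz
```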

The proposed 80 CU @ 1.2GHz would probably not beat a Vega 56 because, like I said, the GCN architecture couldn't feed that many CUs with instructions. The scheduler, which issues the instructions and dispenses them to the SEs, is the limitation. And that can only be removed by a complete redesign. Hence why AMD never even showed a successor to Navi in the roadmaps; they probably knew already back then that Vega was as far as they could go.

@italic: What I heard was that they could have done so, but decided against it because it wouldn't have removed the problem.

I agree that we need to see Navi to be sure about everything. Since it's been so long since Vega, it's possible they found a way around the problem, rendering our discussion here moot.

@underlined: English ain't my first language (or second or even third, for that matter), so sorry if I was confusing you. But I calculated the size at 7nm in a previous answer; there I was just proving his assumption wrong that Vega is only so big due to its HBM memory controller, and showed him that the One X, which he drew in as a comparison, would be just as large with 64 CU (if made on the same process, of course) despite having a GDDR5 memory controller.