Pemalite said:
Bofferbrauer2 said:

As if you weren't aware that GPUs get clocked down to fit into the tight power budget of a console. Don't expect the PS5 to come with less than 56 CUs, but expect higher clock speeds instead.

Uh. What? It's a balancing act. So yes, I am aware.
Increasing the size of the chip by adding more CUs directly increases power consumption, as every single transistor you add requires energy.

There is a reason why Nvidia, with Pascal, decided not to blow out transistor counts and instead focused on driving up the clock rates of its chips.
It was what offered the best performance per watt for a given chip size.

At 7nm (I hate using that term, because it's not a true 7nm process) you can use the extra TDP headroom to drive up clock rates, provided you have the appropriate layout, transistor types, etc.
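As a rough intuition for that balancing act: dynamic power follows P ≈ C·V²·f, scaling linearly with switching capacitance (roughly, active transistor count) and clock, but quadratically with voltage, and chasing clocks usually means raising voltage too. A minimal sketch in Python, where the CU counts, voltages, and clocks are purely illustrative assumptions, not measurements:

def dynamic_power(rel_capacitance, voltage, freq_ghz):
    # Classic dynamic-power rule of thumb: P ~ C * V^2 * f.
    # rel_capacitance stands in for active transistor count (more CUs -> bigger C).
    return rel_capacitance * voltage ** 2 * freq_ghz

baseline = dynamic_power(1.00, 1.00, 1.4)     # hypothetical 36 CU part at 1.4 GHz
more_cus = dynamic_power(64 / 36, 1.00, 1.4)  # widen to 64 CUs, same clock and voltage
clocked  = dynamic_power(1.00, 1.15, 1.8)     # same die pushed to 1.8 GHz at higher voltage

print(more_cus / baseline)  # ~1.78x power from extra CUs alone
print(clocked / baseline)   # ~1.70x power from clock + voltage alone

Either route costs power; the console question is which trade buys more performance inside a fixed budget.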

 

Bofferbrauer2 said:

That won't work out unless Navi's base clock is above 1800 MHz, which I very much doubt.

Why not?

There are Pascal chips that boost to 2 GHz at 16nm. And that is before overclocking.

Even Pascal's base clock is only around 1400 MHz, far from the 1800 I was talking about. It's a big step up from Maxwell's 1000 MHz, but not as huge as you make it out to be.

They can clock faster, but the TDP goes up accordingly. This includes the 2 GHz models; just check their actual power consumption. Hint: it's north of 300 W in the case of a 1080 (non-Ti).

And in case you didn't get it yet, GCN was never meant for such high clock speeds. Vega is mostly clocked too high for its own good just to limit the distance to Nvidia; it's the only way AMD could do it, given the hard limit of 64 CUs with Vega. As a result, the power consumption explodes. A Vega at 1200 MHz consumes much less than it does at 1400 MHz, where most consumer cards are clocked. The Vega in the Ryzen APUs is clocked less aggressively and, as a result, consumes much less power.
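Plugging that 1200 vs 1400 MHz comparison into the same C·V²·f rule of thumb shows why the power "explodes": the clock bump is only ~17%, but the extra voltage it typically needs gets squared. The voltages below are assumptions for illustration, not measured Vega values:

# Relative dynamic power, P ~ V^2 * f, with capacitance held constant.
low  = 1.0 ** 2 * 1.2  # assumed ~1.0 V at 1200 MHz
high = 1.2 ** 2 * 1.4  # assumed ~1.2 V at 1400 MHz
print(high / low)      # ~1.68x the power for a ~1.17x clock gain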

Bofferbrauer2 said:

To get enough distance between themselves and the One X (40 CU @ 1172 MHz), 56 CUs would already have to run at about 1500 MHz. That's already close to the maximum clock rate for Vega, and at 14nm definitely too much power consumption and heat for a console. At 7nm this should be much more feasible, but it will still draw a lot of power.
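For reference, GCN peak FP32 throughput is CUs × 64 lanes × 2 FLOPs (fused multiply-add) × clock, so the configurations being compared work out to roughly:

def gcn_tflops(cus, clock_ghz):
    # GCN: 64 shader lanes per CU, 2 FLOPs per lane per clock (FMA).
    return cus * 64 * 2 * clock_ghz / 1000

print(gcn_tflops(40, 1.172))  # Xbox One X: ~6.0 TFLOPS
print(gcn_tflops(56, 1.500))  # 56 CU @ 1500 MHz: ~10.75 TFLOPS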

The Xbox One X is using an older, Polaris-derived part.
As someone who has both a Polaris GPU and the Xbox One X, I can assure you they are both inefficient, slow, mid-range hardware.

I mean, shit... Neither has draw-stream rasterization, primitive shaders, or rapid packed math... Graphics Core Next in the consoles is simply slow, old, and inefficient, and the Xbox One X is no exception. It is not overly difficult to make big performance gains over it.
Heck, AMD hasn't even enabled the draw-stream binning rasterizer in its drivers, and with Vega relegated primitive shaders to something developers would have to opt into via an API; those are efficiency gains going to waste.

Fact is... 64 CUs are enough for next gen, with ample clock rate and architectural refinement.

If Vega were much faster than Polaris at the same clock speed, I would agree. But a simulated Polaris at the same clock speed is only marginally slower than Vega (though less power-hungry). There's a big reason why Vega is considered so disappointing.
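One way to make that "marginally slower per clock" claim concrete is to normalize a benchmark score by CU count and clock. The scores below are placeholders that show the method, not real benchmark data:

def perf_per_cu_ghz(score, cus, clock_ghz):
    # Architecture "IPC" proxy: performance per CU per GHz.
    return score / (cus * clock_ghz)

polaris = perf_per_cu_ghz(100, 36, 1.266)  # placeholder score, RX 480-class part
vega    = perf_per_cu_ghz(180, 64, 1.406)  # placeholder score, Vega 64-class part

print(vega / polaris)  # ~0.91 here: near parity per CU per clock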

Bofferbrauer2 said:

@bolded: Those are part of the Compute Units (unless you meant CPU Cache too)

False.
I suggest you look at this layout.


Oh ffs, that's a block diagram, not a layout!

And even then: you can see the L2 cache, so where do you think the L1 caches are? Yep, that's right: in the Compute Units. And the graphics pipeline in that diagram is just the front end of a Compute Unit. What's marked NCU are the cores of each Compute Unit.
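For what it's worth, the hierarchy under dispute looks roughly like this in GCN/Vega, per AMD's public ISA documentation (a simplified sketch, not an exhaustive block list):

# Simplified GCN/Vega block hierarchy: L1 caches live inside each CU, L2 is shared.
gcn_hierarchy = {
    "per Compute Unit": [
        "4x SIMD16 vector units",
        "scalar unit",
        "16 KB vector L1 data cache",
        "64 KB local data share",
    ],
    "shared across CUs": [
        "L2 cache (slices tied to memory channels)",
        "geometry/raster front end",
        "command processors / ACEs",
    ],
}
for block, parts in gcn_hierarchy.items():
    print(block, "->", ", ".join(parts))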