DonFerrari said:
Pemalite said:

Not really.
You could have 64 CUs @ 300MHz.
Or you could have 32 CUs @ 600MHz.

They would hypothetically have the same output.
...But that also ignores things like memory buses, caches, various fixed-function pipelines and so on.
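
To put rough numbers on it, here's a back-of-the-envelope sketch in Python (assuming GCN's 64 shaders per CU and counting a fused multiply-add as two FLOPs):

```python
# Back-of-the-envelope peak FP32 throughput for a GCN-style GPU,
# assuming 64 shaders per CU and counting an FMA as 2 FLOPs.
def peak_gflops(cus, clock_mhz, shaders_per_cu=64, flops_per_clock=2):
    return cus * shaders_per_cu * flops_per_clock * clock_mhz / 1000.0

print(peak_gflops(64, 300))  # 64 CUs @ 300MHz -> 2457.6 GFLOPS
print(peak_gflops(32, 600))  # 32 CUs @ 600MHz -> 2457.6 GFLOPS, identical on paper
```

Identical on paper, though none of that captures the memory and fixed-function differences mentioned above.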

Isn't it the case that more CUs would give you more connections and therefore more bandwidth?

Not always, because the CUs themselves aren't tied to the actual ROPs.
Vega 56 and Vega 64 have the same number of ROPs despite having different CU counts... The only reason Vega 56 has lower bandwidth, despite having fewer CUs, is its lower memory clock.
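
To illustrate, a quick sketch of how the bandwidth gap falls out of the memory clock alone (assuming the published HBM2 clocks and the 2048-bit bus both cards share):

```python
# Memory bandwidth = effective memory clock x bus width.
# Assumes HBM2 (double data rate) on the 2048-bit bus both Vega cards use.
def bandwidth_gbs(mem_clock_mhz, bus_width_bits=2048, data_rate=2):
    return mem_clock_mhz * 1e6 * data_rate * bus_width_bits / 8 / 1e9

print(bandwidth_gbs(945))  # Vega 64 @ 945MHz HBM2 -> ~483.8 GB/s
print(bandwidth_gbs(800))  # Vega 56 @ 800MHz HBM2 -> ~409.6 GB/s
```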

Bofferbrauer2 said:

Even Pascal's base clock is only around 1400MHz, far from the 1800 I was talking about. It's a big step up from Maxwell's 1000MHz, but not as huge as you make it out to be.

Please use the quoting system properly. Otherwise I will not bother to reply in future.

And 1400MHz isn't a big step up from 1000MHz? 40% isn't a big step up? Seriously? And that's in conjunction with more transistors as well?

Some 1080 Tis have a base clock of 1632MHz, and they spend the bulk of their time at a much higher boost clock.
That's a minimum increase of 63% over the 980 Ti.
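
The percentages follow straight from the base clocks (taking Maxwell's 1000MHz as the baseline):

```python
# Base clock increase over a 1000MHz Maxwell baseline.
maxwell_base_mhz = 1000
for name, clock_mhz in [("Pascal typical base", 1400), ("1080 Ti base", 1632)]:
    print(f"{name}: +{(clock_mhz / maxwell_base_mhz - 1) * 100:.0f}%")  # +40%, +63%
```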

Bofferbrauer2 said:
They can clock faster, but the TDP goes up accordingly. This includes 2GHz models, just check their actual power consumption. Hint: it's north of 300W in the case of a 1080 (non-Ti).

Not always. You see, transistors have an efficiency curve: once you hit the right voltage at the right frequency, you get an optimal amount of performance per watt. It's a very simple concept.
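
A toy illustration of that curve, using the standard dynamic-power relationship P ≈ C·V²·f (the voltage/frequency points below are made up purely to show the shape, not real Vega numbers):

```python
# Toy sketch of P ~ C * V^2 * f. Voltage has to rise as frequency is pushed past the
# sweet spot, so power grows much faster than performance does. Numbers are illustrative.
def dynamic_power(freq_ghz, voltage, capacitance=1.0):
    return capacitance * voltage ** 2 * freq_ghz

for freq, volt in [(1.2, 0.95), (1.4, 1.10), (1.6, 1.25)]:  # made-up operating points
    perf = freq  # performance scales roughly linearly with clock
    power = dynamic_power(freq, volt)
    print(f"{freq} GHz @ {volt} V: power ~{power:.2f} (a.u.), perf/watt ~{perf / power:.2f}")
```

Past the sweet spot, every extra MHz costs disproportionately more power, which is exactly why the same silicon can look efficient or inefficient depending on where it's clocked.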

The 1080 Ti has a TDP of 250W; the 980 Ti has a TDP of 250W.
Actual power consumption hasn't blown out like you imply, either:
https://www.anandtech.com/show/11180/the-nvidia-geforce-gtx-1080-ti-review/16

That much extra performance at the same TDP is not insignificant.

Bofferbrauer2 said:
And in case you didn't get it yet, GCN was never meant for such high clock speeds.

Graphics Core Next is an extremely modular design.
AMD took GCN and reworked it to clock higher with Vega; AMD actually spent the bulk of its extra transistor budget over Fiji on achieving exactly that, so it clearly was designed for high clock speeds.
But don't take my word for it: https://www.anandtech.com/show/11717/the-amd-radeon-rx-vega-64-and-56-review/2

Bofferbrauer2 said:
Vega is mostly clocked too high for its own good to limit the distance between itself and Nvidia. It's the only way they could do it due to the hard limit of 64 CUs with Vega. As a result, the power consumption explodes. A Vega at 1200MHz consumes much less than it does at 1400MHz, where most consumer cards are clocked. The Vega in the Ryzen APU is clocked less aggressively and, as a result, consumes much less power.

I have already touched earlier in this thread on why Vega is inefficient. It's not just down to the clock rates.
The entire Graphics Core Next architecture is inefficient regardless of clock or product segment.

Fact of the matter is... GPUs like Pascal are doing tile-based rasterization, and GCN isn't (a toy sketch of what that means follows the links below).
Again, don't take my word for it:
https://www.anandtech.com/show/10536/nvidia-maxwell-tile-rasterization-analysis
https://forum.beyond3d.com/threads/amd-vega-hardware-reviews.60246/page-59#post-1997699
https://forum.beyond3d.com/threads/amd-vega-hardware-reviews.60246/page-45#post-1995903
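
For anyone unfamiliar with the term, here's a toy sketch of what binning triangles into tiles means; this is a deliberately simplified illustration of the locality argument, not how Maxwell/Pascal actually implement it:

```python
# Toy tile-based (binned) rasterization: assign each triangle's screen-space bounding
# box to the tiles it touches, then shade tile by tile so one tile's worth of framebuffer
# can stay on-chip and be written out once. Immediate-mode hardware instead walks
# triangles in submission order, touching framebuffer memory all over the screen.
TILE = 16  # tile size in pixels (illustrative)

def bin_triangles(triangles):
    bins = {}
    for tri_id, (x0, y0, x1, y1) in enumerate(triangles):
        for ty in range(y0 // TILE, y1 // TILE + 1):
            for tx in range(x0 // TILE, x1 // TILE + 1):
                bins.setdefault((tx, ty), []).append(tri_id)
    return bins

triangles = [(0, 0, 20, 20), (5, 5, 40, 12), (100, 100, 120, 130)]  # bounding boxes
for tile, tris in sorted(bin_triangles(triangles).items()):
    print(f"tile {tile}: shade triangles {tris}, then flush the tile to memory once")
```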


Bofferbrauer2 said:
Oh ffs, that's a block diagram, not a layout!

Correct. But the purpose of the block diagram is to show which units are paired with what in a graphical layout.

Bofferbrauer2 said:

And even then: you can see the L2 cache, so where do you think the L1 caches are? Yep, that's right: in the Compute Units. And the Graphics Pipeline in that diagram is just the front-end of a Compute Unit. What's marked NCU are the cores of each Compute Unit.

There is more to a GPU's memory hierarchy than just L1 caches.
The rest is just a rehash of crap I already know.



--::{PC Gaming Master Race}::--