EpicRandy said:

I get what you say, and it's kind of good advice in general, but it's a little more complicated than that. TFLOPS measures the upper limit of compute power a chip can reach at a given clock. It is actually very precise for describing the capacity of a chip in a nutshell. However, that is only one part of the story; the rest can all be summed up as what % of it you can actually use in any given scenario, or in other words, how starved the chip actually is. It can be starved by an insufficient memory pool, insufficient memory bandwidth, or insufficient power delivery.

No. No. No.

TFLOPS is *not* actually a measurement of anything.

It's a theoretical number derived from a handful of hardware attributes, not a measurement of capability. It's a number that is impossible to achieve in the real world.

It is extremely imprecise, not precise.

For example...
A Radeon HD 5870 is a 2.72 TFLOPS GPU with 2GB of RAM @ 153GB/s of bandwidth.
A Radeon HD 7850 is a 1.76 TFLOPS GPU with 2GB of RAM @ 153GB/s of bandwidth.
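
For context, those headline numbers are just arithmetic over spec-sheet attributes, not measurements of anything. A quick sketch of how they're derived, assuming the reference shader counts and clocks for both cards:

```python
# Theoretical FP32 throughput: assume every shader retires one fused
# multiply-add (2 floating-point ops) per clock, every clock, forever.
def peak_tflops(shaders: int, clock_mhz: float) -> float:
    return shaders * 2 * clock_mhz * 1e6 / 1e12

# Reference specs I'm assuming: HD 5870 = 1600 shaders @ 850 MHz,
#                               HD 7850 = 1024 shaders @ 860 MHz.
print(f"HD 5870: {peak_tflops(1600, 850):.2f} TFLOPS")  # ~2.72
print(f"HD 7850: {peak_tflops(1024, 860):.2f} TFLOPS")  # ~1.76
```

Nothing in that formula knows whether the shaders can actually be kept busy.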

So the only real difference is almost 1 TFLOPS of compute, right? It's accurate according to you, right? So the Radeon HD 5870 should win, right?

Then, if it's such an accurate measure of compute, why is the 7850 faster at everything, including compute, where in some single-precision floating-point tasks the 7850 is sometimes more than twice as fast?
(But don't take my word for it)
https://www.anandtech.com/bench/product/1062?vs=1076


Again. TFLOPS is absolute bullshit as a performance metric. On its own it tells you nothing about what a card actually delivers.
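
Here's one way to see why a peak number is so detached from what you actually get: real throughput is capped by how fast you can feed the ALUs, not just by how fast they can theoretically crunch. A toy roofline-style bound, using the ~153GB/s both cards share and some made-up kernel arithmetic intensities:

```python
# Roofline-style bound: attainable FLOP/s is the lesser of raw compute
# and memory bandwidth times arithmetic intensity (FLOPs per byte moved).
def attainable_tflops(peak_tflops: float, bandwidth_gbs: float,
                      flops_per_byte: float) -> float:
    bandwidth_bound = bandwidth_gbs * 1e9 * flops_per_byte / 1e12
    return min(peak_tflops, bandwidth_bound)

# Hypothetical kernels at different arithmetic intensities (FLOPs/byte).
for intensity in (1.0, 4.0, 32.0):
    hd5870 = attainable_tflops(2.72, 153, intensity)
    hd7850 = attainable_tflops(1.76, 153, intensity)
    print(f"{intensity:>5} FLOPs/byte -> 5870: {hd5870:.2f} TFLOPS, 7850: {hd7850:.2f} TFLOPS")
```

At low intensity both cards slam into the exact same bandwidth wall, and even the compute-bound case assumes perfect utilization of every shader, which no real workload gets.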

EpicRandy said:

Another aspect to consider is the amount and variety of hardware acceleration available, which may let software bypass the CUs in scenarios where a chip without such acceleration would need to use them.

In the example you gave, the 2700U is actually very starved by its smaller memory pool and lower bandwidth. The 4500U features 50% more L2 cache and 2x the L3 cache, and supports higher memory frequencies. The CPU side is also more power-hungry on the 2700U than on the 4500U, leaving more leeway for the GPU on the 4500U within the same 25W TDP.

You are just confirming my point: the number of CUs is not the be-all and end-all.

EpicRandy said:

For the 5500U vs. the 4700U, the sole difference is that the CPU side is less power-hungry on the 5500U, allowing for higher clocks on the GPU. But make no mistake: if you were to isolate the GPU power consumption and compare the two, the 4700U would actually be more efficient per watt. Even the more recent RDNA 2 is most efficient at around 1300 to 1400 MHz, according to this source. The Vega architecture, however, had a much lower peak-efficiency clock speed. I could not find a source for this, but I remember that at the time of the Vega announcement AMD was using clocks of ~850 MHz in their presentation to portray the efficiency increase over the older architecture. This was prior to the reveal of the Vega 56 and 64, however, so it is possible it was measured on engineering samples. This may have shifted with node shrinkage too, but I could not find anything on it; I still really doubt 1800 MHz would be more efficient per watt than 1500 MHz on the Vega architecture.

Nah. Isolating GPU power consumption doesn't automatically make the higher-clocked GPU the less efficient one per watt.

Remember, binning is actually a thing, and as a process matures you can obtain higher clock speeds without a corresponding increase in power consumption. Sometimes you can achieve higher clocks -and- lower power consumption as a process matures.

And you are right, Vega was extremely efficient at lower clocks. AMD used a lot less of the dark silicon that insulates parts of the chip and reduces power leakage, in order to obtain higher clock rates.
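
To put the "most efficient clock" idea in concrete terms: dynamic power scales roughly with frequency times voltage squared, and voltage has to climb to hold higher clocks, so performance per watt falls off past a point. A toy sketch, where the voltage/frequency pairs are invented for illustration and are not real Vega or RDNA 2 data:

```python
# Toy model: dynamic power ~ C * f * V^2, performance ~ f.
# The voltage needed at each clock is a made-up illustrative curve.
points = [  # (clock_mhz, voltage_v) -- hypothetical
    (850, 0.80),
    (1300, 0.90),
    (1500, 1.00),
    (1800, 1.15),
]

for clock, volts in points:
    power = clock * volts ** 2      # arbitrary units; capacitance folded in
    perf_per_watt = clock / power   # reduces to 1 / V^2, so it drops as V climbs
    print(f"{clock} MHz @ {volts:.2f} V -> relative perf/W {perf_per_watt:.2f}")
```

Binning and node maturity effectively shift that voltage curve down, which is how higher clocks at the same or lower power become possible.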



--::{PC Gaming Master Race}::--