HollyGamer said:

You brought up Bethesda as an example, which means you prove my point. Bethesda never builds a new engine; they have always used the same engine from the 2001 era. Their engine is limited, so it performs badly on hardware that came out after 2001, and many effects, graphics, gameplay, AI, NPCs etc. look and play very outdated.

Did you even bother to read, or did you just pick and choose what you wanted?

It pretty much happens with every major game engine.

HollyGamer said:

Yes, a flop is a flop, but how flops perform is different on every uarch; the effectiveness from one uarch to another is very different. The effectiveness of TFLOPS can be compared from one uarch to another. Navi is indeed 1.4 times GCN.

No. A flop is exactly the same regardless of the architecture in question.

A flop is the exact same mathematical operation whether it's GCN or RDNA. RDNA isn't taking that operation and doing it differently; the flop is the same.

The issue is... the FLOPS you stand by are a theoretical peak figure, not a real-world one.

And the reason RDNA gets more performance than GCN isn't because of FLOPS at all. It's everything else that feeds the hardware, as RDNA has the EXACT same instruction set as GCN, meaning how it handles mathematical operations is identical to GCN. So you are wrong on all counts.
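To make that concrete, here's a minimal Python sketch (my own illustration, with made-up shader counts and clocks, not vendor specs): the theoretical-FLOPS figure everyone quotes is just shaders × ops per clock × clock rate, and that arithmetic doesn't change between GCN and RDNA.

```python
# Minimal sketch, not vendor data: peak FLOPS is the same formula on any
# architecture -- shader count x ops per clock x clock rate.

def theoretical_tflops(stream_processors: int, ops_per_clock: int, clock_ghz: float) -> float:
    """Theoretical single-precision peak in TFLOPS (a paper number, not a measurement)."""
    gflops = stream_processors * ops_per_clock * clock_ghz
    return gflops / 1000.0

# Hypothetical GCN-like and RDNA-like parts with identical shader counts and clocks:
print(theoretical_tflops(2304, 2, 1.25))  # 5.76 -- "GCN" part
print(theoretical_tflops(2304, 2, 1.25))  # 5.76 -- "RDNA" part, same number
```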

DonFerrari said:

In your comparison of GPUs you used one with DDR4 and the other with GDDR5, which would already impact the comparison. We know the core of your argument is that TFLOPS have almost no relevance (and after all your explanations I think very few people here put much stock in TFLOPS alone), but what I said was ceteris paribus. If everything else on both GPUs is perfectly equal and only the flops differ (say because one has a 20% higher clock rate), then the one with the 20% higher clock rate is the stronger GPU (though the rest of the system would have to be built to use that advantage). Now if you mix in memory quantity, speed, bandwidth, the design of the APU itself and everything else, of course you will only get a real-life performance picture after release. And even then you won't have a very good measurement, because when the same game runs on two systems, the difference in performance may not be because one is worse than the other, but simply down to how proficient the devs are with that hardware.

You do actually get diminishing returns though.

If we take, for example:
* 1024 stream processors × 2 instructions per clock × clock rate
And had...
* 1024 × 2 × 1000 MHz ≈ 2 Teraflops.
And
* 1024 × 2 × 1500 MHz ≈ 3 Teraflops.

The 3 Teraflop part isn't necessarily going to be 50% faster in floating point calculations... The GPU may run its caches at an offset of the core clock speed, so it may not see the same 50% increase across the whole chip. Bottlenecks in the design come into play and limit your total throughput.
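A rough way to picture that (a toy roofline-style model I'm assuming here, not a benchmark): if the clock goes up 50% but the memory subsystem feeding the shaders doesn't, effective throughput only rises until it hits the bandwidth ceiling.

```python
# Toy roofline-style model (my assumption, not a real benchmark): effective
# throughput is capped by the lower of compute peak and what memory can feed.

def effective_tflops(peak_tflops: float, bandwidth_gbs: float, flops_per_byte: float) -> float:
    """min(compute peak, bandwidth * arithmetic intensity), in TFLOPS."""
    bandwidth_cap = bandwidth_gbs * flops_per_byte / 1000.0
    return min(peak_tflops, bandwidth_cap)

# Same hypothetical GPU at 2 and 3 TFLOPS peak, memory bandwidth unchanged
# at 256 GB/s, workload doing ~10 FLOPs per byte fetched:
slow = effective_tflops(2.0, 256, 10)  # 2.00 -- compute-bound
fast = effective_tflops(3.0, 256, 10)  # 2.56 -- now bandwidth-bound
print(fast / slow)                     # ~1.28x, not the 1.5x the paper FLOPS promise
```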

It's similar to when nVidia had the shader clock independent of the core clock back in the Fermi days.

Plus FLOPS doesn't account for the entire capability of a chip... It doesn't take into account integer throughput, quarter/half/double precision floating point, geometry, texturing, ray tracing and more. It's only one aspect of a GPU, not the complete picture.

It's like using "bits" to determine a consoles capabilities.
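As a purely illustrative example (every number below is hypothetical, not a real spec sheet), two chips with an identical headline FLOPS figure can still diverge on everything else that actually decides how a game runs:

```python
# Hypothetical spec sheets -- none of these numbers describe a real GPU.
# Both chips share the same headline FP32 TFLOPS, yet differ everywhere else.

gpu_a = {
    "fp32_tflops": 8.0,
    "fp16_tflops": 16.0,       # double-rate half precision
    "fp64_tflops": 0.5,        # 1/16-rate double precision
    "texel_rate_gtexels": 250,
    "ray_accelerators": 0,
}

gpu_b = {
    "fp32_tflops": 8.0,        # identical headline number...
    "fp16_tflops": 8.0,        # ...but no fast FP16
    "fp64_tflops": 2.0,        # far stronger double precision
    "texel_rate_gtexels": 180,
    "ray_accelerators": 40,
}

# Judging these two by "fp32_tflops" alone would call them equal, even though
# FP16, FP64, texturing and ray tracing workloads would land very differently.
```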



Last edited by Pemalite - on 16 December 2019

--::{PC Gaming Master Race}::--