
Teraflops are NOT a Measurement of Gaming Performance

Bits. CPU speed. Flops. Resolution. Doesn't matter. Fanboys always find something to latch onto for fanboy wars.



Bite my shiny metal cockpit!


You know, I just decided to look up some spec info for my GPU (because I hardly needed to have that committed to memory like this stupid flop war that's been going on), and it turns out my 1080 Ti has 11.3 Tflops, which puts it above the damn RTX 2080 of all cards, and mine was a model from 2017, 3 years ago. It's not my main focus though, because I'm not one to treat flops as the end-all be-all; I'd put my stock in its core/boost clock, core count, and TDP instead.
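For reference, that headline Tflop figure is just arithmetic on the very specs mentioned above: shader cores × clock × 2, one fused multiply-add per core per cycle. A minimal sketch using the 1080 Ti's published figures (3584 CUDA cores, ~1582 MHz boost):

```python
# Peak FP32 throughput = shader cores * boost clock * 2 (one FMA per core per cycle).
def tflops(cores: int, boost_ghz: float) -> float:
    """Theoretical single-precision TFLOPs from core count and clock."""
    return cores * boost_ghz * 2 / 1000

# GTX 1080 Ti: 3584 CUDA cores at ~1.582 GHz boost clock.
print(f"GTX 1080 Ti: {tflops(3584, 1.582):.2f} TFLOPs")  # ~11.34, matching the 11.3 above
```

Note that this is a theoretical peak; nothing in the formula says how much of it a game actually extracts, which is the thread's whole point.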



kirby007 said:
But you can compare TF between same manufacturer

I alluded to that toward the end by comparing the likely power to a current AMD GPU, the 5700XT.

But even then, if it has a lower TDP, which it will, it won't run as well as a desktop variant, depending on how much lower that power rating is.

This is mainly for people saying "WEll tHAat's MoRe TeRaFLoPs ThAn a 2080"



PS4 (PS5 soon) and PC gaming

There's only 2 races: White and 'Political Agenda'
2 Genders: Male and 'Political Agenda'
2 Hairstyles for female characters: Long and 'Political Agenda'
2 Sexualities: Straight and 'Political Agenda'

ArchangelMadzz said:


GPUs are far more complicated than that. In the context of supercomputers, Tflops can be used to rank sheer calculation performance.

Teraflops only tell us the single-precision floating-point capabilities.

So even on a supercomputer, it doesn't tell us the actual performance of supercomputing tasks outside of that (i.e. quarter/half/double precision), or of other types of calculations such as integer work.
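To put rough numbers on that, here is an illustrative sketch. The FP16/FP64 rate ratios below are the published ratios for two example cards, cited from memory, so treat them as approximate; they show how one FP32 figure hides wildly different capabilities at other precisions:

```python
# name: (FP32 TFLOPs, FP16 rate vs FP32, FP64 rate vs FP32) -- approximate, from memory.
gpus = {
    "GTX 1080 Ti (Pascal)": (11.3, 1 / 64, 1 / 32),  # consumer Pascal cripples FP16/FP64
    "Radeon Vega 64":       (12.7, 2.0,    1 / 16),  # packed math doubles FP16 throughput
}
for name, (fp32, r16, r64) in gpus.items():
    print(f"{name}: FP32 {fp32:.1f} | FP16 {fp32 * r16:.2f} | FP64 {fp32 * r64:.2f} TFLOPs")
```

Similar FP32 numbers, yet one card manages roughly 140x the FP16 throughput of the other.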

Mr Puggsly said:

I disagree with you essentially blaming MS. This whole teraflop thing started with X1 and PS4 around 2013.

I believe Sony touted 4TF even before we heard about X1X's 6TF. It also made sense given all these specs were relatively the same, so the teraflop comparison had validity for 8th gen consoles.

I think MS has touted the teraflop figure for Series X for two reasons. First, to the average person it simply means significantly more GPU power, and it certainly has that. Second, MS may be confident they have the more powerful machine for next gen and want to boast.

It started gaining traction during the 7th gen, mostly from trying to compare the Xbox 360 and Playstation 3...

It was always hilarious how people touted the Cell's Gflop number without actually understanding the circumstances required for those flop numbers to actually come about.

The fault lies with every camp; even during the "bit" wars, companies were using these numbers to one-up each other, despite them being ultimately irrelevant.

kirby007 said:
But you can compare TF between same manufacturer

No.

A Radeon HD 5870 at 2.72 Teraflops is SLOWER than a Radeon HD 7850 at 1.76 Teraflops. That's almost a full Teraflop of difference.
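Both of those figures fall out of the same cores × clock × 2 formula, which is exactly why they mislead: TeraScale (VLIW5) and GCN spend their paper flops very differently. A quick check against the published specs:

```python
def tflops(shaders: int, clock_ghz: float) -> float:
    # Peak FP32 = shaders * clock * 2 (one multiply-add per shader per cycle).
    return shaders * clock_ghz * 2 / 1000

print(f"HD 5870 (TeraScale): {tflops(1600, 0.850):.2f} TF")  # ~2.72 -- loses in games
print(f"HD 7850 (GCN):       {tflops(1024, 0.860):.2f} TF")  # ~1.76 -- wins in games
```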

drkohler said:

You can always compare TF within the same system architecture.

No.

A Geforce GT 1030, for example, can be otherwise identical in terms of Gflops/Tflops, but the DDR4 variant will be half the speed of the GDDR5 variant.

A Radeon 7850 DDR3 variant with the same Gflops will be half the speed of the GDDR5 variant.
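The bandwidth gap driving the GT 1030 example is easy to quantify, assuming the commonly listed specs (64-bit bus on both, 6 Gbps GDDR5 vs 2.1 Gbps DDR4):

```python
def bandwidth_gb_s(bus_bits: int, data_rate_gtps: float) -> float:
    # Bandwidth = bus width in bytes * transfers per second.
    return bus_bits / 8 * data_rate_gtps

print(f"GT 1030 GDDR5: {bandwidth_gb_s(64, 6.0):.0f} GB/s")  # ~48
print(f"GT 1030 DDR4:  {bandwidth_gb_s(64, 2.1):.1f} GB/s")  # ~16.8
# Identical flops on paper, roughly a third of the memory bandwidth in practice.
```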

Moral of the story? A GPU is a lot more than just flops... and the sooner people wake up to that fact, the better.

Mr Puggsly said:

Ah, okay.

Either way TF was already being discussed in 2013, especially as we started seeing resolution disparities.

But it's relevant given the consoles with more TF have also had more power in practice. Consoles are essentially using the same tech.

Not exactly the same tech.

The Playstation 4's GPU has more ACE units, so in asynchronous compute scenarios it would have a substantial advantage over the Xbox One.

The Xbox One, by comparison, thanks to its ESRAM and lower-latency DDR3 DRAM, is able to pull ahead in CPU-bound scenarios... and thus show advantages on the GPU side in workloads that lean heavily on draw calls.

victor83fernandes said:

Wrong. Everything else being equal, with graphics cards of the same architecture, one being 4TF and the other 12TF is a huge difference. That's like comparing an Xbox One S to an Xbox One X, but a bigger difference, because the power of the X was never really pushed; games were not designed for it.

Next gen, if the base PS5 is 9TF then games will be designed for it, and those games will struggle on 4TF with major cuts, while having slight advantages on 12TF.

We are not comparing teraflops between graphics cards from different years; those consoles will launch the same year, most likely the same week.

If the Playstation 5 is chasing 4K and Lockhart is chasing 1080P... then by extension Lockhart needs far fewer resources.
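The arithmetic behind that: 4K is exactly four times the pixels of 1080P, so a 1080P-target console needs a correspondingly smaller GPU (fill rate and shading cost scale roughly with pixel count, all else equal):

```python
# Pixels per frame at each target resolution.
uhd = 3840 * 2160   # 8,294,400
fhd = 1920 * 1080   # 2,073,600
print(uhd / fhd)    # 4.0 -- 4K pushes 4x the pixels of 1080p
```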

victor83fernandes said:

Wrong. I follow console gaming news every day without fail, and I hadn't heard teraflops being talked about until they announced the power of the X.

But this generation, both the X and the Pro were under-utilised because games were built for the base consoles. Next gen, if games are built for the 9TF PS5, then 4TF will struggle a lot.

People might say 1080p, fair enough, but you can't even buy 1080p TVs anymore; most gamers have upgraded to 4K TVs, or the next-gen consoles will make a lot of people upgrade.

People who don't upgrade are the ones who won't jump to next gen so early as games will be too expensive.

ESRAM, GDDR5, Flops, ACE Units were the big talking points at the start of the 8th gen, especially on these forums.

Lockhart will do fine. If you don't think it will be a suitable console... then as a consumer the solution is simple.
Do. Not. Buy. It. Buy the Xbox Series X or Playstation 5 instead.




--::{PC Gaming Master Race}::--

How many bits are in 12 Teraflops?... I stopped measuring gaming after 64-bit




"There seems to be a misconception on how teraflops relate to gaming performance, this is partly due to Microsoft's emphasis on 6TF for their Xbox One X marketing, and now they're emphasising 12TF on the Xbox Series X. "  - OP.

^ It depends on how well-balanced the card is.

If there are no drastic bottlenecks and they're the same architecture in core design, you CAN absolutely use teraflops as related to gaming performance.

Also, architecture to architecture, you can.
If you take 2 cards that are both GCN, you can compare them.
Such is the case with the Playstation 4 + Xbox One X.


*I'd also argue that you can "sorta" use it across architectures as well, if you know how one GPU arch compares to another in terms of gaming performance.

Like Nvidia's Turing core > AMD's GCN core (by a lot).
While Nvidia's Turing core vs AMD's RDNA 1 is about equal (in terms of flops to performance).

(Note this is not talking about performance per watt, just about flops -> performance; a rough sketch of the conversion idea follows below.)
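One way to make that cross-architecture conversion concrete is to weight each card's paper flops by a per-architecture "gaming performance per flop" factor. The factor values below are illustrative assumptions chosen to match the claims above, not measured benchmarks:

```python
# Hypothetical efficiency factors (gaming performance per flop), Turing = 1.0.
# Illustrative assumptions only -- real numbers vary by game and settings.
EFFICIENCY = {"Turing": 1.00, "GCN": 0.75, "RDNA 1": 1.00}

def effective_tflops(raw_tf: float, arch: str) -> float:
    """Scale paper flops by the architecture's assumed gaming efficiency."""
    return raw_tf * EFFICIENCY[arch]

print(effective_tflops(12.7, "GCN"))     # Vega 64: 12.7 TF paper -> ~9.5 "Turing-equivalent"
print(effective_tflops(9.75, "RDNA 1"))  # 5700 XT: 9.75 TF paper -> ~9.75
```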

Last edited by JRPGfan - on 09 March 2020

V-r0cK said:

How many bits are in 12 Teraflops?... I stopped measuring gaming after 64-bit

The "word-size" of what a CPU is able to compute...... is something differnt, than how many Floating point opperations pr secound it can do.

This is apples and oranges :)

Plus, usually the "flops" mentioned (now) refer to GPU performance (not the CPU).




Historically speaking, AMD flops have been behind Nvidia flops: Nvidia GPUs have outperformed AMD GPUs at the same flop level for many years now. However, AMD has been making great strides to improve that situation. Navi (RDNA 1) closed that gap somewhat, and RDNA 2 (which PS5 and Xbox Series both seem to be using) is expected to narrow it further, possibly even close it completely. It is entirely possible that a 12 TFLOP RDNA 2 GPU (like the one in Xbox Series X) will fall between the 2080 Super and 2080 Ti in terms of gaming performance.

It is also worth noting that if both Xbox Series and PS5 are using RDNA 2, you can get a pretty good sense of their relative GPU performance by comparing flop numbers. If PS5 is 9-10 TFLOP and XSX is 12 TFLOP, you will likely see a similar gap in gaming performance between them. Of course, even then there are other factors to take into account: memory speed and bus width, texture and pixel fillrates. These can make a difference in bridging some of the gap in certain situations.
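As a sanity check on that 2080 Super / 2080 Ti bracket, here are the Turing cards' published FP32 figures next to the rumored Series X number, under the assumption that RDNA 2 flops convert to gaming performance roughly 1:1 with Turing's:

```python
# Published Turing FP32 figures plus the rumored (unconfirmed) Series X number.
cards = {"RTX 2080 Super": 11.2, "Xbox Series X (rumored)": 12.0, "RTX 2080 Ti": 13.4}
for name, tf in sorted(cards.items(), key=lambda kv: kv[1]):
    print(f"{name:>24}: {tf:.1f} TF")
# If the flops-to-performance gap really is closed, 12 TF slots between the two.
```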

Last edited by shikamaru317 - on 09 March 2020

Pemalite said:

ESRAM, GDDR5, Flops, ACE Units were the big talking points at the start of the 8th gen, especially on these forums.

Lockhart will do fine. If you don't think it will be a suitable console... then as a consumer the solution is simple.
Do. Not. Buy. It. Buy the Xbox Series X or Playstation 5 instead.


Ah yeah, the memory speed war: adding the ESRAM bandwidth to the DDR3 bandwidth at the start of last gen to make it look closer to PS4.

However, if everything else is the same, it is a usable measure. If the rumors are true:

PS5 will be 76.6% of Series X (a smaller difference than PS4 vs Xbox One, which was 71%)
Lockhart 33% of Series X, 43% of PS5

Or

Series X is 3x Lockhart and 1.3x PS5
PS5 is 2.3x Lockhart (about the same gap as PS4 Pro to base PS4)

Of course the Xbox One had other benefits compared to the PS4, as did the PS4 Pro compared to the base PS4.
It will be a weird start to a new gen if all these rumors are true.

The Xbox One X was about 4.6x the Xbox One (6TF vs 1.31TF), with other improvements as well. And that had games running at native 4K (or very close) on the Xbox One X while the Xbox One was struggling below 1080p with other settings turned down as well. So maybe a 3x difference is enough for 1080p vs 4K. The PS4 Pro couldn't get past 1440p vs the 1080p PS4 with its 2.3x increase (plus a boosted CPU).
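For anyone wanting to check those ratios, they all follow from the rumored next-gen figures (12, 9.2, and 4 TF; none confirmed at the time) and the known 8th-gen numbers:

```python
# Rumored next-gen TFLOPs (unconfirmed) and known 8th-gen figures.
series_x, ps5, lockhart = 12.0, 9.2, 4.0
xbox_one, ps4, ps4_pro, xbox_one_x = 1.31, 1.84, 4.2, 6.0

print(f"PS5 / Series X:        {ps5 / series_x:.1%}")          # ~76.7%
print(f"Xbox One / PS4:        {xbox_one / ps4:.1%}")          # ~71.2%
print(f"Lockhart / Series X:   {lockhart / series_x:.1%}")     # ~33.3%
print(f"Lockhart / PS5:        {lockhart / ps5:.1%}")          # ~43.5%
print(f"PS5 / Lockhart:        {ps5 / lockhart:.2f}x")         # 2.30x
print(f"PS4 Pro / PS4:         {ps4_pro / ps4:.2f}x")          # 2.28x
print(f"Xbox One X / Xbox One: {xbox_one_x / xbox_one:.2f}x")  # ~4.58x
```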

It's a shame no matter how you look at it. The Pro machines were just used for higher resolution, and Lockhart seems to carry that split forward into the new generation by its very existence.



ArchangelMadzz said:
kirby007 said:
But you can compare TF between same manufacturer

I alluded to that toward the end by comparing the likely power to a current AMD GPU, the 5700XT.

But even then, if it has a lower TDP, which it will, it won't run as well as a desktop variant, depending on how much lower that power rating is.

This is mainly for people saying "WEll tHAat's MoRe TeRaFLoPs ThAn a 2080"

Playstation 5 + Xbox Series X are BOTH confirmed to be using RDNA 2.

It's about 50% more efficient in terms of performance per watt than an RDNA 1 card is, according to AMD.

This means that even if the Playstation 5 uses less power on the GPU than a current 5700 XT card does, it could beat it in performance.

Power consumption:

5700 XT: ~25% less power used than a 2080 Ti.

Performance:

5700 XT: about ~17% slower than a Geforce 2080 Ti.

(This is on the PC side, with drivers and games optimised for Nvidia, etc... on consoles the difference would be even smaller.)

Now imagine a +50% performance-per-watt gain from RDNA 1 -> RDNA 2.

It's totally possible that if the consoles are 12 Tflops as rumored, they could be around equal to a Geforce 2080 Ti.
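Taking those numbers at face value (the ~17% and ~25% figures are the post's own estimates, and the +50% performance-per-watt is AMD's marketing claim, so this is a back-of-envelope sketch, not a prediction):

```python
# Inputs are the post's estimates / AMD's claim -- not measurements.
perf_5700xt  = 0.83        # ~17% slower than a 2080 Ti (2080 Ti = 1.0)
power_5700xt = 0.75        # ~25% less power than a 2080 Ti
rdna2_perf_per_watt = 1.5  # AMD's claimed +50% over RDNA 1

# Hypothetical RDNA 2 GPU at the same power draw as a 5700 XT:
projected = perf_5700xt * rdna2_perf_per_watt
print(f"Projected performance vs 2080 Ti: {projected:.2f}x")  # ~1.25x at equal power
```

That back-of-envelope result is how a 12 TF RDNA 2 part could plausibly land in 2080 Ti territory, even within a console power budget.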

"This is mainly for people saying "WEll tHAat's MoRe TeRaFLoPs ThAn a 2080"" - Archangel.


Would you like to dispute any of the arguments above that say it's possible to do so?
Playstation 5 + Xbox Series X are gonna be quite powerful (imo).

Last edited by JRPGfan - on 09 March 2020