
Forums - Sony Discussion - PS4 exclusives should get a 4K 60 with high resolution assets update next gen Update: Confirmed

Pemalite said:
GOWTLOZ said:

Flops are relevant; they give a good ballpark estimate of what the hardware can do.

No they don't.
Here are two GeForce GT 730s. Roughly the same flops. 50-65% performance difference.
http://www.jagatreview.com/2015/04/gt-730-ddr3-vs-gt-730-gddr5/2/

Here are two GeForce GT 1030s... Roughly the same flops. 100-125% performance difference.
https://www.gamersnexus.net/hwreviews/3330-gt-1030-ddr4-vs-gt-1030-gddr5-benchmark-worst-graphics-card-2018

Here are two Radeon HD 7750s. Roughly the same flops... 40-50% performance difference.
https://www.goldfries.com/computing/gddr3-vs-gddr5-graphic-card-comparison-see-the-difference-with-the-amd-radeon-hd-7750/

How about the Radeon HD 5870 and Radeon HD 7850? The 5870 is 2.72 teraflops versus the 7850's 1.76 teraflops. The 5870 should win, right? It has almost 1 teraflop more?
Nope. It loses by about 8-55%.

https://www.anandtech.com/bench/product/1062?vs=1076

Tell me again how much accuracy flops will give you in determining a GPU's capabilities. The evidence overwhelmingly says you are wrong on that claim. Like... really wrong.

Flops tell us only one thing...
The theoretical single-precision floating-point performance of a part. They tell us absolutely nothing about the half-precision, double-precision, quarter-precision, integer, geometry, bandwidth, texturing, compression or culling capabilities of a part... In short, just like bits, flops are a useless metric on their own.

The performance differences are due to different memory used.

1. One is DDR3 and the other is GDDR5.

2. DDR4 and GDDR5.

3. GDDR3 and GDDR5.

There are other reasons as well, such as the difference in VRAM capacity between the GT 730s. Flops are a good indicator of performance, but drivers, APIs and other factors matter too. Next-gen consoles will likely use GDDR5 or GDDR6, and that, combined with the flop figures, should indicate next-gen console performance.
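The bandwidth gap behind those GT 730 results is easy to quantify: peak memory bandwidth is the effective transfer rate times the bus width in bytes. A minimal sketch; the 1800 MT/s and 5000 MT/s figures are the commonly listed GT 730 memory specs, assumed here for illustration rather than taken from the linked review:

```python
# Peak memory bandwidth = effective transfer rate (MT/s) x bus width (bytes).
# The clock and bus figures below are assumed GT 730 reference specs.

def bandwidth_gbps(transfer_rate_mtps, bus_width_bits):
    """Peak bandwidth in GB/s: MT/s x bytes per transfer / 1000."""
    return transfer_rate_mtps * (bus_width_bits // 8) / 1000

ddr3_730 = bandwidth_gbps(1800, 64)   # DDR3 variant:  ~14.4 GB/s
gddr5_730 = bandwidth_gbps(5000, 64)  # GDDR5 variant: ~40.0 GB/s

# Same flops on paper, but the GDDR5 card has roughly 2.8x the bandwidth,
# which is where the large real-world performance gap comes from.
print(ddr3_730, gddr5_730)
```

With identical 64-bit buses, the memory type alone nearly triples the bandwidth, which is consistent with the benchmark gaps in the linked reviews.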



potato_hamster said:
GOWTLOZ said:
Pemalite said:

No they don't.
Here are two GeForce GT 730s. Roughly the same flops. 50-65% performance difference.
http://www.jagatreview.com/2015/04/gt-730-ddr3-vs-gt-730-gddr5/2/

Here are two GeForce GT 1030s... Roughly the same flops. 100-125% performance difference.
https://www.gamersnexus.net/hwreviews/3330-gt-1030-ddr4-vs-gt-1030-gddr5-benchmark-worst-graphics-card-2018

Here are two Radeon HD 7750s. Roughly the same flops... 40-50% performance difference.
https://www.goldfries.com/computing/gddr3-vs-gddr5-graphic-card-comparison-see-the-difference-with-the-amd-radeon-hd-7750/

How about the Radeon HD 5870 and Radeon HD 7850? The 5870 is 2.72 teraflops versus the 7850's 1.76 teraflops. The 5870 should win, right? It has almost 1 teraflop more?
Nope. It loses by about 8-55%.

https://www.anandtech.com/bench/product/1062?vs=1076

Tell me again how much accuracy flops will give you in determining a GPU's capabilities. The evidence overwhelmingly says you are wrong on that claim. Like... really wrong.

Flops tell us only one thing...
The theoretical single-precision floating-point performance of a part. They tell us absolutely nothing about the half-precision, double-precision, quarter-precision, integer, geometry, bandwidth, texturing, compression or culling capabilities of a part... In short, just like bits, flops are a useless metric on their own.

The performance differences are due to different memory used.

Well, I mean... that was Pemalite's point, wasn't it?



GOWTLOZ said:
Pemalite said:

No they don't.
Here are two GeForce GT 730s. Roughly the same flops. 50-65% performance difference.
http://www.jagatreview.com/2015/04/gt-730-ddr3-vs-gt-730-gddr5/2/

Here are two GeForce GT 1030s... Roughly the same flops. 100-125% performance difference.
https://www.gamersnexus.net/hwreviews/3330-gt-1030-ddr4-vs-gt-1030-gddr5-benchmark-worst-graphics-card-2018

Here are two Radeon HD 7750s. Roughly the same flops... 40-50% performance difference.
https://www.goldfries.com/computing/gddr3-vs-gddr5-graphic-card-comparison-see-the-difference-with-the-amd-radeon-hd-7750/

How about the Radeon HD 5870 and Radeon HD 7850? The 5870 is 2.72 teraflops versus the 7850's 1.76 teraflops. The 5870 should win, right? It has almost 1 teraflop more?
Nope. It loses by about 8-55%.

https://www.anandtech.com/bench/product/1062?vs=1076

Tell me again how much accuracy flops will give you in determining a GPU's capabilities. The evidence overwhelmingly says you are wrong on that claim. Like... really wrong.

Flops tell us only one thing...
The theoretical single-precision floating-point performance of a part. They tell us absolutely nothing about the half-precision, double-precision, quarter-precision, integer, geometry, bandwidth, texturing, compression or culling capabilities of a part... In short, just like bits, flops are a useless metric on their own.

The performance differences are due to different memory used.

1. One is DDR3 and the other is GDDR5.

2. DDR4 and GDDR5.

3. GDDR3 and GDDR5.

There are other reasons as well, such as the difference in VRAM capacity between the GT 730s. Flops are a good indicator of performance, but drivers, APIs and other factors matter too. Next-gen consoles will likely use GDDR5 or GDDR6, and that, combined with the flop figures, should indicate next-gen console performance.

Flops are an indicator of absolutely nothing besides a theoretical, i.e. not definite, single-precision floating-point capability. That is it. Period.

Clearly you also missed the comparison between the Radeon 5870 and 7850, as it isn't just about DRAM either... The Radeon 5870 and 7850 have identical amounts of bandwidth, yet the 5870, with more flops, loses.

I highly suggest you go back and take a proper look at the evidence I have provided.

As for RAM itself... DDR3 can be faster than GDDR5 in terms of offered bandwidth; it depends entirely on how wide you make the memory bus.

But the point I am trying to make is this: you cannot take a single arbitrary number like flops, bits, fabrication node or bandwidth and conclude that one product will beat another by a set amount... You need to factor in everything... and you need benchmarks to provide the empirical evidence to back that up.

As it stands... flops alone are useless, bandwidth alone is useless, and bandwidth and flops in combination are also useless... Otherwise the Radeon VII would beat the GeForce RTX 2080 Ti, and we know that isn't happening.
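The teraflop figures cited for the 5870 and 7850 can be reproduced with simple arithmetic: theoretical single-precision throughput is shader count times 2 (one fused multiply-add per cycle) times clock speed. A minimal sketch using the reference shader counts and core clocks of the two cards:

```python
# Theoretical single-precision throughput: shaders x 2 ops (FMA) x clock.
# Shader counts and core clocks below are the reference specs of each card.

def tflops(shaders, clock_mhz):
    """Peak single-precision teraflops."""
    return shaders * 2 * clock_mhz / 1_000_000

hd5870 = tflops(1600, 850)  # ~2.72 TFLOPS (VLIW5 architecture)
hd7850 = tflops(1024, 860)  # ~1.76 TFLOPS (GCN architecture)

# The 5870 "wins" on paper by almost a full teraflop, yet loses in real
# benchmarks: the number says nothing about architecture or utilisation.
print(hd5870, hd7850)
```

This is exactly the argument above: the card with nearly one teraflop more on paper is the slower one in practice, because the theoretical peak ignores how efficiently each architecture actually keeps its shaders fed.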

potato_hamster said:
GOWTLOZ said:

The performance differences are due to different memory used.

Well, I mean... that was Pemalite's point, wasn't it?

Precisely. Hahahaha

Last edited by Pemalite - on 14 January 2019

--::{PC Gaming Master Race}::--