
@Slimebeast

Take for example a game that has the following scores:

90,85,90,85,90,90,90,95,90,90,85,90,90,90,100,100,95,90,100,90,85,90,80,85,90,85,90,90,90,100,100,100,95,95,95,90,80,85,85,80,90,90,100

If you average those 43 scores, you get 90.58. Even if you add, say, a 70, the average only drops to 90.11.

As you add scores that follow the normal pattern (and in my estimates I used enough variation to make the Gaussian curve as wide as possible), there comes a point where even a score that deviates hugely from that pattern (the bottom 5% or the top 5% of scores) barely affects the average.

This is what happens with reviews. As long as a game has tons of 10s and 9s, an odd 8 or 7 won't move the final result much, so that score is effectively "discredited" in the statistical analysis.
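A quick way to check the averaging argument numerically (a minimal sketch in Python, using the score list quoted above):

```python
# The 43 scores from the example above.
scores = [90, 85, 90, 85, 90, 90, 90, 95, 90, 90, 85, 90, 90, 90,
          100, 100, 95, 90, 100, 90, 85, 90, 80, 85, 90, 85, 90, 90,
          90, 100, 100, 100, 95, 95, 95, 90, 80, 85, 85, 80, 90, 90, 100]

def average(xs):
    """Plain arithmetic mean."""
    return sum(xs) / len(xs)

base = average(scores)                 # mean of the original 43 scores
with_outlier = average(scores + [70])  # same list plus one low outlier

print(f"base:         {base:.2f}")          # 90.58
print(f"with outlier: {with_outlier:.2f}")  # 90.11
print(f"shift:        {base - with_outlier:.2f}")  # under half a point
```

With 43 scores already in the pool, a single 70 shifts the mean by less than half a point, which is exactly the "discrediting" effect described above.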



Current PC Build

CPU - i7 8700K 3.7 GHz (4.7 GHz turbo) 6 cores OC'd to 5.2 GHz with watercooling (Hydro Series H110i) | MB - Gigabyte Z370 HD3P ATX | GPU - Gigabyte GTX 1080ti Gaming OC BLACK 11G (1657 MHz Boost Core / 11010 MHz Memory) | RAM - Corsair DIMM 32GB DDR4, 2400 MHz | PSU - Corsair CX650M (80+ Bronze) 650W | Audio - Asus Essence STX II 7.1 | Monitor - Samsung U28E590D 4K UHD, Freesync, 1 ms, 60 Hz, 28"