BraLoD said:
JWeinCom said:

On an outlet-by-outlet basis, no. Different people review different games, and GOTY awards are usually decided by a panel, so a small difference in score won't necessarily predict who wins GOTY. A review is one person's opinion on a game and doesn't necessarily reflect the site as a whole. I wrote for a gaming website a while back... I gave Skyward Sword a 10, which I stand by. 99% sure that wasn't our GOTY.

On an industry-wide basis, you should generally expect more GOTY awards for games that score higher... so if you have data that shows that's the case, then maybe. That being said, when we're dealing with a 2-3% difference in scores, the difference in GOTY awards should be pretty small as well.

https://gotypicks.blogspot.com/?m=1

Pick the desired year from the list on the right.

Scroll down through all the awards to see the totals, split into critic outlet awards and reader awards.

2018: GoW had almost a third more GOTY awards from critics than RDR2. (126 to 97)

2013: TLoU had almost double the GOTY awards from critics that GTAV had. (191 to 107)
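The two comparisons above can be checked with some quick arithmetic; a minimal sketch (using only the award counts quoted in this thread):

```python
# Quick check on the GOTY counts quoted above:
# 2018: 126 (GoW) vs 97 (RDR2); 2013: 191 (TLoU) vs 107 (GTAV).
def pct_more(winner: int, runner_up: int) -> float:
    """Percent more awards the winner got than the runner-up."""
    return (winner - runner_up) / runner_up * 100

print(f"2018: GoW vs RDR2  -> {pct_more(126, 97):.0f}% more")   # ~30%, i.e. almost a third more
print(f"2013: TLoU vs GTAV -> {pct_more(191, 107):.0f}% more")  # ~79%, i.e. almost double
```

So both "almost a third more" and "almost double" hold up against the blog's numbers.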

The problem is that the datasets are not nearly the same. God of War had 131 scored reviews making up its Metacritic score, while something like 250 outlets are counted as critics on that blog. Even assuming all 131 Metacritic outlets that reviewed GoW gave out a game of the year award, roughly half of the outlets on the blog are still not part of the Metacritic score. A slight difference in Metacritic scores shouldn't be very predictive of how the GOTY tally turns out when the difference was small to begin with and the sample is so different.

If you wanted to make the argument, you'd have to narrow the GOTY awards to only those issued by sites that reviewed both games, and then look at that data. Even then it shouldn't be incredibly predictive.
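That narrowing step is basically a set intersection: keep only GOTY picks from outlets that scored both games. A hypothetical sketch (outlet names are made up for illustration; real data would come from the blog and Metacritic):

```python
# Hypothetical illustration of restricting GOTY picks to the shared sample:
# only outlets that reviewed BOTH games should count toward the comparison.
reviewed_gow = {"OutletA", "OutletB", "OutletC", "OutletD"}
reviewed_rdr2 = {"OutletA", "OutletB", "OutletE"}
goty_picks = {"OutletA": "GoW", "OutletB": "RDR2", "OutletF": "GoW"}

common = reviewed_gow & reviewed_rdr2  # outlets that scored both games
filtered = {outlet: game for outlet, game in goty_picks.items() if outlet in common}
print(filtered)  # OutletF is dropped: it never reviewed either game on Metacritic
```

Only after filtering like this would the award tally and the review scores come from comparable samples.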

And of course, even if the data turned out the same, with GoW still having 30% more GOTY awards, you'd have a chicken-and-egg problem. How could you conclude that reviewers were biased when giving lower scores rather than biased when giving out GOTY awards?

Last edited by JWeinCom - on 12 June 2020