binary solo said:

Regardless of the various failings of Metacritic, it's a better metric to use than some random dude posting in an internet forum. There are typically enough reviews in a metascore to cancel out many of the most egregious biases. It is particularly effective at negating the influence of platform preference bias, and in that respect the metascore is fairly objective. Platform preference bias can shift a metascore by a few points, but its presence in any given review cannot move the metascore all that significantly.

But to support your claim that a weighted average of selected reviews is sufficiently problematic to make the metascore of negligible value, you'd have to provide evidence that the contributors to the metascore are not a reasonable representation of critics in general, qualitatively or quantitatively. Taking a sample and weighting the results in an effort to account for certain biases is not in itself a fatally flawed methodology. You need to show either sampling bias or evaluation bias in Metacritic's methodology in order to show that the best option available is not better enough to be worth using for any meta-meta purpose.

No, actually, I wouldn't. A simple citation to a basic statistics text is all I'd need*, and that only if I decided not to hold you responsible for first proving your assertion that Metacritic is somehow what you claim it is.

In reality, the system you're lauding as somehow eliminating systematic bias through its non-random selection and weighting of a few dozen reviews (a highly questionable assertion in my opinion) demonstrably introduces equally egregious biases, which negate the value of whatever information you're trying to discover, as a casual glance at its lists will easily demonstrate.**

So what does Metacritic tell us? It tells us what the critics selected by Metacritic think, weighted by Metacritic's opinion of the worth of those reviews. Nothing more, nothing less. If it got rid of the weighting system, it might actually tell us something about what professional game critics think, depending on how you define that term and how generous you want to be with your confidence level and interval. But then this thread was never supposed to be about what "critics" thought, was it?
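To put rough numbers on that last point, here's a minimal sketch (plain Python, entirely hypothetical scores) of how wide a 95% confidence interval you get for a mean score estimated from a few dozen reviews, and that's under the charitable assumption that the reviews were a random sample of "professional game critics" in the first place, which a hand-picked panel is not.

```python
import math
import random

# Hypothetical example: ~40 review scores on a 0-100 scale.
# These numbers are invented purely to illustrate interval width.
random.seed(1)
scores = [random.gauss(78, 10) for _ in range(40)]

n = len(scores)
mean = sum(scores) / n
variance = sum((s - mean) ** 2 for s in scores) / (n - 1)
std_err = math.sqrt(variance / n)

# 95% confidence interval via the normal approximation (z ~= 1.96).
# Only meaningful if the panel were a random sample of the critic
# population, which is exactly the assumption in dispute above.
half_width = 1.96 * std_err
print(f"mean = {mean:.1f}, 95% CI = ({mean - half_width:.1f}, {mean + half_width:.1f})")
```

Even under that charitable assumption the interval spans several points, the same order of magnitude as the platform bias being argued over; drop the random-sampling assumption and the interval tells you nothing about critics outside the panel at all.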

 

*You're championing a system wherein gatekeepers first decide whose opinion is worth valuing - which is how Dewey "beat" Truman once upon a time - and then further degrade those already questionable results by assigning an arbitrary weight to each of said opinions. In other words, there's no randomness in the selection of the sample, and there's nowhere near a sufficient sample size to determine what game critics think of these games, let alone the general gamer population. I find it harder to believe that this system produces even the illusion of a worthwhile result than that calling twenty people I know who still answer their landlines gives me an accurate idea of what most Americans think on any given topic, even without then giving Martha's opinion twice as much weight as Danny's. A toy demonstration of the weighting problem follows below.
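For what it's worth, the weighting objection is easy to show with toy numbers. A minimal sketch, using entirely hypothetical scores and weights (Metacritic does not publish its actual weights), of how the same small panel of reviews yields noticeably different headline numbers depending on which arbitrary weights the gatekeeper assigns:

```python
# Hypothetical panel of eight review scores; the weight schemes below are
# invented to show the effect, since the real weights are not disclosed.
scores = [90, 85, 80, 78, 75, 70, 65, 60]

def weighted_average(scores, weights):
    """Plain weighted mean: sum(s * w) / sum(w)."""
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

equal_weights        = [1, 1, 1, 1, 1, 1, 1, 1]   # unweighted mean
favour_big_outlets   = [3, 3, 2, 2, 1, 1, 1, 1]   # "trusted" outlets count triple
favour_harsh_critics = [1, 1, 1, 1, 2, 2, 3, 3]   # the same trick, pointed the other way

for name, w in [("unweighted", equal_weights),
                ("favour big outlets", favour_big_outlets),
                ("favour harsh critics", favour_harsh_critics)]:
    print(f"{name:>22}: {weighted_average(scores, w):.1f}")
```

Same eight reviews, roughly an eight-point swing in the result depending purely on the choice of weights, and that's before you account for who was invited onto the panel in the first place.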

 

**Unless you believe that how people spend their money and what they think of games are independent of one another. I do not share that view in the least.