binary solo said:
It's looking like I've finally managed to make a good prediction with Dark Souls.

I must also lodge a protest regarding the meta-meta scoring for this thread. You can't just take the average metascore, because often one review score will count towards more than one (or all) platforms' metascores. For instance, the Gamespot score (80) is included in the PS4, Xbox One and PC scores, which means it's dragging the meta-meta score down by effectively being counted 3 times if you are merely averaging the metascores. It happens for scores above the meta-meta average too, but you can't just assume it all evens out. And yes, I'm looking for a 5-pointer for my DSIII score instead of my 4 points, 1 short of that. I should be careful what I wish for, perhaps, because refining the meta-meta score could drop it down instead of lifting it up. But even if so, it's better to ensure all reviews only count once towards the meta-metascore.

Yeah, I've been thinking about averaging the metascores too, and it does have some problems.

Some sites like Gamespot seem to be doing it wrong: they are supposed to specify the platform they played on, and the review would only count for that platform unless they indicate they played the other versions too. From their review it appears they only played one version, yet the review is claimed for all three platforms. (By the way, I wouldn't worry about the score yet; I imagine more reviews will come after the game actually releases, and the metascore might change significantly.)
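To make the double-counting concrete, here's a rough sketch. Apart from the Gamespot 80, the outlets and numbers are made up for illustration:

```python
# Toy illustration of the double-counting problem (all outlets and
# numbers besides the Gamespot 80 are made up).
reviews = [
    ("Gamespot", 80, ["PS4", "XONE", "PC"]),  # one review claimed for 3 platforms
    ("Outlet A", 90, ["PS4"]),
    ("Outlet B", 88, ["XONE"]),
    ("Outlet C", 86, ["PC"]),
]

platforms = ["PS4", "XONE", "PC"]

# Per-platform metascores (plain mean of the reviews counted for each platform).
metascores = {}
for p in platforms:
    scores = [score for _, score, plats in reviews if p in plats]
    metascores[p] = sum(scores) / len(scores)

# Naive meta-meta: average of the metascores. The 80 sits in all three
# platform averages, so it is effectively weighted 3x.
naive = sum(metascores.values()) / len(metascores)

# De-duplicated meta-meta: every review counts exactly once.
dedup = sum(score for _, score, _ in reviews) / len(reviews)

print(metascores)  # {'PS4': 85.0, 'XONE': 84.0, 'PC': 83.0}
print(naive)       # 84.0
print(dedup)       # 86.0 -- counting each review once changes the result
```

So even in a tiny example the naive average and the de-duplicated one land two points apart, and which way it moves depends on whether the multi-claimed review sits above or below the rest.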

There are also problems of the Arkham Knight kind, where a broken PC version drags the score down significantly due to performance issues on one of the platforms. It could be argued that we have to be able to predict any potential issues a platform might have and account for them, but in the case of Arkham Knight there wasn't really sufficient info available to predict such a thing. Similarly, Quantum Break will now drag the score down once the PC version gets enough reviews, due to its performance problems. (This could have been predicted from UWP, Win 10 and the lack of PC info, but is that what these predictions are really about?)

There's also the simpler problem of one platform having significantly fewer reviews, yet being weighted equally with the other platforms when the average is taken.
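As a quick made-up example of how much that can matter, say one platform has 40 reviews averaging 88 and another has only 3 reviews averaging 60:

```python
# Made-up numbers: 40 reviews averaging 88 on one platform,
# only 3 reviews averaging 60 on another.
counts = {"PS4": 40, "PC": 3}
metas = {"PS4": 88, "PC": 60}

# A plain average of metascores treats the 3-review platform as an equal.
plain = (metas["PS4"] + metas["PC"]) / 2  # 74.0

# Weighting by review count barely lets the small platform move the score.
weighted = sum(counts[p] * metas[p] for p in counts) / sum(counts.values())

print(plain, round(weighted, 1))  # 74.0 86.0
```

A 12-point swing from the weighting choice alone, which is far bigger than the margins these predictions are scored on.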

While I will not change the rules in the middle of a month, if I do change them, the change could take effect starting with May (as you need to inform people about the rules beforehand).

One of the alternatives I was thinking of was to only count the version with the most reviews, as sketched below. No doubt this could have problems too: what if two versions have vastly different scores and both have many reviews, but I then have to completely ignore the version with, say, 2 fewer reviews? And obviously I couldn't go case by case, as involving subjectivity in the scoring here is not a good idea.
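A minimal sketch of that rule, with hypothetical counts and scores chosen to show exactly that edge case:

```python
# Hypothetical counts and scores illustrating the edge case:
# two well-reviewed versions, only 2 reviews apart, wildly different scores.
counts = {"PS4": 42, "PC": 40}
metas = {"PS4": 88, "PC": 72}

# The rule: take the metascore of the platform with the most reviews.
lead = max(counts, key=counts.get)
meta_meta = metas[lead]

print(lead, meta_meta)  # PS4 88 -- the PC's 72 is ignored entirely
```

It's simple and objective, but as the example shows, 40 reviews' worth of information can vanish because of a 2-review gap.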

Would you prefer such a method, or what were you thinking exactly? What other methods do you think could work?