
Well, this is not really my sort of problem; I work mostly with time series, and the real work is done by software. I do not think we should get too bothered by people pointing to one site over another, since most of the time the difference in scores is probably not all that meaningful except in the most extreme cases.

If we assume there is such a thing as a 'real' score for a game, then what we have at GameRankings is just the 'real' score plus the average of the errors around it. It is possible that this average is not that meaningful, since we have no idea what sort of distribution the error has, and even if it is some balanced distribution centered on the 'real' score, we can run into problems with really high or low scores: the scale cannot accommodate the whole distribution, because scores above 10 and below 0 are excluded, so errors near the ends get clipped off on one side. If you are willing to assume the distribution is not a problem, then yes, averaging these scores will produce a better approximation of the 'real' score than just guessing at who is the better reviewer, most of the time.

The biggest problem, I think, is this whole idea of a 'real' score; there is no way to measure it or even define it. Also, the various reviewers admit that the yardstick for measuring it changes with time, and I am not completely sure how that would influence the error calculations.

I personally like IGN and GameTrailers, as in I actually read or watch their reviews. I look at GameRankings mostly to get a broad view, and after that I hop over to sites I trust more and see what they really say, since I don't think those numbers carry all that much detail about what a game is really about.
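To make the clipping point concrete, here is a quick Python toy of the "real score plus error" model. All the numbers in it are made up for illustration; it is just a sketch of the idea, not real review data:

```python
import random

# Toy model: each review = 'real' score + centered noise,
# then clipped to the 0-10 scale. All numbers are invented
# for illustration.

random.seed(42)

def average_review(true_score, n_reviews=30, noise_sd=1.0):
    """Mean of simulated reviews, each clipped to [0, 10]."""
    reviews = [
        min(10.0, max(0.0, random.gauss(true_score, noise_sd)))
        for _ in range(n_reviews)
    ]
    return sum(reviews) / len(reviews)

# Mid-scale game: clipping almost never triggers, so the
# average tracks the 'real' score closely.
print(average_review(5.0))   # close to 5.0

# Near the ceiling: upward errors get cut off at 10 while
# downward errors survive intact, so the average comes out
# biased below the 'real' score.
print(average_review(9.5))   # noticeably below 9.5
```

So even with perfectly balanced noise, averages for games near the top or bottom of the scale get dragged toward the middle, which is exactly the problem with the distribution not fitting at the extremes.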



Proud member of the Sonic Support Squad