This is a silly post, as I’m sure everyone involved knows, at least on some level. First off, the OP clings faithfully to wonderful, pure, unbiased data in order to discuss something that I think we all know is anything but unbiased on these forums: the incessant fanboy wars. Unfortunately for all of us, we can't afford to be this naive in the utilitarian world we live in, at least when it comes to more important things. Collections of data are rarely trustworthy when there is an emotional motivation behind them – the eye sees what it likes, the heart quickens in anticipation, and the hand takes a snapshot that rarely manages to include the whole picture, let alone get all the angles.
I don’t want to spend very much time here because I’m in the middle of making a gun-toting fiery deathmatch level in LBP (working title: “Sack-man: Arena of Fiery Death”) but I just wanted to show the same data for a moment, sliced a different way.
Instead of looking at 90+ and 80+ groups, let’s try 95+ and 85+ groups. I’ve also included the review averages so that people can see for themselves where else the scores could be cut off (there are more advantageous places for both systems).
95+
360
None
PS3
LittleBigPlanet 95
85+
360
Halo 3 94
Gears of War 2 93
Forza Motorsport 2 90
Fable II (PC port?)
Project Gotham Racing 3 88
Dead or Alive 4 85
Dead Rising 85
Project Gotham Racing 4 85
PS3
Metal Gear Solid 4 94
Ratchet and Clank Future: Tools of Destruction 89
Uncharted: Drake's Fortune 88
Valkyria Chronicles 87
Resistance 2 87
Resistance: Fall of Man 86
So if we slice the data in a different place, the 360 has one more exclusive at 85+ than the PS3 does. But the PS3 has the only 95+ exclusive. Interesting!
All of a sudden, it looks pretty even. And all as a result of throwing out the main assumption made in the OP: that the cutoffs should be made in the most obvious places, at 80 and 90.
Of course, you have to draw the line somewhere when you collect data like this, but here’s the thing: by not examining the way your results shift when you move the lines, by not challenging your own basic assumptions, you build a weaker foundation for your argument, and you end up with something that is more wishful thinking than conclusive data.
I want to point out that my results are not a fluke: by shifting the cutoff points to any number of different places, either the 360 or the PS3 can come out way ahead. I’m sure it was pure chance that 80+ and 90+ happened to show the 360 in an unusually advantageous light, but when results can change this much depending on the method applied, the underlying finding has to be that the data is inconclusive, not that some irrefutable fact has been unearthed.
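The whole exercise really just boils down to counting how many exclusives clear a movable cutoff. Here's a toy sketch of that in Python, using only the scores from the lists above (Fable II is left out because the post doesn't give its score), to show how the "winner" flips as the cutoff moves:

```python
# Counting exclusives above a movable review-score cutoff.
# Scores are the ones listed in the post; Fable II is omitted (no score given).
xbox_360 = {
    "Halo 3": 94, "Gears of War 2": 93, "Forza Motorsport 2": 90,
    "Project Gotham Racing 3": 88, "Dead or Alive 4": 85,
    "Dead Rising": 85, "Project Gotham Racing 4": 85,
}
ps3 = {
    "LittleBigPlanet": 95, "Metal Gear Solid 4": 94,
    "Ratchet and Clank Future: Tools of Destruction": 89,
    "Uncharted: Drake's Fortune": 88, "Valkyria Chronicles": 87,
    "Resistance 2": 87, "Resistance: Fall of Man": 86,
}

def count_at_or_above(scores, cutoff):
    """How many titles clear a given review-score cutoff."""
    return sum(1 for s in scores.values() if s >= cutoff)

# Move the cutoff and the comparison flips:
for cutoff in (85, 90, 95):
    print(cutoff, count_at_or_above(xbox_360, cutoff),
          count_at_or_above(ps3, cutoff))
# 85 → 7 vs 7 (even), 90 → 3 vs 2 (360 ahead), 95 → 0 vs 1 (PS3 ahead)
```

Three lines of output, three different "conclusions" from the exact same data. That's the whole point.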
The most useful thing that can be done in data sets like this is to take a step back, list the review scores themselves, and let people in on your reasoning process as you attempt to organize the data. That’s what I’ve tried to do here.
Alic