@thismeintiel,
I understand the differences between the two, but if someone releases a study publicly, I would think they would want more people to trust it. Otherwise, what's the point? You and theprof00 are both confusing some disclosure with 100% disclosure. These studies don't need to be accompanied by every little detail of how the data was collected, compiled, and analyzed. But they need SOMETHING other than just the conclusion. The more the better; with nothing, it's hard for all but the most trusting sort of people not to question it.
Since it was released publicly, I would expect them to want the results of the study to be believable, but maybe I'm just making an assumption there. Those of you defending this study are far too trusting. They could throw any number at you and you'd trust it just because they're reputable? A lack of information breeds more suspicion than misinformation does, and a lack of information tends to lead to misinformation as well.
Just accepting it because this may be the way things work in consumer research isn't necessarily the best idea. A firm could release parts of the data without really giving away its data-gathering methods. I want to know about the sample: males and females from ages 14-55, along with whatever other similarities and differences were tracked. I want to know whether this single "variable" was the only thing tested/studied, or whether it was a multi-faceted study, and if so, whether they tried to account for the problems that can occur in one. Were other variables or influences accounted for?
Then again, the wording in the study could have been misleading. They could have asked "Would you like to buy GT5, yes/no?" and called that "planning to buy." It's all a matter of wording and spinning things in your favor. Maybe that's a bigger part of the game than any of us know, and the only way for companies to stay "reputable" is to produce this type of data from misleading survey questions.
Someone else said they couldn't trust the data unless they at least knew where it came from. I think that's a good point, and it might be one of the stronger arguments against this 33% study. I just want data on the sample, which is essentially the most important part.
You know, this could just be a good example of the game of telephone. The information seems to have passed from OXT GamePlan to PS: The Official Magazine to Games Thirst, and who knows what got befuddled along the way. The 39% was a mistake (it's actually 33%), and so is the 600 SKUs figure; GamePlan's own site says 400. Little mistakes, yes, but those kinds of things make people wary and throw the validity of the data into even more doubt.
I'm really just throwing ideas out there. I don't find the 33% result that far-fetched; I could see it being true. But without any more information on the study, I just can't bring myself to believe it.