theprof00 said:
You don't even know how sales tracking works. Do you really think Brett checks some crazy internet database, or calls every single store and asks how many they sold? Do you think he has contracts to have info sent to him every week? NO. It is done through statistics, which is exactly what these guys do. Dude, c'mon. Any study that isn't based on fact will cost its host firm/company all accreditation. The most they can do is spin. Like it has been said 10 times already, a company that is 24th in the US in stat tracking is not going to tell you how it collected or analyzed the data. It is a trade secret.

The way it works (yes I will lower myself to sit here and explain statistics to you) is that they gather information from several different sources: some by survey, some by GameFly pre-ordering, some by other methods. Each of these sources is then assigned a score based on how reliable it is, usually judged from past data. Then they look through everyone planning to buy GT5 without having a PS3. This is possible in several ways: 1) surveys that list games and systems with check bubbles and ask the recipient to mark which they own and which they plan to own across several time slots (next month, next 6 months, next year); or 2) GameFly accounts, where this is easy to see because all members have their consoles listed. All of these methods are then examined for probable falsification or collection errors through a series of mathematical/statistical analyses.

Then they look at the number of people who plan to buy and the number who don't, and find the population deviation, to see whether these people actually belong to the whole population or are a small circle who have accidentally been treated as a real population. Testing and analysis is done on this again: first to make sure the sample population can be extrapolated to the total population, and second to determine exactly how much of the population it represents. Through this, 300 people in a sample of 1000 can be turned into 1 million, for example.

Lastly, they test the validity and error. A valid test will be accurate some 95% of the time and be off by no more than 5% in either direction. Then they run tests on the theoretical data by surveying another few sets of 1000 people, calling, or whatnot. If they predicted that 4 out of 10 people who want to buy GT5 don't own a PS3, then for the next few weeks that ratio should stay the same, give or take a certain number of people. This is all fed back into the validity and error testing to make sure they sync up, and then they release an announcement about what they found. That is how statistics works and, more than 95% likely, how this study was done. I wouldn't expect any less from such a high profile company.
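Before I get into it: just so we're on the same page about what that "95% of the time, off by no more than 5%" claim actually means, here's the textbook version of that math. This is a rough sketch in Python, assuming a simple random sample; the 300-out-of-1000 figures are from your own example, the population size is a made-up number chosen to hit your "turned into 1 million" figure, and the function name is mine, not anything the firm has actually disclosed.

```python
import math

# Rough sketch of the survey math described in the quote above. This is NOT
# the firm's actual method (which, as noted, is a trade secret) -- it's the
# standard textbook calculation, assuming a simple random sample.

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a sample proportion p with sample size n."""
    return z * math.sqrt(p * (1 - p) / n)

n = 1000        # respondents surveyed (figure from the quoted post)
hits = 300      # respondents planning to buy GT5 without owning a PS3
p = hits / n    # sample proportion: 0.30

moe = margin_of_error(p, n)
print(f"sample proportion: {p:.2f} +/- {moe:.3f}")  # 0.30 +/- 0.028

# "300 people can be turned into 1 million" only if the wider population of
# interest is about 3.33 million people AND the sample actually represents it.
population = 3_333_333  # made-up size chosen to land on the 1 million figure
estimate = p * population
low, high = (p - moe) * population, (p + moe) * population
print(f"estimated buyers: {estimate:,.0f} (range {low:,.0f} to {high:,.0f})")
```

Note that the whole calculation only holds if the sample is random and representative, which is exactly the part in question here.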
Ugh, can we stop talking about sales tracking? Behavioral research =/= sales tracking. Talking about the validity of a research study on gamers who plan to buy GT5 is NOT the same thing as sales tracking.
Everything you're talking about there, I understand, but where's your source? You can speculate and assume all you want, but if they didn't come out and say that's their method, why would you just assume it is? If there's a source, post it up front next time instead of speculating as if it were fact. And no, you're not "explaining statistics" to me; you're speculating.
They don't need to make a study based on gamers fit the whole population anyway; they just need to make it fit gamers. They do have to make sure the sample isn't some narrow subgroup, yes, but it doesn't have to match the general population, though that also depends on where they get the data from. The exception might be if a company wanted information on how to market to non-gamers? I don't know. Please note, I'm referring to this specific case with the GT5 study, nothing else.
Besides, pulling data from multiple sources into the same sample is a big no-no in this kind of research. Big no-no. They can do multiple studies and combine the data afterward, but they can't pull a bit here and a bit there into one sample. That's about as unscientific as you can get. Assigning scores based on reliability is all well and good, but the more reliable approach is to keep the different sources as separate sets of data and not combine them. Otherwise they're adding variables; sure, they can assign "relevancy scores", but why even take that chance? There's no guarantee they won't mess up the data that way.
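To make that concrete, here's a toy sketch (Python again, with made-up numbers and hypothetical source names, nothing to do with whatever this firm actually collected) of why you'd keep the sources separate:

```python
# Toy illustration of the point above (made-up numbers, hypothetical source
# names): keep each source as its own dataset so you can see whether the
# sources even agree, instead of pooling everything into one sample up front.

sources = {
    # source name: (respondents planning to buy GT5 w/o a PS3, sample size)
    "mail_survey":  (300, 1000),
    "gamefly_data": (80, 200),
    "phone_survey": (45, 300),
}

for name, (hits, n) in sources.items():
    print(f"{name:>12}: {hits / n:.1%} of {n}")

# Naive pooling treats everyone as one sample and hides the disagreement:
total_hits = sum(hits for hits, _ in sources.values())
total_n = sum(n for _, n in sources.values())
print(f"{'pooled':>12}: {total_hits / total_n:.1%} of {total_n}")

#  mail_survey: 30.0% of 1000
# gamefly_data: 40.0% of 200
# phone_survey: 15.0% of 300
#       pooled: 28.3% of 1500
```

The pooled 28.3% looks tidy, but the sources range from 15% to 40%, and that spread is exactly the warning sign you throw away by combining them.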
Back to one of your earlier points: I know it seems stupid for them to lie about something like this, since it would hurt them in the end, but that doesn't mean you should just assume the study is true.
It's just that the defenders of this are assuming the study is accurate, while the offenders (heh) are merely questioning its validity. The only thing you can expect from any high profile company is that it'll do almost whatever it takes to turn a profit. Even that shouldn't be assumed. You take too much for granted and present it as fact. And please stop being condescending ("yes I will lower myself to sit here and explain statistics to you"); it's gross and it hurts your case.







