Chrkeller said:
Immersiveunreality said:

But aren't substances and matter more "static" in the way you apply statistics to them?

I mean, when you do statistics involving the ever-fluctuating human mindset, you could say there is a bit of randomness involved, and that it is something that can be "manipulated" or badly implemented as well, no?

Error should be randomly distributed around the sample mean, regardless of what data is being collected.  The only difference between physical data collection and consumer perception data is the range of the error.

Making up an example: a poll has Trump at 45% +/- 3%.  If I were measuring a physical quantity, my result might be 45 ppm +/- 0.3 ppm.  Both should be distributed around the sample mean; one simply has a tighter range.  But the error should be random in both cases.
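To put numbers on that, here's a minimal simulation sketch. The sample size and sigma are hypothetical, chosen only so the spreads match the +/- 3% and +/- 0.3 ppm figures above:

```python
import numpy as np

rng = np.random.default_rng(0)

# A 95% margin of error of ~3 points corresponds to roughly n = 1000:
#   1.96 * sqrt(0.45 * 0.55 / 1000) ~= 0.031
n = 1000
p_true = 0.45
polls = rng.binomial(n, p_true, size=10_000) / n

# Physical measurement: same idea, tighter spread (sigma chosen so
# that 1.96 * sigma ~= 0.3 ppm).
measurements = rng.normal(45.0, 0.3 / 1.96, size=10_000)

print(f"poll mean        = {polls.mean():.3f}, 95% spread ~ +/-{1.96 * polls.std():.3f}")
print(f"measurement mean = {measurements.mean():.2f} ppm, 95% spread ~ +/-{1.96 * measurements.std():.2f} ppm")
# Both error distributions are centered on the true value; only the
# width differs, which is the point being made above.
```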

Edit

It is my opinion that one of two things occurred during polling in 2016.

1) Those executing the panels didn't do a great job of sample selection, so those polled did not accurately represent the general population

2) Those being polled weren't comfortable admitting they were voting for Trump, so the data collected carried inherent bias from misleading responses

Or maybe a bit of both happened.  For clarity, I don't think the polls were as awful and misleading as some make them out to be.  But I also think there were clear issues, and we can do better.

There's always a difference between the general population and the electorate that actually turns out, which makes selecting a statistical model in political science complex and inherently unreliable. Thus, the chances of any social statistical model suffering from biased samples are huge. FiveThirtyEight and others try to circumvent the problem by individually weighting and adjusting each poll to reflect the pollster's historical bias and statistical precision, but, of course, not even their model is absolutely reliable, for the reasons mentioned above.
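For a sense of what that adjustment looks like mechanically, here is a minimal sketch of the house-effect idea. All numbers are invented for illustration; FiveThirtyEight's actual model is far more elaborate:

```python
# Each poll is shifted by the pollster's historical lean before
# averaging, and weighted by precision (inverse variance).
polls = [
    # (candidate share, margin of error, pollster's historical lean)
    (0.47, 0.030, +0.01),  # pollster historically leans +1 pt toward candidate
    (0.44, 0.040, -0.02),  # leans -2 pts
    (0.46, 0.025,  0.00),  # no known lean
]

adjusted = [share - lean for share, moe, lean in polls]
weights = [1.0 / moe**2 for _, moe, _ in polls]  # precision weighting

estimate = sum(a * w for a, w in zip(adjusted, weights)) / sum(weights)
print(f"house-effect-adjusted average: {estimate:.3f}")
# The adjustment only helps if past lean predicts future lean, and it
# cannot fix a bias shared by every pollster in the average.
```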

Either way, I don't think the statewide samples systematically failed in 2016 as you seem to believe. Trump underperformed his polls in NV, AZ, CO, NM, and TX, among others, and overperformed in OH, PA, MI, MN, and IA, among others. This suggests random, not systematic, error from a holistic perspective. The fact that some of these states were far closer to tipping over than others is, ahem, merely unfortunate, I'd wager.
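As a sketch of how you'd check that claim, here's the kind of diagnostic I mean. The per-state errors below are invented for illustration, not the actual 2016 numbers:

```python
# Made-up 2016-style polling errors per state
# (positive = Trump beat his final polling average).
errors = {
    "NV": -2.4, "AZ": -1.0, "CO": -2.9, "NM": -3.5, "TX": -3.2,
    "OH": +5.8, "PA": +2.6, "MI": +3.7, "MN": +4.7, "IA": +6.4,
}

vals = list(errors.values())
mean_err = sum(vals) / len(vals)
overs = sum(v > 0 for v in vals)

print(f"mean signed error: {mean_err:+.2f} pts")
print(f"states where Trump overperformed: {overs}/{len(vals)}")
# A mean near zero with misses in both directions looks random; a
# large one-sided mean points to systematic bias. Note that error
# correlated with *which* states miss (e.g., the Midwest) can still
# decide an election even when the overall average looks balanced.
```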