
I began asking myself what I could have done differently in order to achieve my ultimate goal: a perfectly designed survey with perfectly accurate responses. It felt as though any analysis of imperfect data would be valueless and wasteful. Yet how could I ever know whether my responses were accurate? It wasn't as if I could simply ask my respondents whether they were telling the truth.

 

I was stuck between forging ahead with a report built on inaccurate, largely useless data and attempting to recreate my poll from scratch, hoping for a larger sample size and more honest, reliable results. However, even when I considered creating an entirely new poll, I did not know where to begin to make it more reliable than the one I had conducted a week earlier.


Disheartened by my perceived failure, I reflected on how I could have posed a clearer question. I came to realize that the best way to improve my polling was to review the process by which official pollsters carry out their polls. Theoretically, with the right combination of resources, sample size, and a well-written question, a poll should reveal perfectly accurate insight into the question at hand. But as I began reviewing old pollster data from major political elections, I noticed that the very polls I was looking to for guidance toward perfection were far from perfect themselves.
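As a rough illustration of why even a carefully run poll can never be "perfectly accurate," the short sketch below (my own example, not drawn from any pollster's actual method) computes the standard 95% margin of error for a simple random sample. The interval narrows as the sample grows, but it never reaches zero, and it says nothing about other sources of error such as dishonest or biased responses.

```python
import math

def margin_of_error(sample_size, proportion=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample.

    Uses the worst-case proportion of 0.5 by default, which gives the
    widest interval. Assumes simple random sampling and ignores
    non-sampling error (question wording, dishonest answers, etc.).
    """
    return z * math.sqrt(proportion * (1 - proportion) / sample_size)

# The margin shrinks as the sample grows -- but it never disappears.
for n in (50, 500, 5000):
    print(f"n = {n:>5}: ±{margin_of_error(n) * 100:.1f} percentage points")
```

Even a sample of 5,000 respondents leaves a margin of roughly ±1.4 percentage points, which is why "perfect" results were never a realistic standard for my small class survey.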


Specifically, I looked at the political polls carried out by major outlets ahead of the 2016 and 2020 elections. Surely organizations like FiveThirtyEight and The Economist should be among the best, or nearly perfect, at surveying and polling, yet they often fall short themselves. Many of the country's major news organizations predicted that Clinton would beat Trump by a wide margin in 2016, and many pollsters still have trouble explaining their miscalculations. If we cannot trust professional, well-established pollsters, what polls can we trust?
