What the U.S. election results can tell us about research methods by Dr. Joseph Levitan
One of the many surprising outcomes of the November 8 elections was how wrong the polls were. The commonly used statistical models and sampling techniques were well wide of the final counts. This was a serious blow to quantitative methods for political research, and it has caused strategists to rethink campaign approaches. Despite this setback, statistics have an essential place in social thought and are incredibly useful for quality decision-making. They allow policy makers to understand important aspects of social phenomena and how policies may or may not facilitate intended or desired outcomes. Statistical analyses have also become much more sophisticated thanks to an increasingly digitized world and “Big Data.” However, like most innovations, the range of applicability and the limits of these techniques are still being teased out, and statistical analysis may be finding some of its limits when it comes to predicting human behavior. In this essay I argue that an over-reliance on statistics and polling leads to dangerous mistakes. To balance and mitigate these potential mistakes, it would be valuable to figure out how to integrate qualitative and quantitative approaches more fully.
Data and Decision-Making
While most of us were examining election projections based on polling numbers, constantly reviewing the statistical trends, estimates, and each candidate’s odds of winning the Electoral College, reporters from various news outlets were describing significant enthusiasm among a large number of Trump supporters. They also reported some of the reasons Trump backers gave for supporting him. Those reasons were equally important information for understanding the possible election results, although few people were listening at the time.
The reasons Trump won are complicated, but put simply, supporters perceived Trump as a candidate who would create change and offer hope (often rooted in supporters’ fears). The reasons for supporting the Republican candidate were very different from the reasons Hillary Clinton supporters offered. For example, Trump supporters saw him as someone who would “shake things up,” “make change,” and “be exciting,” while Clinton supporters saw her as “well qualified” and “highly competent” and as aligning with their policy goals. Trump offered a narrative, a story that was felt and repeated by a number of people who needed that story. In contrast, Clinton offered policy ideas which, many argued, would move the country forward. However, her platform did not create the same emotional pull and call to action that people respond to in the negative narratives and slogans of the Trump campaign. Qualitative data from the voices of the people was given little consideration, because embedded journalists were reporting anecdotal impressions and trying to find compelling sound bites every day. Even under those pressures, many reporters had a clear sense that the reality on the ground was different from the view from the polls. What important information might have been uncovered if a trained ethnographer or qualitative researcher had been able to gather comprehensive observation and interview data in situ during the campaign?
“The recent election presents important evidence about why a balanced, pragmatic approach that includes multiple methodologies, even for something as statistically oriented as projecting votes, is vital for understanding human phenomena and avoiding major mistakes.”
Instead of thinking about “on the ground” voices, many, if not most, concerned citizens relied on statistics to inform their levels of anxiety and their actions. The over-reliance on statistics, even from the most rigorous statistical sites like FiveThirtyEight, was a fatal oversight for people who supported Democratic nominees for Senate and House races as well as for the presidency. Statistical projections may have affected levels of involvement in the campaign process and supporters’ financial backing, especially among disaffected voters who were lukewarm on Hillary Clinton. Polls certainly missed the importance of the feelings of a large number of people. The Clinton campaign and other Democrats concerned about the election likely paid too much attention to polling and not enough to the opinions of their fellow citizens “on the street”: people talking about their values and beliefs in their own context.
The Importance of Context
It may seem like a strange distinction to make between polling numbers and “people on the street.” After all, aren’t polls supposed to be representative of people on the street? Well, yes and no. Research has shown that people often do not act in the ways that they talk, and this is especially true when they are placed in novel situations (Jerolmack & Khan, 2014). In fact, some argue that context influences decisions or actions more than biases or opinions do. During polling, individuals respond to questions from a stranger who is intentionally trained to “not bias” responses, placing the respondent in an abstract relational space. This abstract space is not the context in which respondents will vote, and it is not where their opinions are supported or discouraged by the people around them when they decide on a course of action. As others have discussed, the influence of context means that poll respondents may not follow through on their poll responses.
Likewise, when people are polled, whether via the Internet, phone, or mail, they are asked questions by strangers who are removed from their day-to-day interactions. In this context respondents may rely more on abstract values, or respond based on concerns about the opinions of “outsiders.” Because Trump is so divisive and publicly criticized, some supporters may not have wanted to admit their support. Both of these issues are well known in polling research, and both can be mitigated to some extent by complementary on-the-ground qualitative data gathering.
Polling methods do try to account for these issues, but continued research in developmental psychology has shown that there is a fundamental problem with polling techniques and “big data” in general when trying to understand human behavior. Context is essential for understanding human behavior and psychology, yet big data and polling usually remove context from the equation, as quantitative analysis looks at demographic trends across contexts.
I am sure that knowledgeable and creative people are already thinking about contextual models to adjust for these issues. Instead of thinking about statistical modeling, however, perhaps rethinking certain paradigms about polling and spending more energy on understanding how quantitative and qualitative data collection and analysis might work together would be productive.
In the election, most of the focus and predictions were based on national demographic trends in opinion, with less attention to individual districts, and many of us missed the crucial information that the “anecdotal” stories reported by journalists were presenting and raising concerns about. Embedded reporters were talking to people in local coffee shops and restaurants, which offered a perhaps more authentic representation of how people were feeling and how they would vote, or at least another perspective.
Journalists are not qualitative researchers, however, and they have time pressures to submit their reports. A qualitative researcher who has more time and freedom could find important information to contribute to how candidates can better understand constituents and highlight areas where other data sources may be misleading.
Survey Samples and Access
In addition to the issue of context, many of the people whom embedded reporters spoke with are also those whom current survey techniques habitually miss. There is a “polling gap” between technologically connected and unconnected individuals, which can skew polls. More rigorous sampling outreach may be able to address this issue, but it is costly and difficult. These sampling problems, however, are separate from the more salient argument about context and the potential of qualitative research to mitigate that issue.
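A toy simulation can make the polling-gap concern concrete. The numbers below are invented for illustration (they are not drawn from any real election data): in a population split evenly between two candidates, a poll that reaches "connected" voters far more often than "unconnected" ones will systematically overstate support for whichever candidate the connected group favors.

```python
import random

random.seed(42)

# Hypothetical population of 10,000 voters (illustrative numbers only).
# Overall support for candidate A is exactly 50%, but it differs by
# "connectedness": connected voters favor A, unconnected voters favor B.
population = (
    [("connected", 1)] * 3500 + [("connected", 0)] * 1500      # 70% of connected back A
    + [("unconnected", 1)] * 1500 + [("unconnected", 0)] * 3500  # 30% of unconnected back A
)

true_support = sum(vote for _, vote in population) / len(population)  # 0.50

def poll(pop, n=1000):
    """Simulate a poll that reaches connected voters 9x as often."""
    weights = [9 if group == "connected" else 1 for group, _ in pop]
    sample = random.choices(pop, weights=weights, k=n)
    return sum(vote for _, vote in sample) / n

estimate = poll(population)
print(f"true support: {true_support:.2f}, polled estimate: {estimate:.2f}")
```

With these assumed reach rates, the expected poll estimate is about 0.66 against a true support of 0.50, a large bias that no amount of increased sample size fixes; only better outreach to the under-sampled group (or the kind of on-the-ground qualitative work discussed above) reveals the gap.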
Strategists may benefit by hiring on-the-ground qualitative researchers to reach hard to sample populations, as well as gather data that can serve as a contextual backdrop to polling statistics. This qualitative data would provide a good counterbalance to polling statistics and focus group data, mutually improving each other. Rigorous qualitative research would also allow for saturation of the narratives people are creating in different regions, allowing politicians to be more responsive to the needs and narratives of the people.
Some people may say that better sampling techniques, more refined statistical analysis, or better survey item wording can control for the problem of missing important demographics, as well as most of the other issues raised above. Pollsters acknowledge that tweaks need to be made to the polling process to address the sampling gap, and reliability problems in polls are growing. However, while fixes to polling techniques may improve their reliability somewhat, there are vital contextual and narrative data that qualitative research can offer for understanding phenomena, even phenomena that historically have been studied through statistical analysis, such as an election.
Qualitative data gathering, such as performing interviews and making observations within a specific context, allows researchers to uncover essential information that polls may miss, such as the influence of environment on decision-making and grounded reasons for individuals’ choices. It also allows researchers to build narratives around principles and messages, which some strategists say are missing in the Democratic Party. Bringing contextually grounded information into strategic processes is essential for understanding something as important as people’s opinions. Polling is also essential, but the current imbalance of influence between statistical and qualitative research has led to some big mistakes, and without a balancing perspective, statistical analysis by itself could lead to more. Data-driven campaigns and decision-making can still be effective, as long as campaigns and decision-makers have the right data and interpret it properly. Better data can also make for more responsive and effective policy.
Quantitative and Qualitative Research: Finding a Balance
Qualitative approaches also have significant drawbacks. High-quality qualitative research is difficult and time-consuming. Low-quality qualitative research is easy, but can often be misleading. Debates abound about how to tell the difference between meaningful and unreliable qualitative data (e.g., Tracy, 2010; Merriam, 2014). However, these drawbacks can be overcome through creative discussion and the development of multi-technique data collection and reporting, especially in policy and political matters.
Another drawback of qualitative research for understanding political information is that it is very difficult, if not impossible, for qualitative research to produce statistically reliable generalizations, nor is that its purpose, so I am not arguing for a paradigm shift to purely qualitative research. However, what we can learn from the events of November 8, 2016 is that over-reliance on any single method of data collection leads to fatal flaws in information. These flaws can be accounted for by a more nuanced and balanced approach to data gathering. Many academics in the social sciences are now pragmatic about mixed methodologies and a synthesis of epistemological orientations in social investigation to answer their research questions. This is a hint for strategists and critical consumers of news. The recent election presents important evidence about why a balanced, pragmatic approach that includes multiple methodologies, even for something as statistically oriented as projecting votes, is vital for understanding human phenomena and avoiding major mistakes.
The lessons from national politics are equally applicable to local and statewide policy issues in education. If something as relatively straightforward as voter projection necessitates mixed-methods approaches and triangulation, it seems logical that educational research about the state of the public school system, the framing of success and quality schooling, and learning outcomes requires the same triangulation and mixed methods to ensure phenomena are accurately represented.
Based on the lessons from the elections, I have three suggestions. 1) Policy makers and political strategists would benefit from hiring ethnographers or trained qualitative researchers to do iterative, comprehensive, but also relatively quick data collection and analysis to complement polling, in order to build narratives with the people and be more responsive to the issues affecting them. 2) Have academics and strategists discuss ways in which qualitative data and polling data can work together in a timely and efficient way, in order to gain a more comprehensive understanding of popular opinion. 3) Discuss ways to bring certain types of qualitative data into considerations of public sentiment, in order to counterbalance potential or unseen errors in polling.
Jerolmack, C., & Khan, S. (2014). Talk is cheap: Ethnography and the attitudinal fallacy. Sociological Methods & Research, 43(2), 178-209.
Tracy, S. (2010). Qualitative quality: Eight “big-tent” criteria for excellent qualitative research. Qualitative Inquiry, 16(10), 837-851.
Merriam, S. B. (2014). Qualitative research: A guide to design and implementation. Hoboken, NJ: John Wiley & Sons.