Survey Says ...

Are political polls flawed, or does the public expect too much of them?

By: Christopher Borick  Wednesday, November 2, 2022 09:53 AM

Illustration by iStock.com/A-Digit

Perspective is a feature of Muhlenberg Magazine. This article originally ran in the Fall 2022 issue.

In the 2016 presidential election, polling famously underestimated support for Donald Trump. However, the polling errors that year weren't far outside those of previous elections. For example, in 2012, the polling averages had Barack Obama winning nationally by about a point; he ended up winning by about four points. No one seemed to care about the difference because Obama was leading in the polls and he ultimately won reelection. In the Muhlenberg College Institute of Public Opinion (MCIPO) final poll in 2016, Trump was trailing in Pennsylvania by four points, but he won by less than one point. In both of these cases, the polls were off by similar margins, but in 2016, the MCIPO estimate was on the wrong side of the winner-and-loser line. A similar disparity between polls and results was seen in other states Trump narrowly won, like Wisconsin and Michigan. That raised the possibility that the polling had encountered a systematic problem, and thus pollsters sought to identify the issue and offer remedies.

One change MCIPO and other pollsters made after 2016 was how data is weighted. Pollsters almost always weight the data because random sampling does not usually lead to a sample that matches the voting population parameters. For example, the voting population is usually around 51 or 52 percent female, so if a sample ends up with only 47 percent female respondents, a pollster would weight the sample so the results would align with the population. MCIPO has traditionally weighted for things like gender identification, racial identification and party registration.
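The weighting idea described above can be sketched in a few lines of code. This is a minimal illustration, not MCIPO's actual procedure: it assumes a hypothetical sample that came back 47 percent female when the target voting population is 52 percent female, and assigns each respondent a weight equal to their group's population share divided by its sample share.

```python
def post_stratify(sample, population_shares):
    """Assign each respondent a weight = population share / sample share
    for their group, so weighted totals match the population."""
    n = len(sample)
    counts = {}
    for r in sample:
        counts[r["gender"]] = counts.get(r["gender"], 0) + 1
    weights = {g: population_shares[g] / (counts[g] / n) for g in counts}
    return [{**r, "weight": weights[r["gender"]]} for r in sample]

# Hypothetical raw sample: 47 female, 53 male; target population is 52/48.
sample = [{"gender": "F"}] * 47 + [{"gender": "M"}] * 53
weighted = post_stratify(sample, {"F": 0.52, "M": 0.48})

total = sum(r["weight"] for r in weighted)
female_share = sum(r["weight"] for r in weighted if r["gender"] == "F") / total
print(round(female_share, 2))  # weighted female share now matches the population: 0.52
```

In practice pollsters weight on several dimensions at once (gender, race, party registration and, as discussed below, education), typically through iterative "raking" rather than a single adjustment, but the underlying logic is the same.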

Historically, educational attainment didn’t have the same impact on voter choices that factors such as gender and race did. That situation has shifted dramatically since 2016. Now, people with college degrees are increasingly voting Democratic and people without college degrees, particularly white individuals, are increasingly voting Republican. If MCIPO’s 2016 polls had been weighted by education, they would have been more accurate by about two points.

So, MCIPO began weighting for education in the 2018 midterms, and those polls were more accurate. However, in 2020, polls from MCIPO and many other pollsters again made similar errors that underestimated Trump's support. Survey methodologists have studied the 2020 outcomes in depth to identify the issue, and there is no conclusive finding.

In “2020 Pre-Election Polling: An Evaluation of the 2020 General Election Polls,” the American Association for Public Opinion Research shared possible explanations for the polling errors. “If the voters most supportive of Trump were least likely to participate in polls then the polling error may be explained as follows: Self-identified Republicans who choose to respond to polls are more likely to support Democrats and those who choose not to respond to polls are more likely to support Republicans,” the article says. “Unfortunately, this hypothesis cannot be directly evaluated without knowing how nonresponders voted.”

MCIPO did not change its weighting to try to account for potential nonresponse ahead of the 2022 midterm elections, which had yet to take place when this article was written. Attempting to weight for such "nonresponse bias" is guesswork, and the institute's models were fairly accurate in the last midterm election. Even a methodologically sound MCIPO poll could produce estimates a few percentage points off the final results, because that is how sampling works. If you flip a coin 100 times, the most likely outcome is 50 heads and 50 tails, which captures the reality of a two-sided coin. But sometimes you'll get, say, 55 heads and 45 tails, not because of a methodological failing (e.g., a flawed coin) but simply because of sampling error. Expecting a poll, or even poll aggregates, to mirror election outcomes exactly places far too lofty expectations on the methods being employed. If polls show a candidate ahead by three or four points in a race, don't be shocked if that candidate loses narrowly. Methodological limitations and even modest last-minute changes in voter sentiment could lead to such outcomes.
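The coin-flip analogy above is easy to verify with a quick simulation. This hedged sketch (the trial counts and seed are arbitrary choices, not anything from the article) flips a fair coin 100 times, repeats that experiment many times, and counts how often the result strays to 55/45 or worse despite a flawless "methodology."

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible
trials = 10_000
off_by_5_or_more = 0
for _ in range(trials):
    # One "poll": 100 flips of a perfectly fair coin.
    heads = sum(random.random() < 0.5 for _ in range(100))
    if abs(heads - 50) >= 5:  # a 55/45 split or more extreme
        off_by_5_or_more += 1

share = off_by_5_or_more / trials
print(f"{share:.0%} of 100-flip runs land at 55/45 or worse")
```

Roughly a third of runs miss the true 50/50 split by five points or more, which is why a single poll showing a three- or four-point lead is entirely consistent with that candidate losing narrowly.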

With increasing challenges facing public opinion researchers, the MCIPO is entering its third decade of operations with important choices ahead. In the Spring 2023 semester, the MCIPO will move into the new building on campus, the Fahy Commons. There, students and I will be thinking about what's next: online polling platforms that we can design and administer ourselves. Polling has changed significantly since I began directing polls in the 1990s. Back then, telephone polls were conducted entirely over landlines, phone numbers were tied to the state the phone was in, and response rates were relatively high. Today's highly fragmented communication landscape complicates sampling, and getting individuals to agree to participate in a poll has never been harder.

Certainly web-based polling options will be a major part of the MCIPO's future, but the transition is not a simple one. The problem with some online polls — those where participants are recruited via email — is that they may not capture a representative sample. There is no comprehensive directory of email addresses equivalent to the universes of phone numbers or mailing addresses that serve as sampling frames. A potential way around this is online probability-based surveying: people complete surveys online, but they are recruited through probability-based means (by mail or phone).

That’s my vision for where the institute is headed: Once we get to our new facility, we’re going to fully engage in building our online probability panel for Pennsylvania, and maybe a more specialized one for the Lehigh Valley. These investments will position the MCIPO to continue its mission of providing students with opportunities to engage in high quality public opinion research and to produce polls that accurately reflect the attitudes, beliefs and behaviors of the populations we seek to better understand.

Christopher Borick is a professor of political science and the director of the Muhlenberg College Institute of Public Opinion.