Polls are overconfident, just like people

Did the polls fail us again? Less than a week ahead of the election, polls indicated Joe Biden would carry key swing states such as Florida and Ohio by six and four points respectively. In the end, Trump won Florida by 3.4 points and Ohio by 8.2 points. Polls were even more inaccurate for other races. Polls suggested that challenger Sara Gideon would defeat Maine’s incumbent senator Susan Collins by margins of five, eight, or even 12 points. However, on election night Collins claimed an eight-point victory.

The election results echoed 2016, when forecasters had Hillary Clinton as the favorite. How could the polls have been so wrong?

We were struck by the parallels between overconfident polls and overconfidence in human judgment. Humans are consistently overconfident: Sports fans are too sure they know how games will come out. Entrepreneurs are too sure they know how much money they will make. And physicians are too sure of their diagnoses.

When asked to report an uncertain answer to a question using a 95% confidence interval—that is, to provide a range which they think has a 95% chance of containing the true answer—people’s intervals routinely contain the truth less than 50% of the time. (Quartz recently built a tool that lets you test your own overconfidence using this method.)

Polls are no exception: our research shows they barely do any better. Nor are the last two presidential elections outliers. Polls are almost as consistently overconfident as individuals are.

Typically, polls report 95% confidence intervals with margins of error of something like plus or minus three percentage points. This is the result of a statistical calculation reflecting the possibility that the poll’s sample could differ, by chance, from the population of voters. The implication is that across a number of polls conducted close to an election, we ought to expect the actual vote shares to fall inside the margins of error 95% of the time. Do they?
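
For readers who want the arithmetic, here is the textbook formula behind that plus-or-minus-three-point figure, sketched in Python. The 1,000-person sample size is illustrative, not an industry standard.

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p estimated from a simple
    random sample of n respondents. Real polls layer weighting and
    design effects on top of this idealized formula."""
    return z * math.sqrt(p * (1 - p) / n)

# A candidate polling at 50% in a survey of 1,000 respondents:
moe = margin_of_error(0.50, 1000)
print(f"+/- {moe * 100:.1f} percentage points")  # about +/- 3.1 points
```

Note that the formula knows only about random sampling error, the chance that a fair sample happens to miss. As we argue below, that is far from the only error in a poll.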

We tested this question using general and primary election polling data from 2008, 2012, and 2016, a dataset of 1,400 polls in all, and measured how frequently the actual election results fell inside each poll’s margin of error. Polls are not predictions: they are snapshots of what voters say at a moment in time. The further away from the election a poll is taken, the more that can change between the poll and the vote. But even for polls taken immediately before the election, the 95% confidence intervals contain the actual result less than 70% of the time, a clear sign of overconfidence.
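
The check itself is straightforward to sketch in code. The tuple format and the three toy polls below are invented for illustration; they are not the study data.

```python
def empirical_coverage(polls):
    """Fraction of polls whose reported margin of error contained the
    actual result. `polls` holds (estimate, margin, actual) tuples,
    all in percentage points."""
    hits = sum(1 for est, moe, actual in polls
               if abs(actual - est) <= moe)
    return hits / len(polls)

# Three invented polls, not our real data. A nominally 95% interval
# should cover the truth about 95% of the time; across our 1,400
# polls, the rate fell below 70% even right before elections.
sample = [(52.0, 3.0, 48.2), (47.0, 3.0, 49.1), (50.0, 3.0, 51.5)]
print(f"coverage: {empirical_coverage(sample):.0%}")  # 67% here
```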

The problem is that poll respondents differ systematically from voters. Pollsters do their best to sample from the population of voters, but their sampling is imperfect. They can ask poll respondents if they are registered or if they intend to vote, but we know that more people intend to vote than actually do. Also, many voters refuse to respond to polls at all, and suspicion of the mainstream media may increase the rate at which they refuse.
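
A toy Monte Carlo simulation, sketched below, makes the point concrete. The candidate labels and the 90% relative response rate are invented purely for illustration.

```python
import random

def simulate_poll(n=1000, true_support=0.50,
                  response_rate_a=0.90, response_rate_b=1.00, seed=0):
    """Simulate a poll in which supporters of candidate A answer
    pollsters' calls less often than supporters of candidate B.
    Returns A's measured vote share among n respondents."""
    rng = random.Random(seed)
    responses = []
    while len(responses) < n:
        supports_a = rng.random() < true_support         # who was dialed
        rate = response_rate_a if supports_a else response_rate_b
        if rng.random() < rate:                          # did they answer?
            responses.append(supports_a)
    return sum(responses) / n

# With true support split 50/50, A's supporters answering only 90% as
# often as B's pulls A's measured share down toward 47-48%. That is a
# systematic error the +/- 3 point sampling margin never accounts for.
print(f"{simulate_poll():.1%}")
```

Crucially, a bigger sample does not help: the bias stays put while the reported margin of error shrinks.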

Smart poll aggregators like FiveThirtyEight attempt to include these considerations in their interpretation of polling results, but their corrections are incomplete. If Trump voters are more likely to hang up on pollsters, then how should a forecast impute the preferences of non-respondents? Historical data may provide a partial guide, but if Trump supporters’ suspicion of the media has increased over the last four years, then historical guidance will be imperfect.

Given these problems, should we abandon election polls? No. Giving up on them is neither practical nor desirable. Asking likely voters about their preferences will continue to be one of the best ways to gain insight into the interests and intentions of the voting public. For all their faults, voter surveys help politicians understand which policies and issues voters care about, and they help political campaigns target their efforts. Instead, we ought to get smarter about how we interpret poll results.

Our goal is for those reading about polls to understand their limitations beyond the errors that are traditionally reported, and to adjust their interpretations accordingly. How much would you have to expand a poll’s margin of error so that it included the election result 95% of the time? Our data suggest that, for polls taken the week prior to the election, margins of error would roughly have to double in order to include the actual election result 95% of the time. For earlier polls, the margins of error would have to expand even more.
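
Conceptually, finding that doubling figure means widening the reported intervals until 95% of them contain the result. A sketch, using the same toy poll format as above:

```python
def expansion_factor(polls, target=0.95, step=0.05):
    """Smallest multiplier on the reported margins of error that lifts
    empirical coverage to `target`. `polls` holds (estimate, margin,
    actual) tuples in percentage points; margins must be positive."""
    def coverage(mult):
        hits = sum(1 for est, moe, actual in polls
                   if abs(actual - est) <= mult * moe)
        return hits / len(polls)
    k = 1.0
    while coverage(k) < target:
        k += step
    return k

# Toy data again. Applied to our real dataset of election-week polls,
# the multiplier comes out near 2: margins would roughly have to double.
sample = [(52.0, 3.0, 48.2), (47.0, 3.0, 49.1), (50.0, 3.0, 51.5)]
print(round(expansion_factor(sample), 2))  # 1.3 for this toy set
```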

Poll results are useful, but they should be interpreted with considerable skepticism. They are overconfident for many of the same reasons that humans are. Both are frequently wrong because they do not know what they do not know. Our knowledge of the facts is biased in ways we fail to completely appreciate and correct. We believe too fervently in our own views, including our political views, and are too sure that we are right. All of us, not just pollsters, would do well to expand our margins of error and accept the possibility that we might be mistaken.