Polling is driving us mad, but the alternative would be much worse

What are they thinking?
Image: Reuters/Jeenah Moon

The experience of American elections has arguably become more stressful thanks to the proliferation of polls. If the only count that matters is the tally of votes on election day, why do we spend so much time trying to guess the result ahead of time?

The answer is because democracy matters (at least for now). The importance of polls reflects the weight that America, home to the world’s most expensive election campaigns, puts on its voters, and the media’s efforts to help the public keep up. Opinion research is a powerful tool, like a chainsaw or an Excel spreadsheet—efficient in the right hands, but destructive when not used with care.

Where polling goes wrong is where most things go wrong: when we expect more of a tool than we have any right to expect. As a reminder, a poll is simply going around asking people what they think. With a sample that is large enough and random enough, researchers can use statistical methods to assess with reasonable accuracy what a population thinks. However, these methods depend on the assumptions that researchers make about that population.
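To make that concrete, here is a minimal sketch, in Python, of the basic arithmetic behind a topline number and its margin of error; the sample size and level of support below are hypothetical, not drawn from any real survey.

import math

def margin_of_error(p_hat, n, z=1.96):
    # 95% margin of error for an estimated proportion from n random responses
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

n = 1000                # hypothetical number of respondents
supporters = 520        # hypothetical respondents backing Candidate A
p_hat = supporters / n  # estimated support: 52%

print(f"Estimated support: {p_hat:.1%} +/- {margin_of_error(p_hat, n):.1%}")
# Roughly 52.0% +/- 3.1%, and only if the sample really is random

The uncertainty shrinks as the sample grows, but no sample size rescues a poll whose respondents don't resemble the people who actually vote.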

During an election year, those assumptions include how many people will turn out, and which subgroups will be over- or underrepresented. When reality and polls diverge, look to these assumptions for what went wrong. In 2016, the polls writ large did not fail, but surveys of voters in key states made bad assumptions about how many non-college voters would show up. The result—foreseeable before the election—was a string of narrow victories that delivered the presidency to Donald Trump.
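A hedged illustration of that point, again in Python and with made-up numbers rather than 2016 data: the same raw responses produce different toplines depending on how large a share of the electorate the pollster assumes non-college voters will be.

def weighted_topline(support_by_group, assumed_share):
    # Weight each group's candidate support by its assumed share of the electorate
    return sum(support_by_group[g] * assumed_share[g] for g in support_by_group)

# Hypothetical support for one candidate within each education group
support = {"college": 0.45, "non_college": 0.58}

# Two different assumptions about who shows up on election day
assumption_a = {"college": 0.50, "non_college": 0.50}  # fewer non-college voters
assumption_b = {"college": 0.40, "non_college": 0.60}  # more non-college voters

print(f"Topline under assumption A: {weighted_topline(support, assumption_a):.1%}")  # 51.5%
print(f"Topline under assumption B: {weighted_topline(support, assumption_b):.1%}")  # 52.8%

A gap of a point or so is invisible in a blowout and decisive in a state decided by tens of thousands of votes.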

American polling has been a story of nerds fighting. The first famous challenge was issued by George Gallup, who originally developed his surveys for newspapers to see if people liked what they were reading. He predicted the failure of a major 1936 poll performed by a publication called the Literary Digest, which gathered responses from its wide subscriber base—but Gallup realized that those subscribers did not include the poorer, New Deal-supporting voters who would put Franklin Roosevelt over the top. The polling pioneer announced before the election that the Digest would be wrong, and made his name.

Just over a decade later, he would tarnish it with the famously false “Dewey beats Truman” call. Gallup’s poll had relied on quotas of respondents from different demographic groups, but it had not selected them at random or attempted to mimic the composition of the electorate. It’s smart to think of polls as snapshots in time—people can and do change their minds—but they are also an attempt to take a snapshot of a guess: who in the country will actually vote?

The White House began using pollsters under FDR, and the practice grew steadily. In the 1960s and 1970s, polls, previously the tools of academics and national media outlets, came into the world of campaigns through Madison Avenue. Today, abetted by computer technology and plugged into wide-ranging databases of personal information gathered by cable companies and internet giants, campaigns spend millions trying to identify, target, and influence potential voters. In many ways, the political press is trying to keep up with the campaigns so that their decision-making remains comprehensible to the public.

This digital voter shuffling may seem gauche, but consider how national elections were previously reported. Prior to our current FiveThirtyEight era, voter intent was judged from more questionable sources. Reporters would take their own sample of voters, often whoever was sitting at a local diner at 11 am, look at one or two recent polls without diving too deep into their methods, and divine the will of the people. (Even today, political reporters keep interviewing random people who turn out to be political operatives.)

The arrival of Nate Silver in 2008, alongside other rigorous statisticians critiquing the media’s use of polls, was a refreshing burst of empiricism. Practices like following aggregated poll averages and tracking the historical record of pollsters should have been no-brainers for the news media, but instead sparked mini feuds between nerds and reporters that helped end Silver’s stint at the New York Times. If it strikes you as maddening that the campaign narrative is driven by polling meta-sites like Real Clear Politics and FiveThirtyEight, imagine if the news cycle were instead still centered on polls covered solely because they were outliers, and on whoever was the most quotable senior at the Waukesha Grip-n-Sip.

Aside from omnipresent technical challenges—like finding a properly random sample in a new era of communications technology, or properly adjusting results for demographic factors—the big issue polling faces today is reflexivity: Observing something can change it, and there is evidence that suggests (pdf) voter perception of an election’s closeness may affect turnout. If voters think a big win is in the cards, they might not show up; if they think the election is tight, a surge of voters might flip the expected result.

That’s the nature of public discourse in a democracy. And, I cannot stress this enough, polls do not predict the future. What could be done? Beyond better civic education, some countries ban the conduct or publication of polls in the days before an election, and perhaps those measures make sense. Others have laws limiting the total length of the campaign season, which might help turn America’s punishing 18-plus-month presidential slog into something more psychologically manageable.

Yet the importance of survey research in a democracy comes down to who gets to speak for the people. White House pollsters were originally seen as controversial in part because they threatened the House of Representatives’ claim to be “the people’s house,” the body that articulates public opinion. Today, campaigns and interest groups use polls to argue that their views are those of the majority. And that’s why the press desperately needs to responsibly analyze, critique, and produce measures of public opinion.

Inconvenient truths abound, and polls are a key way to spot them. The Trump administration has mocked mask-wearing and insisted since the pandemic began that Americans are too independent to adopt basic health safety measures. Despite the effects of presidential leadership on public opinion, surveys continue to show that 75% of Americans think masks work, and that majorities of Americans would speak to public health officials, share information with them, and quarantine if asked. US inaction in the face of the coronavirus can be blamed on many sources, but not public unwillingness.

Similarly, election polling forces the political process to react. The 2016 poll post-mortem exposed important divisions among voters—distinctions of sex, race, class, and education—that had Democrats scrambling to offer better answers to the Trump voters they aimed to win over, just as Republicans briefly considered a revamp of their own approach to wooing Black and other minority voters. That’s why the continued emphasis by Republicans on voter suppression is so worrying: rather than move with the polls, the party may simply be abandoning them.

If it seems relaxing to imagine a world where we don’t have to worry about election polls, consider that a world where public opinion doesn’t matter is a scary one indeed.