Gallup has never been very good at presidential polling

Gallup got this one wrong, too.
Image: AP Photo/Byron Rollins

The political world is aghast today that Gallup has yet to start polling the 2016 presidential primary and seems unlikely to do so during the 2016 general election, apparently because it lacks faith in its own methodology.

While Gallup’s name recognition will inevitably spur speculation that public opinion polling is inherently broken, it’s important to remember that the venerable firm has never actually been that accurate in calling presidential races.

Consider the work of 538’s Nate Silver, who grades pollsters against actual results at the end of every presidential cycle. Currently, Gallup gets a C+. In 2012, Gallup was the least accurate of all the polls he analyzed, with an average error of 7.2 percentage points. In 2008, it landed in the bottom half of the list, with an average error of 2.4 percentage points.
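For the curious, the arithmetic behind an “average error” figure is straightforward: it’s the mean absolute gap between a pollster’s final-poll margins and the actual results. Here’s a minimal sketch in Python with invented numbers; 538’s actual ratings also adjust for factors like race type and sample size.

```python
# Minimal sketch of a pollster's "average error": the mean absolute
# gap between final-poll margins and actual result margins, in
# percentage points. All numbers here are invented.

final_polls = [  # (polled margin, actual margin)
    (+3.0, +1.0),
    (-2.0, +4.0),
    (+5.0, +5.5),
]

avg_error = sum(abs(p - a) for p, a in final_polls) / len(final_polls)
print(f"average error: {avg_error:.1f} points")  # (2.0 + 6.0 + 0.5) / 3 -> 2.8
```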

Here’s a chart showing the poll’s own self-reported deviation after each of the last eight presidential elections. While 2012 was the first election in which the poll incorrectly predicted the winner (in 2000, its final poll showed a tie between the candidates in one of the closest elections in US history), it’s clear that the Gallup poll is not gospel. Indeed, larger deviations in the early nineties were masked by landslide wins that let Gallup be “right” about the winner while badly misjudging the margin.

There’s no doubt that public opinion polling has become more difficult. That’s partly because fewer people have land-line phones, and those who do are less willing to respond to pollsters, making it a challenge to construct a representative sample. But the bigger issue for Gallup and other pollsters is building an accurate mathematical model that adjusts survey responses based on predictions of who will actually vote. Making that prediction isn’t easy.
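To make that concrete, here is a toy sketch of likely-voter reweighting in Python. The respondents, groups, and turnout probabilities are all hypothetical; real models use far more variables, but the mechanics are the same: each response is weighted by how likely that kind of voter is to actually show up.

```python
# Toy sketch of likely-voter reweighting. Respondents, groups, and
# turnout probabilities are all hypothetical.

respondents = [
    {"candidate": "A", "group": "young"},
    {"candidate": "B", "group": "older"},
    {"candidate": "A", "group": "renter"},
    {"candidate": "B", "group": "older"},
]

# The turnout model: estimated probability that a member of each
# group actually votes. Mis-estimating these numbers is the modeling
# failure described above.
turnout_model = {"young": 0.45, "older": 0.75, "renter": 0.50}

def weighted_shares(respondents, turnout_model):
    totals = {}
    for r in respondents:
        w = turnout_model[r["group"]]
        totals[r["candidate"]] = totals.get(r["candidate"], 0.0) + w
    total = sum(totals.values())
    return {c: w / total for c, w in totals.items()}

print(weighted_shares(respondents, turnout_model))
# Raw responses are tied 2-2, but the turnout model projects roughly
# {'A': 0.39, 'B': 0.61} -- the model, not the raw sample, drives the
# headline number.
```

Notice that the raw sample is a dead heat; the projected result depends entirely on the turnout assumptions. Get those wrong, as Gallup did in 2012, and the poll is wrong even if the sample is fine.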

During the 2012 election, Gallup was criticized by the Obama campaign for over-weighting Republican voters and under-counting minorities, renters, and young voters. In a post-mortem after election day, the company essentially admitted the criticisms were accurate. Its decision to sit out 2016 suggests the organization still isn’t confident in its new approach.

It’s this failure of modeling that contributes most to inaccurate polling. Mitt Romney’s campaign staff were surprised to lose in 2012 because they assumed minorities would not turn out in large numbers, and weighted their polling accordingly. The Obama campaign’s internal model turned out to be correct, and his aides knew he had won hours before voting ended.

In the 2014 midterm elections, pollsters made the opposite error, over-weighting Democratic turnout. But they were still able to call most races correctly. In fact, the average polling error today is much lower than it was in the nineties and early 2000s.

Pollsters are still working out how to determine who the electorate is and, just as importantly, how to get in touch with them. Some public opinion firms and news organizations seem to be finding the right mix, and efforts by data-focused journalists at 538, Huffington Post, and Real Clear Politics, among others, to aggregate and average these polls have likely made public opinion polling more accurate than it was in previous decades.
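Why does averaging help? Individual polls err in different directions, and those errors partly cancel. A toy illustration in Python, with made-up numbers:

```python
# Toy illustration of poll aggregation: several noisy polls, averaged,
# usually land closer to the truth than any single poll. All numbers
# are made up.

polls = [52.0, 48.5, 51.0, 49.5, 50.5]  # candidate's share in five polls
actual = 50.2                            # hypothetical real result

average = sum(polls) / len(polls)        # 50.3
single_errors = [abs(p - actual) for p in polls]

print(f"aggregate error: {abs(average - actual):.1f} points")  # 0.1
print(f"mean single-poll error: "
      f"{sum(single_errors) / len(single_errors):.1f} points")  # 1.1
```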

If Gallup, and the similarly prestigious public opinion research institution Pew, continue to back away from head-to-head candidate polls, it might actually be a good thing. There are many, many pollsters out there trying to gauge who will win. But it might be nice to get away from the horse race for a while to think (and poll) about more substantive issues.