Raise your hand if you have time to answer surveys for 30 minutes every day for $5.
But those are the people who supply most survey data nowadays.
When people started dumping their landlines 20 years ago, polling as we knew it began to die. Along came online innovators, luring survey-takers to websites or apps with a financial reward. (Here’s a breakdown of how much—or little—you can earn by taking different online surveys.) But survey demand outstripped the supply of willing participants, so the people with the most free time or the strongest inclination to earn an extra buck took on most of the load.
And guess what: most of those people don’t look like you, me, or the average population. But the problem isn’t just demographic. (Good statisticians can adjust for that.) The thornier problems are “psychographic” ones—the traits that describe our personalities, values, opinions, attitudes, and interests. For example, our own data tells us that hyper-survey-takers are more brand-aware, coupon-using, and media-obsessed than the average US citizen, among other things. Those kinds of biases are nearly impossible to fix statistically.
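To see why demographic skew is fixable but psychographic skew is not, here is a minimal sketch of post-stratification weighting on a toy panel where under-35s are overrepresented. Every number below is invented for illustration; the group names and rates are assumptions, not figures from any real study.

```python
# Toy example of post-stratification weighting.
# All shares and approval rates below are made up for illustration.

# Population shares for each demographic group (e.g., from census data)
population = {"under_35": 0.30, "35_plus": 0.70}

# Panel composition: skews young
panel = {"under_35": 0.60, "35_plus": 0.40}

# Observed approval rate within each panel group
approval = {"under_35": 0.70, "35_plus": 0.40}

# Unweighted panel estimate: every respondent counts equally,
# so the overrepresented group dominates the result
unweighted = sum(panel[g] * approval[g] for g in panel)

# Weighted estimate: rescale each group to its true population share
weighted = sum(population[g] * approval[g] for g in population)

print(f"unweighted: {unweighted:.2f}")  # 0.58
print(f"weighted:   {weighted:.2f}")    # 0.49
```

The catch is the psychographic problem from the paragraph above: weighting only corrects the *mix* of groups. If the panelists inside each group are themselves unusual—more brand-aware, more coupon-driven—then the within-group approval rates are already biased, and no amount of demographic reweighting will repair them.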
Online panels don’t accurately represent the population: They represent the portion of people with the most spare time. The well-documented shortcomings of recent political polls are a good example of this.
When overall consumer sentiment rose steadily during US president Barack Obama’s second term, we simply believed that all Americans were giddy over the economy and that, surely, we were hurtling toward an easy win for Hillary Clinton in 2016. Hidden beneath those small-sample numbers, however, were subsets of the population who didn’t share that mainstream happiness and optimism. And it only took 26% of the eligible voting population to leave the rest of the country (and world) dumbfounded, even to this day.
Or consider the long-standing, survey-based consumer confidence indices, like the Michigan Consumer Sentiment Index, which publishes twice-monthly readings on Americans’ attitudes toward the economy, the job market, and so on. Gathered by phone, these indices rely on data from a couple thousand respondents every month, which doesn’t afford much granularity for analysis. If we’re not studying groups of people who are large and representative enough, every minute of every day, we run a great risk of being surprised.
Rigorously collected and reported, the study included over 8,000 interviews with American adults, conducted both before and after Nike launched its campaign featuring Kaepernick. Alarmingly, the researchers found that favorability dropped by double digits after the announcement, and that purchase consideration was down as well. The study also found that 39% of consumers said it was appropriate to make him the face of the campaign, while 38% said it was inappropriate.
But look what happens when our company asked a similar question of two groups of people: the roughly 14% who participate in survey panels, and the 86% who don’t.
Notice the difference in the “Yes” column between those who take surveys and those who don’t. Survey guinea pigs are 35% more likely to support Nike and are less likely to have no opinion at all. That’s an enormous gap.
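It’s worth being precise about what “35% more likely” means: a relative lift between the two groups’ “Yes” rates, not a 35-percentage-point gap. The rates below are invented purely to illustrate the arithmetic; they are not the study’s actual numbers.

```python
# Hypothetical "Yes" rates, invented for illustration only
panelist_yes = 0.54      # assumed share among survey panelists
non_panelist_yes = 0.40  # assumed share among non-panelists

# Relative lift: how much more likely panelists are to say "Yes"
relative_lift = panelist_yes / non_panelist_yes - 1

print(f"relative lift: {relative_lift:.0%}")  # 35%
```

Under these assumed rates, the panelists’ 54% is 35% higher than the non-panelists’ 40%, even though the raw gap is only 14 percentage points—which is why relative and absolute differences should never be read interchangeably in survey write-ups.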
Could that panel study provide some directional insight as to what the American public thinks? Sure. Is it the unassailable truth? Certainly not.
For entrepreneurs starting out in consumer-facing businesses, market research and analytics can be invaluable for understanding the market opportunity, monitoring competitors, and informing business decisions. But market research is not without its flaws.
The industry is making huge strides, but it’s by no means perfect. So, when you see a headline that reads “Survey says…” take the data with a grain of salt. The world is changing so dramatically, so fast. Even the smartest researchers on the planet are just trying to keep up.