When shopping online, most people choose inferior products with many consumer reviews over higher-quality products with fewer reviews.
A recent study by Derek Powell, a postdoctoral research fellow at Stanford University, finds that most people fail to do a simple statistical task when viewing online ratings and reviews, leading them to purchase inferior products.
Follow the leaders
When shopping online, consumers engage in a type of social learning by which they become informed from the decisions of others. For example, you’re probably more likely to purchase a book at the top of the New York Times’ best-sellers list or buy an app that’s been downloaded millions of times.
Observing other people’s choices is only a part of social learning, though. The other is noting the resulting outcomes through mechanisms like online star ratings. How people interpret—or fail to interpret—this data is affecting their decision-making in a negative way.
The researchers presented 138 adults with a series of cellphone cases (in pairs) to purchase. Each case was accompanied by its average star rating and number of reviews. The star ratings varied minimally, but one of the cases always had 125 more reviews than the other.
Across two experiments, the researchers found that participants preferred the case with more reviews, even though the experiment was set up so that this case was likely to be the inferior one. (The researchers assessed a product’s quality not by its stars or reviews alone, but by analyzing millions of reviews on Amazon.com.)
Think about it this way. Twenty-five people review a product and award it an average rating of 2.9 (out of five stars). While the rating is below average, with so few reviews there’s a real possibility the product isn’t as poor as it looks, Powell says.
Now imagine 150 consumers give that same product a 2.9 rating. That’s six times as many people rating the product below average. That should be a stronger signal of the product’s poor quality.
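The reasoning above can be made concrete with a standard confidence interval for a product’s true mean rating. This is an illustrative sketch, not the study’s actual analysis; the per-review standard deviation of 1.4 stars is an assumed value chosen for the example.

```python
import math

def rating_ci(mean, n, sd=1.4, z=1.96):
    """Approximate 95% confidence interval for a product's true mean
    rating, given the observed mean of n reviews. The per-review
    standard deviation sd is an assumed value, not from the study."""
    half_width = z * sd / math.sqrt(n)
    return (mean - half_width, mean + half_width)

# 25 reviews averaging 2.9 stars: a wide interval, quality still uncertain
print(rating_ci(2.9, 25))    # roughly (2.35, 3.45)

# 150 reviews averaging 2.9 stars: a narrow interval, clearly below average
print(rating_ci(2.9, 150))   # roughly (2.68, 3.12)
```

With six times as many reviews, the interval around the same 2.9 average shrinks considerably, which is exactly why the larger review count should read as a stronger signal of poor quality.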
Participants took the high number of reviews as a signal of quality, says Powell, rather than as an indicator of how accurately the review score should reflect the product’s true quality. Instead of performing the rather simple statistical reasoning that would reveal this, consumers take the number of reviews at face value.
“What they’re doing is simply weighing cues,” Powell says. “People seem to have this belief that popularity is good and are willing to use that as an important cue when making decisions.”
Powell and his fellow researchers found evidence of this trend beyond their experiments. They examined 15 million reviews of more than 350,000 actual products on Amazon.com and found no relationship between a product’s number of reviews and its rating.
“It doesn’t necessarily mean that better things don’t become more popular,” says Powell, “but as a consumer, when you’re looking at this data point (number of reviews), it’s not telling you anything.”
Overcoming this bias is difficult, Powell says, because consumers find comfort in popularity.
“There are lots of contexts where following the herd is the rational thing to do,” he says. “If there isn’t enough information available, that can be a smart thing to do.
“But what we’re arguing is that you have more information than just what people did; you also have what happened—did they like it, were they happy or unhappy with their purchase.”
Powell suggests consumers focus first on whether a product’s score is above or below average—product averages usually range from 3.7 to 4, depending on the category, he says—and then weigh that rating by the number of reviews. Examining those figures in concert tells consumers how confident they can be that the product’s rating reflects its true quality.
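Powell’s heuristic can be sketched as a quick normal-approximation check: how confident can a shopper be that a product’s true quality falls below the category average, given its observed mean and number of reviews? The category average of 3.8 and the per-review standard deviation of 1.4 are assumed illustrative values, not figures from the study.

```python
from math import sqrt, erf

def prob_below_average(mean, n, category_avg=3.8, sd=1.4):
    """Approximate probability that a product's true mean rating lies
    below the category average, via a normal approximation.
    category_avg and sd are assumed illustrative values."""
    z = (category_avg - mean) / (sd / sqrt(n))
    return 0.5 * (1 + erf(z / sqrt(2)))

# A 3.5-star product, slightly below an assumed 3.8 category average:
print(prob_below_average(3.5, 25))    # ~0.86 with 25 reviews
print(prob_below_average(3.5, 150))   # ~1.0 with 150 reviews
```

The same slightly-below-average score becomes far more damning as the review count grows, which is the opposite of treating a large review count as a mark of quality.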
The study appears in the journal Psychological Science.
The study’s coauthors are from Indiana University and the University of California, Los Angeles.