Facebook’s moves to stamp out “fake news” will solve only a small part of the problem

No, that’s not going to cause any problems, not at all.
Image: Reuters/Stephen Lam

What a difference a week makes. On Nov. 10 Facebook’s CEO, Mark Zuckerberg, called it “a pretty crazy idea” that fake news on Facebook might have helped swing the US election towards Donald Trump. In a Facebook post three days later, he was more measured, calling it “extremely unlikely” and insisting that only “a very small amount” of the information on Facebook was fake news and hoaxes. By yesterday (Nov. 18), that amount had become “relatively small” and the dismissive attitude was gone, as he outlined several steps Facebook is taking to weed the stuff out.

The basic problem, though, is this: What counts as fake news?

Zuckerberg’s post describes it as “stories we can confidently classify as misinformation.” That presumably means blatantly false pro-Trump stories, like the ones churned out by a group of young Macedonians who found they could make a fast buck off Facebook ads that way, and by a self-admitted fake-news peddler in Arizona who believes he might have got Trump elected. Even though he claims to “hate Trump,” he seems largely unbothered, since people will “post everything, believe anything” and, as he put it, “you wouldn’t believe how much money I make from it.” This is the kind of “news” that, in a recent BuzzFeed analysis, turned out to have considerably higher engagement (shares, likes, and comments) on Facebook than the most popular real news from reputable websites.

Zuckerberg promises that Facebook will crack down on fakery in a number of ways. Those include making it easier for people to report fake stories, building better software for detecting likely fakes, and making fakery less lucrative. Facebook and Google have already announced they’ll cut fake-news sites out of their advertising networks.

And that’s all fine. But it tackles only a small part of the problem. Even with all the blatantly fake stuff stripped out, many people live in a “filter bubble” of news they are likely to be sympathetic to, whether delivered via social media, preferred news websites, or preferred TV channels. (A Pew study earlier this year found that 44% of American adults get news via social media “often” or “sometimes,” with Democrats using social media slightly more than Republicans.)

The Guardian recently asked a handful of liberal and conservative US voters to live on the opposite side’s Facebook stream for a week. All of them found it profoundly uncomfortable; one compared it to waterboarding. (You can have your own taste of the experience—swapping news feeds, that is, not waterboarding—at the Wall Street Journal’s Blue Feed, Red Feed.)

As George Orwell observed long ago, part of the goal of totalitarianism is to destroy the “common basis of agreement, with its implication that human beings are all one species of animal.” Orwell saw this loss of a common basis as a consequence of totalitarianism, but with Donald Trump’s election, we see that it could be a cause instead.

If so, then that is the big problem. Facebook can easily cut out a few Macedonian teens and Arizonan opportunists. But it’s not going to clamp down on the large, partisan sites on the left or especially the right whose news, if not outright fake, is highly selective and frequently full of half-truths or distortions. Facebook, as Zuckerberg wrote, believes in “erring on the side of letting people share what they want whenever possible. We need to be careful not to discourage sharing of opinions or mistakenly restricting accurate content.” Translation: We need to be careful not to lose too many users or too much revenue.

What would it take to bring the filter bubbles back together? Eli Pariser, a media entrepreneur who coined that term, has started a collaborative document called “fake news design solutions,” which as of this writing was already 24 pages long. Going well beyond what Zuckerberg has promised, it suggests methods both human and algorithmic for demoting or flagging stories of dubious veracity and promoting ones from reputable news outlets, so as to increase that common basis for agreement. The only hitch: That assumes people want one.