Bots aren’t spreading fake news on Facebook; humans are

Responsible for billions. Image: Reuters/Stephen Lam

Facebook published a new paper this week outlining how it’s dealing with government-backed attempts to influence politics in other countries, or what it’s calling “information operations.” One of its findings refutes the notion that bots are the primary distributors of fake news. “Most false amplification in the context of information operations is not driven by automated processes, but by coordinated people who are dedicated to operating inauthentic accounts,” the report reads.

Academics at Oxford University found last year that Twitter bots were relaying messages, some of them false, on behalf of groups on both sides of the Brexit campaign, and for the candidates in the US presidential election. Bots programmed to work in favor of Brexit, and of Donald Trump, drowned out their opposition both in numbers and in the volume of messages they sent, the researchers found.

That’s not what’s happening on Facebook, where it’s people, not bots, who are doing the bulk of the posting. The giveaways are language proficiency and knowledge of a country’s political situation, the paper found. These factors indicate “a higher level of coordination and forethought” among the groups assembled for information operations, according to Facebook’s report, suggesting that spreading fake news on Facebook takes more human effort than initially thought.

Using the 2016 US presidential election as a case study, the researchers saw evidence of “malicious actors” distributing messages based on data obtained from hacked email accounts (the messages accrued less than 0.1% of the reach of all “civic content,” posts on the platform that the firm determined were related to civic engagement). Facebook said it could not ascertain who these malicious actors were, though it noted its findings didn’t contradict a US intelligence report saying that Russia was involved.

Facebook’s new paper shows that the social media platform is finally taking fake news, and the larger issue of overseas political influence, seriously. It was written by the company’s top cybersecurity executive, Alex Stamos, and two members of its Threat Intelligence Team.

One of Facebook’s solutions, for now, is to target accounts that don’t represent real people and exist mainly to amplify false news. It says it’s using machine learning and other techniques to detect these inauthentic accounts, even without accessing the content they post; telltale signals include sudden changes in an account’s posting volume and repeated posting of identical content. Facebook says such efforts have already led to action against 30,000 fake accounts in France ahead of the presidential elections there.
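
To make those signals concrete, here is a minimal, purely illustrative sketch of behavioral heuristics along the lines the report describes. Everything in it, including the thresholds, the function name, and the scoring logic, is a hypothetical assumption for illustration, not Facebook’s actual detection system.

```python
from collections import Counter
from hashlib import sha256

# Hypothetical thresholds, for illustration only.
VOLUME_SPIKE_FACTOR = 5   # today's posts vs. the account's own daily baseline
DUPLICATE_RATIO = 0.8     # share of posts that are byte-identical

def looks_inauthentic(daily_post_counts, post_bodies):
    """Flag an account whose behavior, rather than the meaning of its
    messages, matches simple amplification patterns."""
    # Signal 1: a sudden spike in posting volume relative to the account's
    # historical baseline (all days except the most recent).
    baseline = sum(daily_post_counts[:-1]) / max(len(daily_post_counts) - 1, 1)
    volume_spike = daily_post_counts[-1] > VOLUME_SPIKE_FACTOR * max(baseline, 1)

    # Signal 2: the same content posted over and over, compared by hash,
    # so no semantic reading of the posts is required.
    hashes = Counter(sha256(body.encode()).hexdigest() for body in post_bodies)
    top_count = hashes.most_common(1)[0][1] if hashes else 0
    repetitive = bool(post_bodies) and top_count / len(post_bodies) >= DUPLICATE_RATIO

    return volume_spike or repetitive

# Example: an account that averaged ~2 posts a day, then pushed out
# 40 messages, almost all identical, in a single day.
history = [2, 3, 1, 2, 40]
posts = ["Vote now!"] * 38 + ["Hello", "Vote now!!"]
print(looks_inauthentic(history, posts))  # True
```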

One area the authors don’t mention touching is Facebook’s News Feed, the stream of content that greets the platform’s 1.9 billion users when they log in. On that front, Facebook appears satisfied to let users form a feedback loop with its algorithm, feeding their appetites for information, as Adam Mosseri, the Facebook executive in charge of the feature, noted at a conference earlier this month.