When Russia’s infamous troll farm targeted the US 2016 election with disinformation on social media, Twitter could have stemmed much of the problem on its platform with one of the internet’s most hated tools: CAPTCHAs.
A study published today of 14 million tweets sent between May 2016 and March 2017 confirms what researchers on the front lines have long believed: bots were central to spreading disinformation during that period. Analyzing the spread of articles from “low credibility” sources, the Indiana University study shows that bots produced a disproportionate share of misinformation online and were critical to the early sharing of articles that quickly went viral.
Blocking bots on Twitter would have been only a partial fix, since Russia’s disinformation campaign also relied on human-run accounts such as the highly popular fake Tennessee GOP account. But deleting the 10% of accounts most likely to be bots “reduces the amount of [false articles] in the network almost to zero,” says Filippo Menczer, the computer science professor who led the study.
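The paper’s own pipeline isn’t reproduced here, but the intervention it simulates is simple to picture. Below is a minimal Python sketch, assuming each account carries a bot-likelihood score (for instance from a classifier such as Botometer); the account names, scores, and shares are invented for illustration.

```python
# A rough sketch of the intervention the study describes: rank accounts by a
# bot-likelihood score, drop the top 10%, and count how many low-credibility
# shares survive. All data below is invented; this is not the paper's code.
def remaining_shares(bot_scores, shares, drop_fraction=0.10):
    """Drop the most bot-like accounts, then return the shares that survive.

    bot_scores: dict of account -> score in [0, 1], higher = more bot-like
    shares: list of (account, article_url) pairs for low-credibility articles
    """
    ranked = sorted(bot_scores, key=bot_scores.get, reverse=True)
    dropped = set(ranked[:int(len(ranked) * drop_fraction)])
    return [(acct, url) for acct, url in shares if acct not in dropped]

if __name__ == "__main__":
    scores = {"acct_a": 0.95, "acct_b": 0.20, "acct_c": 0.88, "acct_d": 0.10,
              "acct_e": 0.75, "acct_f": 0.05, "acct_g": 0.92, "acct_h": 0.30,
              "acct_i": 0.15, "acct_j": 0.50}
    shares = [("acct_a", "lowcred.example/story-1"),
              ("acct_g", "lowcred.example/story-1"),
              ("acct_b", "lowcred.example/story-2")]
    # With ten accounts, drop_fraction=0.10 removes only acct_a, the most bot-like.
    print(remaining_shares(scores, shares))
```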
To stop bad actors from coordinating vast numbers of bots tweeting similar or identical messages, Menczer and his five co-authors say a human should be involved in the posting process, something Twitter could ensure by requiring users to complete a CAPTCHA, the simple challenge designed to tell humans and automated software apart. “Adding a CAPTCHA would create friction—it would make it much harder for bots to auto-post,” Menczer told Quartz.
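Neither Menczer nor Twitter has published such a gate, but a minimal sketch of what server-side CAPTCHA friction could look like follows, assuming a reCAPTCHA-style provider; the secret key and the publish_tweet() call are hypothetical placeholders, not real Twitter internals.

```python
# A minimal sketch, assuming a reCAPTCHA-style provider: before accepting a
# post, the server verifies a CAPTCHA token the client solved. Google's public
# "siteverify" endpoint is used only as a familiar example; RECAPTCHA_SECRET
# and publish_tweet() are hypothetical placeholders.
import requests

RECAPTCHA_SECRET = "your-secret-key"  # hypothetical server-side key

def captcha_passed(token, remote_ip=None):
    """Ask the CAPTCHA provider whether this token came from a solved challenge."""
    payload = {"secret": RECAPTCHA_SECRET, "response": token}
    if remote_ip:
        payload["remoteip"] = remote_ip
    resp = requests.post("https://www.google.com/recaptcha/api/siteverify",
                         data=payload, timeout=5)
    return resp.json().get("success", False)

def publish_tweet(text):
    print("published:", text)  # stand-in for the platform's real posting pipeline

def handle_post(text, captcha_token):
    # The "friction" Menczer describes: automated clients that never solved
    # the challenge are turned away before the post is accepted.
    if not captcha_passed(captcha_token):
        return "rejected: CAPTCHA not solved"
    publish_tweet(text)
    return "published"
```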
Renee DiResta, the director of research at New Knowledge, a cybersecurity firm specializing in disinformation, agrees that CAPTCHAs would be a “fantastic way” to cut down on malicious Twitter bots. Twitter could avoid shutting down benevolent bots, like those used by news organizations or weather alert systems, by verifying them.
DiResta adds that Facebook should create similar extra friction for its “share” button, arguing that an extra step would make people think again before sharing articles with sensationalist headlines and false content. A 2016 Columbia University study showed that 59% of links shared on social media hadn’t been clicked on by the person sharing.
Asked why the tech giants hadn’t already implemented such a simple fix, DiResta laughed: “It’s taken us the better part of a year and a half to get them to even acknowledge the problem.”
A Twitter spokesperson didn’t comment on implementing CAPTCHAs, but pointed out efforts the company has made to combat disinformation, such as deleting “spammy or automated accounts” and engaging academics to study the uses of Twitter.
New problems: Targeting influencers and human helpers
While CAPTCHAs might have eased the problem in 2016, they won’t be a cure-all today, DiResta says. Since then, Twitter has changed its trending algorithm to make it harder for artificially amplified tweets to be noticed by other users. Bad actors have also moved away from simple bots in favor of two different tactics, she says.
The first, which Menczer’s team discusses in their paper, is to direct disinformation at influencers, in the hope that they will share it with their large followings. The tactic can be critical in making disinformation go viral, the paper says. A prime example is President Donald Trump’s notorious false claim that millions of people had voted illegally in 2016. The paper finds that a single account tweeted a false Infowars article about the matter at Trump 19 times.
The second tactic is for people planning a disinformation campaign to share a tweet in forums on other platforms, such as Gab and Discord, and ask users to retweet it or post the same message themselves. That way they’re still gaming the system, just without automation. DiResta says Twitter has data showing where posts have been pre-planned elsewhere, and could develop ways of penalizing those tweets.
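Twitter hasn’t said how it would detect this, but one simple signal a platform could look for, sketched below with invented data formats and thresholds, is many distinct accounts posting identical text inside a short window.

```python
# A hedged sketch of one signal a platform could use to spot the coordination
# DiResta describes: many distinct accounts posting identical text within a
# short window. The window, threshold, and tweet format are invented.
from collections import defaultdict
from datetime import timedelta

def flag_coordinated(tweets, window=timedelta(minutes=30), min_accounts=20):
    """tweets: iterable of (account, text, posted_at) tuples.

    Returns the normalized texts that at least `min_accounts` distinct
    accounts posted within any single `window`.
    """
    by_text = defaultdict(list)
    for account, text, posted_at in tweets:
        normalized = " ".join(text.lower().split())
        by_text[normalized].append((posted_at, account))

    flagged = []
    for text, posts in by_text.items():
        posts.sort()  # oldest first
        start = 0
        for i in range(len(posts)):
            # Slide the window's left edge forward until it spans <= `window`.
            while posts[i][0] - posts[start][0] > window:
                start += 1
            accounts = {acct for _, acct in posts[start:i + 1]}
            if len(accounts) >= min_accounts:
                flagged.append(text)
                break
    return flagged
```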
Correction (Nov. 20, 5:30pm EST): This article has been corrected to clarify that bots were only part of Russia’s 2016 misinformation campaign, and that deleting the 10% of Twitter accounts “most likely to be bots” would remove nearly all misinformation stemming from low-credibility websites, but not all misinformation.