When fake news found a massive audience on Facebook during the run-up to the 2016 US presidential election, it was aided by Facebook’s algorithms that surface stories on topics users are interested in. When a Russian group with ties to the Kremlin used Facebook to launch an ad campaign aimed at influencing American voters, it wasn’t subverting the functionality of Facebook’s ad system, but using it as intended. And when ProPublica discovered that advertisers on the platform could target people who identify as “Jew haters,” it wasn’t unearthing a bug, but confirming the cold precision of a well-working feature.
“We never intended or anticipated this functionality being used this way—and that is on us,” Facebook COO Sheryl Sandberg wrote in a post on Sep. 20, referring to the ad-targeting issue. “And we did not find it ourselves—and that is also on us.”
In other words, the problem is not with Facebook’s functionality, but with the way certain people have used its features. CEO Mark Zuckerberg echoed that sentiment the next day, on Sep. 21, in a statement about the Russian ad campaign.
“It has always been against our policies to use any of our tools in a way that breaks the law,” he said, “and we already have many controls in place to prevent this. But we can do more.”
Facebook’s stance recalls a bumper sticker slogan that was popularized in the 1990s by the National Rifle Association: Guns don’t kill people; people kill people. Facebook didn’t spread misinformation and propaganda; the Russians did. No one at Facebook anticipated its product would be used in this way, and in any case, such use has always been against its policies. Under this premise, all Facebook has to do when problems arise is tweak its safety mechanisms.
It typically does that by offloading the responsibility for catching nefarious activity onto automated systems and, through reporting features that let users flag inappropriate material, onto its users. This approach allows the platform to remain a conduit, a pipe, a machine that is harmless when used as directed. The more Facebook can keep human employees out of these processes, the easier it is to shift accountability away from itself and onto the parties who use or misuse the tools it provides. Plus, workers are expensive, and hiring vast armies of them to moderate content would threaten the economics that have turned Facebook into a $500 billion company.
Machine learning and natural language processing technologies, however, are not up to the task of catching inappropriate content with 100% accuracy. These methods, combined with community self-policing, may be the most efficient way Facebook can moderate the content generated by its 2 billion monthly users, but none of it prevented the Russian ad campaign from slipping through the cracks. Zuckerberg acknowledged that in his statement, but in the same breath said the company would not inject more humans into the sales process.
“Most ads are bought programmatically through our apps and website without the advertiser ever speaking to anyone at Facebook. That’s what happened here,” he said. “But even without our employees involved in the sales, we can do better.”
The only new hires Facebook plans to make around this issue are in the ambiguous area of “election integrity,” Zuckerberg said, where the company plans to double its current staff of 250 workers. He did not elaborate on what departments they’d work in or what they would be doing. In Sandberg’s previous post about antisemitic ad targeting, she said the company would add “more human review” to the process. We asked Facebook how many workers would be assigned to the issue, where they would work, and whether they’d be employed on a full-time or contract basis. Facebook declined to answer those questions and referred us back to Sandberg’s post.
Zuckerberg said that in its investigation into how advertising on Facebook affected the US election, the company is continuing to look into “foreign actors, including additional Russian groups and other former Soviet states, as well as organizations like the campaigns, to further our understanding of how they used our tools.”
Lumping American political campaigns into that group suggests there may be more at work here than the illegal misuse of Facebook’s tools. The Russian operation to target ads at voters, based on the details Facebook has released, was minuscule in comparison to the reach legitimate campaigns had on Facebook in the months before the election. If Facebook suspects foul play among those groups, it could lend credence to concerns that certain campaigns used data analytics to an inappropriate extent.
In 2016 both Ted Cruz and Donald Trump hired Cambridge Analytica, a data firm that can use behavioral modeling to “predict the personality of every single adult in the United States of America,” its chief executive once said, using vast amounts of personal data the company has purchased and collected. Some of that data was reportedly harvested from tens of millions of Facebook users without their consent. The firm is currently being probed by congressional investigators examining potential ties between it “and right-wing web personalities based in Eastern Europe who the US believes are Russian fronts,” Time reported in May.
Following Trump’s surprise win in the 2016 election, Cambridge Analytica CEO Alexander Nix claimed substantial credit in a press release: “We are thrilled that our revolutionary approach to data-driven communication has played such an integral part in President-elect Trump’s extraordinary win.”
Of course, using data analytics and psychological profiling in conjunction with Facebook’s tools to target voters and potentially influence their behavior does not violate any of Facebook’s policies. Indeed, Facebook’s micro-targeting features made the company the giant that it is. This would not be a case of someone misusing Facebook’s tools to nefarious ends, but one of using the tools exactly as intended.