Four times Facebook could have taken its Russia problem seriously, but didn’t

After a long silence…
Image: Reuters/Stephen Lam

It’s getting uncomfortably warm for Facebook under the spotlight in Washington, DC. As Congressional committees and the FBI probe the social network’s inadvertent role in spreading propaganda and disinformation from Russian-government-linked accounts during the last US presidential election, the company’s stance has shifted radically: from denying earlier this year that there could be a problem to saying, last week, that it’s “open to reviewing” proposed legislation.

Hindsight, as the old adage goes, is always 20/20. But Facebook appears to have missed not one but several opportunities to act faster or more forcefully on a problem that has affected democracies around the world. Taken together, these missed chances add to growing concerns about oversight and management at a company whose platform directly reaches more than one-quarter of the world’s people.

Taking president Obama seriously

Days after Donald Trump’s surprise election victory in November, president Barack Obama pulled Facebook CEO Mark Zuckerberg aside during a conference in Peru, the Washington Post reported recently (paywall), and asked him to take fake news and political disinformation on his platform seriously. Zuckerberg told Obama “those messages weren’t widespread on Facebook, and there was no easy remedy,” the paper reported. Three weeks ago, on Sept. 6, the company said it had identified $100,000 in ad spending between June 2015 and May 2017 that violated its policies and was linked to a Russian troll farm.

Facebook rejects the notion that the conversation with Obama should have served as a warning, because the damage was already done. “We appreciated President Obama’s attention to these issues,” Facebook said in a statement. “Their conversation was about misinformation and false news, which Mark had addressed the previous day in a post that outlined specific steps Facebook was taking to combat these challenges. The discussion did not include any references to possible foreign interference or suggestions about confronting threats to Facebook.”

Searching for related ads promptly

The FBI investigation into Russian election meddling began more than a year ago, and Obama pulled Zuckerberg aside in November. Yet it wasn’t until this spring that Facebook began searching in earnest for Russian-backed accounts that may have used ads on its platform to spread fake political news in the US, according to officials briefed on the company’s conversations with congressional investigation committees, and the company’s own statements.

Facebook’s own April 2017 white paper (pdf) on the company’s understanding of how “information operations” and fake news spread on its platform doesn’t even mention advertising. Asked why not, the company referred us to its Sept. 6 statement, which says “a connection between the Russian efforts and ads purchased on Facebook” only emerged after its April white paper.

Facebook did identify “nefarious actors” on its platform last year that it believed were part of a Russian espionage operation, and contacted the FBI twice in June 2016 with the information, a person familiar with the situation said.

Employing more people to police election integrity worldwide

Facebook’s 2 billion monthly users around the world helped the company sell nearly $27 billion in advertising last year. Still, as of last week, the company had fewer than 250 employees devoted to “election integrity” worldwide, Zuckerberg indicated in his Sept. 21 statement. “In the next year, we will more than double the team working on election integrity,” he said. “In total, we’ll add more than 250 people across all our teams focused on security and safety for our community.”

Facebook scrambled earlier this year to fight fake news ahead of elections in France and Kenya, among other countries, but didn’t ramp up staff then. Instead it opted to refine its automated systems for detecting suspicious material, which led it to take down tens of thousands of fake accounts in France alone.

Why did the company wait until now to add more human staff? To answer that, a spokesman directed us back to Zuckerberg’s statement. “Our sophistication in handling these threats is growing and improving quickly,” Zuckerberg said.

Taking blame for its mistakes

Even as Facebook’s outlook on its role in elections is changing, the company still seems to want to distance itself from the way its money-making platform is being manipulated. “It has always been against our policies to use any of our tools in a way that breaks the law,” Zuckerberg said on Sept. 21, “and we already have many controls in place to prevent this.”

It is, as Quartz wrote earlier, a lot like the gun industry saying “Guns don’t kill people, people kill people.” Facebook is still hoping to shunt much of the responsibility for its problems onto automated algorithms, and any negative consequences onto its users, while continuing to earn billions of dollars every year.