Facebook is revealing data on how good it is at moderating content, but the numbers have holes

What’s the score?
Image: Reuters/Robert Galbraith

Facebook is new to this transparency thing. It’s going to take some time to get it right.

Two weeks ago, Facebook unveiled the hyper-specific content moderation rules that it had been using internally to police its platforms. Today, May 15, it’s publishing the metrics it uses to judge how good it is at dealing with content that violates these rules.

The result is a report card of sorts, showing where the company is most active and where it has more work to do. It illustrates the massive scale of Facebook’s content review operation: the platform says it took down more than 800 million pieces of spam in the first quarter of 2018. But it also reveals that global data does little to hold the company accountable for the serious local problems it has caused or exacerbated.

What Facebook is telling us

In its 86-page report, the company divides the content it takes action on (whether by removing it or slapping a warning message on it because of its disturbing nature) into six broad categories: graphic violence, nudity, terrorism, spam, hate speech, and fake accounts. It outlines how much content it moderates, how prevalent objectionable content is on its site, and how much of it Facebook finds on its own versus how much users flag to moderators. Not all of these metrics are available for every category.
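As a rough illustration of how those figures fit together, here is a minimal sketch of how totals like "content actioned" and the share found proactively might be tallied from a log of moderation decisions. The record fields, sample data, and the summarize function are hypothetical and for illustration only; they are not Facebook’s actual data model or code.

```python
from collections import Counter

# Hypothetical log of moderation decisions. Each record notes the violation
# category and whether the platform's systems found it before any user
# reported it. (Illustrative only: not Facebook's actual data.)
actions = [
    {"category": "spam", "found_proactively": True},
    {"category": "hate_speech", "found_proactively": False},
    {"category": "terrorism", "found_proactively": True},
]

def summarize(records):
    """Tally content acted on per category and the share found proactively."""
    totals, proactive = Counter(), Counter()
    for record in records:
        totals[record["category"]] += 1
        if record["found_proactively"]:
            proactive[record["category"]] += 1
    return {
        category: {
            "content_actioned": totals[category],
            "proactive_share": proactive[category] / totals[category],
        }
        for category in totals
    }

print(summarize(actions))
# e.g. {'spam': {'content_actioned': 1, 'proactive_share': 1.0}, ...}
```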

Facebook plans to publish this report twice a year, breaking down the information on a quarterly basis.

“We believe that increased transparency tends to lead to increased accountability and responsibility over time and publishing this information will push us to improve more quickly too,” Guy Rosen, VP of product management at Facebook, wrote in a post. “This is the same data we use to measure our progress internally — and you can now see it to judge our progress for yourselves.”

What we don’t know

The numbers show that Facebook is very good at finding terrorist content, nudity, and fake accounts, and quite good at identifying graphic violence. By Facebook’s own admission, hate speech is the hardest for its systems to pin down. “Artificial intelligence isn’t good enough yet to determine whether someone is pushing hate or describing something that happened to them so they can raise awareness of the issue,” Rosen writes.

Because it’s so hard to define and measure, Facebook is not reporting the prevalence of hate speech on the platform.

During a call with reporters last week, Rosen and Alex Schultz, head of analytics for the company, explained that Facebook prioritizes content that’s especially urgent. For example, posts or videos that refer to self-harm and suicide have to be dealt with in a matter of minutes. Terrorism also falls into this category.

Beyond that, the order in which content is addressed is largely based on its reach. The more people who see a harmful post or video, the faster moderators deal with it.

But, reporters asked, doesn’t that mean Facebook’s biggest audiences, speaking the most common languages, will be moderated more closely? What about a smaller country, like Sri Lanka, where people have died as a result of hate speech spread on Facebook? Rosen and Schultz said that moderators also receive “qualitative feedback” from Facebook representatives on the ground, which affects how moderation is prioritized.

The company does not offer a country-by-country breakdown of the statistics or even regional numbers in the report. Internally, it analyzes “deviations” from the norm, Schultz said, which show, for instance, spikes in graphic violence when a war breaks out. Rosen said Facebook first has to get these metrics “right,” before releasing them to the public. More broadly, they emphasized the metrics are very much a work in progress.

The new numbers also don’t cover fake news. That’s because Facebook usually doesn’t take down misinformation, but demotes it in the News Feed, Rosen and Schultz said. It also doesn’t use its content moderators to deal with this issue, instead relying on third-party fact-checkers.  

Today’s report also covers only Facebook’s main app, not the other services the company owns, such as WhatsApp, Messenger, and Instagram.

Facebook isn’t disclosing how long it takes for moderators to remove violating content. “We don’t believe time is the best metric for effectively policing our service,” writes Schultz in a blog post. A spokesperson explained to Quartz that ultimately reach is what matters—a viral video might have a bigger impact in an hour than another video would have in years. However, the company is designing a metric that will measure how quickly it reacts to violations, and will release it in future reports, the spokesperson said.

There’s also reason to question the statistics Facebook offers on terrorism, as Bloomberg reported last week. The company said it caught 99.5% of terrorist content in the first quarter of 2018 before it was flagged by users, a figure Mark Zuckerberg has touted in his Congressional hearings. Indeed, Facebook’s systems have gotten very good at detecting content from al-Qaeda and ISIS, but not other organizations that have been designated by the US as terrorist groups, like Boko Haram and Hezbollah, according to Bloomberg.

Global rules, local problems

Content moderation is a tricky problem for Facebook. The company tries to impose the same standards on a community of 2.2 billion people living in very different contexts and across myriad cultures, while also accommodating those differences and adhering to local laws.

The problem with the statistics Facebook released today is that they are not only incomplete, but also fail to offer a nuanced picture of how the platform is policed. The figures make the company look pretty good at catching terrorism and spam. But how good is it at catching hate speech in Sri Lanka or Myanmar? How good is it at finding misinformation in the United States? Whether or not Facebook calls them violations of its standards, this modern-day propaganda has real-life consequences, from influencing elections to deadly violence. Transparency is laudable, but this is just a first step.