(Photo: Reuters/Andrew Biraj) 2012: Bangladesh accused Muslim Rohingya refugees from Myanmar of involvement in attacks on Buddhist temples and homes in the southeast, and said the violence was triggered by a photo posted on Facebook that insulted Islam.
FAKE OUT

Facebook is actually going to start removing fake news—or at least some of it

By Hanna Kozlowska

Investigative reporter

Facebook says it does not want to be the arbiter of truth, and in recent days, various executives, including founder Mark Zuckerberg, have been adamantly defending its policy of letting fake news live on the platform. On Wednesday (July 18), however, the company announced that it will start to take down some forms of fake news—specifically, content that could result in physical harm to real people.

The policy was announced during an event for non-US press, and the company confirmed to Quartz that it will for now be rolled out to countries where there is ongoing violence. It did not specify which countries, but Facebook has gotten into hot water for its role in spreading misinformation that has led to violence in Myanmar and Sri Lanka.

A Facebook spokesperson told Quartz that the company removed content in Sri Lanka last month under the new rule, which it only publicly announced today. An offending post had claimed that Muslims were poisoning food intended for Buddhists. While there were no immediate indications of violence on the ground, Facebook contacted an unspecified local partner, which it said confirmed that the content could potentially lead to violence.

“Reducing the distribution of misinformation—rather than removing it outright—strikes the right balance between free expression and a safe and authentic community,” the Facebook spokesperson said. “There are certain forms of misinformation that have contributed to physical harm, and we are making a policy change which will enable us to take that type of content down.”

It’s not clear what Facebook defines as “physical harm.”

The company is collaborating with local non-governmental organizations and other partners who might be the first to notice potentially harmful content. The risk of harm has to be urgent, Facebook said. It added that it will also remove similar content that its AI systems flag.

Local organizations that Facebook has partnered with in the past have said that the company has failed to live up to its commitments to limit dangerous hate speech. Activists said their calls for more robust content moderation were ignored.

The difference between fake-news content, which Facebook generally leaves up on the platform, and posts that violate its community standards is far from clear. For example, Facebook will leave up conspiracy theories claiming that the deadly 2012 shooting at Sandy Hook Elementary School did not happen, such as those promoted by InfoWars’ Alex Jones. But, as Zuckerberg noted today in an interview with Recode, if someone specifically calls a grieving parent of one of the victims a liar, the platform will classify that post as harassment and remove it.

Update: This post was updated with Facebook’s confirmation that the policy will only be rolled out to countries where there is ongoing violence.