“Reducing the distribution of misinformation—rather than removing it outright—strikes the right balance between free expression and a safe and authentic community,” the Facebook spokesperson said. “There are certain forms of misinformation that have contributed to physical harm, and we are making a policy change which will enable us to take that type of content down.”

It’s not clear what Facebook defines as “physical harm.”

The company is collaborating with local non-governmental organizations and other partners who might be the first to notice potentially harmful content. The risk of harm has to be urgent, Facebook said. It added that it will also remove similar content that its AI systems flag.

Local organizations that Facebook has partnered with in the past have said that the company has failed to live up to its commitments to limit dangerous hate speech. Activists said their calls for more robust content moderation were ignored.

The difference between fake-news content, which Facebook generally leaves up on the platform, and posts that violate its community standards is far from clear. For example, Facebook will leave up conspiracy theories claiming that the deadly 2012 shooting at Sandy Hook Elementary School did not happen, such as those promoted by InfoWars’ Alex Jones. But, as Zuckerberg noted today in an interview with Recode, if someone specifically says a grieving parent of one of the victims is a liar, the platform will classify that post as harassment and will remove it.

Update: This post was updated with Facebook’s confirmation that the policy will be rolled out only to countries where there is ongoing violence.
