Facebook has been widely criticized for its role in spreading propaganda, hyper-partisan content, and demonstrably false stories ahead of the US presidential election. While Facebook CEO Mark Zuckerberg initially shrugged off the idea that such “fake news” could have influenced the election, he later vowed to “take misinformation seriously.” Adam Mosseri, Facebook’s VP of news feed, recently outlined steps the company is taking to eliminate the most egregious hoaxes from its site.

With that backdrop, it was easy to believe the company had erred again in deploying safety check in Bangkok on Dec. 27. The truth of the situation—that one misleading story had become tangled up with still-developing reports of separate explosions in the city—is more complicated. But for a company with 1.8 billion monthly active users and the power to alert them to crises often before even local media outlets have caught on, there are bound to be ancillary and sometimes negative consequences.

Facebook introduced safety check in October 2014 in a clear attempt at corporate do-goodery. “Over the last few years there have been many disasters and crises where people have turned to the Internet for help,” Zuckerberg wrote at the time. “Connecting with people is always valuable, but these are the moments when it matters most.”

Originally, Facebook employed a team of people who decided when safety check should be turned on. Then, in late 2015, the company backed away from that strategy after being accused of Western bias for activating safety check in response to the Paris attacks but not after a similar attack in Beirut. This November, Facebook announced it had handed over more control of safety check and “community help,” another crisis response feature, to its users.

The updated safety check, Wired reported that month, “begins with an algorithm that monitors an emergency newswire—a third-party program that aggregates information directly from police departments, weather services, and the like.” The program can detect events long before they are reported in the media. Facebook then combs its platform to see whether people in the area are discussing the possible incident. If enough are, Facebook prompts them to check in as safe.
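Wired’s description amounts to a two-stage pipeline: an automated signal from the emergency newswire, followed by a check that enough people near the reported incident are actually talking about it. The sketch below illustrates that logic under stated assumptions; the names (NewswireEvent, count_local_mentions, CONFIRMATION_THRESHOLD) and the threshold value are hypothetical, since Facebook has not published its actual criteria.

```python
# A minimal sketch of the two-stage activation Wired describes: a newswire
# signal, confirmed by local conversation before the check-in prompt goes out.
# All names, fields, and the threshold value are illustrative assumptions,
# not Facebook's actual implementation.

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class NewswireEvent:
    """A possible incident picked up from the third-party emergency newswire."""
    event_type: str   # e.g. "explosion", "flood"
    location: str     # e.g. "Bangkok, Thailand"
    source: str       # e.g. "police department", "weather service"


# Hypothetical threshold: how many nearby posts must mention the incident
# before safety check turns on. The real criteria are not public.
CONFIRMATION_THRESHOLD = 50


def count_local_mentions(event: NewswireEvent, recent_posts: List[Dict]) -> int:
    """Count recent posts from people near the event that mention it."""
    return sum(
        1
        for post in recent_posts
        if post["location"] == event.location
        and event.event_type in post["text"].lower()
    )


def should_activate_safety_check(event: NewswireEvent, recent_posts: List[Dict]) -> bool:
    """Activate only if enough people in the area are discussing the incident."""
    return count_local_mentions(event, recent_posts) >= CONFIRMATION_THRESHOLD


if __name__ == "__main__":
    # A newswire alert with little local chatter stays dormant -- the
    # "false alarms will fizzle out" assumption discussed below.
    event = NewswireEvent("explosion", "Bangkok, Thailand", "police department")
    posts = [{"location": "Bangkok, Thailand", "text": "Did anyone hear an explosion?"}]
    print(should_activate_safety_check(event, posts))  # False: only one local post
```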

This hands-off approach is based on a critical assumption: News of real emergencies will spread organically, and false alarms will fizzle out. When that assumption holds true, the results can be arresting. On the night of the mass shooting in Orlando, Florida, this summer, Facebook’s algorithms activated safety check 11 minutes before police officially announced the attack. When it fails—as happened in Bangkok—or something else goes awry, safety check can provoke confusion, anxiety, and alarm much like any other breaking news alert.

Zuckerberg has pushed back on the idea that Facebook is a media company that should adhere to editorial standards, preferring to see it as a neutral space for “public discourse.” Safety check, a feature that can function simultaneously as a public good and a breaking news service, is one area where the line between media company and technology platform is particularly thin. Facebook’s reach is unprecedented, both in its sheer size and in its influence over what information people see. All it takes is one false story to start a bomb scare.
