Facebook is rolling out AI technology to detect posts that hint at suicidal intentions. The tool, which the company has been testing for months, will scan all types of Facebook content and is meant to significantly speed up the reporting of concerning behavior to first responders.
The AI will eventually be available everywhere Facebook operates, except in the European Union, which has strict privacy regulations about profiling users. Facebook’s tool will prioritize potentially suicidal posts when flagging content to human moderators and highlight the relevant passages so moderators can react faster, Facebook told TechCrunch. That’s particularly important in drawn-out Facebook Live streams, which some users have used to broadcast their suicide attempts in real time.
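Facebook hasn’t described how that queue works internally, but the behavior it reports, riskier posts jumping ahead of routine flags and arriving with the triggering text highlighted, can be sketched with a simple priority queue. Everything below (names, scores, snippets) is hypothetical and only illustrates the idea:

```python
# Illustrative sketch only: ordering flagged posts so the highest-risk
# ones reach human moderators first. Facebook's actual queueing logic
# is not public; all names and values here are made up.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class FlaggedPost:
    priority: float                                # negated score: highest risk pops first
    post_id: str = field(compare=False)
    highlighted_text: str = field(compare=False)   # span the model matched on

queue: list[FlaggedPost] = []

def flag(post_id: str, score: float, snippet: str) -> None:
    """Push a post onto the review queue; heapq keeps the riskiest on top."""
    heapq.heappush(queue, FlaggedPost(-score, post_id, snippet))

flag("post-1", 0.41, "feeling down lately")
flag("post-2", 0.93, "I don't want to be here anymore")

next_up = heapq.heappop(queue)
print(next_up.post_id, "-> review first:", next_up.highlighted_text)
```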
When posts are flagged, moderators can respond by displaying information about local mental health resources on the user’s screen, or by contacting first responders to help them locate the person in question. They can also suggest a friend the person could reach out to.
Writing in a Facebook post, Mark Zuckerberg said that in the past month the AI has helped Facebook quickly connect with first responders more than 100 times. “With all the fear about how AI may be harmful in the future, it’s good to remind ourselves how AI is actually helping save people’s lives today,” he wrote.
Facebook has long been trying to deal with people posting their suicidal intentions on the platform, and artificial intelligence is getting very good at predicting self-harming behavior. The new AI bases its decisions on text patterns drawn from posts that users had previously reported as suicidal, as well as from comments under those posts such as “Are you ok?” or “Can I help?” (This, of course, raises another question: what happens when a cry for help defies known patterns?)
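Facebook hasn’t published its model or training data, but a pattern-based text classifier of the kind the company describes can be sketched in a few lines. The toy posts, labels, and model choice below are all assumptions for illustration, not Facebook’s actual system:

```python
# Illustrative sketch only: a minimal text classifier trained on a handful
# of made-up example posts, echoing the "text patterns" approach described
# above. Facebook's real model, features, and data are not public.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: posts previously reported by users, plus the
# kinds of concerned comments the article mentions.
posts = [
    "I can't do this anymore, nobody would miss me",
    "are you ok? please talk to someone",
    "can I help? I'm worried about you",
    "had a great day at the beach with friends",
    "new recipe turned out amazing, so proud",
    "excited for the concert this weekend",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = potentially concerning, 0 = benign

# Word n-grams capture short text patterns like those in the training posts.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
model.fit(posts, labels)

# In a system like the one described, the score would set a post's place in
# the human review queue rather than trigger any automatic action.
score = model.predict_proba(["is anyone out there, I just want it to end"])[0][1]
print(f"flag for human review: {score:.2f}")
```

A classifier like this only recognizes patterns resembling its training examples, which is exactly the limitation raised above: a cry for help phrased in a novel way may score low and never reach a moderator.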
Facebook has even broader plans for this technology. “In the future, AI will be able to understand more of the subtle nuances of language, and will be able to identify different issues beyond suicide as well, including quickly spotting more kinds of bullying and hate,” Zuckerberg wrote in his post. But others have pointed out that it could also have other, more worrying applications.