Mark Zuckerberg said today (April 10) in his testimony before the US Congress that he could see AI taking a primary role in automatically detecting hate speech on Facebook in five to 10 years.
The technology isn’t ready to deploy yet, the Facebook CEO says, because of the limitations of artificial intelligence. Hate speech detection remains a reactive process: users need to flag offending posts to the social media platform before they can be manually deleted, Zuckerberg said. Hate speech is particularly difficult to find because it’s communicated in many different languages.
“Hate speech—I am optimistic that over a five-to-10-year period we will have AI tools that can get into some of the linguistic nuances of different types of content to be more accurate, to be flagging things to our systems, but today we’re just not there on that,” Zuckerberg said. “Until we get it automated, there’s a higher error rate than I’m happy with.”
As Quartz has reported, modern artificial intelligence has proven useful in detecting patterns—whether in images for facial recognition or in audio for speech recognition. But language is fluid, and as Zuckerberg notes, hate speech can be heavily dependent on the context around the hateful words. Some terms found in hate speech are slang—not part of the common vernacular used to train AI.
However, Facebook does have AI tools in use today that try to detect hate speech, as well as unlawful content like revenge porn and child pornography. It’s also using AI tools to offer help to users who it detects may be contemplating suicide.
For more coverage of Zuckerberg’s trip to Washington, DC, follow Quartz reporter Heather Timmons live at the hearing.