Facebook says it can stop the sharing of most terror-related posts within an hour of creation

On the lookout.
Image: Facebook/Dado Ruvic

Facebook is trying to fight the perception that it’s not doing enough to fight terrorism.

Yesterday (Nov. 28), the company published a blog post with an update on its efforts to curb the spread of content promoting terrorist groups like ISIL and al-Qaeda.

According to Monika Bickert, Facebook’s head of global policy management, and Brian Fishman, its head of counterterrorism policy, the company is increasingly relying on AI to spot terror-related posts, rather than on flagging by humans. They claim that Facebook can detect 99% of posts pertaining to ISIL and al-Qaeda before users manually report them, and can remove 83% of copies of such posts within one hour of upload.

In June, Facebook outlined the ways it was using AI and machine learning to automate its crackdown on content that promotes terrorism. Its tactics include image matching, wherein posts with images that match a previously removed post will themselves get removed, and language understanding, in which the company trains algorithms to detect text that promotes terrorism. The company also looks out for pages and accounts linked to terrorism by examining the social graphs connected to them. An account might be terminated, for example, if it “is friends with a high number of accounts that have been disabled for terrorism, or whether an account shares the same attributes as a disabled account.”

Facebook’s algorithms are refined with the help of human content reviewers, who train them by confirming whether flagged posts violate Facebook’s guidelines. According to the Wall Street Journal, Facebook intends to employ 7,000 content reviewers by the end of the year, up from 4,500 in May (paywall). This requires recruiting reviewers across a number of different languages and cultures.

Facebook’s announcement comes as the company prepares to meet with European Union regulators next week. On Monday (Nov. 27), the head of Germany’s domestic intelligence agency accused Facebook and its peers of not doing enough to remove hateful posts and fake news. Germany passed a law in June that imposes fines of up to $57 million on social-media companies that are caught spreading hate speech. In the US and UK, Facebook is under pressure to reveal its role in helping spread Russian-backed propaganda supporting Brexit and Donald Trump’s presidential campaign.