Facebook waded deeper into the political debate around refugees in Europe with the launch of an initiative to counter hate speech in Germany yesterday (Jan. 18). But the social network still appears unwilling to embrace a role as arbiter of its users’ conversations.
The company is putting up €1 million ($1.1 million) and partnering with European non-profits and governments to develop ways to counter “online extremism” on the continent. The program is called the Initiative for Civil Courage, and it’s based in Berlin, with support from the German government.
Why Berlin? Facebook caved after months of complaints from German politicians, activists and celebrities that the firm wasn’t doing enough to restrict xenophobic and racist user comments triggered by an influx of asylum seekers to the country. German prosecutors even opened a probe in November 2015 into Facebook’s top executive there for allegedly facilitating the incitement of racial hatred after a spike in anti-immigrant posts.
Germany’s justice minister requested, and received, a meeting with Facebook after reminding the social network that it was legally compelled to delete racist posts. The minister said he had received complaints from users who flagged such posts only to be told that the content had been reviewed and didn’t break any of Facebook’s house rules.
The Civil Courage program is the latest in a series of public measures Facebook has taken to show it is taking the complaints seriously. On Jan. 15 it said it had hired an outsourcing agency to put more than 100 people to work finding and deleting racist content on the platform. That came after a September announcement that the German justice ministry would form a task force with social media platforms, including Facebook, to deal with online hate speech. In December, Facebook, Google and Twitter agreed to remove hate speech posted on their platforms within 24 hours, using German law, not their own terms of service, as a benchmark.
But for all its new initiatives and deals, Facebook’s actions are those of a company hedging against getting more directly involved in fundamental debates around free speech and its restrictions. The platform has rules governing the limits of its users’ speech; enforcement has always been the problem.
Facebook has an in-house “community operations” team tasked with monitoring user behavior but, under pressure, it has apparently seen fit to spend its resources on contractors and third parties. In other words, Facebook is putting the onus of monitoring hate speech on non-profits and an outsourcing company instead of addressing the problem in-house.
When contacted by Quartz, the company didn’t say whether it has increased the number of full-time community operations staff. In a statement, a Facebook spokesperson said that it would “keep investing” in teams and working with partners to enforce its speech rules.
Perhaps Facebook is proceeding cautiously because its previous attempts to pass judgment on current events drew criticism. The company recently faced a backlash over its decision to change its Safety Check policy and activate the feature during the Paris attacks in Nov. 2015, rather than only during natural disasters. That decision immediately raised questions about why the company hadn’t turned the feature on during bombings in Beirut days earlier, and Facebook continues to be criticized for its opaque deployment of the feature. It didn’t turn on Safety Check despite multiple attacks in Jakarta last week, for instance.
If Facebook is hoping its new measures in Germany will be enough to get it off the hook with politicians, activists and its own users, it may be disappointed. In the US, calls are now growing for Facebook to take greater responsibility for the network’s use in terrorism recruitment. Time will tell whether its actions so far in Germany will be enough to quell dissatisfaction there.