Facebook and YouTube have long allowed users to flag offensive content. In response to the rise of propaganda from extremist groups like ISIL, German xenophobes, and far-right radicals in the US, the platforms are now giving outsiders an unprecedented role in countering extremist efforts.
Facebook has been offering free advertising credits to online activists who help counteract hateful or extremist speech on the platform as part of a pilot program in Germany, France, and the UK since the start of this year. The program is part of its Berlin-based Online Civil Courage Initiative, which has given over €10,000 ($11,152) in advertising credits to participating groups since it was founded in January 2016, according to the Wall Street Journal.
On Wednesday (Sept. 21), the social media giant said that it plans to broaden the reward system to include more organizations, ranging from think tanks to activists to tech companies, and will contribute another €1 million ($1.1 million) worth of ad credits to the program, the Journal reported.
YouTube is giving the power to even more people—all its users.
The video sharing platform is giving perks to volunteers, dubbed “YouTube Heroes,” who add captions and subtitles to videos, share their knowledge about practices on the platform with other users on YouTube’s Help forum, and flag inappropriate videos.
YouTube has relied on user reports for years to track down videos that break its terms of service. But by rewarding those who report content, YouTube may be courting disaster. Critics worry that the program could promote a "snitching" culture, and that some users might abuse their power by reporting content indiscriminately, producing a chilling effect on free speech.
Others say the program is essentially making people provide free labor because the perks include things like being able to directly contact YouTube staff—something any YouTube creator who has encountered tech difficulties knows is invaluable. These critics worry that Google, YouTube’s parent company, is unfairly capitalizing on its popularity.
Meanwhile, creators on the site have long complained that the flagging system doesn't work well, and argue that expanding it is not the appropriate solution.
Both programs are examples of how tech companies, in a desperate effort to curb hate speech, are letting users control the dialogue. YouTube and Facebook aren't the only ones struggling with moderation. While Twitter has removed over 360,000 ISIL-related accounts from its platform, homegrown white nationalist accounts are proliferating. To deter potential ISIL recruits, Jigsaw, a tech incubator that shares a parent company with Google, is testing counter-terrorism ads and videos that show up in search results for ISIL-related keywords. Unfortunately, despite these efforts, such accounts keep springing up, either under new usernames or on new platforms like WhatsApp and Telegram.