How Covid-19 lockdowns weakened Facebook’s content moderation algorithms

Fix the processes.
Image: Reuters/Dado Ruvic/Illustration

Facebook sent thousands of content moderators home due to coronavirus—and its algorithms suffered.

During a company briefing today (Feb. 24), Facebook’s organic content policy manager Varun Reddy acknowledged that because many human reviewers across the globe had to be sent home during the early months of the pandemic, the feedback loop for monitoring content was fractured. The AI learns from human moderators, he explained, and the drop in human vetting volumes has changed “how effective the AI is over time.”
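In other words, the classifiers are only as current as the stream of human decisions they are retrained on; when reviewers go offline, the models keep running on stale signals while the content they police keeps shifting. Below is a minimal, hypothetical sketch of that feedback loop. The model, the simulated posts, and the capacity numbers are all invented for illustration; Reddy did not describe Facebook’s actual pipeline.

```python
# Hypothetical sketch of a human-in-the-loop moderation feedback loop.
# All names and numbers are illustrative; Facebook's real systems are not public.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(random_state=0)  # toy stand-in for a content classifier


def weekly_posts(n=1000, drift=0.0):
    """Simulate a week of posts as feature vectors; `drift` shifts the
    distribution over time, the way abusive content keeps evolving."""
    X = rng.normal(loc=drift, scale=1.0, size=(n, 5))
    y = (X.sum(axis=1) + rng.normal(scale=0.5, size=n) > 5 * drift).astype(int)
    return X, y


for week in range(1, 13):
    X, y_true = weekly_posts(drift=0.2 * week)

    # Reviewer capacity collapses mid-simulation (the lockdown): far fewer
    # human labels reach the model each week.
    capacity = 300 if week < 6 else 30
    labeled = rng.choice(len(X), size=capacity, replace=False)

    # The classifier learns only from the posts humans actually reviewed.
    model.partial_fit(X[labeled], y_true[labeled], classes=[0, 1])

    accuracy = (model.predict(X) == y_true).mean()
    print(f"week {week:2d}  human labels: {capacity:3d}  accuracy: {accuracy:.2f}")
```

In this toy setup, the classifier tends to lag the shifting distribution once the weekly label budget shrinks, which is the dynamic Reddy is gesturing at.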

On the surface, things look hunky-dory. In the last quarter of 2020, Facebook posted a massive drop in hate speech globally. As reviewer capacity ticked up, Facebook-owned Instagram removed 3.4 million pieces of suicide and self-injury content, up from 1.3 million in the third quarter. (The company doesn’t provide country-wise data.) While some of this crackdown can be attributed to better reviewing by people and technology, there is more context to the changing numbers.

The machinery behind identifying problematic content may be less foolproof now owing to “last year’s Covid-related constraints on human reviewers and the impact it’s had on machine learning,” Reddy said.

On paper, Facebook has 35,000 people working on safety and security globally, of whom 15,000 are content moderators. But they’re not all back in the office even now.

“We’re working with partners to get as much capacity back online as we can,” Reddy said. “We’re not there yet but it has improved significantly since lockdown began (on March 25). In the coming weeks and months, we are hopeful the systems will come back to full efficacy.”

Even if they all come back to work, it won’t be enough.

Fixing Facebook’s content moderation

When the first community standards enforcement report was released in May 2018, Facebook tracked only six types of violations, reported once every six months. Today, it shares metrics for 12 different types of abuse across Facebook and Instagram every three months.

Recently, the social media giant has been “improving methodology, investments, and how we’re reporting metrics” around safety policies, Reddy said. It is also looking to identify a third-party vendor to audit its work, he added.

When asked about the hiring and training criteria for reviewers, Reddy said, “We hire for competency and fit for the job, and also hire for people with local sociopolitical context and language.”

Despite the nuanced hiring Facebook claims, errors are still rife. For instance, on the call, a representative of the Facebook group GurgaonMoms (a community for parents in the northern Indian city of Gurugram with over 30,000 likes and follows) said that a sexually explicit post featuring the children’s cartoon character Chhota Bheem was not deemed problematic when they reported it. Only when they reached out to a known contact at Facebook directly was the content taken down. This process is neither accessible nor feasible for Facebook’s 320 million and Instagram’s 120 million Indian users.

Critics say the team falls short when it comes to quality and quantity.

For one, Facebook outsources much of its moderation, with low salaries and little compassion. In 2019, Facebook moderators in India complained of being underpaid and overburdened. In October, Facebook moderators at third-party contractor Genpact’s Hyderabad office were forced to return to work against their wishes. And this isn’t unique to India. An open letter by reviewers in the US in November 2020 stated, “Moderators who secure a doctor’s note about a personal Covid risk have been excused from attending in person. Moderators with vulnerable relatives, who might die were they to contract Covid from us, have not.”

A June 2020 report from New York University’s (NYU) Stern Center for Business and Human Rights suggests moving content moderation in-house. “Content moderation will improve if the people doing it are treated as full-fledged employees of the platforms,” wrote Paul M. Barrett, the report’s author and deputy director of the center.

Additionally, the NYU report said Facebook needs to double its moderator workforce to at least 30,000. “(I)t could give the expanded review workforce more time to consider difficult content decisions while still making these calls in a prompt manner,” it noted. A bigger workforce would also allow moderators to rotate and take more time off when the job gets psychologically taxing, as it clearly does: last May, Facebook agreed to pay $52 million in damages to 11,250 US-based moderators and to provide them more counselling.

Of course, all of this would be an expensive undertaking. But then again, Facebook is a $720 billion behemoth that rakes in billions of dollars in profit every year. In mid-August 2020, Facebook founder and CEO Mark Zuckerberg’s wealth was estimated at $95.5 billion, up 75% from mid-March.

With these deep pockets, Facebook already has the means to fix content moderation, if only it can find the will.