This could be among the last few articles you ever read about Facebook.
Or about a company called Facebook, to be more precise. On Oct. 28, Mark Zuckerberg will announce a new brand name for Facebook, to signal his firm’s ambitions beyond the platform he started in 2004. Implicit in this move is an attempt to disentangle his company’s public image from the many problems that plague Facebook and other social media—the kind of problems that Frances Haugen, the Facebook whistleblower, spelled out in testimony to the US Congress earlier this month.
But a rebranding won’t eliminate, for instance, the troubling posts that are rife on Facebook: posts that circulate fake news, political propaganda, misogyny, and racist hate speech. In her testimony, Haugen said that Facebook routinely understaffs the teams that screen such posts. Speaking about one example, Haugen said: “I believe Facebook’s consistent understaffing of the counterespionage information operations and counter-terrorism teams is a national security issue.”
To people outside Facebook, this can sound mystifying. Last year, Facebook took in $86 billion in revenue. It can certainly afford to pay more people to pick out and block the kind of content that earns it so much bad press. Is Facebook’s misinformation and hate speech crisis simply an HR crisis in disguise?
Why doesn’t Facebook hire more people to screen its posts?
For the most part, Facebook’s own employees don’t moderate posts on the platform at all. This work has instead been outsourced—to consulting firms like Accenture, or to little-known second-tier subcontractors in places like Dublin and Manila. Facebook has said that farming the work out “lets us scale globally, covering every time zone and over 50 languages.” But it is an illogical arrangement, said Paul Barrett, the deputy director of the Center for Business and Human Rights at New York University’s Stern School of Business.
Content is core to Facebook’s operations, Barrett said. “It’s not like it’s a help desk. It’s not like janitorial or catering services. And if it’s core, it should be under the supervision of the company itself.” Bringing content moderation in-house would not only place posts under Facebook’s direct purview, Barrett said. It would also force the company to address the psychological trauma that moderators experience from daily exposure to posts featuring violence, hate speech, child abuse, and other gruesome content.
Adding more qualified moderators, “having the ability to exercise more human judgment,” Barrett said, “is potentially a way to tackle this problem.” Facebook should double the number of moderators it uses, he said at first, then added that his estimate was arbitrary: “For all I know, it needs 10 times as many as it has now.” But if staffing is an issue, he said, it isn’t the only one. “You can’t just respond by saying: ‘Add another 5,000 people.’ We’re not mining coal here, or working an assembly line at an Amazon warehouse.”
Facebook needs better content moderation algorithms, not a rebrand
The sprawl of content on Facebook—the sheer scale of it—is complicated further by the algorithms that recommend posts, often bringing obscure but inflammatory media into users’ feeds. The effects of these “recommender systems” need to be dealt with by “disproportionately more staff,” said Frederike Kaltheuner, director of the European AI Fund, a philanthropy that seeks to shape the evolution of artificial intelligence. “And even then, the task might not be possible at this scale and speed.”
Opinions are divided on whether AI can replace humans as moderators. Haugen told Congress, by way of an example, that in its bid to stanch the flow of vaccine misinformation, Facebook is “overly reliant on artificial intelligence systems that they themselves say will likely never get more than 10 to 20% of content.” Kaltheuner pointed out that the kind of nuanced decision-making that moderation demands—distinguishing, say, between Old Master nudes and pornography, or between real and deceitful commentary—is beyond AI’s capabilities right now. We may already have hit a dead end, Kaltheuner suggested, in which it is simply impossible to run “an automated recommender system at the scale that Facebook does without causing harm.”
But Ravi Bapna, a University of Minnesota professor who studies social media and big data, said that machine-learning tools can do volume well—that they can catch most fake news more effectively than people. “Five years ago, maybe the tech wasn’t there,” he said. “Today it is.” He pointed to a study in which a panel of humans, given a mixed set of genuine and fake news pieces, sorted them with a 60-65% accuracy rate. If he asked his students to build an algorithm that performed the same task of news triage, Bapna said, “they can use machine learning and reach 85% accuracy.”
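To make that claim concrete, here is a minimal sketch of the kind of model Bapna’s students might build: bag-of-words TF-IDF features feeding a logistic regression classifier trained on articles labeled genuine or fake. The tiny inline dataset, the scikit-learn pipeline, and the headlines themselves are illustrative assumptions, not a description of Facebook’s systems or of the study Bapna cites.

```python
# Illustrative sketch only: a supervised fake-news classifier of the general
# kind described above, not Facebook's actual moderation tooling.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = fake, 0 = genuine. Real systems train on
# large corpora of articles labeled by fact-checkers.
articles = [
    "Doctors confirm miracle cure hidden by the government",
    "City council approves new budget for road repairs",
    "Secret study proves vaccines contain mind-control chips",
    "Central bank holds interest rates steady this quarter",
]
labels = [1, 0, 1, 0]

# TF-IDF text features (unigrams and bigrams) feeding a logistic regression.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(articles, labels)

# Score a new headline; the output is the model's probability that it is fake.
print(model.predict_proba(["Leaked memo reveals miracle cure cover-up"])[0][1])
```

In practice, the hard part is not the model but the labeled data and the long tail of ambiguous cases—the nuance Kaltheuner points to—which is why accuracy figures like the 85% Bapna cites describe triage, not a final verdict.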
Bapna believes that Facebook already has the talent to build algorithms that can screen content better. “If they want to, they can switch that on. But they have to want to switch it on. The question is: Does Facebook really care about doing this?”
Barrett thinks Facebook’s executives are too obsessed with user growth and engagement, to the point that they don’t really care about moderation. Haugen said the same thing in her testimony. A Facebook spokesperson dismissed the contention that profits and numbers were more important to the company than protecting users, and said that Facebook has spent $13 billion on security since 2016 and employs a staff of 40,000 to work on safety issues. “To say we turn a blind eye to feedback ignores these investments,” the spokesperson said in a statement to Quartz.
“In some ways, you have to go to the very highest levels of the company—to the CEO and his immediate circle of lieutenants—to learn if the company is determined to stamp out certain types of abuse on its platform,” Barrett said. This will matter even more in the metaverse, the online environment that Facebook wants its users to inhabit. Per Facebook’s plan, people will live, work, and spend even more of their days in the metaverse than they do on Facebook, which means that the potential for damaging content is higher still.
Until Facebook’s executives “embrace the idea at a deep level that it’s their responsibility to sort this out,” Barrett said, or until the executives are replaced by those who do understand the urgency of this crisis, nothing will change. “In that sense,” he said, “all the staffing in the world won’t solve it.”