What if Facebook is lying to its own Oversight Board?

Facebook’s Oversight Board feels duped.
Image: REUTERS/Stephen Lam/File Photo

Facebook’s Oversight Board styles itself as a crucial check on one of the world’s most powerful companies, but its efficacy is in doubt. Facebook would like its $130 million corporate high court to carry that legitimacy, yet the board itself is now questioning whether the company has been honest about its own content moderation practices.

The Oversight Board said in a statement on Tuesday (Sept. 21) that it is investigating whether Facebook has been “fully forthcoming” in discussions with the board about a program that gave celebrities and politicians greater leeway to post rule-breaking content.

The Oversight Board, announced in 2018 and launched in 2020, was created to serve as an independent check on Facebook’s content moderation decisions. It arrived at a time when public criticism of Facebook was at a fever pitch: critics blame the company for degrading democracy, paving the way for Donald Trump and other authoritarian leaders, facilitating genocide and human rights abuses, and hosting dangerous misinformation about health and politics.

The board’s membership includes law professors, human rights activists, think tank scholars, and even a former Danish prime minister. But the body has a limited scope: It weighs in only on whether Facebook or Instagram have wrongly removed or left up specific posts (originally it reviewed only content that had been taken down). It can also issue, but not enforce, policy recommendations. Out of 500,000 appeals, the Oversight Board has accepted 20 cases, issued 15 decisions, and overturned Facebook 11 times.

But the board’s legitimacy depends on Facebook being fully transparent about how it makes content-related decisions. Facebook’s apparent failure to do just that raises questions about whether the Oversight Board can fulfill even its limited mission, and whether Facebook is governable at all.

Revelations about XCheck

The most recent controversy comes after The Wall Street Journal detailed a program called XCheck (or cross-check), under which high-profile accounts, such as those of celebrities and politicians, are shielded from some content moderation practices when they break Facebook’s rules (for example, by spreading vaccine misinformation). Some users are “whitelisted,” or made immune from enforcement decisions, under the system.
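Conceptually, a whitelist-style exemption short-circuits enforcement before any policy check runs. The sketch below is purely illustrative of that failure mode; every name, rule, and data point in it is invented, and it is not Facebook’s actual XCheck code.

```python
# Purely illustrative sketch of a whitelist-style moderation bypass.
# All names and logic here are invented, not Facebook's XCheck implementation.

WHITELIST = {"celebrity_account", "politician_account"}  # hypothetical exempt IDs

def violates_policy(post: str) -> bool:
    """Stand-in policy check; real systems use classifiers and human review."""
    return "vaccine misinformation" in post

def moderate(account_id: str, post: str) -> str:
    if account_id in WHITELIST:
        # Exempt accounts skip enforcement entirely -- the
        # "immune from content decisions" behavior critics describe.
        return "left up (whitelisted)"
    return "removed" if violates_policy(post) else "left up"

print(moderate("politician_account", "post spreading vaccine misinformation"))
# -> left up (whitelisted)
print(moderate("ordinary_user", "post spreading vaccine misinformation"))
# -> removed
```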

The Journal reported that Facebook executives misled the Oversight Board by claiming that XCheck was only used in a small number of cases. Leaked documents showed that 5.8 million users were covered by XCheck in 2020.

In response to previous board queries about XCheck, Facebook had explained the program but “did not elaborate criteria for adding pages and accounts to the system and declined to provide reporting on error rates,” the board said. Last week the board called on Facebook to be “fully transparent,” said it expects to be briefed again by the company, and said it will report its findings publicly in October.

“It’s becoming clear that when the board asks Facebook questions, it is not getting complete information,” said Chinmayi Arun, a resident fellow at Yale Law School and an affiliate of Harvard’s Berkman Klein Center for Internet & Society.

Prevailing research shows that political elites are central to how misinformation spreads on social media. With that in mind, Shannon McGregor, an assistant professor at the University of North Carolina at Chapel Hill’s Hussman School of Journalism and Media, said high-profile accounts should indeed be distinguished from regular users in content moderation decisions, just not in the way Facebook has been doing it.

“People with this amount of following shouldn’t be treated the same as other users,” McGregor told Quartz. “They shouldn’t be exempted from fact-checks [and other moderation decisions]—in fact, they should be held to a higher standard.”

Facebook defers responsibility

The ordeal points to a problem with the board’s design. It was ostensibly set up to provide independent oversight and accountability, but it is funded by Facebook, and its outside members depend on the cooperation and goodwill of Facebook executives to do their job.

Corynne McSherry, the legal director of the digital rights group Electronic Frontier Foundation, said the Oversight Board cannot do its job unless Facebook is honest with it. “Given this week’s revelations I can’t help but wonder what else Facebook has failed to disclose, and whether the Oversight Board can trust Facebook’s responses,” she told Quartz.

A Facebook spokesperson did not address specific questions about the allegation that the company misled the Oversight Board. He said the company had shared more information about XCheck in response to the board’s queries.

The Journal’s reporting sheds new light on the Oversight Board’s biggest decision to date: Facebook’s ban of Donald Trump from the platform. Facebook asked the Oversight Board to review Trump’s expulsion, and while the board upheld the decision, it criticized the penalty’s vague and indefinite nature. “In applying a vague, standardless penalty and then referring this case to the Board to resolve, Facebook seeks to avoid its responsibilities,” the board wrote in May. Nick Clegg, Facebook’s vice president of global affairs, then announced a pathway back for Trump if he isn’t judged to be a “risk to public safety” after Jan. 7, 2023.

As it turns out, Trump had been exempted by XCheck for two years before his ban, the Journal reported, and in its May decision the board had asked Facebook to publicly explain the “rationale, standards and processes” behind XCheck. Arun said that, in retrospect, understanding XCheck was crucial to making a judgment on the Trump case.

“You would think that something like whitelisting would be relevant in the context of the Trump ban case,” she said. “If I were on the Oversight Board I would ask myself why that didn’t come up.”