In the US and abroad, the company faces intense pressure from the right, largely because of a 2016 scandal, when reports claimed that the team curating the now-defunct “trending news” section of the site was suppressing conservative news.

As a result, lawmakers have pressed Facebook, YouTube, and Twitter on anti-conservative bias in various hearings, repeatedly bringing up the same examples and anecdotes. The most frequently cited case has been that of Diamond and Silk, African-American pro-Trump social media personalities who were labeled by Facebook as “unsafe.”

During a recent hearing called specifically to discuss bias on social media platforms, Monika Bickert, Facebook’s global head of policy, devoted an entire section of her prepared remarks to a public apology to the duo.

Democratic lawmakers in the US have repeatedly called out these claims of anti-conservative bias as unfounded. “It is a made-up narrative pushed by the conservative propaganda machine to convince voters of a conspiracy that does not exist,” Representative David Cicilline said during the hearing.

And right-wing content is doing just fine on Facebook. A study by social-media tracking firm NewsWhip shows that during the 2016 presidential election, top conservative publishers had higher user engagement than liberal ones. Research from the left-leaning ThinkProgress has shown Facebook’s recent algorithm changes affected everyone, regardless of political stripe. And you can just as easily bring up anecdotal evidence of social media censorship all along the ideological spectrum.

Cicilline accused Facebook of “bending over backwards” to placate conservative accusations.

This shouldn’t be a surprise.

Popular pages, especially those with engaged users, are valuable customers for Facebook. And the top spender on political ads on Facebook is… Donald Trump.

When does content cross the line?

Facebook does spell out the kinds of content it says it removes, but the line between what the platform determines to be permissible fake news and a violation of its rules is not always clear. For example, it told Quartz in February that it was removing false claims that the survivors of the Parkland shooting were “crisis actors,” labeling them as attacks against the survivors.

Claims that the Sandy Hook elementary school shooting was a hoax are allowed to stay on the platform, but, as CEO Mark Zuckerberg said himself in an interview with Recode last week, a claim that a grieving parent of one of the victims was lying would be classified as harassment and taken down (even this, however, seems to be a new policy, NBC reported).

Zuckerberg got himself into hot water by trying to explain Facebook’s reasoning on conspiracy theories further, bringing up the example of Holocaust deniers, whose claims he said he found offensive. “But at the end of the day, I don’t believe that our platform should take that down because I think there are things that different people get wrong. I don’t think that they’re intentionally getting it wrong,” he said, adding that he didn’t think it was right to take people off the platform “if they get things wrong, even multiple times.”

After backlash, Zuckerberg clarified in an email to Recode that he “absolutely didn’t intend to defend the intent of people who deny that.” But he repeated his belief that Facebook should not be taking down fake news.  

Content moderators hired by firms that Facebook contracts make similarly head-scratching distinctions on a daily basis. Recently leaked training documents revealed that Facebook distinguishes between white supremacy, white nationalism, and white separatism, for example. This essentially means the company bans blatant racism, but allows it if it is even slightly veiled.

A documentary from the UK’s Channel 4 released last week, which showed a reporter going undercover as a content moderator hired by a Facebook contractor, reveals even more perplexing distinctions. According to Facebook’s rules, “Muslims” are a “protected” group that cannot be attacked, but “Muslim immigrants” are not, one of the moderators says. Gruesome images of self-harm, which is not allowed on the platform, are left up if the moderator determines that the image is an “admission” of self-harm, but are taken down if the post praises the act. A video of a child being brutally beaten is allowed to stay on the platform, merely marked as “disturbing.”

It’s also unclear what it takes to get a page banned for violating Facebook’s community standards. During the House hearing, Facebook’s Bickert told lawmakers that the number of violating posts needed before a page is taken down varies, which raised some eyebrows about the company’s transparency. Facebook told Quartz that “the consequences for violating our Community Standards vary depending on the severity of the violation and a person’s history on the platform.” For example, someone who shares an image of child exploitation will be removed without a warning, but someone who posts a nude photo will get more chances.

A document leaked to Motherboard revealed that for these lesser violations—in the case of hate speech and sexual content—the company does in fact have a hard-and-fast rule. It takes down pages if they’ve exceeded five offending posts in 90 days, or if 30% of the content posted on the page by others violates community standards.
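That reported rule is simple enough to express as a small decision function. The sketch below is purely illustrative: the function name, data structures, and the strict-versus-inclusive handling of the thresholds are assumptions made here for clarity, not anything Facebook has published.

```python
from datetime import datetime, timedelta

# Thresholds as reported in the document leaked to Motherboard:
# more than five violating posts by the page within 90 days, or 30% of
# the content posted on the page by others violating the standards.
STRIKE_LIMIT = 5
VIOLATION_SHARE = 0.30
WINDOW = timedelta(days=90)


def should_remove_page(page_violations, posts_by_others, now=None):
    """Hypothetical check of the reported takedown thresholds.

    page_violations: datetimes of violating posts made by the page itself
    posts_by_others: (datetime, violates) pairs for posts by other users
    """
    now = now or datetime.utcnow()

    def in_window(ts):
        return now - ts <= WINDOW

    # Rule 1: the page itself exceeded five violating posts in 90 days.
    strikes = sum(1 for ts in page_violations if in_window(ts))
    if strikes > STRIKE_LIMIT:
        return True

    # Rule 2: 30% of recent posts by others violate community standards.
    recent = [violates for ts, violates in posts_by_others if in_window(ts)]
    if recent and sum(recent) / len(recent) >= VIOLATION_SHARE:
        return True

    return False
```

The mechanical part, in other words, is trivial; the hard part is deciding what counts as a “violating” post in the first place, which is exactly the judgment the leaked documents and the documentary call into question.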

But the Channel 4 documentary showed that certain pages are “shielded”; specifically, it mentions the page of popular UK far-right figure Tommy Robinson and the now-defunct page of the far-right group Britain First. Instead of taking a page down after it passes the threshold, contracted content moderators send these popular pages to Facebook employees to deal with. And the reason may be simple: “they have a lot of followers, so they’re generating a lot of revenue for Facebook,” one of the moderators says in the documentary.

Facebook vehemently disputes the claim that it considers revenue when making content moderation decisions, and that it had a policy of “shielded review.”

Zuckerberg admits that Facebook has mishandled many problems related to bad content. He says it will take three years to deal with all the different issues the company has created for itself, and that it is already halfway through that process. But when it comes to policing content, it seems that no amount of one-off fixes will be enough unless the company fundamentally rethinks how to approach the unending flood of awfulness the internet provides. That seems like a tall order for a man who frequently describes himself as an optimist and idealist.
