Big tech goes to court

Why the US Supreme Court is struggling with a case about YouTube's algorithms

In Gonzalez v. Google, the Supreme Court justices struggled with questions of algorithmic neutrality—because it doesn't really exist
Court-goers wait in line for Gonzalez v. Google's oral arguments
Photo: Drew Angerer (Getty Images)

Clarence Thomas has a question. Thomas, an associate justice of the US Supreme Court, was once notorious for his silence during the court’s oral arguments, but in recent years has become quite talkative. His question—in this case about the internet’s recommendation algorithms—involves light jazz, rice pilaf, and the terror group ISIS.

“If you’re interested in cooking, you don’t want [to see YouTube recommendations for] light jazz,” Thomas started. “Say you get interested in rice pilaf from Uzbekistan. You don’t want pilaf from some other place, say, Louisiana… Are we talking about the neutral application of an algorithm that works generically for pilaf and also works in a similar way for ISIS videos?” In other words, does YouTube’s algorithm recommend pro-ISIS propaganda in the same way it recommends videos about making rice pilaf?

In the case at hand, Gonzalez v. Google, for which the Supreme Court heard oral arguments (pdf) on Feb. 21, the nine justices are considering Section 230 of the Communications Decency Act, an integral law in internet history that generally protects the owners of websites from being sued for content posted by third-party users. YouTube, for example, can host massive amounts of user-generated content—videos, comments, et cetera—without worrying about whether that content could implicate the company in defamation or other civil wrongs. (Section 230 does not shield content that violates federal criminal law, such as child sexual abuse material, or intellectual property law, such as copyright infringement.)

In this court case, the family of Nohemi Gonzalez, an American student killed in an ISIS attack in Paris in 2015, sued Google, claiming that its algorithms violated the US Anti-Terrorism Act by recommending ISIS propaganda to users. (A related case, heard on Feb. 22, probes whether Twitter is liable for ISIS recruiting efforts on its site.)

Thomas’ questions about jazz, pilaf, and ISIS were getting at the crux of the case: Are internet companies still protected under Section 230 if they employ algorithms that recommend harmful content? Thomas also alluded to the idea that an algorithm can somehow be neutral, a central theme of both the day’s discussion and of a lower court decision on the matter. The problem for the court to grapple with is that the internet is now inseparable from its algorithms, and those algorithms are never truly neutral.

Section 230 helped create the modern internet

Section 230 has facilitated the rise of the internet economy: Not only does it protect the largest social media websites from a barrage of never-ending lawsuits, but it also protects the New York Times when people comment on its cooking recipes, and it protects Yelp from scathing reviews posted by embittered restaurant-goers, among countless other examples.

Justice Elena Kagan seemed to appreciate the law’s importance when she tried to clarify Thomas’ question. “I think what was lying underneath Justice Thomas’ question was a suggestion that algorithms are endemic to the Internet,” she said, “that every time anybody looks at anything on the Internet, there is an algorithm involved, whether it’s a Google search engine or whether it’s this YouTube site or a Twitter account or countless other things, that everything involves ways of organizing and prioritizing material.”

But the justices repeatedly stumbled over questions of algorithmic neutrality, which Thomas alluded to in his pilaf question, struggling to divine what a “neutral” algorithm actually is. (In the court’s defense, Justice Kagan conceded at one point, “these are not like the nine greatest experts on the Internet.”)

“Is an algorithm always neutral?” Justice Neil Gorsuch asked, participating remotely due to illness. “Don’t many of them seek to profit-maximize or promote their own products? Some might even prefer one point of view over another.” Gorsuch has a point, but because this legal distinction comes from the lower court’s decision in Gonzalez, the justices are now tasked with figuring out whether it belongs in the statute at all.

The “neutral tools” framework

In deciding the Gonzalez case, a federal appeals judge in San Francisco, Morgan Christen, used a legal framework derived from a 2008 decision by the same circuit court. In that case, the Ninth Circuit ruled that Roommates.com, a website devoted to matching roommates with one another, was not shielded from lawsuits claiming violations of the Fair Housing Act because it asked users to list their preferences for the sex, sexual orientation, and family status of potential roommates.

The court held that Roommates.com was “inducing third parties to express illegal preferences” and therefore could be found liable. (In a later ruling, the court ultimately found that the questions didn’t violate the Fair Housing Act.) But the court clarified at the time that if the website had merely offered a generic comment section and hadn’t actively solicited potentially discriminatory content, it would have been protected from such lawsuits under Section 230. Such a website would just be providing “neutral tools” to every user.

This “neutral tools” framework was a large part of the Ninth Circuit’s decision (pdf) in Gonzalez. In the court’s view, Google’s algorithm is a neutral tool, applied evenly to each user. Put another way, users can fall down YouTube rabbit holes based on their interests; some rabbit holes lead to rice pilaf, and others lead to ISIS propaganda videos.

In an amicus brief (pdf), filed in support of neither party, a civil rights group called the Lawyers’ Committee for Civil Rights Under Law urged the Supreme Court against adopting the neutral tools test because it fails to appreciate the way modern recommendation algorithms actually work.

“There is nothing neutral about a recommendation algorithm that takes different data about different people in different contexts and provides those people with different outcomes—as its human designers instructed it to do,” the civil rights group wrote. “When an algorithm is employed to make these decisions at the scale of the internet, with trillions of data points to draw upon and millions of users to evaluate, the potential harm from discrimination is devastating.”

Algorithmic neutrality doesn’t really exist

Algorithms are, at their simplest, sets of instructions. But those instructions are human-made and reflect the biases of their makers and controllers, even when powered by machine-learning technologies. The central problem with the Ninth Circuit’s logic is that it rests on a faulty assumption: that algorithms can ever be neutral.

“Algorithms are never neutral,” Eric Goldman, a Santa Clara University law professor, wrote in an email to Quartz. “By definition, they prioritize some items over others. Take something like reverse chronological ordering as an ‘algorithm.’ It prioritizes recency over relevancy. So the concept of ‘neutral algorithms’ may appeal to the justices, but it’s not an intellectually rigorous concept, and any legal doctrine predicated on it will ultimately collapse.”
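Goldman’s point can be made concrete with a toy sketch (hypothetical data, not anyone’s actual ranking code): the same set of posts produces two different feeds depending on which human-chosen sorting rule is applied, so even a “simple” reverse-chronological feed encodes a prioritization decision.

```python
# Hypothetical posts with a timestamp and a made-up relevance score.
posts = [
    {"id": "a", "timestamp": 100, "relevance": 0.2},
    {"id": "b", "timestamp": 50,  "relevance": 0.9},
    {"id": "c", "timestamp": 75,  "relevance": 0.5},
]

# Choice 1: reverse-chronological feed -- prioritizes recency over relevance.
by_recency = sorted(posts, key=lambda p: p["timestamp"], reverse=True)

# Choice 2: relevance-ranked feed -- prioritizes predicted interest over recency.
by_relevance = sorted(posts, key=lambda p: p["relevance"], reverse=True)

print([p["id"] for p in by_recency])    # ['a', 'c', 'b']
print([p["id"] for p in by_relevance])  # ['b', 'c', 'a']
```

Neither ordering is "neutral": each is a rule a designer picked, and each surfaces different content to the same user.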

Jeff Kosseff, a cybersecurity law professor at the U.S. Naval Academy who wrote a book about Section 230 called “The Twenty-Six Words That Created the Internet,” said in a phone interview that internet companies make choices about the content they show users. That’s true even for spam filtering, which is a choice, or displaying a reverse chronological feed, another choice.

But while the neutrality question is a strange one, Kosseff thinks he understands what the justices are getting at. “The justices were basically trying to say that there’s gotta be some point where it’s not the liability for third-party content, but it’s the liability for forcing harmful third-party content on people,” he said.

Jennifer Granick, a surveillance and cybersecurity lawyer at the American Civil Liberties Union, said, “The very purpose of content moderation algorithms is to prioritize some content over others. I think the ‘neutral’ framing is a way of expressing the idea that a company is not prioritizing or intentionally promoting terrorist content.”

In that vein, Thomas was probing whether YouTube was tipping the scales for pro-ISIS videos—or rice pilaf videos—in some sort of affirmative act. But it’s not clear whether tipping the scales would place the video site beyond its statutory protections under Section 230, or beyond the company’s speech rights under the First Amendment.

Algorithms, ubiquitous on the internet, cannot be neutral. They are the products of human decision making, business priorities, and editorial discretion. The justices stumbled through their questions about algorithmic neutrality for a reason—it doesn’t exist.