The US Supreme Court had its first opportunity to weigh in on Section 230 of the Communications Decency Act, the pivotal internet law that shields websites from liability for user-generated content they host. But the justices sidestepped the issue, ruling in favor of Twitter and Google in two bellwether cases without making any judgment about the scope of Section 230's protections.
Both cases, Twitter v. Taamneh and Gonzalez v. Google, concerned whether Section 230 shielded social media companies from liability under federal anti-terrorism laws.
In the Twitter case, the Court heard arguments from a plaintiff who claimed the platform should be held liable for aiding and abetting the Islamic State in the killing of a Jordanian citizen in Istanbul, Turkey. In a unanimous decision, the Court found that the plaintiff had not shown that Twitter met the criteria for aiding and abetting terrorists, and ruled in the company's favor. Justice Clarence Thomas, who in recent years has become perhaps the Court's most vocal critic of Section 230, wrote the opinion.
The other case was brought by the parents of Nohemi Gonzalez, an American exchange student killed in Islamic State bombings in Paris. They sued Google for algorithmically promoting Islamic State recruitment videos on its YouTube platform.
The justices decided that their ruling in Twitter obviated the need to rule on the Google case, or to weigh in on the merits of Section 230. "We therefore decline to address the application of §230 to a complaint that appears to state little, if any, plausible claim for relief," the justices wrote in a separate opinion dismissing the case.
A close call for Section 230
Section 230 has largely facilitated the rise of the modern internet. It protects not only large social media companies from a deluge of frivolous defamation lawsuits, but also news websites that host user comments on articles, and review services like Yelp when you post a scathing restaurant review.
In oral arguments in the Section 230 cases in February, the justices repeatedly struggled to understand the claims that the plaintiffs were trying to make. At the heart of the discussion was whether social media companies are responsible for content that their proprietary algorithms recommend to users. In other words, do platforms lose their Section 230 protections when they do more than merely host potentially harmful content, and instead algorithmically sort it into users' feeds, or even prioritize it?
In the Twitter case, the court “correctly recognized...that the platforms’ alleged conduct was too attenuated and passive to rise to the level of aiding and abetting,” said Anna Diakun, staff attorney at the Knight First Amendment Institute at Columbia University.
But “the Court will eventually have to answer some important questions that it avoided in today’s opinions. Questions about the scope of platforms’ immunity under Section 230 are consequential and will certainly come up soon in other cases,” she said.
An “unambiguous victory for online speech”
Jess Miers of the tech industry group Chamber of Progress called the decision to leave 230 untouched an “unambiguous victory for online speech and content moderation.”
Patrick Toomey of the American Civil Liberties Union was similarly pleased, saying, “Today’s decisions should be commended for recognizing that the rules we apply to the internet should foster free expression, not suppress it.”
Chris Marchese of NetChoice, a tech industry group, said the court's actions amounted to a "huge win for free speech on the internet," noting that while content moderation efforts on social media sites are imperfect, they are a "vital...tool in keeping users safe and the internet functioning." NetChoice has brought high-profile lawsuits against state social media laws in Florida and Texas, which prohibit social media companies from removing users' posts based on political expression, a matter widely expected to come before the Supreme Court.