In the hours following the deadly Jan. 6 attack on the US Capitol building, Twitter and Facebook both temporarily blocked US president Donald Trump from their sites, out of concern that the “inciter-in-chief” would spark more violence with his continued questioning of Joe Biden’s election victory.
Twitter also started a chain reaction. By the time it removed Trump’s access to his digital megaphone, millions of the president’s supporters had set up accounts on Parler, an alternative social media platform known for being friendly to conspiracy theorists and right-wing extremists. Twitter’s outright ban of Trump made Parler a household name, as did revelations that many of the rioters had discussed the storming of the Capitol on the lesser-known site. That forced other Silicon Valley companies to distance themselves from Parler and block their customers from accessing it, whether to protect their corporate reputations, for the sake of public safety, or both.
Google and Apple saw fit to kick Parler out of their app stores after first warning Parler’s leadership to clean up the site and block calls for violence. The next serious blow came from Amazon, which kicked Parler off its web servers, leaving it without anywhere to live online. Parler, in response, announced on Jan. 11 that it was suing Amazon Web Services for violating antitrust laws, while Parler users joked about responding to Amazon with explosives rather than a legal battle.
AWS says there is no merit to the claims Parler made in its lawsuit. “We made our concerns known to Parler over a number of weeks,” a spokesperson told Quartz, “and during that time we saw a significant increase in this type of dangerous content, not a decrease, which led to our suspension of their services Sunday evening.”
Does it even matter how web companies label themselves?
All of the decisions that led to Parler going dark (at least while it rebuilds, according to its CEO) have raised complex questions about forums for free speech and the best ways for companies and law enforcement to police online discourse. But at least one question has been answered already: whether there’s such a thing on the internet as being merely a platform or a piece of infrastructure.
Whether you are, like Facebook and Twitter, creating the social media tools for people to publish hateful messages, or, like Google and Apple, allowing an app that contains those tools to exist on your company’s devices, or, like Amazon, Microsoft Azure, or Google Cloud Platform, supplying the invisible backbone for the companies behind the tools, you cannot claim neutrality.
These firms have demonstrated that they can and will draw lines around who is allowed to be one of their clients, something that’s completely within their rights to do as private firms.
Whether or not you agree with Twitter’s decision to ban Trump, the company undeniably contradicted the argument it and other social media sites have made in the past when pressured to delete tweets from hate groups or from the president’s own account, as when Trump encouraged violence against Black Lives Matter protesters or spread misinformation. That is, they see themselves as platforms, not publishers, and so would not make editorial decisions about content that wasn’t explicitly illegal, though they sometimes provide “context” with fact-checking labels or other filters. Their actions in the wake of the Jan. 6 insurrection suggest they acknowledge they are more than what they have claimed to be.
Taking a neutral public utility stance was merely “a ruse to protect their own bottom lines,” says Britt Paris, an assistant professor of library and information science at Rutgers University. “They’re not going to let anything fly that is overly detrimental to their profit model.”
Researchers Suzanne van Geuns and Corinne Cath-Speth picked apart the “neutral” platform-or-pipeline issue in an essay for the Brookings Institution in August, months before Trump and his supporters took to social media sites to make the baseless claims that led to the breach at the Capitol. At that time, the pair examined the “we’re just infrastructure” stance as it related to Cloudflare, a company that both speeds the movement of data around the net and protects sites from DDoS attacks.
In 2017, Cloudflare buckled to sustained public pressure and kicked the neo-Nazi site The Daily Stormer off its servers, though at the time CEO Matthew Prince said the company was reluctant to make value judgments about the websites that are its customers. Two years later, Cloudflare cut ties with the site 8chan, calling it “a cesspool of hate,” after learning that the gunman behind the 2019 El Paso massacre had posted his manifesto to 8chan before murdering 20 people. Again, Prince opined about his discomfort with setting that precedent, because Cloudflare was merely a conduit for the flow of data. He wrote that he didn’t want a world where someone like him could wake up one day and kick a site off the internet, removing it from a global town square on a whim.
However, last summer, during the Black Lives Matter protests sparked by George Floyd’s murder, Cloudflare “jumped at the chance to publicly discuss its governance choices,” Van Geuns and Cath-Speth report. “Writing that the protests can be catalysts for change, but only if they can be heard, Cloudflare proudly claim[ed] to make this possible by offering activists protection from cyberattacks.”
Obviously, the company can’t have it both ways. It can’t position itself as an apolitical service when pressed to control nasty groups spouting conspiracy theories, while trumpeting itself as an ally to righteous causes when it sees an opportunity. That double standard endangers the future of information-sharing for all groups; what happens if the next Cloudflare wants to suppress BLM organizing?
But even putting that question aside, Cloudflare, Amazon, web hosts like GoDaddy, and other fairly invisible infrastructure companies simply can’t pretend to be working for the public good when they are privately owned corporations serving private customers (the websites) rather than web users, Van Geuns and Cath-Speth argue.
“Actual public utility companies, like water providers, are almost always publicly owned and run,” they write. “Even when they are private businesses, electric companies are subject to civil oversight by public utilities commissions.” Without that, it’s impossible for the public to hold them accountable.
Likewise, Paris says of the current crisis that it’s “overall beneficial for corporations like Amazon to attempt to mitigate the spread of messages encouraging violence and death in the name of white supremacy.” However, “there are larger questions to be asked about how it has become the responsibility of corporations to control the public sphere, and what we might do to ameliorate this situation in ways that meaningfully benefit the public interest.”
One possibility, she tells Quartz, is to look to public utilities run by cooperatives. Creating a similar ownership model for internet infrastructure companies would give people who are members of the co-ops “direct say in the governance and roll out or deployment of these public utility infrastructures,” says Paris. “And then they [would] get monetary kickbacks for any money that the company makes.” That is just one plausible solution among many discussed by scholars and research groups over the past several years, she emphasizes.
It’s past time to act
It’s encouraging to see all of the ways that companies are upholding civic values following the attack on the Capitol. Cumulus Media warned its right-wing radio talk show hosts not to repeat false claims about a stolen election or risk termination. A steady trickle of large companies has announced they will no longer make campaign contributions to politicians who supported the president’s efforts to overturn legitimate election results. Airbnb said it will block reservations for people who may be booking stays with plans to join additional violent attacks.
It’s also heartening to see the way regular consumers are paying close attention to the entire network of businesses connected to tech giants like Apple, Google, Facebook, and Twitter, or lesser-known sites like Parler. The leaders of these companies know that people are measuring their actions not only as publishers or platforms (take your pick) but also as powerful partners and enablers of problematic clientele.
Still, nearly 40 years after the internet was created, we don’t have great answers regarding how and when it’s appropriate for private firms to use their power to control the flow of information and ideas. And the events of last week have demonstrated how critical it is that the public consider additional regulation and oversight of the internet’s infrastructure, push for transparency about the operating decisions made by the companies behind consumer sites, or create real public utilities, as Paris and others suggest. If maintaining the world’s digital town square is seen as a public service, we need to treat it like one in every sense.
And we should move fast. Additional attacks are being planned online right now, as Paris notes. The de-platforming we’ve seen so far has been useful, but it addresses neither the root causes of extremist views nor the rules and norms that have allowed internet companies to enrich themselves for years while that extremism proliferated on their sites and servers.
This story has been updated to include a statement from an AWS spokesperson.