But Parler too was axed, as Amazon pulled its hosting services and Google and Apple removed it from their stores. The social network, which has since sued Amazon, is effectively shut down until it can secure a new host or force Amazon to restore its services.
These actions may seem like legitimate attempts by platforms to tackle Trump’s violence-fuelling rhetoric. The reality, however, is they will do little to truly disengage his supporters or deal with issues of violence and hate speech.
Trump received 74,223,744 votes (46.9%) in the election, so the magnitude of his following is clear. And since being banned from Twitter, he hasn’t shown any intention of backing down.
In his first appearance since the Capitol attack, Trump described the impeachment process as “a continuation of the greatest witch hunt in the history of politics”.
With more than 47,000 original tweets from Trump’s personal Twitter account (@realdonaldtrump) since 2009, one could argue he used the platform inordinately. There’s much speculation about what he might do now.
Tweeting from the official account of the US president, @POTUS, Trump said he might consider building his own platform. Twitter promptly removed the tweet. He also wrote: “We will not be SILENCED!”.
This threat may carry some weight, as Trump does have avenues to control various forms of media. In November, Axios reported he was considering launching his own right-wing media venture.
For his followers, the internet remains a “natural hunting ground” where they can continue gaining support through spreading racist and hateful sentiment.
The internet is also notoriously hard to police—it has no real borders, and features such as encryption enable anonymity. Laws differ from state to state and nation to nation; an act deemed illegal in one locale may be legal elsewhere.
It’s no surprise groups including fascists, neo-Nazis, anti-Semites and white supremacists were early and eager adopters of the internet. Back in 1998, former Ku Klux Klan Grand Wizard David Duke wrote online:
I believe that the internet will begin a chain reaction of racial enlightenment that will shake the world by the speed of its intellectual conquest.
As far as efforts to quash such extremism go, they’re usually too little, too late.
Take Stormfront, a neo-Nazi platform described as the web’s first major racial hate site. It was set up in 1995 by a former Klan state leader, and only removed from the open web 22 years later in 2017.
The psychology of hate
Banning Trump from social media won’t necessarily silence him or his supporters. Esteemed British psychiatrist and broadcaster Raj Persaud sums it up well: “narcissists do not respond well to social exclusion”.
Others have highlighted the many options still available for Trump fans to congregate since the departure of Parler, which was used to communicate plans ahead of the siege at the Capitol. Gab is one platform many Trump supporters have flocked to.
It’s important to remember hate speech, racism, and violence predate the internet. Those who are predisposed to these ideologies will find a way to connect with others like them.
And censorship likely won’t change their beliefs, since extremist ideologies and conspiracies tend to be heavily spurred on by confirmation bias. This is when people interpret information in a way that reaffirms their existing beliefs.
When Twitter took action to limit QAnon content last year, some followers took this as confirmation of the conspiracy, which claims Satan-worshiping elites from within government, business, and media are running a “deep state” against Trump.
Social media and white supremacy: a love story
The promotion of violence and hate speech on platforms isn’t new, nor is it restricted to relatively fringe sites such as Parler.
Queensland University of Technology Digital Media lecturer Ariadna Matamoros-Fernández describes online hate speech as “platformed racism”. This framing is critical, especially in the case of Trump and his followers.
It recognizes social media has various algorithmic features which allow for the proliferation of racist content. It also captures the governance structures that tend to favor “free speech” over the safety of vulnerable communities online.
For instance, Matamoros-Fernández’s research found that in Australia, platforms such as Facebook “favored the offenders over Indigenous people” by tending to rule in favor of free speech.
Other research has found Indigenous social media users regularly witness and experience racism and sexism online. My own research has also revealed social media helps proliferate hate speech, including racism and other forms of violence.
On this front, tech companies are unlikely to take action on the scale required, since controversy is good for business. Simply put, platforms have no strong incentive to tackle hate speech and racism, at least not until inaction starts to hurt their profits.
After Facebook indefinitely banned Trump, its market value reportedly dropped by US$47.6 billion as of Wednesday, while Twitter’s dropped by US$3.5 billion.
The need for a paradigm shift
When it comes to imagining a future with less hate, racism, and violence, a key mistake is looking for solutions within the existing structure.
Today, online media is an integral part of the structure that governs society. So we look to it to solve our problems.
But banning Trump won’t silence him or the ideologies he peddles. It will not suppress hate speech or even reduce the capacity of individuals to incite violence.
Trump’s presidency will end in the coming days, but extremist groups and the broader movement they belong to will remain, both online and off.