We’re the reason we can’t have nice things on the internet

We’re all responsible for contributing to a toxic online culture.
Image: Fanqiao Wang for Quartz

In a recent New York Times piece, Farhad Manjoo laments the increasingly shrill tone of political discourse. “If you’ve logged on to Twitter and Facebook in the waning weeks of 2015, you’ve surely noticed that the internet now seems to be on constant boil,” he writes.

But has the online world really entered a phase of permanent froth?

Vitriolic content may be par for the course in certain political circles. But not every story of online sparring in 2015 ended badly. On Twitter, a Jewish man befriended a member of the notoriously rancorous Westboro Baptist Church because, in his experience, “relating to hateful people on a human level” is “the best way to deal with them.” Using the same platform, a digital activist reached out to an Islamic State sympathizer and, through pointed, thoughtful engagement, convinced him to think differently. Feminist writer Lindy West engaged with one of her most mean-spirited online antagonists, and in the process came to see his humanity as clearly as he came to see hers.

These are inspiring stories. On the surface, they seem to provide heartening counterexamples to Manjoo’s claims. When placed in context, however, they prove to be the exception rather than the rule. The majority of stories about online harassment resist happy endings. Their conclusions tend to be unsatisfying or upsetting, if they end at all.

Women in the gaming and tech industries–women generally, though queer women, trans women, women of color and disabled women are particularly at risk–are ruthlessly targeted on a variety of platforms. School campuses are threatened with yet another shooting spree. Internet users face identity-based harassment and libel and find themselves on the receiving end of unconscionably racist and vitriolic content. The unluckiest of the bunch are subjected to unwarranted police raids.

Sometimes the police identify the most extreme online harassers. (Several of the above cases resulted in arrests.) And sometimes online antagonists are open about their identities. But very frequently, the people subjected to online abuse don’t know who is responsible, or even why they’re being targeted.

Maybe they’re not being specifically targeted at all. Maybe the abusive behavior is more diffuse, directed at women or people of color generally. Maybe the behavior is targeted but doesn’t meet the legal threshold of harassment. As I braced myself for the response to the publication of my book on trolls, for example, I met with the chief of campus police to see what preemptive security measures we could take and who to call if something happened. I was told that an email containing the threat “I am going to rape you” was legally actionable—potentially, assuming that the person or IP address could be traced. An email that said “I hope you get raped” was not. Obviously both were, you know, bad. But as the chief explained, making a specific threat is different–legally speaking–than being vaguely threatening.

The fact that there are so many ways to be antagonized online, and so many different kinds of antagonizers, makes it difficult if not outright inadvisable to put forward a universally applicable set of best practices in response to online harassment. What might be appropriate in one case (naming and shaming; counter-antagonizing; refusing to engage; minimizing impact; maximizing impact) might be counter-productive, ineffective, or dangerous in another.

In some cases, the preferred response is logistically impossible. We might, for example, wish that we could relate to a harasser on a human level. But what if there’s no hint of who the harasser might be, or what shred of humanity we might try to speak to? Where would we even start?

This is not to say that it’s impossible to take action against online antagonism. It’s critical to talk about what can be done to minimize or mitigate its impact. To this point, Anita Sarkeesian, Renee Bracey Sherman, and Jaclyn Friedman recently teamed up to create an online safety guide aimed at addressing and hopefully preventing the most damaging and persistent forms of harassment. These are necessary–if depressing–conversations to have.

That said, focusing only on individual instances of bad online behaviors, and only on the guilty parties, risks framing the issue of online harassment in terms of a “them” who harasses and an “us” who does not.

On the surface, the distinction between “us” and “them” is apparent. Certain behaviors are just gross; certain people are just mean. If only we could figure out how to deal with those specific individuals, and their awful behavior.

The issue is that they aren’t the only problem. Moreover, they are able to thrive in so many contexts, from politics to sports to entertainment, to say nothing of the online bullying and harassment of everyday people. As much as we might condemn these behaviors, online instigators have certainly gone forth and multiplied. If it really is the case that online harassers are fundamentally different than the mainstream “us,” then why are antagonistic behaviors so common, online and off? Why do we sometimes find ourselves slipping into more subtle versions of precisely the behaviors we condemn in them?

To use a gardening metaphor, it’s not just the specific weeds that are at issue here. It’s also the soil that nourishes those weeds. That soil nourishes everyone, as Amanda Hess emphasizes in her discussion of the gendered and raced–in other words, embodied, offline–history of internet culture.

In order to change the ugly tenor of online conversations, we need to think collectively about how we might make the soil less hospitable to invasive species. This process begins with the acknowledgment that none of us are above self-reflection—and that we all have a part to play in improving the health of the garden. With that in mind, here are the steps we need to take to tackle our hate-filled online culture.

1. Rethink the “trolling” umbrella

I began researching and writing about subcultural trolls–those who self-identify as such and who partake in highly stylized language and behaviors–in 2008. In early 2015, MIT Press published my book on the subject. Before the book was even at the proofreading stage, I had grown wary of the term “trolling,” at least when used as a vague behavioral catch-all.

By then, “trolling” had taken on more connotations and meanings than could reasonably be contained by a single term–everything from the kinds of ritualized trolling behaviors I’d been researching to mean tweets to holding an opinion someone else disagrees with to taunting the police to being a horrible roommate to outright harassment. The term had become so unwieldy that it was essentially meaningless. I wouldn’t know how to respond when journalists asked (and they always asked) what trolling was and why people did it.

But as I explain in this article, the imprecision of the term is the least of its sins. For one thing, the term trolling provides an all-too-convenient rhetorical out for aggressors: “I was just trolling, I didn’t really mean those racist or misogynist things I said.” In other words, “Stop being a baby or forcing me to be held responsible for my own actions, god.”

As it’s frequently used, “trolling” thus implies that participants are somehow playing, and that the antagonistic interaction is a game–one with rules dictated by the aggressor, and which only the aggressor can win. Both figuratively and literally, the aggressor is always the subject of the sentence. Everyone else is their object.

Furthermore, the implication that trolling is playful, disruptive for disruption’s sake, or fundamentally trivial (an attitude reflected in various year-end compilations of the “best trolls” of 2015) minimizes the experiences of those caught in the crosshairs of online harassers.

This problem is most conspicuous in the wake of the unmitigated shitshow that was GamerGate. Somehow, a year’s worth of cacophonous, horrific, violently misogynist attacks against women in the games and technology industries was “trolling,” a term also applied to silly comments posted in response to news articles. As Anita Sarkeesian, one of GamerGate’s most high-profile targets, notes, this framing obscures what was actually happening: toxic, abusive, violent misogyny. Harassment. Hell for the women involved.

We need to stop framing online harassment with the aggressors’ chosen terms, deferring to how aggressors prefer to be described and understood. We need to describe behaviors based on the impact they have. Highlight harm, not intent. Whistleblow, don’t whitewash. So: if a person is engaging in violently misogynist behaviors, then call it violent misogyny. I don’t care if the person responsible claims they were “just trolling.” If a person is so damn worried about being labeled a violent misogynist, then how about not engaging in violently misogynist behaviors, hmm?

This seemingly small rhetorical shift won’t undo harm. Whatever you call these behaviors, they can be devastating. But rethinking the trolling framework will help validate the experiences of people who are targeted by online harassers, preempt bogus victim-blaming logic, and empower individuals to tell their own stories–three important steps toward rewriting the rules of online discourse.

2. Stop incentivizing problematic online behavior

There is a symbiotic relationship between subcultural trolls–here I am using that term very specifically, referring to past research–and mainstream media outlets. During the “golden age” of subcultural trolling, which lasted from about 2008 to 2011, self-identifying trolls on and around 4chan’s /b/ board benefited from sensationalist, emotionally exploitative media coverage. Meanwhile, media outlets benefited from subcultural trolls’ sensationalist, emotionally exploitative behaviors. They were, in so many ways, perfect bedfellows.

Although subcultural trolling has since undergone a profound shift, the same basic argument holds. The primary reason that so many people engage in outrageous, exploitative, aggressive and damaging behaviors on the internet is that outrageous, exploitative, aggressive and damaging behaviors on the internet get the most attention. Attention means amplification. Which means more eyeballs glued to a story–and to that person’s hatefulness and delusions.

People engage in atrocious behavior, in other words, because it’s worth their time and energy to do so. It works. The reanimated corpse of P.T. Barnum that we refer to as Donald Trump knows, for example, that when he says something ugly and racist about Muslims (again), all anyone will talk about is the ugly, racist thing Donald Trump said about Muslims (again). Mass shooters know that before the body count is even confirmed, all anyone will talk about is every little goddamned thing they ever posted to social media, and that for the next week, month, year, they will be the subject of endless speculation and attention. A star is born, thus begetting future stars.

It should go without saying that thoughtless amplification of incendiary content can have a devastating impact on the people affected. It should also go without saying that journalists have a job to do; they can’t not cover the news, even when the subject is, in a word, disgusting. There is an inherent tension between these two principles–a tension this article also navigates. While there are no perfect or easy solutions, there is a difference between engaging with the facts of a story and sensationalizing a story, flattening its subjects into fetishized objects, and essentially converting bad (or tragic, or just plain gross) news into an opportunity to sell more ads.

Concerns about amplification shouldn’t be restricted to media professionals. Take online disaster humor, which is often created and spread by individual internet users, and then further amplified by journalists covering the story, all but ensuring the jokes’ long and healthy half-life.

“Funny” memes in response to shooting sprees might feel like harmless jokes to participants. But they can be profoundly re-traumatizing for survivors and victims’ friends and family. That’s true even if participants have no intentions of harming anyone. Online content, after all, is always just a hotlink away from reaching far more people than planned. The Facebook page of one of the people who died. Someone’s mother’s Twitter feed.

Ryan Milner, my Between Play and Hate co-author, makes a similar point in his exploration of racist and sexist expression on 4chan and Reddit (a point that spurred me to think more carefully about the work I was doing, and which helped shade my book’s introduction). “Even if it’s done in the service of critical assessment,” he writes, “reproducing [harmful] discourses continues their circulation, and therefore may continue to normalize their antagonisms and marginalizations.”

Every retweet, comment, like, and share extends the life of a given story. So we need to pay careful attention to what we share and spread online.

Even when we mean well, even when we specify that retweets ≠ endorsements, our actions still have consequences. As Milner explains, the same memetic logics that undergird entertaining content and politically engaged content also help spread destructive content, including rumors and false narratives. This is the potential dark side of Henry Jenkins, Sam Ford, and Josh Green’s observation that in the digital age, “if it doesn’t spread, it’s dead.” Research even suggests that frantic, hyper-saturated media attention encourages copycat crime.

So, before we do or say anything online, before we retweet unconfirmed details about the latest gun-related tragedy, before we post a shrill, sensationalist article to Facebook, before we furiously peck out our own hot take, we have to ask ourselves: Does this have the potential to make someone’s day worse? Someone’s life worse? If the answer is maybe, back away from the computer. Go outside and look at a tree. And remind yourself: Everyone you encounter on the internet is a person.

3. Embrace a robust, inclusive approach to free speech

People on the internet love talking about the meaning, limits, and future of free speech online. Sometimes these conversations engage with the right to freedom of speech in the constitutional sense. This limits the government’s ability to restrict speech by passing censorship laws or arresting people for what they say—for posting threatening lyrics on an ex’s Facebook page, for example.

But often, online conversations focus on free speech in the more colloquial sense, deployed on the internet as shorthand for “I should be able to do and say whatever I want, and anyone who challenges me or attempts to moderate the comments I post to privately-owned platforms is infringing on my civil rights.” Or put even more simply, “You’re not the boss of me, don’t tell me what to do.”

Because these arguments tend to be deployed by antagonizers or those sympathetic toward antagonizers, the resulting conversations almost always center on the aggressor’s speech, the aggressor’s feelings, and the perceived need to protect their self-proclaimed right to torment, harass, and antagonize strangers on the internet. (And you’d better watch out, because if you try to take that right away from them–for example, when Reddit banned a very small number of outrageously offensive subreddits–the aggressors might throw a tough baby temper tantrum).

Many argue that online hate speech and harassment are the price we pay for democracy, citing the old adage, “I may not like what you have to say, but I’ll defend with my life (and by that I mean post a rant on Facebook) your right to say it.” This is a fair-minded idea on paper. But in practice, this tenet obscures the fact that the individuals whose right to freedom of speech is most vociferously defended on the internet tend to be straight white men, a demographic whose speech has, historically, needed the least protecting.

The imbalance doesn’t stop there. Knee-jerk defense of online antagonists obscures the fact that by allowing the loudest, most bigoted instigators to have the floor, people who otherwise might have contributed to a conversation are often intimidated into silence or driven away completely. So ironically, the result is an overall loss of speech, all to keep the worst offenders happy.

A much better and more inclusive approach would be to embrace an understanding of free speech that values and seeks to facilitate the greatest amount of speech from the most diverse group of people. The goal should be free speech for the rest of us, in other words, not just the hateful few.

Just as using a more accurate term than “trolling” won’t reverse the damage of antagonistic online behavior, reframing discussions of free speech won’t solve the problem of harassment in itself. What the shift can do, however, is help foreground the importance of diverse public voices, and serve as a reminder that mean-spirited, violent bigots aren’t the only people worth listening to. In fact, mean-spirited, violent bigots are rarely worth listening to. It’s about time we shift our focus to the people who are.

4. Demand that platforms take a side

Online platforms like Facebook, Twitter, Reddit, Medium and Tumblr aren’t–and shouldn’t be–immune to introspection. They are, after all, made up of individual humans, some of whom are able to make decisions about what kinds of speech and behavior will be tolerated on their sites.

The people who work for platforms need to ask themselves, honestly and unflinchingly, whose side they are on–and furthermore, what approach to free speech they’re choosing to embrace.

The weak approach, which privileges the speech of antagonizers and creates a hostile space for historically underrepresented populations, is, in an ironic twist, also a short-sighted business strategy (ahem). The robust approach privileges the speech of individuals who don’t antagonize, creates an inclusive space in which diverse groups of people feel comfortable participating, and as a result is better for business and objectively just better.

Platforms that decide to privilege the users who actually have something worthwhile to contribute, and as a consequence attract new and diverse users instead of repelling them, have a number of moderation options at their disposal. These range from manually deleting offensive content and filtering flagged words to adopting new discussion platforms or shuttering comments sections entirely.
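To make one of those options concrete, here is a minimal sketch of what “word filtering” can look like in its simplest form: a blocklist plus a human-review step. Everything in it–the placeholder terms, the function name, the hold-for-review policy–is a hypothetical illustration, not a description of any particular platform’s tooling.

```python
# A minimal, hypothetical sketch of "word filtering," the simplest of the
# moderation options named above: flag comments that contain blocklisted
# terms so human moderators can review them before publication.
# The blocklist, function name, and review policy are illustrative
# assumptions, not any real platform's implementation.
import re

# Placeholder terms; real lists are curated and updated by moderators.
BLOCKLIST = {"exampleslur", "otherslur"}

def needs_review(comment: str) -> bool:
    """Return True if the comment contains any blocklisted term."""
    words = re.findall(r"[a-z']+", comment.lower())
    return any(word in BLOCKLIST for word in words)

# Route flagged comments to a moderation queue instead of publishing them.
for comment in ["a perfectly fine remark", "a remark containing exampleslur"]:
    if needs_review(comment):
        print("held for moderator review:", comment)
    else:
        print("published:", comment)
```

Even this toy version shows why the options above run the gamut from automated filtering to human moderation to closing comments altogether: exact-match filters are easy to evade and prone to false positives, so no single tool does the work on its own.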

The most important step is for the platform’s owners to declare what they value most—and even more importantly, who they value most.

5. Remember that you don’t get to dictate the feelings of others

One of the many problems with the common online imperative “don’t feed the trolls” is that it perpetuates victim-blaming logic. If only you didn’t do X, the argument goes–and sometimes “X” is simply “being a woman on the internet”–then maybe that person wouldn’t have gone after you. Keep that in mind, next time, and maybe you’ll have a better outcome.

This normalizes the idea that there will be a next time. What’s worse, the underlying argument–that if you don’t react to an online harasser, then they can’t upset you–is wrong and unfair on several levels.

First, the target of symbolic or embodied violence is never the responsible party. Only the responsible party is the responsible party.

Moreover, the blithe assertion that the target of online nastiness really should have known better, that the antagonizer is just some jerk on the internet and that’s what jerks on the internet do, immediately minimizes and redirects the target’s emotional reaction. The result is a patronizing imperative: “Don’t feel this; feel that. Don’t do this; do that instead.”

It is especially important that we avoid falling into this logic when we’re the source of another person’s distress. And this becomes absolutely crucial when unequal power dynamics factor into the discussion.

Now: I fully concede that everyone is capable of offending others, whether deliberately or accidentally. But as Sarah Banet-Weiser and Kate Miltner discuss in a forthcoming Feminist Media Studies article about popular misogyny, women online face a disproportionate deluge of sexism, vitriol, and outright harassment from men specifically.

Some male readers will rankle at such a suggestion, on the grounds that–say it with me–NOT ALL MEN. And I agree. A solid percentage of you–probably a majority–would never sink to, say, GamerGate levels of harassment.

But just because you don’t make rape threats doesn’t mean your work is done. You don’t get a cookie, a parade, or a star on the Hollywood Walk of Fame for not actively terrorizing women on the internet. Sexism–like racism–can be subtle, so subtle that you may not even realize you’re doing it. But trust me, women notice. We don’t have the luxury not to.

To that point, men online need to avoid perpetuating victim-blaming logic and the policing of emotional boundaries, two of the primary components of sexist online toxicity. Sometimes, all it takes to contribute to this toxicity is to refuse responsibility for your actions (“Oh come on, I was just joking”). Sometimes all it takes is to deny someone’s reaction (“You’re blowing this way out of proportion”), or to suggest that someone shouldn’t take things so seriously, since it’s just the internet (“Well then, log off if it’s so upsetting”). These responses shift the focus of the conversation to how someone responds to upsetting behavior, away from the fact that the behavior is upsetting in the first place.

There is a simple solution to this problem: just don’t be an asshole. It’s pretty easy to do! If, for example, something you do or say hurts another person–even if you thought it was a harmless joke, even if you thought you were being silly or charming–the appropriate response is “I’m sorry,” not some tortuous argument about why that person shouldn’t have reacted that way in the first place. You may think that person is overreacting, but just because you would have responded differently doesn’t make that the universally appropriate response. You do not know where someone else is coming from, and more importantly, are not the arbiter of their emotional life.

What you are, however–and I am still speaking directly to the men in the audience–is very important to the conversation, and very important to the solution. Ultimately, that’s what this is about: making responsible and respectful choices. Thinking outside of yourself, and doing what you can to make things better and safer for the women you care about and for everyone else. That’s all we ask. We need you to not let us down.

6. The way forward

When it comes to online aggression, there is something reassuring about the narrative of us versus them. We’re fine. We’re not the problem. It’s them. They’re the reason we can’t have nice things online.

But the real reason we can’t have nice things online–or anywhere–is that there is less distinction between us and them than many of us would care to admit.

We might think our behavior is beyond reproach. And for some of us, maybe it is. But the soil is not beyond reproach. The soil is toxic. Racist. Violent. Misogynist. Dangerous. It is up to us–all of us–to do what we can to nurture that soil, and in the process nurture our sense of justice, compassion, fairness, and care. We are all responsible for what the internet becomes. We are all responsible for how we act, what we say, the content we choose to amplify, and the content we choose to ignore.

Heading into this new year, let’s try something different. Let’s try to grow some flowers. We can start with this deceptively simple imperative: when in doubt, be humane.

Special thanks to Sarah Todd for editing, Kate Miltner for encouraging, Lisa Silvestri for inspiring, and Ryan Milner for fortifying (and various combinations therein).