Everyone is mad at tech platforms. Conservatives attack them for heavy-handed censorship. Liberals say they’re too lax on hate speech. Both sides agree they have too much unchecked power.
Balancing free expression against harmful and false content is an age-old problem that can seem impossible to solve. But online, at least, there’s a lot that sites can do to fix it, says Susan Benesch, a faculty associate of Harvard University’s Berkman Klein Center for Internet and Society who studies dangerous speech online and off. Indeed, decades of experience in web design have already taught many sites how to discourage incivility and promote reasoned debate.
“There is often the assumption in public discourse and in government policymaking and so forth that there are only two things you can do to respond to harmful speech online,” says Benesch. “One of those is to censor the speech, and the other is to punish the person who has said or distributed it.” Instead, she says, we could persuade people not to post the content in the first place, rank it lower in a feed, or even convince them to take it down and apologize for it themselves.
What does this look like in practice?
Change how you incentivize comments
In 1997, Mike Masnick founded an online technology blog called Techdirt to analyze the policy, technology, and legal changes affecting company innovation and growth. Early on, he decided that his blog would neither censor comments nor filter out anonymous commenters.
Then a troll used the comments section to describe in graphic detail how he was going to kill someone.
Masnick doesn’t recall exactly how he dealt with the troll, but rather than compromise his original vision, he added three small design tweaks to each comment to foster more civil discourse: a “funny” upvote button (a green “LOL” icon), an “insightful” upvote button (a green lightbulb), and a “report” button (a red flag) to mark abuse, trolling, or spam. (Grey buttons were added later to let users pay to promote their comments.)
The idea was, “Can we sort of nudge people in a better direction without being heavy-handed about it?” Masnick says. “Certainly at that point, there were a bunch of sites that had started having either an upvote option or an upvote and downvote option. We just felt that it wasn’t a very information-rich solution.”
In contrast, the funny and insightful buttons incentivized those specific types of comments. “It sort of gave an explicit indication that that’s what we’re looking for,” he says. “It also gave a way for the community to participate.”
When a comment passes a set number of votes for funny or insightful, a little icon pops up to honor the author’s contribution. When it reaches the threshold for reports, a line of text appears in its place: “This comment has been flagged by the community. Click here to show it.”
“The comments don’t disappear,” Masnick says. “Anyone can click and still see them. But it’s an indication that this is not the community’s viewpoint.”
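In code, the mechanic amounts to a couple of thresholds per comment. Here is a minimal Python sketch of that display rule; the specific thresholds, field names, and badge text are illustrative assumptions, not Techdirt’s actual values.

```python
# A minimal sketch of a Techdirt-style comment display rule.
# Thresholds and field names are illustrative assumptions, not Techdirt's real values.
from dataclasses import dataclass

FUNNY_THRESHOLD = 5       # hypothetical votes needed before the "funny" badge appears
INSIGHTFUL_THRESHOLD = 5  # hypothetical votes needed before the "insightful" badge appears
REPORT_THRESHOLD = 3      # hypothetical reports needed before the comment is collapsed

@dataclass
class Comment:
    text: str
    funny_votes: int = 0
    insightful_votes: int = 0
    reports: int = 0

def render(comment: Comment) -> str:
    """Return what the thread shows; flagged comments are collapsed, never deleted."""
    if comment.reports >= REPORT_THRESHOLD:
        # Readers can still click through to the hidden text.
        return "This comment has been flagged by the community. Click here to show it."
    badges = []
    if comment.funny_votes >= FUNNY_THRESHOLD:
        badges.append("[Funny]")
    if comment.insightful_votes >= INSIGHTFUL_THRESHOLD:
        badges.append("[Insightful]")
    return " ".join(badges + [comment.text])

print(render(Comment("Great breakdown of the ruling.", insightful_votes=7)))
print(render(Comment("You are all idiots.", reports=4)))
```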
The system isn’t perfect and it may not work for large sites—the blog averages 1.5 million visitors per month. But since its implementation, “we definitely felt like we saw an uptick in comments that were both insightful and funny,” Masnick says.
Encourage long comments, not Twitter-like bursts
Online publishing platform Medium, which hit 60 million readers a month in November 2016, wanted to encourage substantive engagement on the site. In 2015, it rolled out its “Responses” feature to let readers write as much as they wanted in reply to a post, says lead designer Peter Cho. Previously, the platform had only allowed comments on specific paragraphs or lines within a post.
Inspired by the idea of long-form letter correspondence, the feature treated comments as independent stories rather than as posts dependent on already published material. By observing user behavior, the design team confirmed that this approach encouraged longer and, in turn, more thoughtful prose.
They also tested two different prompts: “Write a response” and “Write a related story.” The latter generated lengthier posts and more engagement from readers, but the team felt the phrase didn’t spur direct discussion of ideas. In contrast, “Write a response” sparked thoughtful dialogue because commenters replied to the original story—so Medium went with that.
In another test, the design team tried unfurling a comment box at the bottom of a story or redirecting users to a full screen. The full screen encouraged greater depth, but unfurling a comment box made it easier to reference the original story. The team ultimately created a hybrid feature: When a user clicks the comments section to respond, the edit box by default stays confined within the bottom of the story, but there is also an option to “Go full screen.”
Medium also decided to let only registered users respond to stories. Additionally, an algorithm orders the responses to a story based on their relevance to an individual reader. Responses written by the story’s author or by the users that a reader follows are highlighted at the top of the comments section; responses written by less relevant users are hidden behind a “Show all responses” button.
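Medium hasn’t published the details of that ranking, but the behavior described above can be approximated with a simple relevance score, as in this hypothetical Python sketch; the scoring weights, field names, and cutoff are assumptions made for illustration.

```python
# A toy version of relevance-ordered responses; the scoring rule is an assumption,
# not Medium's actual algorithm.
from dataclasses import dataclass, field

@dataclass
class Response:
    author: str
    text: str

@dataclass
class Reader:
    username: str
    follows: set = field(default_factory=set)

def order_responses(responses, story_author, reader, visible=3):
    """Split responses into a highlighted list and the rest behind 'Show all responses'."""
    def relevance(response):
        if response.author == story_author:
            return 2   # the story's author is surfaced first
        if response.author in reader.follows:
            return 1   # writers the reader follows come next
        return 0       # everyone else is collapsed until the reader expands the list
    ranked = sorted(responses, key=relevance, reverse=True)
    return ranked[:visible], ranked[visible:]

highlighted, hidden = order_responses(
    [Response("alice", "Great point."), Response("bob", "I disagree."),
     Response("carol", "Here is my own take...")],
    story_author="carol",
    reader=Reader("dave", follows={"alice"}),
    visible=2,
)
print(highlighted, hidden)
```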
Make users sign a pledge to be civil
Parlio, a platform founded by Egyptian Revolution activist and former Google manager Wael Ghonim, used several relatively simple mechanisms to dissuade bad behavior.
First, it required users to sign a civility pledge before joining the platform. Unlike the long-winded, dense community guidelines of most tech platforms, which almost no one reads, Parlio’s pledge was clear, concise, and prominent. Users had to sign off on each rule, helping ensure they actually read them, Benesch says.
In addition, the platform implemented a hierarchy of participants: users could post by invitation only, though anyone could read the discussion threads. It also hosted Q&As that brought the audience more civil and stimulating debates, which helped reinforce community norms.
“Parlio built a small but devoted following, including thought leaders from media, academia and business,” chief strategy officer Emily Parker wrote in a Politico article published in January. “We hosted remarkably civil conversations about divisive issues like race, terrorism, refugees, sexism, and even Donald Trump’s candidacy for president.”
“I find that I’m using Parlio more because I can find a more reasoned engagement there than I do on Twitter,” wrote Max Boot, a senior fellow at the Council on Foreign Relations, in Commentary magazine in February 2016.
Parlio was acquired by Quora in March of the same year because its invite-only approach did not allow the platform to scale. It no longer accepts postings, and Quora said it bought the site for its talent. “Parlio was an acqui-hire for Quora,” the company said. Ghonim also wrote of the acquisition: “We look forward to apply what we’ve learnt at Parlio, and help build product experiences that make the Internet a better place!”
Benesch says other platforms could take cues from Parlio’s design, especially implementing a simple code of conduct that users are more likely to read.
Use clear feedback to reform bad behavior
In 2012, Riot Games, maker of one of the most popular computer games, League of Legends, hired Jeffrey Lin to head its social systems design team. At the time, the gaming industry believed that vitriolic language was an inseparable part of social computer games. Lin disagreed. Applying his background in cognitive neuroscience, he conducted a series of data-driven design experiments to reduce toxic behavior during gameplay.
In one experiment, Lin measured the impact of giving players who engaged in toxic behavior specific feedback. Previously, if a player received a suspension for making racist, homophobic, sexist, or harassing comments, they were given an error message during login with no specifics on why the punishment had occurred. Consequently, players often got angry and engaged in worse behavior once they returned to the game.
In response, Lin implemented “reformation cards” that told players exactly what they had said or done to earn their suspension and included evidence of the player engaging in that behavior. This time, if a player got angry and posted complaints about their reformation card on the community forum, other members of the community would reinforce the card with comments like, “You deserve every ban you got with language like that.” The team saw a 70% improvement in preventing repeat offenses from suspended players.
Lin also realized that the cards gave players delayed feedback, so he conducted another experiment to speed up the feedback cycle by giving players mechanisms to report or honor each other during a game. The reported phrases—those that other players objected to—were fed into a machine learning model, which learned over time which phrases were considered toxic in different languages and cultures. If a player used any of those flagged terms during a game, the system could give them feedback immediately after the game ended to discourage further use. After the system was implemented, League of Legends saw a 40% drop in the number of ranked-mode games—the most competitive mode, where wins and losses determine a team’s global ranking—that had instances of racism, homophobia, sexism, or extreme harassment.
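Lin described the pipeline only at a high level. Reduced to its simplest form, the post-game feedback step might look something like the Python sketch below, where a hard-coded phrase set stands in for the learned model’s output and the player names and message wording are invented for illustration.

```python
# A toy version of post-game toxicity feedback. The flagged-phrase set stands in for
# the output of the learned model; the phrases and message wording are illustrative.
FLAGGED_PHRASES = {"uninstall the game", "you are trash"}  # hypothetical model output

def post_game_feedback(chat_log):
    """Map each offending player to a feedback message delivered right after the game."""
    feedback = {}
    for player, messages in chat_log.items():
        hits = [m for m in messages if any(p in m.lower() for p in FLAGGED_PHRASES)]
        if hits:
            feedback[player] = (
                f"Other players reported language like {hits[0]!r} as toxic. "
                "Continued reports can lead to restrictions on your account."
            )
    return feedback

print(post_game_feedback({
    "PlayerA": ["nice play", "you are trash, uninstall the game"],
    "PlayerB": ["gg everyone"],
}))
```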
The machine learning algorithm isn’t perfect, Lin said in his 2015 Game Developer Conference talk. Once, it flagged a player who was using self-deprecating language in the third person. Riot later apologized to the player and used the case to improve its system.
“When you’re dealing with design issues in online communities and games, you’re dealing with a pretty new space. Mistakes are going to happen,” Lin said. “But when you have the players’ trust, you can survive these and come out stronger.”
Hold steadfast to freedom of expression
Both Masnick and Benesch say that with social media proliferating, thoughtful tactics to reduce toxic screaming matches are important. At the same time, they worry about society’s growing complacency about compromising free speech.
“As people are shocked and repulsed and even frightened by some of the content that they see online, there seems to be an increasing willingness to see lots and lots of speech suppressed,” Benesch says.
Those who are suppressed can ultimately create their own platforms in darker corners of the internet, where discussions such as the organizing of hate crimes or terrorist attacks are no less dangerous.
“We’re in a time where we are seeing more bad behavior and we’re seeing bad behavior go more mainstream than it has in the past,” says Masnick. “But I think that at the same time we do have more tools and more things that we can do.”