Twitter’s new solution for hate speech is to avoid it, instead of actually removing it

Twitter’s plan for a hate speech “fix” has a lot of holes.
Image: AP Photo/Jeff Chiu

Twitter is still shying away from tackling abuse head-on.

During the company’s October earnings call, Twitter promised to unveil stricter measures against harassment come November. Today, Twitter announced it is making it easier for users to hide content they do not wish to see, via an improved version of the existing “mute” function.

To date, “mute” has given users the power to hide tweets from select accounts, but now chosen keywords, phrases, or hashtags can be filtered out too. “We’re expanding mute to where people need it the most: in notifications,” the company wrote in a blog post. Users can also choose to opt out of alerts for conversations they have been added to.

The main issue with this solution is that the onus remains on users to protect themselves. Although users should now be able to shield themselves from being subjected to common sexist and racist slurs, trolls can still attack with other hateful language unless their accounts are specifically muted.

For example, while filtering out name-calling and specific slurs is possible, broad terms like “Jew” or “black” may slip through the cracks, because users probably won’t want to filter out such general words. Abusers can also invent new code words or symbols, as when neo-Nazis covertly targeted Jewish people by wrapping their victims’ names in a set of three parentheses, a form of anti-Semitism that is virtually impossible to search for online.

In addition, muting abusive users means that accounts which might otherwise have been reported for harassment will simply continue to operate under the radar, their abusive tweets still live on the site and visible to everyone else.

In another new move, the site has more clearly defined discrimination on the basis of race, religion, gender, or orientation under its hateful conduct policy, to make reporting easier. Anyone can now also flag abusive exchanges between other users, “with an eye toward lessening the burden on the person experiencing the abuse and empowering others within their community to help,” Twitter’s vice president of trust and safety, Del Harvey, told Quartz. She added that more changes will be rolled out in the coming months.

To better enforce the rules, Twitter has also retrained its support team with “special sessions on cultural and historical contextualization of hateful conduct, and implemented an ongoing refresher program.” The company did not, however, spell out specific repercussions for those who spread hate speech.

And because Twitter has no real-name policy or other identity checks in place, a user whose account is suspended can simply create a new one.

Twitter’s tussle with abuse has been long and winding: terror-related tweets, politically charged abuse, and sexist bullying have plagued the platform for years. Ghostbusters star Leslie Jones quit Twitter after being subjected to racist attacks, and a New York Times editor left the platform after facing a barrage of anti-Semitic tweets. Meanwhile, the platform’s user growth has slowed, and abuse on Twitter was reportedly a deal breaker for potential bidder Disney. Time and again, the company has offered band-aid solutions for hate speech rather than finding a way to weed it out entirely.

The microblogging platform’s 317 million users aren’t going to see abusive conduct removed from Twitter immediately, the company acknowledged in its post: “No single action by us would do that. Instead we commit to rapidly improving Twitter based on everything we observe and learn.”