According to Twitter’s rules, abusive behavior is “an attempt to harass, intimidate, or silence someone else’s voice,” and the company lays out four specific scenarios that merit action: violent threats; abusive slurs, epithets, and racist or sexist tropes; abusive content that reduces someone to less than human; and content that incites fear. In the case of Sayoc’s tweet to Ritchie, the threat itself is confusing (what exactly does a “nice silent Air boat ride” mean?), and the “hug your loved ones” bit is neither a slur nor overtly violent, but the intent seems quite clear: Sayoc is, again, attempting to intimidate Ritchie. Judging from The New York Times’ review of Sayoc’s tweets at politicians, “see you soon” was a phrase he used often in his threats.

Twitter has since apologized for not taking Sayoc’s tweets seriously enough.

For years, Twitter has affirmed its commitment to stopping harassment on its platform, yet users still routinely experience it, and many have expressed concern and frustration about Twitter’s lack of transparency in its rulings on reported accounts. Even in cases of fairly clear-cut harassment or intimidation, Twitter is slow to act, or decides not to ban accounts at all.

Like Sayoc’s tweet to Ritchie, many harassers’ behavior falls just short of what Twitter traditionally defines as harassment but is still clearly threatening or dangerous. In an interview with Amnesty International, writer Jessica Valenti said that only obvious, direct threats have gotten Twitter’s attention. “That’s part of the problem,” she added. “Harassers can be savvy and know what they can say that’s not going to get them kicked off a site or not illegal.”

For example, writer Sady Doyle reported an account that threatened her with a photo of a gun and then gloated about having “won” Twitter’s review. When Doyle reported the account again for that tweet, Twitter again said it found no violation of its rules, but reversed the decision after the tweet went viral. A Twitter engineer at the time apologized for the error, saying that, often, the solution is for a Twitter employee to escalate the case internally.

Here’s a more personal example. Yesterday, a newly created account started following me and tweeting creepy, albeit innocuous, things at me, like “Hello dearie, good to see you,” and “Should I take you for a ride?” Another Twitter user alerted me to this, and when I looked at the account, all eight of its tweets were directed at me. This person hasn’t threatened me or lobbed slurs at me, but I’m certainly unsettled, since the only apparent purpose of the account is to bug me. I reported the account (as did the person who alerted me), but if journalist Allison Morris’s (much worse) experience is any indication, it’s unlikely the account will be banned. Morris’s harasser created an account to tweet malicious things about her family, yet the account was not found to violate Twitter’s community standards. Plus, even if Twitter bans this person, there’s nothing stopping them from simply creating a new account.

Twitter has a huge task ahead of it if it’s truly committed to tackling harassment on its platform. But it certainly could do more. It has been criticized for applying its own rules unevenly (for example, not banning Trump despite tweets threatening entire countries, though it has since effectively created a separate set of rules just for him) and for not doing everything it’s able to do: Twitter is legally obligated to hide white nationalist and Nazi content from German users, but it has not offered that feature to users in other countries.

And ultimately, the question is not just how to get a handle on Twitter harassment, but also how online harassment transfers to real-life action. Not every Twitter troll is the next Cesar Sayoc, but how do we know which ones are? And even if we knew, what could we do?
