Cesar Sayoc is a poster child for Twitter’s harassment problem

Sayoc’s van is basically a visual representation of his Twitter account.
Image: REUTERS/Geo Rodriguez
On Oct. 26, US authorities charged Florida resident Cesar Sayoc with sending 13 explosive devices to prominent public figures, including former president Barack Obama and former secretary of state Hillary Clinton. The internet immediately uncovered Sayoc’s social media accounts, which were promptly suspended.

Sayoc’s tweets were a hodgepodge of anti-Democrat memes, and included attacks on many of the folks he targeted: Obama, Clinton, billionaire philanthropist George Soros, representative Debbie Wasserman Schultz, as well as Oprah Winfrey, former NFL star Colin Kaepernick, and Parkland shooting survivor David Hogg. Sayoc also threatened members of the media, like New York Times reporter Sarah Jeong and MSNBC’s Andrea Mitchell, at whom he tweeted a video of a python swallowing a human with the comment, “This one for you MSNBC Andrea Mitchell . A promise reply to your threats . We will answer is coming see you soon.” Though the message is not easy to parse, it’s pretty clear he’s attempting to threaten Mitchell.

The former press secretary for the Democrats in the US House of Representatives, Rochelle Ritchie, posted a screenshot on Twitter of a tweet Sayoc sent her just a couple of weeks ago, threatening her with a “nice silent Air boat ride” and warning that “we will see you 4 sure, hug your loved ones real close every time you leave you [sic] home.” Ritchie says she reported Sayoc’s account, and included a screenshot of a message from Twitter saying that Sayoc’s tweets didn’t violate its rules against abusive behavior.

According to Twitter’s rules, abusive behavior is “an attempt to harass, intimidate, or silence someone else’s voice,” and the company specifically lays out four scenarios that merit Twitter taking action. These include violent threats; abusive slurs, epithets, racist, or sexist tropes; abusive content that reduces someone to less than human; and content that incites fear. In the case of Sayoc’s tweet to Ritchie, the threat is confusing—what exactly does a “nice silent Air boat ride” mean?—and the “hug your loved ones” bit isn’t a slur or violent, but the intent seems quite clear: Sayoc is, again, attempting to intimidate Ritchie. From The New York Times’ review of Sayoc’s tweets at politicians, it seems like “see you soon” was a phrase he used often in his threats.

Twitter has since apologized for not taking Sayoc’s tweets seriously enough.

For years, Twitter has affirmed its commitment to stopping harassment on its platform, yet users still routinely experience it, and many have expressed concern and frustration about Twitter’s lack of transparency in its rulings on reported accounts. Even in cases of fairly clear-cut harassment or intimidation, Twitter is slow to act, or decides not to ban accounts.

Like Sayoc’s tweet to Ritchie, many harassers’ behavior falls just short of what Twitter traditionally defines as harassment, but is still clearly threatening or dangerous. In an interview with Amnesty International, writer Jessica Valenti said that only obvious, direct threats have gotten Twitter’s attention. “That’s part of the problem,” she added. “Harassers can be savvy and know what they can say that’s not going to get them kicked off a site or not illegal.”

For example, writer Sady Doyle reported an account that threatened her with a photo of a gun, then gloated about how they “won” Twitter’s review. When Doyle reported them again for that tweet, Twitter again said it found no violation of its rules, but reversed the decision after the tweet went viral. A Twitter engineer at the time apologized for the error, saying the solution often is for a Twitter employee to escalate the case internally.

Here’s a more personal example. Yesterday, a newly created account started following me and tweeted creepy, albeit innocuous, things at me, like “Hello dearie, good to see you,” and “Should I take you for a ride?” Another Twitter user alerted me to this, and when I looked, the account’s only eight tweets were directed at me. This person hasn’t threatened me or lobbed slurs at me, but I’m certainly unsettled, since it appears that the only purpose of their account is to bug me. I reported the account (as did the person who alerted me), but if journalist Allison Morris’s (much worse) experience is any indication, it’s unlikely the account will be banned. Morris’s harasser created an account to tweet malicious things about her family, yet the account was not found to violate Twitter’s community standards. Plus, even if Twitter bans this person, there’s no reason they couldn’t just create a new account.

Twitter has a huge task ahead of it if it’s truly committed to tackling the harassment issue on its platform. But it certainly could do more. It’s been criticized for the uneven application of its own rules (such as not banning Trump despite his threats against entire countries, although it has since effectively put rules in place just for Trump) and for not doing what it’s demonstrably able to do: Twitter is legally obligated to hide white nationalist and Nazi content from German users, but has not offered that feature to users in other countries.

And ultimately, the question is not just how to get a handle on Twitter harassment, but also how online harassment transfers to real-life action. Not every Twitter troll is the next Cesar Sayoc, but how do we know which ones are? And even if we knew, what could we do?