In December 2012, an Icelandic woman named Thorlaug Agustsdottir discovered a Facebook group called “Men are better than women.” One image she found there, Thorlaug wrote to us this summer in an email, “was of a young woman naked chained to pipes or an oven in what looked like a concrete basement, all bruised and bloody. She looked with a horrible broken look at whoever was taking the pic of her curled up naked.” Thorlaug wrote an outraged post about it on her own Facebook page.
Before long, a user at “Men are better than women” posted an image of Thorlaug’s face, altered to appear bloody and bruised. Under the image, someone commented, “Women are like grass, they need to be beaten/cut regularly.” Another wrote: “You just need to be raped.” Thorlaug reported the image and comments to Facebook and requested that the site remove them.
“We reviewed the photo you reported,” came Facebook’s auto reply, “but found it does not violate Facebook’s Community Standards on hate speech, which includes posts or photos that attack a person based on their race, ethnicity, national origin, religion, sex, gender, sexual orientation, disability, or medical condition.”
Instead, the Facebook screeners labeled the content “Controversial Humor.” Thorlaug saw nothing funny about it. She worried the threats were real.
Some 50 other users sent their own requests on her behalf. All received the same reply. Eventually, on New Year’s Eve, Thorlaug called the local press, and the story spread from there. Only then was the image removed.
In January 2013, Wired published a critical account of Facebook’s response to these complaints. A company spokesman contacted the publication immediately to explain that Facebook screeners had mishandled the case, conceding that Thorlaug’s photo “should have been taken down when it was reported to us.” According to the spokesman, the company tries to address complaints about images on a case-by-case basis within 72 hours, but with millions of reports to review every day, “it’s not easy to keep up with requests.” The spokesman, anonymous to Wired readers, added, “We apologize for the mistake.”
* * *
If, as the communications philosopher Marshall McLuhan famously said, television brought the brutality of war into people’s living rooms, the Internet today is bringing violence against women out of it. Once largely hidden from view, this brutality is now being exposed in unprecedented ways. In the words of Anne Collier, co-director of ConnectSafely.org and co-chair of the Obama administration’s Online Safety and Technology Working Group, “We are in the middle of a global free speech experiment.” On the one hand, these online images and words are bringing awareness to a longstanding problem. On the other hand, the amplification of these ideas over social media networks is validating and spreading pathology.
We, the authors, have experienced both sides of the experiment firsthand. In 2012, Soraya, who had been reporting on gender and women’s rights, noticed that more and more of her readers were contacting her to ask for media attention and help with online threats. Many sent graphic images, and some included detailed police reports that had gone nowhere. A few sent videos of rapes in progress. When Soraya wrote about these topics, she received threats online. Catherine, meanwhile, received warnings to back off while reporting on the cover-up of a sexual assault.
All of this raised a series of troubling questions: Who’s proliferating this violent content? Who’s controlling its dissemination? Should someone be? In theory, social media companies are neutral platforms where users generate content and report content as equals. But, as in the physical world, some users are more equal than others. In other words, social media is more symptom than disease: A 2013 report from the World Health Organization called violence against women “a global health problem of epidemic proportion,” from domestic abuse, stalking, and street harassment to sex trafficking, rape, and murder. This epidemic is thriving in the petri dish of social media.
While some of the aggression against women online occurs between people who know one another, and is unquestionably illegal, most of it happens between strangers. Earlier this year, Pacific Standard published a long story by Amanda Hess about an online stalker who set up a Twitter account specifically to send her death threats.
Across websites and social media platforms, everyday sexist comments exist along a spectrum that also includes illicit sexual surveillance, “creepshots,” extortion, doxxing, stalking, malicious impersonation, threats, and rape videos and photographs. The explosive use of the Internet to conduct human trafficking also has a place on this spectrum, given that three-quarters of trafficked people are girls and women.
A report, “Misogyny on Twitter,” released by the research and policy organization Demos this June, found more than 6 million instances of the word “slut” or “whore” used in English on Twitter between December 26, 2013, and February 9, 2014. (The words “bitch” and “cunt” were not measured.) An estimated 20 percent of the Tweets in the study appeared, to researchers, to be threatening. An example: “@XXX @XXX You stupid ugly fucking slut I’ll go to your flat and cut your fucking head off you inbred whore.”
A second Demos study showed that while male celebrities, female journalists, and male politicians face the highest likelihood of online hostility, women are significantly more likely to be targeted specifically because of their gender, and men are overwhelmingly those doing the harassing. For women of color, or members of the LGBT community, the harassment is amplified. “In my five years on Twitter, I’ve been called ‘nigger’ so many times that it barely registers as an insult anymore,” explains attorney and legal analyst Imani Gandy. “Let’s just say that my ‘nigger cunt’ cup runneth over.”
At this summer’s VidCon, an annual nationwide convention held in Southern California, women vloggers shared an astonishing number of examples. The violent threats posted beneath YouTube videos, they observed, are pushing women off this and other platforms in disproportionate numbers. When Anita Sarkeesian, who produces a popular web series called Tropes vs. Women, launched a Kickstarter to develop games with female protagonists, she became the focus of a massive and violently misogynistic cybermob. Among the many forms of harassment she endured was a game where thousands of players “won” by virtually bludgeoning her face. In late August, she contacted the police and had to leave her home after she received a series of serious violent online threats.
Danielle Keats Citron, law professor at the University of Maryland and author of the recently released book Hate Crimes in Cyberspace, explained, “Time and time again, these women have no idea often who it is attacking them. A cybermob jumps on board, and one can imagine that the only thing the attackers know about the victim is that she’s female.” Looking at 1,606 cases of “revenge porn,” where explicit photographs are distributed without consent, Citron found that 90 percent of targets were women. Another study she cited found that 70 percent of female gamers chose to play as male characters rather than contend with sexual harassment.
This type of harassment also fills the comment sections of popular websites. In August, employees of the largely female-staffed website Jezebel published an open letter to the site’s parent company, Gawker, detailing the professional, physical, and emotional costs of having to look at the pornographic GIFs maliciously populating the site’s comments sections every day. “It’s like playing whack-a-mole with a sociopathic Hydra,” they wrote, insisting that Gawker develop tools for blocking and tracking IP addresses. They added, “It’s impacting our ability to do our jobs.”
For some, the costs are higher. In 2010, 12-year-old Amanda Todd bared her chest while chatting online with a person who’d assured her that he was a boy, but was in fact a grown man with a history of pedophilia. For the next two years, Amanda and her mother, Carol Todd, were unable to stop anonymous users from posting that image on sexually explicit pages. A Facebook page, labeled “Controversial Humor,” used Amanda’s name and image—and the names and images of other girls—without consent. In October 2012, Amanda committed suicide, posting a YouTube video that explained her harassment and her decision. In April 2014, Dutch officials announced that they had arrested a 35-year-old man suspected to have used the Internet to extort dozens of girls, including Amanda, in Canada, the United Kingdom, and the United States. The suspect now faces charges of child pornography, extortion, criminal harassment, and Internet luring.
Almost immediately after Amanda shared her original image, altered versions appeared on pages, and videos proliferated. One of the pages was filled with pictures of naked pre-pubescent girls, encouraging them to drink bleach and die. While she appreciates the many online tributes honoring her daughter, Carol Todd is haunted by “suicide humor” and pornographic content now forever linked to her daughter’s image. There are web pages dedicated to what is now called “Todding.” One of them features a photograph of a young woman hanging.
Meanwhile, extortion of other victims continues. In an increasing number of countries, rapists are now filming their rapes on cell phones so they can blackmail victims out of reporting the crimes. In August, after a 16-year-old Indian girl was gang-raped, she explained, “I was afraid. While I was being raped, another man pointed a gun and recorded me with his cellphone camera. He said he will upload the film on the Net if I tell my family or the police.”
In Pakistan, the group Bytes for All—an organization that previously sued the government for censoring YouTube videos—released a study showing that social media and mobile tech are causing real harm to women in the country. Gul Bukhari, the report’s author, told Reuters, “These technologies are helping to increase violence against women, not just mirroring it.”
In June 2014, a 16-year-old girl named Jada was drugged and raped at a party in Texas. Partygoers posted a photo of her lying unconscious, one leg bent back. Soon, other Internet users had turned it into a meme, mocking her pose and using the hashtag #jadapose. Kasari Govender, executive director of the Vancouver-based West Coast Legal Education and Action Fund (LEAF), calls this kind of behavior “cybermisogyny.” “Cyberbullying,” she says, “has become this term that’s often thrown around with little understanding. We think it’s important to name the forces that are motivating this in order to figure out how to address it.”
In an unusually bold act, Jada responded by speaking publicly about her rape and the online abuse that followed. Supporters soon took to the Internet in her defense. “There’s no point in hiding,” she told a television reporter. “Everybody has already seen my face and my body, but that’s not what I am and who I am. I’m just angry.”
* * *
After Facebook removed Thorlaug’s altered image and the rape threats, she felt relieved, but she was angry too. “These errors are going to manifest again,” she told Wired, “if there isn’t clear enough policy.”
Yet, at the time of Thorlaug’s report, Facebook did have a clear policy. Its detailed Community Standards for speech, often considered the industry’s gold standard, were bolstered by reporting tools that allowed users to report offensive content, and Thorlaug had used these tools as instructed. But serious errors were still manifesting regularly.
Not long after Thorlaug’s struggle to remove her image, a Facebook user posted a video documenting the gang rape of a woman by the side of a road in Malaysia. The six minutes of graphic footage were live for more than three weeks, during which Facebook moderators declined repeated requests for removal. It had been viewed hundreds of times before a reader of Soraya’s forwarded the video to her with a request for help. We notified a contact on Facebook’s Safety Advisory Board, and only then was the video taken offline.
Around the same time, another Icelandic woman, Hildur Lilliendahl Viggósdóttir, decided to draw attention to similar problems by creating a page called “Men who hate women,” where she reposted examples of misogyny she found elsewhere on Facebook. Her page was suspended four times—not because of its offensive content, but because she was reposting images without written permission. Meanwhile, the original postings—graphically depicting rape and glorifying the physical abuse of women—remained on Facebook. As activists had been noting for years, Facebook allowed pages like these to remain under the category of “humor.” Other “humor” pages live at the time had names like “I kill bitches like you,” “Domestic Violence: Don’t Make Me Tell You Twice,” “I Love the Rape Van,” and “Raping Babies Because You’re Fucking Fearless.”
* * *
Jillian C. York, director for international freedom of expression at the Electronic Frontier Foundation, is one of many civil libertarians who believe Facebook and other social media platforms should not screen this, or any, content at all. “It of course must be noted that the company—like any company—is well within its rights to regulate speech as it sees fit,” she wrote in a May 2013 piece in Slate in response to growing activism. “The question is not can Facebook censor speech, but rather, should it?” She argues that censoring any content “sets a dangerous precedent for special interest groups looking to bring their pet issue to the attention of Facebook’s censors.”
When the problem involves half the world’s population, it’s difficult to classify it as a “pet issue.” What’s more, there are free speech issues on both sides of the regulated content equation. “We have the expressive interests of the harassers to threaten, to post photos, to spread defamation, rape threats, lies on the one hand,” explains Citron. “And on the other hand you have the free speech interests, among others, of the victims, who are silenced and are driven offline.”
These loss-of-speech issues tend to draw less attention and sympathy than free speech rights. However, as Citron points out, sexual hostility has already been identified as a source of real harm: Title VII demands that employers regulate such hostility in the workplace. These policies exist, Citron says, because sexual hostility “is understood as conduct interfering with life opportunities.”
For online harassers, this is often an overt goal: to silence female community members, whether through sexual slurs or outright threats. It’s little surprise that the Internet has become a powerful tool in intimate partner violence: A 2012 survey conducted by the National Network to End Domestic Violence (NNEDV) found that 89 percent of local domestic violence programs reported victims who were experiencing technology-enabled abuse, often across multiple platforms.
For their part, social media companies often express commitment to user safety, but downplay their influence on the broader culture. Administrators repeatedly explain that their companies, while very concerned with protecting users, are not in the business of policing free speech. As Twitter co-founder Biz Stone phrased it in a post titled “Tweets Must Flow,” “We strive not to remove Tweets on the basis of their content.” The company’s guidelines encourage readers to unfollow the offensive party and “express your feelings [to a trusted friend] so you can move on.”
None of this was of much help to Caroline Criado-Perez, a British journalist and feminist who helped get a picture of Jane Austen on the £10 banknote. The day the Bank of England made the announcement, Criado-Perez began receiving more than 50 violent threats per hour on Twitter. “The immediate impact was that I couldn’t eat or sleep,” she told The Guardian in 2013. She asked Twitter to find some way to stop the threats, but at the time the company offered no mechanism for reporting abuse. Since then, the company has released a reporting button, but its usefulness is extremely limited: It requires that every tweet be reported separately, a cumbersome process that gives the user no way of explaining that she is a target of ongoing harassment. (The system currently provides no field for comments.)
And yet companies like Facebook, Twitter, and YouTube do moderate content and make quasi-governmental decisions regarding speech. Some content moderation is related to legal obligations, as in the case of child pornography, but a great deal more is a matter of cultural interpretation. Companies have disclosed that governments rely on them to implement censorship requests—earlier this year, for example, Twitter blocked tweets and accounts deemed “blasphemous” by the Pakistani government. (In response to these government incursions, a coalition of academics, legal scholars, corporations, non-profit organizations, and schools came together in 2008 to form the Global Network Initiative, a non-governmental organization dedicated to privacy and free expression.)
When it comes to copyright and intellectual property interests, companies are highly responsive, as Hildur’s “Men who hate women” experience highlighted. But, says Jan Moolman, who coordinates the Association for Progressive Communications’ women’s rights division, “‘garden variety’ violence against women—clearly human rights violations—frequently gets a lukewarm response until it becomes an issue of bad press.”
For that reason, when social media companies fail to respond to complaints and requests, victims of online harassment frequently turn to individuals who can publicize their cases. Trista Hendren, an Oregon-based blogger, became an advocate for other women after readers from Iceland, Egypt, Australia, India, Lebanon, and the UK began asking her to write about their experiences. “I was overwhelmed,” she told us. In December 2012, Hendren and several collaborators created a Facebook page called RapeBook where users could flag and report offensive content that the company had refused to take down.
By April 2013, people were using RapeBook to post pictures of women and pre-pubescent girls being raped or beaten. Some days, Hendren received more than 500 anonymous, explicitly violent comments—“I will skull-fuck your children,” for instance. Facebook users tracked down and posted her address, her children’s names, and her phone number and started to call her.
By that time, Hendren had abandoned any hope that using Facebook’s reporting mechanisms could help her. She was able, however, to work directly with a Facebook moderator to address the threats and criminal content. She found that the company sincerely wanted to help. Its representatives discussed the posts with her on a case-by-case basis, but more violent and threatening posts kept coming, and much of the content she considered graphic and abusive was allowed to remain.
Eventually, Hendren told us, she and Facebook became locked in disagreement over what constituted “safety” and “hate” on the site. Facebook’s people, she said, told her they didn’t consider the threats to her and her family credible or legitimate. Hendren, however, was concerned enough to contact the police and the FBI. The FBI started an investigation; meanwhile Hendren, physically and emotionally spent, suspended her Facebook account. “I was the sickest I have ever been,” she said. “It was really disgusting work. We just began to think, ‘Why are we devoting all our efforts on a volunteer basis to do work that Facebook—with billions of dollars—should be taking care of?’”
Hendren contacted Soraya, who continued to press Facebook directly. At the same time, Soraya and Laura Bates, founder of the Everyday Sexism Project, also began comparing notes on what readers were sending them. Bates was struck by surprising ad placements. At the time, a photo captioned “The bitch didn’t know when to shut up” appeared alongside ads for Dove and iTunes. “Domestic Violence: Don’t Make Me Tell You Twice”—a page filled with photos of women beaten, bruised, and bleeding—was populated by ads for Facebook’s COO Sheryl Sandberg’s new bestselling book, Lean In: Women, Work, and the Will to Lead.
In early May, Bates decided to tweet at one of these companies. “Hi @Finnair here’s your ad on another domestic violence page—will you stop advertising with Facebook?” Finnair responded immediately: “It is totally against our values and policies. Thanks @r2ph! @everydaysexism Could you send us the URL please so that we can take action?”
Soraya, Bates, and Jaclyn Friedman, the executive director of Women, Action, and Media, a media justice advocacy group, joined forces and launched a social media campaign designed to attract advertisers’ attention. The ultimate goal was to press Facebook to recognize explicit violence against women as a violation of its own prohibitions against hate speech, graphic violence, and harassment. Within a day of beginning the campaign, 160 organizations and corporations had co-signed a public letter, and in less than a week, more than 60,000 tweets were shared using the campaign’s #FBrape hashtag. Nissan was the first company to pull its advertising dollars from Facebook altogether. More than 15 others soon followed. The letter emphasized that Facebook’s refusal to take down content that glorified and trivialized graphic rape and domestic violence was actually hampering free expression—it was “marginaliz[ing] girls and women, sidelin[ing] our experiences and concerns, and contribut[ing] to violence against us.”
On May 28, Facebook issued a public response:
In recent days, it has become clear that our systems to identify and remove hate speech have failed to work as effectively as we would like … We have been working over the past several months to improve our systems to respond to reports of violations, but the guidelines used by these systems have failed to capture all the content that violates our standards. We need to do better—and we will.
* * *
For all its shortcomings, Facebook is doing more than most companies to address online aggression against women. Cindy Southworth, vice president of development and innovation at the National Network to End Domestic Violence, has served on Facebook’s Safety Advisory Board since 2010. “[My organization] gets calls from Google, Twitter, Microsoft—but Facebook and Airbnb are the only ones who’ve put us on an advisory board,” she told us.
This is huge progress, she says. By inviting in experts who understand the roots of violence against women and children and who are familiar with emerging strategies to prevent it, tech companies are more likely to develop real improvements. The “Controversial Humor” label, once profusely applied on Facebook, is no longer in use. The company now officially recognizes gender-based hate as a legitimate concern, and its representatives continue to work closely with advocates like Southworth and the coalition that formed during the #FBRape campaign. There are ongoing efforts to improve user safety and identify content that is threatening, harassing, hateful, or discriminatory.
Southworth calls the company’s representatives “thoughtful, passionate, concerned, and straddling the line between free speech and safety.” But, sometimes, progress feels slow. “The teams who handle these cases are just swamped,” she explained.
When Emily Bazelon, author of a book and a March 2013 Atlantic story about Internet bullies, visited Facebook’s headquarters, the young men she saw working as moderators were spending roughly 30 seconds assessing each reported post, working through millions of reports a week. Outsourced speech moderation has become a booming industry. Like Facebook’s own moderation process, the operations of these companies are opaque by design.
TaskUs, with bases in Santa Monica, California, and several locations in the Philippines, provides moderation for iPhone and Android apps such as Whisper, Secret, and Yik Yak. The company advertises “a bulletproof system to ensure that no one—not a single person—is hurt physically or mentally by the actions of another user in an anonymous app community.” Yet TaskUs doesn’t disclose its standards of speech, its hiring practices, its training process, or working conditions. “Unfortunately,” we were told when we inquired, “we’re bound by confidentiality from discussing details of process including hiring and training. We can speak generally about how we handle the moderation process but our clients are not comfortable with us exposing anything proprietary (and they consider the moderation and training processes proprietary).”
While private companies protect their practices, nonprofits like The Internet Watch Foundation don’t. IWF, based in Cambridge, England, screens images of child sexual abuse for Facebook, Google, and Virgin Media, among others. IWF staff watch, analyze, categorize, and report abusive images—70 percent of them involve children under 10. Data collected by police across England and Wales in 2012 suggest that 150 million child pornography images were in distribution in the UK alone that year. By comparison, in 1995, when the reach of the Internet was far narrower, only about 7,000 child pornography images were in online circulation.
IWF’s analysts see everything, said Heidi Kempster, IWF’s director of business affairs, during a conversation this summer. Kempster was candid about IWF’s business practices: The group screens rigorously during its hiring processes, conducting psychological interviews that establish everything from family history and relationship-building skills to views on pornography. The organization also requires monthly individual counseling and quarterly group counseling, as well as expert consultation—with police, attorneys, or judges—and breaks as needed. IWF analysts, said Kempster, “look at shocking and violent images all day every day. There are days that are tough. They have to take time out.”
Over the past two years, Facebook has taken steps to improve its reporting system and address those gray areas. Matt Steinfeld, a spokesperson for Facebook, spoke to us about Facebook’s Compassion Research, a bullying prevention project developed in conjunction with Yale’s Center for Emotional Intelligence. The tools developed through this program have more than tripled the rate at which users send a message directly to the person who posted the offensive material, asking for the removal of photos or comments. Now, in 85 percent of those requests, the person who posted the photo takes the photo down or sends a reply.
Still, the company hasn’t yet come up with a reliable approach for dealing with the other 15 percent of cases. “We’ve always recognized that there is going to be content that won’t be moderated through compassion tools,” said Steinfeld. “We’re not going to tell the people who is right and who is wrong.”
Because moderation remains opaque, sites are under no obligation to honor the reports they receive, and any decision about removing content can be legitimized after the fact, as Kate Crawford of Microsoft Research and MIT, and Tarleton Gillespie at Cornell University, observed in an August study on social media reporting tools. In other words, “given that flags remain open to interpretation and can be gamed, they can also be explained away when the site prefers to ignore them.”
In conversations this summer, Facebook spokesperson Matt Steinfeld maintained that his company’s standards are clearly defined. “There’s this misconception that there’s an algorithm,” he said during one conversation in July, “but there’s a human who’s given objective standards” for responding to individual complaints.
When we spoke with Microsoft’s Crawford about her research, she described the limitations of these seemingly objective standards. “The flag is being asked to do too much,” she said. “It’s a fundamentally narrow mechanism: the technical version of ‘I object.’ And while some platforms claim that flags are ‘objective’ data about which content to remove, they are part of a profoundly human decision-making process about what constitutes appropriate speech in the public domain.”
Researchers and industry experts are beginning to consider the effects of that context. Ninety percent of tech employees are men. At the most senior levels, that number goes up to 96 percent. Eighty-nine percent of startup leadership teams are all male. Google recently announced that it is implementing programs to, in the words of a New York Times report, “fight deep-set cultural biases and an insidious frat-house attitude that pervades the tech business.” A computer simulation used by the company illustrated how an industry-wide 1-percent bias against women in performance evaluations might have led to the significant absence of women in senior positions.
Many Silicon Valley leaders—such as Twitter co-founder Jack Dorsey, who recently acknowledged a “leadership crisis” among women in tech—have been investing in programs they hope will encourage girls to enter, and remain in, STEM fields. However, the fact that companies better understand the need to encourage girls and women doesn’t necessarily mean those women are welcomed. Despite the presence of visible, active, and prominent women in the industry, according to one recent study, 56 percent of the women who do enter tech leave the industry, frequently stating that they were pushed out by sexism. This attrition rate is twice that of their male peers. As Vivek Wadhwa, author of Innovating Women, recently pointed out, only 2.7 percent of the 6,517 companies that received venture funding from 2011 to 2013 had female chief executives.
It’s not hard to imagine how unconscious biases might affect systems architecture, including the ways companies handle moderation requests. It is notable that Ello, a new ad-free social network, launched without private profiles, a block button, or a reporting mechanism. (After much criticism, those features were added.) Its designers appear to be seven young white men whose faces, shown on the beta website, are obscured by smiley faces. The Ello site reads, “We reserve the right to enforce or not to enforce these rules in whatever way we see fit, at our sole discretion. According to our lawyer, we should also tell you that Ello’s rules and policies do not create a duty or contractual obligation for us to act in any particular manner. And we reserve the right to change these rules at any time. So please play nice, be respectful, and have fun.”
* * *
Sandy Garossino, a former British Columbia prosecutor who has worked on dozens of cases of cyber extortion and online child pornography, is concerned about the implications of today’s industry practices and policies, not only for children but for adults. “Right now, the slightest calibrations are going to have a profound effect on the future,” she told us.
In late June, the U.S. Supreme Court announced it would hear the case of Anthony Elonis, a man with five charges of sexual harassment, who was imprisoned after threatening to kill his wife on Facebook. Elonis insists that his Facebook posts were not real threats but protected speech. Tara Elonis, his estranged wife—possibly aware that most female murder victims are killed by intimate partners—said that there was nothing unthreatening about her husband’s Facebook posts and that they forced her to take necessary, costly precautions. “If I only knew then what I know now,” read one, “I would have smothered your ass with a pillow, dumped your body in the back seat, dropped you off in Toad Creek, and made it look like a rape and murder.”
“Although threats are traditional categories of excluded speech,” explains First Amendment legal scholar Susan Williams, “there is very little take on actually defining what a true threat is in constitutional terms.” This lack of definition is what plagues social media companies seeking scalable solutions for moderating content and keeping users safe. Following legal precedent they, too, avoid defining what makes a comment a threat and instead home in on whether or not there’s one specific target. However, as Williams explains, “threats can be one of those environmental factors that reduce the autonomy of whole classes of persons.”
In a recent high-profile case, intimate photographs of 100 celebrities—all of whom were women—were stolen and shared without consent. Google is now facing the possibility of a $100 million lawsuit, brought by over a dozen of the women whose privacy was violated, for refusing to remove the stolen photographs. In a letter dated October 1, the women’s attorneys wrote that “Google has exhibited the lowest standards of ethical business conduct, and has acted dishonorably by perpetrating unlawful activity that exemplifies an utter lack of respect for women and privacy. Google’s ‘Don’t be evil’ motto is a sham.”
In response, Google removed tens of thousands of the hacked celebrity photographs. Meanwhile, social media companies have been far less responsive to similar demands from ordinary citizens. “Hey @google, what about my photos?” tweeted revenge porn victim Holly Jacobs in the aftermath of the celebrity scandal. It remains to be seen how the courts will rule in the case of Meryam Ali, a Houston woman who filed a $123 million lawsuit against Facebook for failing to remove a false profile that showed her face superimposed on pornographic images. As writer Roxane Gay poignantly observed in The Guardian, “What these people are doing is reminding women that, no matter who they are, they are still women. They are forever vulnerable.”
In late August, Drew Curtis, founder of the content aggregator FARK, announced that the company had added “misogyny” to its moderation guidelines. FARK no longer allows rape jokes or threats. It also prohibits posts that call groups of women “whores” or “sluts,” or suggest that a woman who suffered a crime was somehow asking for it. In a note to readers, Curtis wrote, “This represents enough of a departure from pretty much how every other large Internet community operates that I figure an announcement is necessary.” Responding in Slate, Amanda Hess praised FARK’s new policy but also pointed out its limitations: Just underneath his announcement, users posted dozens of comments about rape, whores, and “boobies.”
Announcements like FARK’s are important, particularly for catalyzing discussion, but policy changes alone can’t solve such a complex problem. Kate Crawford of MIT and Microsoft urges tech innovators to think about solutions “as pluralistically as possible.” She’d like to see more platforms develop systems that leave traces of when and why content has been removed or modified—an approach in play at Wikipedia, for instance.
Other experts agree that companies have a responsibility to provide greater transparency. They also need to dedicate more staff to understanding and performing moderation. They need to attract and retain female engineers, programmers, and managers. They need to invite experts in violence prevention to their tables. Whether online or off, there seems to be an increasing consensus, from the NFL to the White House, that misogyny requires a broad societal response. As President Obama put it in mid-September, “It is on all of us to reject the quiet tolerance of sexual assault and to refuse to accept what’s unacceptable.”
Soon after Hess’s piece appeared on Slate, a reader posted it on a FARK message board, and users filled the comment thread mocking the policy and discussing the best way to slip a thermometer into Hess. So far, that thread has not been removed. As Hess herself put it, “Policing misogyny is fabulous in theory. In practice, it’s a bitch.”