Deepfakes are not a new threat. You’d be forgiven for thinking otherwise, given the consternation surrounding a lightly doctored video of Speaker Nancy Pelosi and the recent rush to pass state and federal deepfake legislation. In reality, manipulated videos have been distorting truth for nearly two years. Only they were designed to harass and demean women—often celebrities—by creating AI-generated porn, and so weren’t considered worthy of widespread action.
Such flagrant apathy harms us all: Time and again, women are the first targets of attacks that go on to warp the broader system. A similar pattern played out in the 2016 election, when sexist tactics created a blueprint for political trolls. Online weapons are perfected in these misogynistic onslaughts and met with a shrug until attackers select a new victim. There’s an entire industry devoted to analyzing the national security threats posed by technology, yet this misuse of AI grew in scale and sophistication, largely uninhibited, so long as it remained a women’s concern.
Imagine, for a moment, that tech companies and lawmakers cared about women’s safety. When Motherboard first reported on pornographic deepfakes in December 2017, there would have been a sharp legal response and a wide-ranging debate over appropriate social media policies. Instead, legal scholars argued that removing deepfake porn would violate the First Amendment, and no new legislation was passed. One federal deepfake bill was introduced at the end of 2018, but it never got out of committee and concerned only deepfakes that facilitated behavior already deemed illegal. There was little discussion of how such videos should be flagged on social media sites—so much so that, when the manipulated video of Pelosi went viral in May, Facebook dithered about how to respond.
That one political video—a “shallow-fake” rather than a deepfake, as it was merely altered by slowing the playback speed so Pelosi seemed drunk—prompted action where more than a year of manipulated porn had spurred little more than a furrowed brow. In June, the Deepfakes Accountability Act was introduced in the House of Representatives, and the House Intelligence Committee held its first hearing on deepfakes. The next month, Representative Adam B. Schiff of California, chairman of that committee, finally asked Facebook, Twitter and Google how they planned to address deepfakes. All three said they were still figuring out appropriate policies.
State legislation has quickly followed. When Schiff first wrote to the tech giants, Virginia alone had a deepfake law, having banned AI-generated pornography at the beginning of July. Other states have since adopted legislation, mostly with a careful eye toward elections. Texas passed a law banning political deepfakes in September, and California followed in October with specific provisions on how to combat political and pornographic deepfakes. (The former, which makes it illegal to distribute political deepfakes within 60 days of an election, has greater heft than the latter, which only gives victims of deepfake porn the right to sue distributors.)
Those in government and Silicon Valley could have started passing these laws and developing clear policies two years ago. Instead, women were rendered powerless to defend themselves. “The fact is that trying to protect yourself from the internet and its depravity is basically a lost cause, for the most part,” actress Scarlett Johansson said when asked about deepfakes in 2018.
Deepfakes’ journey from sexist weapon to political tool may be shocking to those who view women’s rights as a fringe issue, and utterly unsurprising to everyone else. Women inspire widespread, deeply felt animosity, and the intensity of this hatred drives misogynists to devise ever more powerful forms of destruction.
A similar dynamic shaped the 2016 election, when the US political system was left stunned by the torrent of lies and prejudice that overwhelmed public discourse. Social networks and politicians would have been better prepared had they paid more attention to the tactics deployed two years earlier in Gamergate, when women in the gaming industry were subjected to such rampant and vicious threats that they were forced to leave their homes.
Gamergate was not the first instance of online trolling. People of color, women, and queer and trans people have faced inordinate levels of hatred ever since the internet provided a cloak of anonymity for such prejudice. But Gamergate was an early example of coordinated attacks combined with relentless lies that distorted the truth. Brianna Wu, one of those targeted, believes the Obama administration’s failure to prosecute Gamergate’s perpetrators is to blame for allowing similar tactics to enter politics in 2016.
The memes, threats, and doxxing that dominated Gamergate have become the standard tools of those railing against political correctness. Forums that were home to the earliest Gamergate discussions, such as 4chan and 8chan, are now associated with the alt-right and white supremacy. Gamergate cheerleaders, such as Mike Cernovich and Milo Yiannopoulos, later became prominent supporters of President Trump and peddlers of political conspiracy theories.
Gamergate was also an early instance in which facts were subsumed by the incessant repetition of lies. Game developer Zoë Quinn was accused of sleeping with a journalist in exchange for a favorable review of her games, even though no such review existed. The loud, empty concerns about “gaming journalism ethics” in Gamergate provided a neat model for the claims of “fake news” that Mr. Trump and his supporters have since adopted.
The link between misogyny and political violence is not confined to the internet. There is a long, lethal history of men abusing women and going on to commit mass shootings. The killers at an Orlando nightclub in 2016 and a Sutherland Springs church in 2017—to name just two—had histories of harming women. And yet, despite all the counter-terrorism proposals developed to address mass violence, US police forces still fail in their responses to domestic violence. One in four women who called the police to report a partner’s violence said they wouldn’t do so again, according to a 2015 survey, and one in three said they felt less safe after reporting the attack.
It’s hardly a coincidence that sexism so frequently provides the training ground for political weapons. Misogyny is not a quirk or an isolated character flaw, but an inherently ideological stance—one that eschews equality and progress and human rights. Men who hate women enough to send threats, spread falsehoods, and defile reputations are unified by this shared sexism. And so misogynistic attacks form the basis of vindictive communities, which grow stronger and more tightly knit. Their memes and methods get sharper, their techniques for creating fake videos ever more advanced.
The political and online landscape has changed significantly since 2016, but one gaping security flaw is perpetually overlooked. For as long as women’s pain is treated as an afterthought, society as a whole will remain deeply vulnerable.
Olivia Goldhill is a reporter on Quartz’s investigations team. Here’s how you can reach us with feedback or tips:
Email (insecure): investigations@qz.com
Signal (secure): +1 929 202 9229
Secure Drop (secure & anonymous): qz.com/tips