
Lawmakers and advocates are pushing for federal legislation to criminalize nonconsensual AI-generated pornography. They say that so-called deepfake porn is ruining the lives of victims, who are most often women and girls.
“When you don’t have clear legislation on a federal and state level, when a victim goes to law enforcement, they are frequently told there’s nothing that can be done,” said Andrea Powell, director of the advocacy group Image-Based Sexual Violence Initiative, during a recent roundtable discussion on the issue. The online forum was hosted by the nonprofit National Organization for Women (NOW).
“Those individuals then went on to have offline threats of sexual violence, harassment, and what we’ve also found, unfortunately, is that some [victims] don’t survive,” Powell added. She calls AI deepfake nude apps a “virtual gun” in the hands of men and boys.
The term “deepfake” was coined in late 2017 by a Reddit user who used Google’s open-source face-swapping technology to make pornographic videos. AI-generated, sexually explicit content has spread like wildfire since ChatGPT brought generative artificial intelligence into the mainstream. Tech companies have raced to build better AI photo and video tools, and some people are using the tools for harm. According to Powell, there are 9,000 websites listed on Google Search that show explicit deepfake abuse. And between 2022 and 2023, deepfake sexual content online increased by over 400%.
“It’s getting to the point that you’re seeing 11- and 12-year-old girls who are scared to be online,” she said.
Deepfake regulations vary across states. So far, 10 states have laws on the books, and six of those states impose criminal penalties. Additional deepfake bills are pending in Florida, Virginia, California, and Ohio. And San Francisco this week filed a groundbreaking lawsuit against 16 deepfake porn websites.
But advocates say a lack of consistency across state laws creates problems, and federal regulations are well past due. They also say that platforms, not just individuals, should be held liable for nonconsensual deepfakes.
Some federal policymakers are working on it. Representative Joe Morelle (NY) in 2023 introduced the Preventing Deepfakes of Intimate Images Act, which would criminalize the nonconsensual dissemination of deepfakes. Shortly after deepfake nudes of Taylor Swift spread across the internet, lawmakers introduced the DEFIANCE Act, which would bolster victims’ rights to take civil action. And a bipartisan bill called the Intimate Privacy Protection Act would hold tech companies accountable when they fail to address deepfake nudes on their platforms.
In the meantime, victims and advocates are taking matters into their own hands. Breeze Liu was working as a venture capitalist when she became the target of deepfake sexual harassment in 2020. She created an app called Alecto AI that helps people track and remove online deepfake content that uses their likeness.
Recalling her own experience as a victim of deepfake abuse, Liu said during the online meeting of advocates, “I felt like I was probably better off dead because it was just absolutely horrendous.
“Long enough have we suffered from online image abuse,” she added. “I built this company with the hope that all of us and our future generations would one day take it for granted that no one would ever have to die from what happened, the violence that happened online.”
Beyond building Alecto AI, Liu is advocating for federal policy changes that would criminalize nonconsensual AI deepfake pornography, such as Morelle’s 2023 bill. However, the Preventing Deepfakes of Intimate Images Act has not progressed since it was introduced last year.
Notably, some tech companies have already taken steps to address the issue. Google on July 31 updated its policies to mitigate nonconsensual deepfake content. Others are facing pressure. Meta’s Oversight Board in late July said the company must do more to address explicit AI-generated content on its platform.