Last week, sexually explicit AI-generated images of Taylor Swift flooded X, the latest high-profile deepfake incident and a stark reminder of how hard such content is to stop.
One fake image of the pop singer drew more than 45 million views and 24,000 reposts, according to The Verge. The post reportedly stayed live on the platform for 17 hours before it was removed.
What are deepfakes, and why are they happening?
Deepfakes are made with a form of machine learning called deep learning: an algorithm is fed a large set of examples and learns to produce new outputs that resemble them, whether artificial images, video, or audio.
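To make that idea concrete, here is a minimal, purely illustrative sketch of one common deep-learning setup behind such generators, a generative adversarial network (GAN), written in PyTorch. None of this comes from an actual deepfake tool: the network sizes, learning rates, and the placeholder real_examples tensor are all assumptions made for the example.

```python
# Illustrative sketch of a GAN: a generator is trained until its outputs
# are hard to tell apart from real examples. Sizes and data are toy
# placeholders, not parameters from any real deepfake system.
import torch
import torch.nn as nn

IMG_DIM, NOISE_DIM = 64, 16  # stand-ins for real image dimensions

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 128), nn.ReLU(), nn.Linear(128, IMG_DIM))
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 128), nn.ReLU(), nn.Linear(128, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

# Placeholder for real training images (e.g., photos of a person).
real_examples = torch.randn(256, IMG_DIM)

for step in range(1000):
    real = real_examples[torch.randint(0, 256, (32,))]
    fake = generator(torch.randn(32, NOISE_DIM))

    # Discriminator learns to separate real examples from generated ones.
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator learns to produce outputs the discriminator accepts as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

In a real deepfake pipeline the training data would be thousands of images of a person and the networks would be deep convolutional models, but the training loop follows the same adversarial pattern.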
One high-profile deepfake is a 2022 video that appeared to show Ukrainian president Volodymyr Zelenskyy telling his soldiers to lay down their weapons and return to their families. Another is the Republican National Committee's release of an ad built with AI-generated imagery showing US president Joe Biden's second term plagued by disasters.
Then there’s the infamous AI deepfake of the Pope, created by a 31-year-old construction worker who was playing around with the AI image generator Midjourney, which struck many viewers as highly realistic. That said, most deepfakes found on the internet are nonconsensual pornography.
What’s the big deal? The problem is that anyone who can create deepfakes can spread misinformation and even steer people toward behavior that advances the creator’s agenda. Now there’s added concern that generative AI enables the creation of convincing fake images en masse.
Of course, not all deepfakes are bad. For instance, education platforms are using the tech to create AI tutors that provide better support for students than a generic video lecture.
What happened with the Taylor Swift deepfakes?
Last Wednesday (Jan. 24), explicit images of Swift began proliferating on X. The fakes were easy to spot, and Swifties swarmed the platform to try to drown them out with real images of the singer, the Wall Street Journal reported. The next day, a host of accounts that users had flagged for sharing the most viral images were suspended or restricted.
A report from 404 Media found that the images might have originated in a group on the social platform Telegram, where users share explicit, AI-generated images of women.
When did deepfakes start?
The term “deepfake” was coined in late 2017 by a Reddit user who posted under that name. The user shared pornographic videos created with Google’s open-source face-swapping technology.
What are companies and policymakers doing to stop them?
In response to the backlash against AI deepfakes, tech companies including Google and Meta have mandated that political ads carry a written disclosure if they use AI-generated images, video, or audio.
The states of California, Texas, and Virginia have criminalized deepfake porn. And in the first three weeks of this year alone, at least 14 US states introduced legislation to combat AI deepfakes in elections, whether through disclosure requirements or outright bans. Elsewhere, China and South Korea are regulating deepfakes, too.