AI-generated media imitating public figures have caused a stir across many realms of life — pop culture, religion, politics, and more. As generative artificial intelligence continues to advance, the ease with which we can distinguish the true from the false diminishes. AI-made photos, videos, and audio called “deepfakes” have taken the internet by storm over the last two years, prompting calls for regulation of the technology.
Quartz looks at the biggest AI deepfake moments.
A fake Ukrainian President Volodymyr Zelenskyy surrenders to Russia
In March 2022, shortly after Russia’s full-scale invasion, a deepfake video of Zelenskyy urging Ukrainians to lay down their arms and surrender circulated online after hackers placed it on a Ukrainian news site. The video was more obviously fake than newer deepfakes, which have become harder to distinguish from reality: the clip convinced few, if any, viewers, given its odd accent and the mismatched skin tone of the fake Zelenskyy.
In late 2023, Universal Music Group (UMG), together with ABKCO and Concord Publishing, filed a copyright infringement lawsuit against AI startup Anthropic, alleging that the company used song lyrics without permission to train its Claude chatbot. And some of the biggest names in the music industry, including Nicki Minaj, Katy Perry, Billie Eilish, and Camila Cabello (more than 200 artists in total), have since signed a strongly worded open letter calling on tech companies, AI developers, and music platforms to pledge not to develop or deploy AI music-generation tools that undermine or replace human artists.
Trump and Fauci get friendly (not)
A campaign video for DeSantis features deepfake photos of Trump kissing and hugging Fauci. Screenshot: DeSantis War Room video on X
Florida Gov. Ron DeSantis’ now-defunct presidential campaign posted a video on X last summer that included deepfake images of Donald Trump kissing and hugging Anthony Fauci, the former director of the National Institute of Allergy and Infectious Diseases. The move heightened ongoing concerns about the use of AI-generated misinformation and disinformation ahead of the 2024 U.S. presidential election.
Fake Biden tells New Hampshire not to vote
U.S. President Joe Biden participates in a joint press conference with Kenyan President William Ruto in the East Room at the White House on May 23, 2024. Photo: Anna Moneymaker (Getty Images)
“What a bunch of malarkey,” the message began.
It was an AI-generated voice imitating President Joe Biden, calling thousands of New Hampshire residents in January to discourage them from voting in the state’s Democratic primary election. “[Y]our vote makes a difference in November, not this Tuesday,” the voice said.
The Federal Communications Commission responded by declaring AI-generated voices in robocalls illegal. “Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities, and misinform voters. We’re putting the fraudsters behind these robocalls on notice,” FCC Chairwoman Jessica Rosenworcel said at the time.
Taylor Swift deepfake nudes
Taylor Swift performs onstage during “Taylor Swift | The Eras Tour” at La Defense on May 09, 2024 in Paris, France. Photo: Kevin Mazur/TAS24 (Getty Images)
If there’s one person who can unite the country, it’s Taylor Swift. In January, sexually explicit AI-generated images of the pop star spread widely across X, and the attack moved federal legislators in both parties to action. While there’s no federal law on the books regarding AI-made nudes, U.S. senators introduced legislation that same month that would allow victims to sue the perpetrators, citing Swift’s case.
“Although the imagery may be fake, the harm to the victims from the distribution of sexually explicit deepfakes is very real,” said U.S. senators Richard Durbin and Lindsey Graham in their announcement of the bill dubbed the DEFIANCE Act. “Victims have lost their jobs, and may suffer ongoing depression or anxiety.”
Spamouflage spams Taiwan’s presidential election
An elevated view of Taipei city during twilight. Photo: Jorge Fernández/LightRocket (Getty Images)
Ahead of Taiwan’s presidential and legislative elections in January, an online operation backed by the Chinese Communist Party, known as “Spamouflage” or “Dragonbridge,” used AI deepfakes to try to sway the results. The group made fake audio clips of a former candidate, who had dropped out of the race months earlier, appearing to endorse a candidate they did not actually support.
In response, Microsoft sounded the alarm about China’s use of AI to create misinformation campaigns and sway foreign elections. “This was the first time that Microsoft Threat Intelligence has witnessed a nation state actor using AI content in attempts to influence a foreign election,” its researchers wrote.