Taylor Swift has finally announced which candidate she's voting for in the upcoming presidential election, and she isn't letting the other candidate get away with using artificial intelligence to fake her support.
The singer posted her endorsement of Vice President Kamala Harris on Instagram (META) minutes after Harris's first debate with former President Donald Trump. Swift, who previously endorsed Harris and Joe Biden in 2020, wrote that she's voting for the vice president “because she fights for the rights and causes I believe need a warrior to champion them.”
But while Swift explained her reasoning and the importance of voting, the pop star said her public support was spurred by misinformation from the opposing camp.
“Recently I was made aware that AI of ‘me’ falsely endorsing Donald Trump’s presidential run was posted to his site,” Swift wrote. “It really conjured up my fears around AI, and the dangers of spreading misinformation.”
In August, Trump posted seemingly AI-generated images of women wearing “Swifties for Trump” t-shirts at his rallies, along with an AI-generated image of the singer dressed as Uncle Sam, to his Truth Social account. The image of Swift included the words, “Taylor wants you to vote for Donald Trump.” Trump wrote “I accept!” on the post.
After the images circulated, Swift said she came “to the conclusion that I need to be very transparent about my actual plans for this election as a voter. The simplest way to combat misinformation is with the truth.” Swift also said she was “so heartened and impressed” by Harris’s vice presidential pick, Tim Walz, “who has been standing up for LGBTQ+ rights, IVF, and a woman’s right to her own body for decades.”
Swift signed her post “Childless Cat Lady,” possibly referencing Trump running mate JD Vance, who previously said the U.S. is being run by groups including “a bunch of childless cat ladies who are miserable at their own lives and the choices that they’ve made and so they want to make the rest of the country miserable, too.”
Around 20 U.S. states have legislation regulating deepfakes in elections, typically defined as “AI-generated images, audio, or video depicting a candidate saying or doing things they never did” in an effort to misinform voters or harm a candidate’s reputation. Meanwhile, AI companies have updated their tools to limit election-related queries and curb misinformation, including Google (GOOGL), which announced in August that it had added election-related restrictions to its Search AI Overviews feature.