AI music is getting messy
Major labels that once fought streaming are racing to negotiate deals that will determine how music gets made, who gets paid, and what consumers know
Xania Monet just became the first "artificial" artist to chart on Billboard's airplay rankings and secure a multimillion-dollar record deal. But most listeners can't tell she isn't actually human: She's a creation of generative AI. That disconnect is a problem the music industry is scrambling to solve.
Monet's breakthrough arrives as the recording industry, already transformed by two decades of digital disruption, enters its next phase of reinvention. Major labels that once fought streaming are now racing to stake claims in AI territory, negotiating deals that will determine how music gets made, who gets paid, and what consumers actually know — or care — about what they're listening to.
The sound of uncertainty
The deal that landed Monet her recording contract came after what Billboard described as "a bidding war," suggesting that multiple labels saw commercial potential in an artist who doesn't exist beyond code. Her Apple Music profile describes her as "an AI figure presented as a contemporary R&B vocalist in the highly expressive, church-bred, down-to-earth vein" of established soul and R&B artists.
Behind Monet is Telisha Nikki Jones, a Mississippi poet who writes the lyrics that Monet performs using Suno's generative AI platform. Monet has released at least 31 songs since the summer, including a full-length album "Unfolded" in August with 24 tracks. Her songs "Let Go, Let God" and "How Was I Supposed to Know" have charted on Billboard's Hot Gospel Songs and Hot R&B Songs, respectively, a first for artificial artists.
"AI doesn't replace the artist," Romel Murphy, Monet's manager, told CNN. "It doesn't diminish the creativity and doesn't take away from the human experience. It's a new frontier."
But that frontier looks different depending on where you're standing. Working musicians see their already precarious livelihoods threatened by endless AI-generated alternatives. Industry executives see both opportunity and existential threat. And listeners? They mostly don't know what they're hearing.
A recent study found that listeners could only correctly identify AI-generated music 53% of the time, barely better than random guessing. When presented with stylistically similar human and AI songs, accuracy improved to 66%, but that still means one in three listeners couldn't tell the difference.
From courtroom to conference room
The speed of the industry's reversal on AI is dizzying. Just last year, Universal Music Group, Sony Music, and Warner Music Group sued the AI music startups Suno and Udio, accusing them of training their models on copyrighted music without permission. Now Universal has settled with Udio, agreeing to launch a subscription service next year where fans can create remixes and customized tracks using licensed songs.
The settlement terms remain undisclosed, but the structure hints at the industry's strategy. Artists must opt in to have their music included, and all AI-generated content must stay within Udio's platform. Similar deals are reportedly weeks away. According to the Financial Times, Universal and Warner are in talks with Google, Spotify, and various AI startups including Klay Vision, ElevenLabs, and Stability AI. The labels are pushing for a streaming-like payment model where each use of their music in AI training or generation triggers a micropayment.
The urgency is understandable. Besides Monet, Billboard says at least one new AI artist has shown up on its charts in each of the last five weeks, meaning there are more and more chances for chart-topping confusion. Spotify revealed that it removed 75 million tracks last year to maintain quality, though the company won't specify how many were AI-generated. Deezer, another streaming platform, reports that up to 70% of AI-generated music streams on its platform are fraudulent, suggesting the technology is already being weaponized for streaming fraud at scale.
The human cost
For independent artists and smaller acts, the implications are stark. Unlike Taylor Swift or Billie Eilish, who have leverage through their labels and massive fanbases, emerging musicians face an ecosystem where they compete against infinite variations of themselves.
The lack of transparency about what music AI models are trained on means independent artists could be losing compensation without even knowing their work was used. Industry groups are calling for mandatory labeling of AI-generated content, warning that without safeguards, artificial intelligence risks repeating streaming's pattern of tech platforms profiting while creators struggle.
Currently, streaming platforms have no legal obligation to identify AI-generated music. Deezer uses detection software to tag AI tracks, but Spotify doesn't label them at all, leaving consumers in the dark about what they're hearing.
The industry's challenge goes beyond detection or regulation. Music has always been more than sound waves arranged in pleasing patterns. It's been about human connection, shared experience, and the stories we tell ourselves about the songs we love.
As AI-generated artists climb the charts and secure record deals, the question isn't whether machines can make music that sounds real. They already can.
The question is whether listeners will still care about the difference once they know the truth.