Folk musician Murphy Campbell discovered in January 2025 that several AI-generated covers of her songs had been uploaded to Spotify under her name — songs she had never released to the platform, with vocals that weren't quite her own.
Campbell, an independent folk artist, had posted performances of her music to YouTube. Someone appears to have used those recordings as source material to clone her voice with AI, then distributed the resulting tracks to streaming platforms under her identity. When The Verge ran one of the uploaded songs, "Four Marys," through two separate AI detection tools, both flagged it as likely AI-generated.
Someone had effectively stolen her identity: her voice and her name. And the systems meant to protect artists had no clear answer for her.
How AI Voice Cloning Turns Artists Into Unwilling Brands
What happened to Campbell reflects a specific and growing pattern: bad actors harvest publicly available audio from platforms like YouTube, generate AI covers using voice-cloning tools, and then upload the results to streaming services under the original artist's name to collect royalty payments. The artist whose identity is being used receives nothing — and in some cases, doesn't even find out.
Campbell told The Verge she was "kind of under the impression that we had a little b[it of protection]" — a sentence that captures the gap between what most musicians assume about the law and what the law actually delivers. Independent artists in particular tend to lack the legal resources and label infrastructure that might give them faster recourse.
The economic incentive for bad actors is straightforward. Streaming platforms pay per play, and an artist with an established following — even a modest one — represents a ready-made audience. By uploading under a known name, fraudsters skip the work of building a listener base and parasitize one that already exists.
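To make that incentive concrete, here is a back-of-the-envelope sketch in Python. The per-stream rate of $0.004 is an assumed round figure within the range commonly cited for major platforms, and the stream counts are hypothetical; none of these numbers come from Campbell's case.

```python
# Rough estimate of what a streaming-fraud scheme can earn.
# The per-stream rate and stream counts below are illustrative
# assumptions only; actual payouts vary by platform, territory,
# and subscriber mix.

PER_STREAM_RATE = 0.004  # assumed USD per play, mid-range of common estimates

def estimated_payout(streams: int, rate: float = PER_STREAM_RATE) -> float:
    """Gross royalty estimate for a given number of plays."""
    return streams * rate

# A fraudster piggybacking on an artist's existing audience skips
# the cost of building one, so even modest play counts are profit.
for streams in (10_000, 100_000, 1_000_000):
    print(f"{streams:>9,} streams -> ~${estimated_payout(streams):,.2f}")
```

Even at the high end, these per-track sums are small, which is one reason this kind of fraud tends to operate at scale, spreading uploads across many hijacked names rather than betting on a single catalog.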
A Copyright System That Wasn't Built for This
When Campbell tried to have the fake tracks removed, she ran into a copyright system that was designed for a different era of music piracy. Traditional copyright enforcement tools — takedown notices, platform dispute mechanisms — were built to address the unauthorized copying of existing recordings, not the creation of new recordings that synthetically replicate a real artist's voice and are then uploaded as if they were legitimate releases.
Compounding the problem, Campbell reportedly faced a copyright claim from a third party while trying to protect her own work. The precise nature of that claim, and who filed it, remain unclear. But the episode underscores a broader dysfunction: the same legal mechanisms artists use to defend themselves can be turned against them, particularly by those who understand how to exploit automated enforcement systems on platforms like YouTube and Spotify.
This is not an isolated case. Researchers and music industry analysts have documented a surge in AI-generated content flooding streaming platforms. In 2024, Universal Music Group and other major labels sued AI music generation companies over the use of copyrighted recordings to train their models. But those cases address training data, not the downstream problem of fake tracks being distributed under real artists' names.
What Platforms and Lawmakers Are — and Aren't — Doing
Spotify has stated that it prohibits artificial streaming and fraudulent content, and has removed tracks in high-profile cases. But enforcement remains reactive rather than proactive, dependent on artists or their representatives spotting the problem and filing complaints. For an independent musician without management, that places the entire burden of policing a global platform on a single person.
In the United States, the NO FAKES Act, proposed federal legislation that would give individuals a right against unauthorized AI replicas of their voice and likeness, has been introduced in Congress but has not become law. Without it, artists like Campbell must navigate a patchwork of existing intellectual property law that offers incomplete and inconsistent protection.
Some states have moved faster than the federal government. Tennessee passed the ELVIS Act in 2024, specifically protecting musicians' voices from AI replication without consent. But state-level laws are difficult to enforce against platforms and actors operating across state and national borders, and they do nothing for artists in states without similar protections.
The human cost of this gap is not abstract. For independent musicians, streaming revenue — however modest — can be a meaningful part of their income. Having fake tracks siphon listeners, distort their catalog, and potentially damage their reputation with fans who encounter poor-quality AI imitations represents a direct financial and professional harm.
What This Means
Murphy Campbell's case illustrates that AI-enabled music fraud is no longer a theoretical risk for independent artists — it is occurring now, and the legal and platform infrastructure to stop it does not yet exist at the scale the problem demands.
