Who Owns a Song Written by AI? The Music Industry's Biggest Legal Battle


Last month, a track called "Echoes of You" climbed to number 14 on Spotify's Global Top 50 playlist. It had a lush vocal arrangement, clever chord progressions, and a hook that lodged itself in the listener's brain. It also had no human songwriter.

The song was generated entirely by an AI system called Harmonia, developed by a Berlin-based startup. It was uploaded to Spotify by a 22-year-old producer in Lagos who used the platform's API to create, master, and distribute the track in under 90 minutes. His total cost: $12 for the monthly subscription.

The track has since been pulled from the platform following a copyright dispute, but the questions it raised are not going anywhere. They are, in fact, multiplying.

A Flood of Machine-Made Music

The scale of AI-generated music on streaming platforms has exploded. Industry analysts estimate that more than 100,000 AI-generated tracks are uploaded to major streaming services every day, a figure that has tripled in the past year. Many are background instrumentals or lo-fi beats designed to rack up passive streams. But an increasing number are sophisticated vocal tracks that are nearly indistinguishable from human-made music.

The tools have become remarkably accessible. Services like Suno, Udio, and Harmonia allow anyone to generate broadcast-quality songs from a text prompt. Type "upbeat pop song about heartbreak with female vocals in the style of early 2020s," and you will have a finished track in minutes.

For the music industry, this represents both an existential threat and an uncomfortable mirror.

The Copyright Vacuum

United States copyright law, as it currently stands, does not recognize AI as an author. The Copyright Office has repeatedly stated that copyright protection requires human authorship, meaning that a song generated entirely by AI cannot be copyrighted by anyone -- not the developer of the AI, not the person who typed the prompt, and certainly not the machine itself.

This creates a paradox. The songs exist. They generate revenue. They compete for listener attention with human-made music. But they exist in a legal no-man's-land where ownership is undefined.

"You have a product that generates economic value but has no legal owner," said Mary Rasenberger, CEO of the Authors Guild. "That is not a sustainable situation. The law has to catch up."

Some producers argue that the human who crafts the prompt and curates the output deserves authorship credit, much as a photographer claims authorship by framing and selecting a shot rather than by painting each pixel. The Copyright Office has so far rejected that argument in most cases, though it has allowed protection for works where AI-generated elements are combined with substantial human creative input.

Artists Push Back

The backlash from working musicians has been fierce. More than 30,000 artists, including Billie Eilish, Kendrick Lamar, and Stevie Wonder, have signed an open letter calling for legal protections against AI systems trained on copyrighted music without consent or compensation.

The core grievance is straightforward: the AI models that generate these songs were trained on vast libraries of existing music, much of it copyrighted. Artists argue that their work is being used as raw material to build tools that will ultimately replace them, and that they deserve both consent and payment.

"They scraped my entire catalog to build a machine that makes songs that sound like me," said Grammy-winning producer Mustard in a recent interview. "And now that machine is competing with me on the same playlist. How is that legal?"

Several major lawsuits are working their way through the courts. Universal Music Group has filed suit against Suno and Udio, alleging mass copyright infringement in the training process. The cases are expected to reach trial later this year and could set precedents that reshape the industry.

The Label Dilemma

Major record labels find themselves in an awkward position. On one hand, they represent the artists whose work was used to train AI models without permission. On the other, they see enormous potential in AI tools that could reduce production costs, generate catalog filler, and create personalized music experiences for listeners.

Warner Music Group quietly signed a licensing deal with an AI music startup last fall, drawing criticism from artists on its own roster. Sony Music has taken a harder line, issuing takedown notices against AI platforms and lobbying for stricter regulation.

What Comes Next

Congress is considering the AI Music Transparency Act, which would require AI-generated tracks to be labeled as such on streaming platforms and mandate that training datasets be disclosed. A separate bill would create a compulsory licensing framework for AI training on copyrighted music, similar to the mechanical licenses that govern cover songs.

The European Union is further ahead, with its AI Act already requiring disclosure of copyrighted training data.

But legislation moves slowly, and the technology does not wait. Every week, the tools get cheaper, the output gets better, and the line between human and machine creativity gets harder to draw. For an industry built on the notion that songs come from somewhere -- from lived experience, from emotion, from the irreducible spark of human expression -- that blurring is not just a business problem. It is a philosophical one.

