
AI-generated music is about to flood streaming platforms


The music business is pushing back on AI. Universal Music Group, home to superstars like Taylor Swift, Nicki Minaj, and Bob Dylan, has urged Spotify and Apple to block AI tools from extracting lyrics and melodies from their artists’ copyrighted songs, the Financial Times reported last week. UMG executive vice president Michael Nash wrote in a recent opinion piece that AI music is “diluting the market, making original creations harder to find, and violating artists’ legal rights to compensation for their work.”

Neither Apple nor Spotify responded to requests for comment on how many AI-generated songs are on their platforms or whether AI has created more copyright infringement issues.

The news came after UMG requested that an Eminem-style rap about cats be removed from YouTube for copyright infringement. But the music industry is concerned with more than just AI copying a vocal performance; it also worries that machines will learn from its artists’ songs. Last year, the Recording Industry Association of America sent a list of AI scrapers to the US government, claiming their “use is unauthorized and infringes our members’ rights” when they use copyrighted works to train models.

This argument is similar to the one artists used in a lawsuit brought against AI image generators earlier this year. As with that case, there are still many unanswered questions about the legality of AI-generated art, but Erin Jacobson, a music attorney in Los Angeles, points out that people who upload AI-created material that clearly violates copyrights could be held liable. Whether streamers will be held accountable is more nuanced.

New generative technology shows a tendency toward mimicry. Earlier this year, Google announced that it had created an AI tool called MusicLM that can generate music from text. Enter a prompt requesting a “fusion of reggaeton and electronic dance music, with a spacey, otherworldly sound,” and the generator will deliver a clip. But Google didn’t roll out the tool widely, noting in its paper that about 1 percent of the music it generated matched existing recordings.

Much of this AI music could take over mood-based genres like ambient piano or lo-fi beats. And it may be cheaper for streamers to fill playlists with AI-generated music than to pay even negligible royalties. Clancy says he doesn’t think AI is moving too fast, but that people may be moving too slowly to adapt, potentially leaving human artists without the fair treatment they deserve in the industry. Changing that means making clear distinctions between AI-made and human-made music. “I don’t think it’s fair to say ‘AI music is bad’ or ‘human music is good,’” Clancy says. “But one thing I think we can all agree on is that we like to know what we’re listening to.”

But there are many examples of artists working with AI, not in competition with it. Musician Holly Herndon used AI to create a clone of her voice, which she calls Holly+, that can sing in languages and styles she cannot. Herndon created it to maintain sovereignty over her own voice, but as she told WIRED late last year, she also did so in the hope that other artists would follow suit. BandLab offers a feature called SongStarter, which allows users to work with AI to create royalty-free beats. It’s meant to remove some of the barriers to songwriting.

AI may become a perfect mimic, but on its own it may not create music that resonates with listeners. Our favorite songs capture heartbreak, or speak to and shape today’s culture; they break new ground in times of political turmoil. AI will have a role in writing, recording, and performing songs. But if people open their music streaming apps and see too many songs created by AI, they may struggle to connect.


