YouTube against AI deepfake voices

The platform announced that it is developing new tools to address this growing music industry issue

Written by Ángeles Delfina Herrera
Updated over 2 months ago

Nowadays, a major concern for the music industry is the rapid development of AI used in songs and productions. Throughout human history, whenever we create new technology, there’s a point where we fear it might get out of hand, just like in all those post-apocalyptic movies. In the last five years, AI development has accelerated dramatically, improving almost monthly and gaining new capabilities, with music as one of its focal points from the start. This has sparked new debates about the “right” uses of AI in relation to human creativity and has opened up a whole new realm of copyright issues around voices. In the United States, for example, there is a proposed law that would make the unauthorized AI use of a person’s voice illegal. It’s not easy to implement, however, because some people naturally have similar voices or are skilled impersonators, and unlike fingerprints, voices aren’t inherently unique. Authorities are trying to find a balanced approach that protects artists while avoiding misunderstandings.

There are also AI-generated artists, but because they are trained on copyrighted music, the major labels consider the resulting output to belong to them. One of the most famous cases involved Bad Bunny. Someone released a song called "Nostalgia" that went viral and was monetized on certain platforms because people believed it was his work, even though he was unaware of the song’s existence. The composition wasn’t his at all; it was an entirely new track, but it used an AI clone of his voice. He had to take legal action, but the legal framework was unclear.

Some lesser-known artists claim that major labels have always used AI to enhance their works or as a source of inspiration. They argue that it’s unfair to restrict this powerful tool now that independent artists have access to it too, viewing the crackdown as a selfish move by big companies to monopolize AI technology.

This is an ongoing debate, and even as I write this, new AI capabilities are likely being developed that could change everything. In the meantime, digital platforms are introducing features to protect artists’ rights and their ability to monetize their work and voices.

For example, YouTube is developing new tools to identify AI-generated content. As of this year, record labels and artists can request the removal of a video that uses an AI-generated version of their voice. Creators are also required to disclose the use of AI in a dedicated section of the upload process. YouTube warns that penalties for failing to accurately label AI-generated content will vary but could include removal and demonetization. YouTube also offers its music industry partners the option to request the removal of content that "imitates an artist's unique singing or rapping voice." The only exception is content "for the purpose of news, analysis, or critique of synthetic voices."

Last month, YouTube announced that it is working on adding a “synthetic-singing identification technology” to its Content ID system, prioritizing the protection of artists’ work over the monetization of new AI-generated content. There is no specific release date yet, but expectations are high.

YouTube also emphasized that collecting content from its platform without permission violates its terms of service, sending a clear message to individuals or companies who might use existing YouTube videos to create AI-generated content without authorization.

In their blog, they added, “That’s why we’re developing new ways to give YouTube creators choice over how third parties might use their content on our platform. We’ll have more to share later this year.”
