From the Recording Academy - AI Hub & Advocacy Day May 1

NekujaK

Email sent to Recording Academy members...

How much do you actually know about AI in music?

In the ever-evolving landscape of music, the rise of generative artificial intelligence (AI) brings with it an exciting frontier — not as a replacement for human creativity, but as a partner enhancing the musical experience.

Together, human ingenuity and AI can create symphonies that were previously unimaginable, transcending traditional boundaries and introducing a new era of expression.

To ensure this future benefits all creators, adequate legislation must be implemented to foster a thriving ecosystem where technology amplifies, and does not replace, creativity and artists.

Join us in shaping a future where humanity and technology play in concert.


1. There are no safeguards in place at the federal level to protect an artist’s voice, image, name, or likeness from AI exploitation.

2. Over 170 million AI-generated tracks have been created so far, totaling over 970 years of constant listening.

3. The generative AI music market is expected to be valued at $2.6 billion by 2032.

4. AI music generators have used copyrighted songs and lyrics without authorization from artists, songwriters, music publishers, or record labels to train their data models.

5. The Recording Academy spearheaded a collective effort to combat AI fraud in Tennessee with the passing of the ELVIS Act, the first law to protect human creativity at the state level.

6. Federal legislation, like the No AI FRAUD Act, is imperative to protect human music makers.

7. Used responsibly, AI can contribute to amazing creative opportunities and enhance human artistry.
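For context on point 2, the "970 years" figure works out if you assume an average track length of about three minutes; that average is an assumption for this back-of-the-envelope check, not a number stated in the email:

```python
# Sanity-check point 2: 170 million tracks vs. "over 970 years of constant listening".
# The 3-minute average track length is an assumed figure, not from the email.
tracks = 170_000_000
avg_minutes_per_track = 3           # assumption
total_minutes = tracks * avg_minutes_per_track
years = total_minutes / (60 * 24 * 365.25)  # minutes per Julian year
print(round(years))                 # prints 970
```

So the two numbers in the email are internally consistent for typical song lengths.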

This week at GRAMMYs on the Hill, the Recording Academy will share AI concerns with lawmakers on Capitol Hill during Advocacy Day and at the inaugural Future Forum where panel discussions will explore the impacts of artificial intelligence on our community.

We’ll be advocating for YOU on Capitol Hill, but we need your support from home, on the road, or wherever you are!

Get Ready To Protect Human Artistry On Advocacy Day, May 1!

You can support the cause by contacting your local representatives, amplifying our efforts on social media, exploring our AI resource hub to deepen your understanding of the issues, and more.

 
5. The Recording Academy spearheaded a collective effort to combat AI fraud in Tennessee with the passing of the ELVIS Act, the first law to protect human creativity at the state level.

So no more Elvis impersonators? Damn, this is confusing. Does this mean he is really dead or still alive?

6. Federal legislation, like the No AI FRAUD Act, is imperative to protect human music makers.


Dug into this a little. My takeaway is that it's much like the TN ELVIS Act: geared toward protecting stars and their owners, but broader in scope, covering more than just music stars. The focus is on "voice and likeness". There's a pretty good summary in the findings section of what was misappropriated over the last couple of years:

Congress finds that recent advancements in artificial intelligence (AI) technology and the development of deepfake software have adversely affected individuals’ ability to protect their voice and likeness from misappropriation, including:
(1) On or around April 4, 2023, AI technology was used to create the song titled “Heart on My Sleeve,” emulating the voices of recording artists Drake and The Weeknd. It reportedly received more than 11 million views.
(2) On or around October 1, 2023, AI technology was used to create a false endorsement featuring Tom Hanks’ face in an advertisement for a dental plan.
(3) From October 16 to 20, 2023, AI technology was used to create false, nonconsensual intimate images of high school girls in Westfield, New Jersey.
(4) In fall 2023, AI technology was used to create the song titled “Demo #5: nostalgia,” manipulating the voices of Justin Bieber, Daddy Yankee and Bad Bunny. It reportedly received 22 million views on TikTok and 1.2 million views on YouTube.
(5) A Department of Homeland Security report titled the “Increasing Threat of Deepfake Identities” states that as of October 2020, researchers had reported more than 100,000 computer-generated fake nude images of women created without their consent or knowledge.
(6) According to Pew Res

7. Used responsibly, AI can contribute to amazing creative opportunities and enhance human artistry.

"Together, human ingenuity and AI can create symphonies that were previously unimaginable, transcending traditional boundaries and introducing a new.."

Hmm, why does this smell like bullshit?
 
7. Used responsibly, AI can contribute to amazing creative opportunities and enhance human artistry.

"Together, human ingenuity and AI can create symphonies that were previously unimaginable, transcending traditional boundaries and introducing a new.."

Hmm, why does this smell like bullshit?
I interpret this as throwing a bone to AI proponents, and making sure the RA doesn't come across as anti-tech Luddites. Ultimately, what really matters is any legislation they can help push through that limits AI incursion, and their human-only Grammy policy.
 
Like with image AIs, this will fail as well.

This is the anger and bargaining stage, so at least there's less denial.

No one who is in the "it's copyright infringement, we must ban all uncleared copyrighted training data" stage can ever explain how they could possibly get what they want.

Even if you could beat the biggest corporations in the world, with the biggest lobbying power and the most expensive lawyers, it still wouldn't be enough. You don't just need to beat the Googles, Facebooks, Microsofts, Adobes, etc. You need to beat companies like Getty Images, which, as I said before, literally sued over Stable Diffusion for using their content in its training data. You might think a company like Getty is after the same thing as you, but they're not. You might think that if their court case succeeds it helps yours, but they're actually helping to ensure you lose. Getty didn't care on principle that Stable Diffusion used their copyrighted content; they were upset because THEY didn't get to profit. Getty went and made their own AI model (ironically, presumably built on Stable Diffusion), trained on their entire catalogue! I've said this several times now, but do you think they asked their creators to opt in? Do you think they paid them? Even if they did all of this, how much could they possibly pay them to make up for the destruction of their entire industry? There's no way they'd be able to pay any of their creators much of anything, or it would cost an absolute fortune.

In other words, if the music giants like Universal Music sue, they're likely planning to screw creators and artists in the same way. They'll likely try to establish a legal precedent that benefits THEM, not you. Getty wasn't trying to protect artists: their own model "screws" their own creators almost as much as the Stable Diffusion model they were suing over. These lawsuits will end up creating precedents under which, even if Getty wins, the content creators necessarily lose. Getty isn't going to win a case that makes their own AI model illegal, are they?

One of the legal standards you're likely to get is that AI outputs "must not infringe copyright". That would be an absolute loss, because what you actually want the legal standard to be is that it doesn't matter what the audio sounds like. That's why the arguments about how Udio can make vocals that sound like the training data are shooting themselves in the foot, even though people think they're a slam dunk. They set up with the court the premise that the issue is what the output is, and that doesn't logically imply the training data is the problem. There is a legal argument that using the data is transformative; agree with it or not, it exists. It isn't as clear-cut as many seem to think, and arguments focusing on the output only make that clearer. The output should be totally irrelevant. Standard copyright infringement only cares what the output is, whereas what you want is a legal precedent that says every output from a model trained on uncleared copyrighted data is infringement.

If you got what you wanted, OpenAI would have to junk GPT-4! Do you think they're going to allow that to happen?

The AI they want to build needs to be trained on as much material as possible, so they're never going to stop fighting for the ability to do it. I mentioned the big tech companies, which are extremely powerful in themselves, but there are bigger powers. Firms like BlackRock and Vanguard (which hold stakes in each other and function almost like one megacorporation) are really the ones that effectively own the entire stock market. You're fighting them and their interests as well. The "powers that be" want this AI, and for that they need it to hoover up all the content they can get their hands on.

None of this legal stuff matters, because in the end there will be an open-source AI model that lets you train on whatever music you like and even share those models. When Stable Diffusion releases a new model, the training data of the base model hardly matters, since the power of SD really shows once you start using custom models. You could ban all copyrighted content in AI models, but unless you also banned open source and the ability to use a reference image, piece of music, or audio clip, you'd be in essentially the same situation.


The world is flooding and you think you can stop it with a few logs. I don't know why some people are so stubborn about seeing this, or in such denial that they think the law will ban it. Ironically, many of these people had no issue with AI, and even openly said they liked AI like ChatGPT and Midjourney... until it came for music.

TL;DR:

- It's not possible for artists to stop AIs being trained on copyrighted content.
- The biggest corporations in the world won't junk their models (the GPT series, DALL-E, Sora, etc.).
- The arguments composers/producers are making just ensure their failure in court, by focusing on the output. You can win by arguing it infringes likeness and copyright, but that won't logically imply the training data must be limited to cleared content.
- The lawsuits so far are doing a terrible job of arguing for this. The NYT lawsuit against OpenAI and ChatGPT was heavily cherrypicked and unfair, and again focused on the output. Getty Images isn't on your side, despite suing over Stable Diffusion, since they went ahead and "ripped off" all their own content creators.
- You'd have to not only make them junk all their models, but outlaw the ability to use reference material as AI media inputs.
- Even if you got everything you wanted, you'd have to somehow ban open source as well.
- You'd have to ban companies like Getty from training their own AI on their own content, even though they're suing over Stable Diffusion for using that same content.

None of this will happen, because there's way too much money to be made and too many advances to be had with AI to sacrifice it all to protect artists' careers. In the end, you don't really care that the AI uses uncleared training data. You don't care how close the output is to someone's work. You care that it's TOO GOOD. You know it's a threat to your career NOT because of the training data; otherwise you'd care just as much even if it weren't very good.
 