
AI - Next steps we can take

SimonFranglen

New Member
My name is Simon; I'm a working composer. I love the bleeding edge of technology, and I work with companies on finding the next bleeding edge once the wounds have healed enough. I have sat on panels and committees around the world on the future of AI and Music, most recently the UK Government ‘Creative UK’ summit, where, ironically, I was the lone music creator on the floor alongside four AI business leaders from the other side of the equation.

At each of these panels, the AI pioneers genuinely promote a wonderful vision of us being freed from the drudgery of creating music by the new systems: the learning systems that generate playlists for Spotify become the learning systems of music creation. Billions of dollars are being invested in the utopian ideal that everyone can be creative, that there’s no need for music training, that all the learning from experience and mistakes, all the blood, sweat and tears that we call composition and music production, can be condensed into a one-line text prompt.

I have three major problems with this. Well, maybe a hundred major problems, but let’s start with these.

1: Elevator music. A billion gibbons on typewriters will submerge that one copy of AI Hamlet with 999,999,999 variations that head towards blandness; the system will not ‘understand’ why Mozart is not Salieri, since Salieri was the more successful of the two. The effect of sampling from large data models is that everything is averaged, so over iterations AI Mozart becomes AI Salieri; the average quality of all music will be inexorably driven down. The shock of a radically different approach, say Stravinsky causing riots in Paris in 1913, The Damned's 'New Rose', Miles Davis' 'Kind of Blue', will disappear as we are submerged in a swamp of bland.

2: The death of production music. The low end is going to disappear from composition and music creation. To quote the advertising from one of the AI music generation market leaders in this area: “Stop paying too much for your music”. A TV production company making weekly general light entertainment will leap at the chance to have its cooking shows and afternoon cop shows scored by machines at a tenth of the cost, with no paperwork, no royalties to pay, no composers missing deadlines, no headaches. My gut is that most production music will be AI-generated within three years.

3: The future. For all the slagging off that many within VI-Control give the sweatshops of composers’ assistants, that time in the trenches is where we really get to learn our craft and also, more importantly, make mistakes with someone else’s name on them, and get training and feedback from a master at their craft. AI systems will inevitably decimate that path. Someone who needs four people in 2024 will need two in 2026. I’ve done my time; it was invaluable and made me who I am today. If an AI system replaces the need for a younger me to be doing a 120-hour week to hit a deadline, that will lower my knowledge base, my library of techniques, my resilience.

I was at the cutting edge when sampling technology was predicted to destroy live musicians. It did, just like the gramophone, just like the electric guitar. A wave of digital musicians appeared to replace them. VI-Control is an example of what happens in the evolution of the creative species: thousands of composers now actively discussing what makes millions of 24-bit digital signals controlled by a seven-bit data word more ‘realistic’, or whether thirteen mic positions is enough for a clave.

We're going to have to embrace the future; it's coming whether we like it or not. Machine learning can be used to support what we do. For example - “Hey Sibelius, take these 8 bars of Staccato markings on the Violas and extrapolate them across the entire score for the next 100 bars” - this might be a real timesaver; yeah, it won’t be perfect, but it would get us 80% of the way there in 10 seconds. I will actively embrace the systems that help me. One of the projects I’m working on has a mantra that ‘you can’t allow the machine to make shit up’; we feel that’s where madness lies. AI systems can be used to support the creator. This is a good thing and inevitable.
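
As an aside, here's roughly what that kind of 'extrapolate the markings' helper could look like under the hood. This is a minimal sketch using the open-source music21 library (recent versions), not an actual Sibelius feature; the file name, part name and bar ranges are made up for illustration, and a real tool would need far more musical awareness than a simple copy-the-pattern-by-beat rule.

```python
# Toy sketch of "take these 8 bars of staccato markings and extrapolate them":
# copy the beat positions that carry staccato in bars 1-8 of the viola part
# onto the same beat positions in the following bars. Illustrative only.
from music21 import converter, articulations

score = converter.parse('cue_3m2.musicxml')   # hypothetical MusicXML export
violas = next(p for p in score.parts if (p.partName or '').startswith('Viola'))

# Learn which beats carry a staccato mark in bars 1-8...
marked_beats = set()
for n in violas.measures(1, 8).flatten().notes:
    if any(isinstance(a, articulations.Staccato) for a in n.articulations):
        marked_beats.add(n.beat)

# ...and stamp the same pattern across the next 100 bars.
for n in violas.measures(9, 108).flatten().notes:
    if n.beat in marked_beats:
        n.articulations.append(articulations.Staccato())

score.write('musicxml', 'cue_3m2_marked.musicxml')
```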

Solutions:
1: The concept of computer 'creation' is something I think we need to reframe as a creative community. At a recent discussion with an AI department head for a company with 120,000 employees worldwide, I was presented with the line ‘inevitably the systems will end up as better composers than us’. Analogising computer chess systems with composition, they conveniently ignored the fact that being able to recalculate every possible ending after each new move is not creation; it’s a large data model looking for an optimal result. Data set analysis with random variability is not composing, or painting, or film making. It's mimicry with standard deviation curves.

As a species we are not quite done yet, but we do need to address the oncoming light at the end of the tunnel before the train hits us. There will be a propagation of “Co-Creation” systems (see Adobe for that wonderful doublespeak). Write a theme for your TV show and have the co-creation software take away the annoyance of scoring that episode. Inevitably those co-creation systems will eat their co-creators; given a few years they'll become self-aware, and on August 29th, 1997 we'll be freed from the drudgery of our compositional existence. I think we need to start stomping on the misuse of 'art creation'.

2: Make some noise. It’s important that we challenge our banks (ASCAP, BMI, PRS, SACEM, etc…) to take a cold hard look at what happens when the revenue disappears. Given the tens of thousands of voices here, I hope we can make a little noise.

Art is what separates us from the billion digital gibbons with typewriters submerging us in a river of s***t. It’s important.


Sorry for this to be my first post. I’ve been a lurker for a gazillion years. I felt I might be able to add something to the AI discussions.

 
I think the horse has bolted. Say composers in the West manage to cramp the use of AI tools in some way. Companies will either buy product from countries that don't, or shift production to countries that don't. Assume governments get around that somehow with local production quotas - then local productions will be restricted to local sales, because they'll be more costly than productions from countries that don't do that.

One positive action, only available in the near future, is to lobby for getting a slice of the pie. However, lobbying has its problems, as Australia discovered when Facebook and Google were asked to pay an annual fee for use of content made from news stories made by others. All that money goes to the big publishers, not the small fry who need it - a bit like Spotify, but much more concentrated. However, one might use money gleaned from 'creative AI' to build a kitty from which grants and commissions can be made that have local content rules, i.e. cross-subsidise from AI back into art made by artists.
 
Good points Simon - and welcome to the forum! Great to have someone of your experience and talent amongst us. PRS are doing good work in this area: (broken link removed)

- the point about chess vs art is one I've tried to articulate too. It's not the job of art to 'solve' anything, but to express an idea, or emotion, or philosophy etc. The places where music doesn't fill this function but instead just papers over space, like YouTube, low-end podcasts, daytime TV etc are definitely the first to go. I think there's more resilience in the higher end of things, partly because an experienced and passionate director is going to feel more kinship with human composers, artists etc than AI.
But how long does that last? Most of the new generation of filmmakers will want a score that's more noises/drones than harmony and melody, because of the zeitgeist and their own influences growing up. So if the newest filmmakers start to use, or rely on, AI to get the soundtrack they want, with full control and no complaints/input from a 3rd party, does that become the norm? Because the pool of 'what is music' would pretty much stop there, and all we get from then on is iterations based on a fixed and finite data set. It's a net reduction in creativity and expression, which is bad for everyone. And the people who could potentially write interesting new music will be out of a job anyway by then. UBI as the solution?
 
Welcome Simon,
Love your filmscores and of course all your work with the late James Horner...! It is great to have you here.

The death of production music. The low end is going to disappear from composition and music creation. To quote the advertising from one of the AI music generation market leaders in this area: “Stop paying too much for your music”. A TV production company making weekly general light entertainment will leap at the chance to have its cooking shows and afternoon cop shows scored by machines at a tenth of the cost, with no paperwork, no royalties to pay, no composers missing deadlines, no headaches. My gut is that most production music will be AI-generated within three years.

A lot of TV companies also handle the music publishing for their shows. They make back the money they spend on composers or music libraries through publishing rights. This can add up to a lot of cash, especially if a show is broadcast a lot and in different countries.

The big question is: Will PROs (like ASCAP, BMI, SACEM) let AI-generated music be registered to earn royalties? The easy answer would be just not to allow it. If AI music can't be registered, then TV companies will still need real composers and music libraries.

But there's another problem: how will we tell the difference between AI-made music and human-made music in a few years? PROs usually need a WAV file to register music, but right now, that doesn't help tell them apart.

A more complicated solution might be to label every piece of art made entirely by AI. This would include everything, like pictures and music, created with a prompt. Then, this AI-made art wouldn't be able to make money through PROs. We'd have to make this a law in as many countries as possible, and AI companies would need a government license to sell their 'creations'.

To get this license, they should list all the artists used to train their AI models. More than that, they'd have to get permission to use these artists' work to train their models. No permission, then no license and no business.

I know this sounds tough, but I think it's necessary. Without these rules, we'll have big problems, starting with production music and going all the way to original scores for TV shows, Netflix series, and video game music.

Of course, the top composers will still have their jobs, and we'll still have custom scoring for directors and producers who want something special... but a big chunk of the business might be lost to AI, or to opportunist "AI composers" who will simply prompt an AI and pass the generated music off as their own.
 
Really good post. And I'm especially concerned about point 3, the decimation of the ground for training.

It's mimicry with standard deviation curves.
There's an argument that a lot of human creativity falls under this category. Not all of it, certainly, but enough of it that machine "creativity" resembles human creativity to an alarming degree. The machine may well be producing note salad rather than music, that is, tones without any musical understanding other than correlations surfaced in the data set, but if it satisfies the audience, I find it hard to fault the machine for doing what humans also do (even though humans also do more). This is not an argument I am making in favor of AI but I am suggesting that the line is very hard to draw if you are looking only at the output.
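
To put a toy example behind 'correlations surfaced in the data set': the sketch below is a first-order Markov chain that tallies which note tended to follow which in a single hard-coded melody, then samples new notes from those frequencies. It reproduces the statistics of its input with no notion of phrase, harmony or intent. It is purely illustrative; real generative systems are vastly bigger, but the 'mimicry with standard deviation curves' flavour is the same.

```python
# Minimal illustration of "correlations surfaced in the data set":
# a first-order Markov chain over pitches. It only knows which note
# tended to follow which; it has no idea why.
import random
from collections import Counter, defaultdict

melody = ["C4", "D4", "E4", "C4", "E4", "F4", "G4",
          "E4", "D4", "C4", "D4", "E4", "E4", "D4", "C4"]  # toy training data

# Tally observed transitions: counts[current][next] += 1
counts = defaultdict(Counter)
for cur, nxt in zip(melody, melody[1:]):
    counts[cur][nxt] += 1

def generate(start="C4", length=16, seed=None):
    """Sample a 'new' melody by following the observed transition frequencies."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = counts.get(out[-1])
        if not options:                  # dead end: fall back to the start note
            out.append(start)
            continue
        notes, weights = zip(*options.items())
        out.append(rng.choices(notes, weights=weights)[0])
    return out

print(" ".join(generate(seed=42)))  # statistically plausible, musically indifferent
```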

One start of a solution might be to demonetize all AI and algorithmic production, basically making anything generated algorithmically public domain. This wouldn't mean that a musical piece that used algorithms (or an AI) couldn't be copyrighted as a whole, only that any aspects of the composition that were generated with an algorithm (or AI) would not fall under copyright and couldn't be used as identity markers for systems like ContentID. If you use an AI drummer to generate your percussion part, you can't claim copyright over your percussion part, etc. But you also can't claim copyright over an aleatoric wash of notes that you used a randomizer to generate, or the output of one of those automated ambient music makers.
 
I think the horse has bolted. Say composers in the West manage to cramp the use of AI tools in some way. Companies will either buy product from countries that don't, or shift production to countries that don't. Assume governments get around that somehow with local production quotas - then local productions will be restricted to local sales, because they'll be more costly than productions from countries that don't do that.

One positive action, only available in the near future, is to lobby for getting a slice of the pie. However, lobbying has its problems, as Australia discovered when Facebook and Google were asked to pay an annual fee for use of content made from news stories made by others. All that money goes to the big publishers, not the small fry who need it - a bit like Spotify, but much more concentrated. However, one might use money gleaned from 'creative AI' to build a kitty from which grants and commissions can be made that have local content rules, i.e. cross-subsidise from AI back into art made by artists.
I'm attempting to get a conversation going at the corporate level: there is a difference between 'training' and 'passing off'. If an input prompt is 'generate a jingle in the style of Stevie Wonder from 1969 about Dog Food', an AI company will ask us how their process of creation is any different from a human commercials company copying as closely as it can without getting sued. Intent is in the eye of the beholder.

Part of the discussion at the recent UK creative summit was that they have to embrace the new AI systems or be left behind. Countries like Saudi Arabia have said publicly that they're not going to set boundaries for AI systems; the ones that do will be at a disadvantage in the new economy. The problem is that the creative arts will be collateral damage.
 
Good points Simon - and welcome to the forum! Great to have someone of your experience and talent amongst us. PRS are doing good work in this area: (broken link removed)

- the point about chess vs art is one I've tried to articulate too. It's not the job of art to 'solve' anything, but to express an idea, or emotion, or philosophy etc. The places where music doesn't fill this function but instead just papers over space, like YouTube, low-end podcasts, daytime TV etc are definitely the first to go. I think there's more resilience in the higher end of things, partly because an experienced and passionate director is going to feel more kinship with human composers, artists etc than AI.
But how long does that last? Most of the new generation of filmmakers will want a score that's more noises/drones than harmony and melody, because of the zeitgeist and their own influences growing up. So if the newest filmmakers start to use, or rely on, AI to get the soundtrack they want, with full control and no complaints/input from a 3rd party, does that become the norm? Because the pool of 'what is music' would pretty much stop there, and all we get from then on is iterations based on a fixed and finite data set. It's a net reduction in creativity and expression, which is bad for everyone. And the people who could potentially write interesting new music will be out of a job anyway by then. UBI as the solution?
Great to be here. Happy to add to the discourse.

The high end and the arthouse sector will survive for as long as the act of collaboration between a director and a composer survives; my concern is the commercial section of the business, where making a profit is the first job of the producers. As a composer who believes in themes, who trained under composers for whom themes were important, I hope that there will always be a place for a good tune. In my experience, textural scores often provide an easy way out for directors or producers: they don't challenge the ear, but end up as a decent extension of the sound effects.
 
Welcome Simon,
Love your filmscores and of course all your work with the late James Horner...! It is great to have you here.

Thanks very much; I'm glad to be adding to the noise here, hopefully signifying something.
A lot of TV companies also handle the music publishing for their shows. They make back the money they spend on composers or music libraries through publishing rights. This can add up to a lot of cash, especially if a show is broadcast a lot and in different countries.

The big question is: Will PROs (like ASCAP, BMI, SACEM) let AI-generated music be registered to earn royalties? The easy answer would be just not to allow it. If AI music can't be registered, then TV companies will still need real composers and music libraries.
Your point is completely right, but it comes at this from a mature PRO environment. Whether AI systems have copyright or not is moot if there's no method to collect any royalties.
I've had enormous fun working around the world, where you get the opportunity to do the most exotic compositional projects. The problem is that in many huge countries, China for example, there is no PRO system in place. The concept of royalties is anathema to the film and TV world there. In India, the songs in films can make a fortune; however, the score often makes little or no money.

To get this license, they should list all the artists used to train their AI models. More than that, they'd have to get permission to use these artists' work to train their models. No permission, then no license and no business.

There's a class-action lawsuit brought by fine artists against AI art systems going through the courts for exactly this.
 
Really good post. And I'm especially concerned about point 3, the decimation of the ground for training.

One start of a solution might be to demonetize all AI and algorithmic production, basically making anything generated algorithmically public domain. This wouldn't mean that a musical piece that used algorithms (or an AI) couldn't be copyrighted as a whole, only that any aspects of the composition that were generated with an algorithm (or AI) would not fall under copyright and couldn't be used as identity markers for systems like ContentID. If you use an AI drummer to generate your percussion part, you can't claim copyright over your percussion part, etc. But you also can't claim copyright over an aleatoric wash of notes that you used a randomizer to generate, or the output of one of those automated ambient music makers.
The AI music generation companies market themselves as giving content producers ultimate control of their content and music with little or no paperwork, not as a way for producers to earn money from that music. If you're in a country where there are no PROs, or if you're a production company making a show for a streamer where the music revenue is microscopic, then an AI-generated music track just reduces your paperwork and your headaches and makes your life simpler…
 
The AI music generation companies market themselves as giving content producers ultimate control of their content and music with little or no paperwork, not as a way for producers to earn money from that music. If you're in a country where there are no PROs, or if you're a production company making a show for a streamer where the music revenue is microscopic, then an AI-generated music track just reduces your paperwork and your headaches and makes your life simpler…
Also being able to pass ContentID tests. I think that will be a key "selling" point for this: cheap, and able to avoid any ContentID problem.
 
Welcome to VIC Simon!

Before the AI hype, when people were still calling it "machine learning" and it was still mostly Chess and Go bots, the machine learning advocates had a cute saying.

They said the Chess bots "trained" by "playing against themselves."

Both the concept of "Training" and the concept of "playing against itself" are lies.

The ugly reality is "playing against itself" means amassing a database of *billions & billions* of RANDOM moves. The neural network then sifts that database for associations with winning the game.

The bot's supposed "intelligence" comes from its ability to process an insane amount of data, unlike us frail humans: when my brain absorbs the capital of Wyoming, the capital of Nebraska leaks out. However, at game 1 million, or 10 million, AlphaZero's moves still look almost random. It takes millions more before it starts to look "intelligent." What kind of intelligence is dumber - after playing more games than the collective human race has played! - than a child who learned the game yesterday?

If you take away the zillions of games, AlphaZero has no knowledge. ML advocates will say "No, it can still win, because it has the trained neural network." That is exactly where the deception is. There are no chess principles inside AlphaZero, only a very very efficiently compressed & parsed dataset which contains a percentage of "every possible move in chess" - a tiny percentage but massively more than any human could hold in their memory.
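
For anyone who wants to see what 'playing against itself' reduces to, here's a deliberately dumb sketch: random self-play on a toy race-to-10 game, with every (position, move) pair tallied against whether that side went on to win. The resulting table is the whole 'policy'. This is nothing like AlphaZero's actual machinery (no neural network, no tree search, no iterative self-improvement); it's just the sift-random-games-for-what-correlates-with-winning idea in miniature.

```python
# A stripped-down take on "playing against itself": random games of a toy
# race-to-10 game (add 1, 2 or 3 each turn; whoever reaches 10 first wins),
# tallied into a table of win rates per (position, move).
import random
from collections import defaultdict

TARGET, MOVES = 10, (1, 2, 3)
wins = defaultdict(int)    # (total_before_move, move) -> games the mover's side won
plays = defaultdict(int)   # (total_before_move, move) -> times that move was played there

def random_game(rng):
    """One game of pure random play; returns the winning side (0 or 1) and the move log."""
    total, side, log = 0, 0, []
    while True:
        move = rng.choice(MOVES)
        log.append((side, total, move))
        total += move
        if total >= TARGET:
            return side, log
        side = 1 - side

rng = random.Random(0)
for _ in range(200_000):          # the "billions of random games", scaled way down
    winner, log = random_game(rng)
    for side, total, move in log:
        plays[(total, move)] += 1
        wins[(total, move)] += (side == winner)

def best_move(total):
    """The 'trained' policy: whichever move had the best observed win rate from here."""
    return max(MOVES, key=lambda m: wins[(total, m)] / max(plays[(total, m)], 1))

print({t: best_move(t) for t in range(TARGET)})
# With enough games it 'discovers' the winning strategy (finish from 7-9,
# leave the opponent on 2 or 6) without containing a single game principle.
```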

tl;dr The dataset, the neural network and the "intelligence" are just 3 ways of talking about the same thing.

AI advocates will say "Don't humans also use snap-decision heuristics that are trained by data? Aren't heuristics just compressed data?" If there's any correspondence at all there, the most important fact is that AI has to brute force its heuristic development. If it had to learn from 1 chess game a day we would be at the heat death of the universe waiting for AI to learn when to castle.

This is where the picture AIs and Music "AIs" become clearer. They aren't "trained." They are thieves. SUNO can create funk music because it has a database of funk music. It's making a funk smoothie. Without the database SUNO can't create music, it cannot even be "taught" the rules of funk by differential learning from the rules of a music genre it knows ("Here is how funk is different from dubstep"), because it cannot be "taught" any-f***in-thing at all. It can only remix.

This isn't Human Intelligence 2.0, it's just Napster 2.0.
 
Last edited:
tl;dr The dataset, the neural network and the "intelligence" are just 3 ways of talking about the same thing.

This is where the picture AIs and Music "AIs" become clearer. They aren't "trained." They are thieves. SUNO can create funk music because it has a database of funk music. It's making a funk smoothie. Without the database SUNO can't create music, it cannot even be "taught" the rules of funk by differential learning from the rules of a music genre it knows ("Here is how funk is different from dubstep"), because it cannot be "taught" any-f***in-thing at all. It can only remix.

This isn't Human Intelligence 2.0, it's just Napster 2.0.
Great point. The problem is the AI community looks at the smoothie and asks the music community: "how is that different to a music student transcribing Beethoven's Fifth for analysis?"
 
Great point. The problem is the AI community looks at the smoothie and asks the music community: "how is that different to a music student transcribing Beethoven's Fifth for analysis?"

That's just the right question to ask!

Humans are able to integrate the graph of existing points and then sense & explore the space outside that graph. They can create new things. A human being that was trained ONLY on pre-1900 music could still create Stravinsky's music. In fact there was one human who did!

This particular kind of ML will never be able to create a new genre of music. It's actually worse: let's imagine that the AI is trained on all human music ever recorded, with the exception of one genre (let's say polka!). Then we show the ML bot about two songs of that genre and say "make more. Don't be creative, just make more of this genre." The bot, afaik, can't do that.

We will have real AI when these bots can do those kinds of tasks. "Learning as fast as a human" means learning from the same amount of data inputs as a human; no "AI" can do this, or even come within 1000x of it. "Thinking like a human" means creating the unknown and undiscovered; that's not even close atm.

When we come back down to earth from the hype and see these bots for what they are, the solution will come down like a ton of bricks: laws about paying the real creators.
 
It's mimicry with standard deviation curves.

There's an argument that a lot of human creativity falls under this category. Not all of it, certainly, but enough of it that machine "creativity" resembles human creativity to an alarming degree. The machine may well be producing note salad rather than music, that is, tones without any musical understanding other than correlations surfaced in the data set, but if it satisfies the audience, I find it hard to fault the machine for doing what humans also do (even though humans also do more). This is not an argument I am making in favor of AI but I am suggesting that the line is very hard to draw if you are looking only at the output.

Great point. The problem is the AI community looks at the smoothie and asks the music community: "how is that different to a music student transcribing Beethoven's Fifth for analysis?"

A point that must be made repeatedly, which seems subtle but will be consequential, is that the burden of proof must be placed on those promoting the use of AI in a given creative industry to show that the machine is "doing what humans also do."

1) That proof will be rather difficult to come by, because the architecture of human brains is different enough at the local and global level from the nodal arrangement of modified perceptrons that constitute current neural networks, that, for scientific purposes, "neuromorphic" computational arrays are currently being investigated by computational cognitive scientists to more closely approach the architectural features of mammalian brains. If it's not good enough for neurobiologists, then it's nowhere close to good enough to upend existing legal structures in favor of some putative new "inventive act" by a machine. This should not be glossed over.

AI is an automated inductive reasoning technology. That does not mean that when a network is trained on a large set of songs, the AI "does what we do" when we listen to music and interpret it through the filter of our own experiences and memories to produce something new. AI is a training-set-programmable interface. It's a little confusing because we have never had a technology that allows the structure of the program itself to be created by non-code input in just this way. But make no mistake, the music the neural network trains on is structuring the software just as a programmer would.

The implication of that is striking but clear: the musicians whose music is trained upon co-author that network and are - at minimum - owed licensing fees.
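
A trivial way to see the 'training set as programmer' point: the code below is identical in both runs, but the numbers it ends up containing, which are the only thing determining its behaviour, come entirely from the data it was fed. It's an illustrative toy (one weight and a bias fitted by gradient descent), not a claim about how any music model is built.

```python
# Same code, different training data, different resulting "program".
def train(examples, steps=5000, lr=0.01):
    """Fit y = w*x + b by gradient descent on (x, y) pairs."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        grad_w = sum((w * x + b - y) * x for x, y in examples) / len(examples)
        grad_b = sum((w * x + b - y) for x, y in examples) / len(examples)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Two different "training sets" leave two different sets of numbers behind:
print(train([(1, 2), (2, 4), (3, 6)]))   # converges near w=2, b=0
print(train([(1, 5), (2, 7), (3, 9)]))   # converges near w=2, b=3
```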


2) Let us assume that Startup X has created a neuromorphic machine capable of authoring music that bests that of every human composer, living or dead. This is a ridiculous assumption but let's adopt it.

The law does not exist to protect, and does not contemplate protecting a right to, any acts of creation that might be undertaken by such a device. Law supervenes on citizenship, which cannot be meaningfully understood in its absence.

The implications of this are, likewise, not immediately apparent but will become so as the technology develops. Human beings (citizens) have a right to press claims against those who falsify, mischaracterize, deprive them of or render impossible their attempts to communicate with each other according to their rights as citizens. Think: the outlawing of voice cloning in campaign robocalls. Copyright law itself exists because human expression has origin in citizens who cannot be parted from their right to acknowledgement of its origination; but there is more: a right not to be drowned in false speech generated by non-citizens such that this saturation renders communicative action unfeasible or impossible.
 
A point that must be made repeatedly, which seems subtle but will be consequential, is that the burden of proof must be placed on those promoting the use of AI in a given creative industry to show that the machine is "doing what humans also do."
I don't find much controversial in the claim that AI "creativity" is similar to some human creativity. There's a mode of human creativity that is simply trying out more or less random variations/juxtapositions and seeing what sticks. AI seems decent at this, as decent as humans. It can hallucinate. It can mislead, state falsehoods, maybe even lie, depending on how you want to define "lying." But AI doesn't seem to know what it's done (that's why whether it is lying is in question), and I don't see any evidence that it can tell which of the things it spits out are worth preserving or working further. That's why its music is tone salad. I would also say that human creativity often goes far beyond this, so I definitely wouldn't say that the machine is doing what humans do, or if it is, it is only to a very limited extent. But for certain commercial purposes that may be enough, just as algorithmic ambient music may be enough to serve as a sort of pleasant white noise to sleep to.

I should add that I'm not sure what I think about any of this, except I'm quite sure that anything generated by AI should not be copyrightable, nor should the prompts that generated AI content be copyrightable. I would focus energy toward demonetizing AI as much as possible. That is, anything that is made with AI can't generate IP.
 
Copyright law itself exists because human expression has origin in citizens who cannot be parted from their right to acknowledgement of its origination; but there is more: a right not to be drowned in false speech generated by non-citizens such that this saturation renders communicative action unfeasible or impossible.
And the recent cases that reinforce that AI doesn't have copyright are being circumvented by the AI music generation companies, which use licensing as an effective royalty stream.
 
I would like to contact my representatives in Congress about legislative steps to take to protect creators, especially regarding copyright law.
Any links to resources to educate myself on the state of things?
 
Regarding Copyright: AI companies could basically build a city of servers and fill them with zettabytes of images, songs and literature all created by AI and all available to the public for use with a small fee plus royalties.
Then, when any human creates an original picture or song or any literature and attempts to release it, the AI company could swoop in and claim that it is their material. After all, they could feed the song, picture or literature into their machine and it could render an AI version within milliseconds.
Next thing, you are sitting in court opposite a Lawbot 5000 arguing that you created the material.

Or, they could just render 20 very similar songs and upload them to their mega site. One minute I'm uploading my song to SoundCloud; the next minute their mega site has 20 similar songs and I'm receiving an email stating that I am being sued.

Or, if a record label requires a ContentID check against these AI companies' servers, they could actually render an AI version of it in real time during the check.
 
I would like to contact my representatives in Congress about legislative steps to take to protect creators, especially regarding copyright law.
Any links to resources to educate myself on the state of things?
I take these discussions forward with our representatives and governments when I see them, but as an individual I have limited power when faced with (say) a corporation with 120,000 employees. Raging against the dying of the light is something that needs to happen around the world.
 
Glad to be here. I was reading some of the other threads; since I've been involved in the argument with policy makers, it felt like I could add to the VI discourse.
The “they’re all evil” and “systems will replace us all” vibe that often comes from the creative community is viewed as Luddite by the other side. I’m trying to find a consensus, to get them to ask themselves how far they should be taking this and what guard rails should be in place. We all have nuclear weapons; since 1945 we have so far chosen not to use them because the consequences were too dire. I’m hoping for a similar realisation within the future AI superpowers.
 