
SPINOFF: Made with AI: Suno, Udio and others - Discussion

I just generated a couple of Latin lounge themes. The results are simply stunning, and I refuse to believe they were "generated". To me they sound like straight rips of vintage vinyl. The source may be public domain, but that completely undermines the "generation" phenomenon.
Refusing to believe will not make it less true.
 
It's pretty obvious that they're committing copyright infringement on a massive scale.

An example was recently posted in which the vocal is clearly derivative of Paul McCartney.

Vocals have been posted from specialized genres such as Broadway, Opera, and Country, where no comparable vocal library is available.

Legally making these libraries would be hugely expensive.

Illegally creating them using copyright materials is vastly cheaper.
Unfortunately, this may not be the case. I've heard (but not confirmed) that music library companies like APM are licensing their vast music libraries as training data. This means full songs plus stems. That Paul McCartney-sounding song could have been pulled from a bunch of totally legal knockoffs. Same thing with Broadway, opera, and country. I hate to come off as doom and gloom, but they may have already won. Decisively so.
 
One obvious creative application is asking generative AI to combine different genres, styles, instruments, etc., with the prompter coming up with unusual combinations and/or sequences. Unfortunately:

"One thing I noticed Udio does is that, if I have multiple genres in my prompt, it prioritizes them in order, but it has no incentive to "mix" them, if I say something like:

"neurotrap R&B crunk grunge", I might get a track that is R&B, or I might get one that is, rarely grunge, but most of the time it should be neurotrap - getting it to effectively mix the genres is a crapshoot, and absolutely forget about extending it when it does nail it, because you'll roll the same dice with every new generation. :("

 
The next evolution will be prompting for a part I need for my next song, with my backing track as reference.

I input my rock track and I ask for "an Irish violin doing exotic scales, espressivo, crescendo".

And then the AI also gives me the MIDI performance, maybe adapted to a library of my choice, so that I can refine and edit it.

Or I want to ask for "progressive metal drumming with a lot of double bass and polyrhythms on cymbals and hi-hat", following a guitar riff that I have just recorded. Then the bass part will follow, with "aggressive octaves and uptempo, in folk style".

What about "a lush pad with a dreamy release, with a soft arpeggiated pattern" that will follow the chords and dynamics of my song?

Stuff like Jamstix or the Toontrack EZ series may become obsolete in the blink of an eye.
 
I think a game changer for the VI world would be if you could upload your own MIDI file of a melody, for example, and ask it to play it on a solo flute or any other solo instrument at a set BPM.

Then you download this into your DAW and into your composition. Judging from Udio's solo examples, it does the performance aspect better than using samples.

I think this could also easily be done with the current models (it's just a question of whether they prioritize it as a worthwhile addition).

From there you go to uploading a MIDI file with chords and melody and getting it to play it on a string section, for example, or as a choir part (which it does better than current samples).

So even the current models could fundamentally change the way we compose with a DAW. Imagine uploading your melody and getting five or six different performances of it in under a minute versus tediously programming the performance CCs and articulation switching in MIDI.
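To make the comparison concrete, here's a rough sketch of the "tedious" side of that workflow: hand-programming an expression ramp and a keyswitch articulation change for a single melody note. This assumes the Python mido library, and the CC number, keyswitch note, and timings are just illustrative placeholders, not any particular sample library's mapping.

import mido

# Build a one-phrase MIDI file by hand: keyswitch, CC11 expression ramp, one note.
mid = mido.MidiFile()
track = mido.MidiTrack()
mid.tracks.append(track)

# Hypothetical keyswitch (C0) to select a legato articulation in a sample library.
track.append(mido.Message('note_on', note=24, velocity=100, time=0))
track.append(mido.Message('note_off', note=24, velocity=0, time=10))

# Ramp CC11 (expression) upward, one message at a time - this is the tedious part,
# and a real phrase needs curves like this on nearly every note.
for value in range(0, 128, 8):
    track.append(mido.Message('control_change', control=11, value=value, time=30))

# The actual melody note (C5), held for one beat at the default 480 ticks per beat.
track.append(mido.Message('note_on', note=72, velocity=90, time=0))
track.append(mido.Message('note_off', note=72, velocity=0, time=480))

mid.save('phrase.mid')

Multiply that by every note, every articulation switch, and every dynamic swell, and the appeal of getting several rendered performances back in under a minute is obvious.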
 
Udio has temporarily disabled manual mode, apparently because too many people were generating voices (or possibly songs?) excessively similar to named artists.

The only mode they have now randomly adds a bunch of words to your prompt:

"I put in: Slow, Pop, Pop Rap, 2010's

It reprompts into R&B lol: Female vocalist, R&b, Contemporary r&b, Rhythmic, Hip hop soul, Romantic, Introspective, Pop, Atmospheric, Pop rap, Soft

I mean, it's unusable 😂"



Seems like the quality (of the transitions and the melodic coherence) may have gone down considerably too, based on a few generations I tried just now vs before... though that could just be luck. They really sound like garbage fires today (and not in good ways...).
 


... so perhaps Udio decided that symphonic instrumental passages were also potentially infringing?... Or it could just be glitching as they update their automated moderation... There's no "right to likeness" equivalent for instrumentals in the US afaik... OTOH, as far as copyright goes, there was that "Blurred Lines" verdict:

"The Blurred Lines case was unique, in that the two works at issue did not have similar melodies; the two songs did not even share a single melodic phrase. In fact, the two works did not have a sequence of even two chords played in the same order, for the same duration. They had entirely different song structures (meaning how and where the verse, chorus, etc. are placed in the song) and did not share any lyrics whatsoever.

... based upon a perception that the overall "feel" or "groove" of the two works is similar, as songs of a particular genre often are."

 
"It's pretty obvious that they're performing copyright infringement on a massive scale."
Is there a point in discussing how good it is if it's completely illegal from the outset?
And by the way, has anyone tried to Shazam their Udio creations to find out exactly which old record they were ripped off from?
 
The next evolution will be prompting for a part I need for my next song, with my backing track as reference.

I input my rock track and I ask for "an Irish violin doing exotic scales, espressivo, crescendo".

And then the AI also gives me the MIDI performance, maybe adapted to a library of my choice, so that I can refine and edit it.

You want the perfect melodies, perfect performance, without any input on your part, AND the MIDI parts "to refine".

That's not writing music. I don't see the point. Just let it do the whole thing and watch at this point.
 
Is there anyone here with a moral conflict about using AI tools when writing their music? I feel like everyone has discussed how great it is and how much easier life will be but I'm not entirely sure how I feel about it yet. It almost feels like undermining the creative process to me.

And if it goes in the direction we're talking about now, then live instrumentalists are going to go out of business, as well as sample lib devs...

While I don't want to get behind the 8 ball and lose an "advantage" that my fellow composers have, I want to have the pride and joy that comes from finishing a piece that I wrote, arranged, mixed, and mastered. It feels similar to writing something using ChatGPT. What if (and this isn't speculation, this is going to happen) authors started writing ChatGPT prompts instead of novels? Before you say "that's different," think about the parallel here. If I give the AI a "prompt" or a melody with no expression, timbre, or chordal harmony, and ask it to turn it into a guitar solo with a bassline to accompany it, haven't I done the same thing?

If AI does get "that good" then it seems to me that everyone can be a composer. I'm not saying that I want only us elite VI-C members to be composers; but there's something called market saturation. Imagine if Doug the piano player could suddenly turn the melodies he whistles into orchestral arrangements, all mixed and mastered to perfection, just by paying $19 a month for his AI subscription. There's probably a LOT of Dougs out there. I have a lot of friends that would write music if it wasn't such an investment and wasn't so hard to learn.

So, the world is ending and the industry is irreversibly changed? No. Paramount will keep hiring the same people they always have, because they're Paramount and they can afford it. But what about the directors and game devs that are my target demographic? They're not paying for my name...even though there are a lot of really talented composers here, they haven't heard any of our names. Cost is a driving factor for most smaller projects.

Thank you for reading my AI-written Oxford essay.

No, I wrote that garbage myself.
 
Drake releases new song featuring AI Tupac and AI Snoop Dogg:



Not clear if he got clearance, or if that would currently be required for (deceased) Tupac: "California: The state introduced a bill (AB 1836) in January that would make anyone who uses the digitally simulated likeness or voice of a deceased celebrity liable for damages", but it hasn't passed yet. Snoop Dogg's online response implied that he wasn't aware of the track before its release...

The ELVIS Act did pass in the (US state of) Tennessee (Nashville, etc.), and it does extend to the voices of the deceased:

"The law goes into effect on July 1, 2024. ... broad definition raises the specter that liability under the ELVIS Act may extend not only to use of an existing sound recording of someone’s voice, and not only to digitally generated recordings or audiovisual content that approximates individual voices, but also to humans who can imitate other artists (i.e., soundalike artists) ...

Neither of the new causes of action explicitly include such a commercial-use requirement, creating the possibility that a platform or generative AI company might be held liable for a wider range of unauthorized uses of an individual’s voice or likeness. Indeed, the new provisions seemingly overlap with the existing prohibitions in a manner that could effectively eliminate the commercial-use requirement in many cases."

 
You want the perfect melodies, perfect performance, without any input on your part, AND the MIDI parts "to refine".

That's not writing music. I don't see the point. Just let it do the whole thing and watch at this point.
I was just thinking of a possible, and maybe real, future.

Right now I'm using Jamstix for drum writing, and I used Toontrack EZkeys for some piano parts on my album because I suck as a piano player.

They are not AI, and I "choose" what to do, what to keep, what to edit, etc.

What would be the difference, in terms of creativity and artistic value, between using a loop from a library and a loop created by an AI?

AI will be a step further. I'm not a drummer, but Jamstix is a valuable help, as is EZkeys.
AI will be a "better" help, if I can drive the results to my needs, but I'd like to avoid random results.

It is an inevitable direction, imho.

I have used ChatGPT to get ideas and brainstorm for my metal album, and to write lyrics sung by SynthV, because I had never written song lyrics before.
If I could use AI to generate a piano part which follows my song, suited to my needs, I would probably start to use it instead of EZkeys. Will it limit my creativity? I don't think so: I will still write my guitar riff, for example, or I will take the AI input to develop some ideas.
 
I was just thinking of a possible, and maybe real, future.

Right now I'm using Jamstix for drum writing, and I used Toontrack EZkeys for some piano parts on my album because I suck as a piano player.

They are not AI, and I "choose" what to do, what to keep, what to edit, etc.

What would be the difference, in terms of creativity and artistic value, between using a loop from a library and a loop created by an AI?

AI will be a step further. I'm not a drummer, but Jamstix is a valuable help, as is EZkeys.
AI will be a "better" help, if I can drive the results to my needs, but I'd like to avoid random results.

It is an inevitable direction, imho.

I have used ChatGPT to get ideas and brainstorm for my metal album, and to write lyrics sung by SynthV, because I had never written song lyrics before.
If I could use AI to generate a piano part which follows my song, suited to my needs, I would probably start to use it instead of EZkeys. Will it limit my creativity? I don't think so: I will still write my guitar riff, for example, or I will take the AI input to develop some ideas.
It's a personal thing, I guess. I would never use assisted writing tools - AI or not.

I write all of my parts.
I'm not a violin player, but I still write my parts; not a sitar player, still write my parts; never touched a real taiko in my life, still writing the parts, etc. etc... (maybe they suck, who knows, but they are 100% human). Sometimes it's painstakingly slow work to do so - so be it.
I don't use loops either 99% of the time.

I would never use an assistant to write my lyrics, especially software. Unless you are writing a nappies commercial or something like that, I guess, who cares - but for your own personal album? Your pride and joy?
If you need inspiration, read a book or something, read (human) poetry, enrich yourself instead of pressing buttons and waiting for stuff to happen.

It is the opposite of inevitable; we can (still) choose whether to use it or not.

AI should be kicked out of art in my opinion. It shouldn't be "writing" even a heavily edited line in my view.

And if you want to use it, great, enjoy - just don't forget to share the credit with Mr. "AI" on your album/work.
 
I understand and respect your choice. When I was younger, with much more spare and free time, I did the same.
Now time and effort for my hobby are limited... In my latest album I used:
A theorbo loop from a library
I asked a DJ through Fiverr to make some scratches
I asked a guitarist friend of mine to record a virtuoso guitar solo I would not be able to play
I used piano parts from EZkeys
Drums were arranged with Jamstix
Lyrics and topics were conceived by me, but I used ChatGPT and Reverso to help me choose the right and correct words.

I don't think that the result would have been less "mine" if I had used an AI, because in any case the end result is what I have in mind, with some help in brainstorming or in making the parts.
The guitarist friend made three or four takes, and he did all the work at his home; we chatted via WhatsApp! The same with the DJ.
Would it have been so much different if I had used a prompt to ask for what I needed?
 
Some YouTube channels posting Suno examples where the vocals sounded too much like particular vocalists have been taken down.

More than two days ago now, Tupac's estate demanded that Drake's AI Tupac track be taken down, but it's still up. They said they're taking legal action - not clear what exactly.
 
I'm a bit late to the party here, but I wanted to say a few things, having waded into the whole discussion on Twitter.

Yes, LLM AI recognizes patterns. It generates new items through random variation on reproduced patterns. There's an affinity with a certain mode of human creativity in this, which is why I don't think it can be completely dismissed, and it might indeed turn out to be very good at making materials, simply because quantity often yields quality, and the one thing it is already very good at is delivering in quantity...

I would say composition required human labor because that conceptual human labor was deemed integral to its standing as music. This is why there have been debates about algorithmic "composition" and whether it was even music since it came onto the scene. Those debates were never definitively decided, and I imagine the reason for that has to do with ambiguities that reside in the ontology of music. I don't really expect AI to change that, though I expect it will bring a great deal of legal clarity to the issue of algorithms and copyright.
LLM AIs may recognise patterns, but actual machine learning is pretty dumb. Watch this video of a machine using a neural network to learn to play one level of Super Mario World. It took many, many tries over a 24-hour period to even learn how to do this.

This other video regarding speedrunning Mario literally takes the ChatGPT algorithm and makes it speedrun one level of Mario. At 2:00 the programmer (Kush) says the algorithm starts taking "random actions". Machine learning is literally brute-forcing the process of "evolution" through hundreds and hundreds of repetitions. A human wouldn't take this long (or be this dumb) when trying to learn to beat one level of a Mario game.

I'll repost a series of tweets I made here, because it's relevant:

The difference between human learning and machine learning is that humans have context that can guide their learning, and that context influences what is learned and why. It is a focused process of trial and improvement.

In contrast, machine learning is like trying to get out of your house by walking up to the nearest wall, bashing your head against it and moving a little bit, then doing it again and again until you find the spot where you find an open door.

Then you go through the door, walk up to the next wall, and repeat the head-bashing process until you finally manage to get out of your house.
This is why the scale of our current AI situation boggles my mind, and this is why the environmental costs are so freaking high. Imagine that entire data farms have been doing this process with copyrighted material 24/7 for quite possibly years without being noticed. It's only now that we see what they've been doing without our knowledge.

Make no mistake, this is NOT creativity. This is not composition. This is not "trial and improvement", as I said above. This is, at best, trial and error. It's not focused, it's random. This is monkeys with typewriters recreating Shakespeare, just slightly smarter, because there is a learning component (an "artificial dopamine reward system", if you will) involved. AI has literally learned to copy what it learns from, but there is no context to decide what is placed where, and why.
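For what it's worth, that "monkeys with typewriters, plus a reward signal" picture can be sketched in a few lines of toy Python: purely random trials, kept only when a score improves. This is just an illustration of random search with selection, not how any real model is actually trained.

import random
import string

TARGET = "TO BE OR NOT TO BE"
ALPHABET = string.ascii_uppercase + " "

def reward(candidate: str) -> int:
    # The "artificial dopamine": how many characters already match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

# Start from pure gibberish.
current = "".join(random.choice(ALPHABET) for _ in TARGET)
attempts = 0
while current != TARGET:
    attempts += 1
    # Random trial: change one character at a random position.
    pos = random.randrange(len(TARGET))
    trial = current[:pos] + random.choice(ALPHABET) + current[pos + 1:]
    # Keep the change only if the reward did not get worse.
    if reward(trial) >= reward(current):
        current = trial

print(f"Hit the target after {attempts} random trials.")

It gets there eventually, but only by blind repetition, which is exactly the point.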

Now, knowing all this, are you still impressed? I'm actually kinda sickened by the whole thing.
 
This is not "trial and improvement", as I said above. This is, at best, trial and error. It's not focused, it's random.
While the initial weights in the neural net are random, the process of learning is not. The learning process is one of successive refinement through error minimization. At each step, the change to the weights is small, in order to avoid overtraining the network. Overtraining results in the neural network learning to replicate the training data, but not being able to generalize past that. By gradually moving to a solution and using a sufficiently diverse training set, the neural network will learn to classify a larger dataset than the training data.

Running the solution is not random, either. Once a neural network has been trained, its behavior is purely deterministic.

The "context" is literally hard-coded into the LLMs. If the context can be derived from the training data, the LLM will learn it.
 
While the initial weights in the neural net are random, the process of learning is not. The learning process is one of successive refinement through error minimization. At each step, the change to the weights is small, in order to avoid overtraining the network. Overtraining results in the neural network learning to replicate the training data, but not being able to generalize past that. By gradually moving to a solution and using a sufficiently diverse training set, the neural network will learn to classify a larger dataset than the training data.

Running the solution is not random, either. Once a neural network has been trained, its behavior is purely deterministic.


The "context" is literally hard-coded into the LLMs. If the context can be derived from the training data, the LLM will learn it.
So if the context is hard-coded, and the algorithm is purely deterministic, is this why AI has trouble making specific changes to something that might be a good first pass? For example, if you generate an orchestral track but need more detail in the strings, or you generate a piece of artwork but the perspective or proportions look wrong? And is it possible for the AI algorithms to get better at this over time, or will it need more research into new algorithms that can work together with the existing ones to level up the output?
 