What's new

Aiva - Artificial Intelligence Composition: beta starting today

Given the amount of editing required, does it actually shorten the time to a finished piece? Is it relieving composers/orchestrators of drudge work? From the video, I don't see that it does. It's fascinating technology to be sure, and it may or may not get better in ways that matter.

The better analogy might be to composing/arranging with a phrase library. The algorithm spits out a set of precomposed bits—a kind of highly personalized phrase library—that you can arrange/incorporate as you will within the limitations of the licensing.
I think it could serve as inspiration. If someone's on a quick deadline, they can have this thing spit out music which they can arrange. From my understanding, it happens pretty much instantly once you tell it what to do. I can't see myself using it much, but it could serve well for people writing for libraries or trailers.
 
This will likely completely replace most trailer and generic "epic orchestral" music fairly quickly though.
Yeah, something like that. Need a modern hybrid action track? Upload any number of those into the AI's database, hit compose, edit the MIDI if needed, done. No need to compose anything.

But there's no way this replaces an eight-minute Goldsmith/Horner/Williams-level action cue. Thanks to things like this, though, the difference between golden-era composing and modern trendy stuff is starting to show. Until software like this can compose The Asteroid Field, Battle in the Mutara Nebula or Escape From Torture, we're fine. Honestly, forget eight minutes; show me how this program handles making even 20 seconds of music like that.

The thing is, the market demands exactly the stuff this product is good at emulating. Whether that's a problem or not is for each of us to decide. I don't care that much, since I don't find that kind of music interesting and I don't listen to it. And I don't compose it, so...

Anyway, you can't stop progress; I'm interested in this, in what it can achieve, etc. Maybe in a few years it will compose a great Williams piece, what do I know? Music is about patterns, and patterns are something computers are good at. But I can't imagine what kind of quirkiness you'd have to put into the program to enable it to compose like a great human composer.
But maybe you could let it generate a lot of melodies based on a sample pool of the best ones and then mine the variations that end up being fantastic, or something like that...
 
I signed up just because I'm interested. I don't feel threatened by it at all. There's plenty of crap royalty-free music already, so if creators want to undercut composers, the options are there. I don't see AIVA breaking out of the "10 royalty-free tracks per day" composer barrel anytime soon. It will be hard times for the people who make their living from AudioJungle, but not for the rest of us, I figure. As for it replacing trailer and production music (because of its supposed lack of creative and complex structure), that's laughable; y'all are incorrect, this thing is nowhere close to doing that.
 
It does sound not bad... but also not good. That's the current state of AI, though. When these programs get more complex somewhere down the line, we will probably have to reckon with artificial personalities that have very good skills... maybe even composing skill and talent. Who knows...
 
Here's a controversy-starter: I reckon mixers will be replaced by AI long before composers. There are already lots of "assisting tools" in mixing. Ozone can remaster your music (carrying out several complicated tasks at once) based on nothing more than being told the target genre. FabFilter Pro-Q 3 alerts you when two tracks have strong signal in the same frequency range. Newfangled Audio's EQuivocate can do dynamic EQ matching, not only applying an EQ curve matched to a sidechain signal but also changing that curve as both the program material and the sidechained audio play out. Any plugin whose behavior changes dynamically with the program material, like Vocal Rider, is already in effect a "mixing AI."

None of these tools replaces having a REALLY good mixing/mastering engineer look at your music, but each is FAR more advanced in the field of frequency/dynamic manipulation than AIVA is with notes. All AIVA seems to be, from the outside anyway, is one more "Bach engine" that ingests a bunch of known-to-be-good MIDI notes and regurgitates a sort of micro-medley of patterns.
 
There is no real creativity here, just regurgitation of patterns that a sufficiently large data set would indicate are acceptable or pleasing.

Well, that's what "commercial" music actually is: the music you hear everywhere, every day, in commercials, spots, standard YouTube videos, vlogs, etc... even in TV shows and series...
 
Well, that's what "commercial" music actually is
Even if it's needle-drop, a real-life person still needs to decide what goes where, what it's signaling to the audience, how to situate it in the picture, how to tweak when necessary. Post production involves departments communicating and coordinating with notes. How well does Aiva take notes?
 

Hell, I think I'm more interested in an AI note taker now. If it can figure out what the hell the director means by "the trumpets are too loud" in an all string cue, it'll be worth its server's weight in gold.
 
I see a lot of worry in this thread. Remember when sample libraries took off and people were afraid they would leave all session musicians homeless? They didn't.

Sample libraries are just tools assisting composers and will likely never fully replace humans. Real instruments always have an edge simply for being human: nothing is more realistic than reality, and then there's the very important element of craft: a good violinist can convey emotion in a line like only a skilled human player can. Someone smashing on their keyboard while wiggling the mod wheel might come close, but can only approximate it.
The same goes for this: an AI may produce decent compositions, but a human can bring a whole different level of creativity. Especially in scoring, it takes craft and taste to know what music suits a given moment and scenario; something generated to fit a generic theme might work, but it just won't be unique.

I'm pretty interested in trying this out myself. The compositions posted were pretty surprising, and I feel like it could be a fun tool for inspiration, for just messing around, or for trying something new.
 
There are some big bummers mankind had to deal with:
- The earth isn't at the center of the solar system
- The solar system isn't at the center of the universe
- The earth is extremely small
- We are just one kind of animal that survived evolution
- Our conscious mind is slower than our decisions
- Our memory is very subjective

Next step: we are only of mediocre intelligence

AI is learning software. Right now it's a kid with basic beginner skills, but it will grow. We will have to get used to it.
 
It will undoubtedly replace the mountains of generic, mediocre royalty-free nonsense used by "content creators," but for TV, film and top-tier games? I think not. The actual music in those scenarios is only the final result of a much longer, very human and collaborative process. If an AI could accurately model that particular roller coaster of joy and frustration, then it's all over for the human race anyway. I think we are very much at the gee-whiz shiny-toy stage of AI's development.

What I would like to see is an AI tool that understands the job of an assistant: stick a II-V-I modulation in there before the last 8 bars; change everything to Hungarian minor without screwing up the melody; write the score markings so it actually sounds like the mock-up the client approved; go score me a baggie. That sort of thing.
 
Has there been any comment on how Aiva plans to protect against users uploading copyrighted music as reference? AI/ML algorithms like deep neural networks are deterministic: given the same input parameters, training data, etc., they will generate the same results. If I take a song owned by AC/DC and feed it into a given algorithm to generate something new, the output is still owned by AC/DC. It would be no different from just putting the song through a filter; in fact, you can model a neural network as a high-dimensional non-linear filter. So if someone put my music into Aiva and generated a hit song, I would likely be within my rights to sue them.
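To make the determinism point concrete, here's a minimal sketch (a toy network in NumPy, not AIVA's actual architecture; all names are made up for illustration). With fixed weights, a neural network is just a deterministic, high-dimensional non-linear function: feed it the same "reference track" twice and you get identical output.

```python
import numpy as np

rng = np.random.default_rng(seed=42)      # fixed seed stands in for fixed training
W1 = rng.standard_normal((16, 8))         # layer-1 weights
W2 = rng.standard_normal((8, 4))          # layer-2 weights

def tiny_net(x):
    """Two-layer net: a non-linear 'filter' applied to the input."""
    hidden = np.tanh(x @ W1)              # non-linearity
    return hidden @ W2

reference_track = rng.standard_normal(16)  # stand-in for an uploaded song
out_a = tiny_net(reference_track)
out_b = tiny_net(reference_track)

# Identical input through identical weights gives identical output.
assert np.array_equal(out_a, out_b)
```

The output is a pure function of the input and the trained weights, which is exactly why "it's just a filter" is a defensible way to describe it.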

As an aside, in my day gig I am the founder of a tech company that uses real-time AI/ML to automate sensor-processing tasks (video, RF, etc.). Typically, these are tasks no human wants to do, or has the time to do. Legally, I cannot take video owned by someone else and create training/validation data for commercial gain without the approval of (and likely payment to) the owner of said videos. I can only use "publicly" available data freely, without permission.

What prevents a user of Aiva from uploading a reference track owned by someone else?

Legal questions aside, using AI/ML to create music and art end-to-end is a cool demo of the tech, but it's hard to see how it will have a net positive impact on our lives and civilization. I think humans (even coders) get a large part of their satisfaction from creating. Of course, I cannot imagine paying anything to go see my favorite robot artists perform at Red Rocks. People will always enjoy playing music, even if AI takes away the composing side.
 

I think these legal/philosophical questions are fascinating.

If a machine is fed music from 1000 artists, and uses that input with a simple instruction ("make it sound happy") to generate a new piece of music that is by a musicologist's evaluation "original", who owns the output? I assume it would be the owner of the machine? Or would it be the person who issues the "make it sound happy" command and presses play?

In a reductive way, that's what you're doing when you hire a composer - the difference being the composer is a human who can legally own things.

Maybe we should start thinking about AI rights before it's too late. :P
 
Of course, I cannot imagine paying anything to go see my favorite robot artists perform at Red Rocks. People will always enjoy playing music, even if AI takes away the composing side.
People also enjoy composing music... even though the skills of the masters out there are out of reach.
 
I think feeding the music of 1000 artists into the machine will generate nothing really new, just a mixture of older, existing stuff.
It would be interesting to let the machine generate random variations and have real people send back feedback about their emotional response. That way there's a chance of discovering new kinds of emotional impact. A new art form might emerge: the musical-emotion trainer for AI programs.
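The idea of random variation plus human feedback is basically a simple evolutionary loop. Here's a minimal sketch, with a simulated scoring function standing in for real listener ratings (everything here — the note range, the "prefer smooth lines" rating, the population sizes — is a hypothetical illustration):

```python
import random

NOTES = list(range(60, 72))               # one MIDI octave, C4..B4

def random_melody(length=8):
    return [random.choice(NOTES) for _ in range(length)]

def mutate(melody):
    """Nudge one random note: an 'additive random' variation."""
    out = melody[:]
    i = random.randrange(len(out))
    out[i] = random.choice(NOTES)
    return out

def simulated_feedback(melody):
    """Stand-in for human emotion ratings: here, prefer small leaps."""
    leaps = sum(abs(a - b) for a, b in zip(melody, melody[1:]))
    return -leaps                         # smoother line = higher score

random.seed(0)
population = [random_melody() for _ in range(20)]
for generation in range(30):
    population.sort(key=simulated_feedback, reverse=True)
    survivors = population[:5]            # the listeners' favourites
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(15)]

best = max(population, key=simulated_feedback)
```

Swap `simulated_feedback` for actual human ratings and you have the "musical emotion trainer" in miniature: the machine supplies variation, and people supply the selection pressure.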
 