This discussion is overall missing the point, I think...
First, stop bringing up art. These technologies are all about disrupting
industries, i.e. by definition they are focused on replacing
working composers. Art has absolutely nothing to do with this discussion, because it's not like people will stop writing music for fun/enjoyment/art just because an algorithm can get paid to do it. To be clear, though, by "art" I mean the intention of the music: it isn't being made for any specific purpose beyond the artist's or artists' own (whatever that may be), and it isn't commissioned. We're talking about music that is made in exchange for money; that's the only thing these technologies would have any reason to target.
Second, AI is a bit of a misnomer here. Obviously whatever capabilities the algorithm has will be decided by a human being. Humans are the ones writing the code, after all, and machine learning is still quite young. This is far better classified as "automation" as opposed to "artificial intelligence." I suppose if you want to get all semantics-y, this is technically "AI," but...yeah, at its core, it's automation.
As ChristianM said, this technology imitates. If it were inventing, it'd truly be AI--machine learning--which is of course the goal, and if it's achievable at the desired level it will inevitably come to pass. But at the moment, this technology's biggest strength is imitation. And that is
hilariously easy for an algorithm to do. You all should know, because you already do it. It's called "learning your craft." It's literally what makes a professional musician a professional--we understand
what musical parameters to tweak in order to get the desired emotional response from the audience. It's a little cold to define it that way, but it's exactly what we do. Otherwise, none of us would be capable of composing music for dramatic projects. Doing it by "feel" just means your brain is wired in such a way that you can do it without explicitly thinking about it. Sort of like how loads of people have to study their asses off in order to improvise in a jazz setting, while for some, like Jeff Beck, it's a natural thing and they already understand it. Their brains just already had that firmware installed!
I mean, take for instance orchestration. It's a deep and life-long art, but you can automate the meat of the process to the point where you have something "good enough" by having instruments with like ranges playing the same part. That'd take four seconds to code, and the vast majority of the people in the world wouldn't notice, especially considering that the vast majority of orchestrated music they hear is probably Beethoven and Mozart, and for all their genius, their orchestration was pretty tame and "vanilla" by later standards. So the audience would just hear flute doubling violin, French horn doubling viola or cello, etc., and they'd unconsciously go, "Yeah, that sounds like an orchestra."
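To show just how little code "good enough" doubling takes, here's a sketch in actual JavaScript. The instrument list, octave offsets, and the little melody are all made up for illustration--this isn't a claim about how any real engine does it:

```javascript
// Hypothetical sketch of rote orchestration by doubling:
// hand instruments of like range the same part, transposed by octaves.
var melody = [62, 65, 69, 67, 65, 62]; // a D-minor-ish line as MIDI note numbers

var doublings = [
  { instrument: "flute",       transpose: 12  }, // melody up an octave
  { instrument: "violin",      transpose: 0   }, // melody as written
  { instrument: "viola",       transpose: -12 }, // down an octave
  { instrument: "french horn", transpose: -12 },
  { instrument: "cello",       transpose: -24 }  // two octaves down
];

var score = doublings.map(function (d) {
  return {
    instrument: d.instrument,
    notes: melody.map(function (n) { return n + d.transpose; })
  };
});
```

Five staves, everything moving in octaves, and most listeners would nod along: "Yeah, that sounds like an orchestra."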
Same with more composition-related stuff. If I were to write an algorithm (using pseudo-code JavaScript here) to define a Law & Order procedural track:
var genre = "procedural";
if (genre === "procedural") { // note ===, not =; a single = would assign instead of compare
    var tempo = 112; // ideally we could randomize this within a given range, but that'd make the code more dense, so I'm leaving it out
    var key = "D minor"; // same deal here
    var signature = "4/4"; // as a string--the bare expression 4/4 would just evaluate to 1
    var chordProgression = ["i", "bVI", "bVI/3"]; // just an example, obviously there'd be more potential chord progressions to cycle through
    var harmonicRhythm = 2; // chords per bar; same deal here
}
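And just to prove the randomization I skipped really is only a few more lines, here's a sketch--the tempo range and key pool are invented for illustration, not pulled from any real service:

```javascript
// Pick tempo and key from a plausible pool instead of hard-coding them.
function randomInt(min, max) {
  // random integer in [min, max], inclusive
  return min + Math.floor(Math.random() * (max - min + 1));
}

var tempo = randomInt(104, 120); // anywhere in a "procedural" tempo zone
var keys = ["D minor", "E minor", "A minor", "C minor"];
var key = keys[randomInt(0, keys.length - 1)];
```

That's the whole trick: replace every constant with a constrained random choice and you get "variety" for free.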
Obviously you'd have to define that 112 is actually 112 bpm, that D minor contains these notes and not those, what the top 4 and the bottom 4 in 4/4 means, etc. And obviously this isn't complete--I'm just doing the easy lifting of defining the parameters, not the composition itself; you'd need to define sections of music, the order of the sections, what the instruments are, what patches they are, etc etc etc. And you'd of course have to tie this code to another application that can interpret it, can call the patches, can translate what you're writing as MIDI, stuff like that.
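Defining that "D minor contains these notes and not those" is just as mechanical. A sketch, assuming standard MIDI numbering where C4 = 60 (the function and table names are my own, not from any real library):

```javascript
// Turn a key name into the MIDI notes of its natural minor scale.
var NOTE_TO_PC = { "C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11 };
var MINOR_STEPS = [0, 2, 3, 5, 7, 8, 10]; // natural minor intervals in semitones

function minorScaleMidi(tonicName, octave) {
  var tonic = NOTE_TO_PC[tonicName] + 12 * (octave + 1); // MIDI convention: C4 = 60
  return MINOR_STEPS.map(function (s) { return tonic + s; });
}

var dMinor = minorScaleMidi("D", 4); // [62, 64, 65, 67, 69, 70, 72]
```

One lookup table per scale type and every key in every octave falls out of the same seven intervals.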
But that's easy stuff. Really easy. People were doing this thirty years ago--it just sounded like shit because producing music on a computer was in its infancy. The production level is still the bane of these services, but come on--you can easily analyze a mix with Ozone and, just like I did above, break down whatever processes during mixing resulted in a "mix that's good enough for the general audience" into a pretty small and easy algorithm. Because at the end of the day, anything in music can be quantified. You don't need to use an LA-2A--you just need your bass guitar to
sound like it's being run through one, which we can already do. Stuff like that.
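Even the compression half of that is mostly arithmetic. Here's a toy static compression curve--deliberately ignoring the attack/release behavior and tube coloration where real LA-2A emulation lives, just to show how small the skeleton is:

```javascript
// Toy static compressor: above the threshold, output level rises
// at only 1/ratio of the input level. No time constants, no saturation.
function compressDb(inputDb, thresholdDb, ratio) {
  if (inputDb <= thresholdDb) return inputDb; // below threshold: untouched
  return thresholdDb + (inputDb - thresholdDb) / ratio;
}

compressDb(-6, -18, 4); // 12 dB over the threshold comes out 3 dB over: -15
```

The hard part of emulating a specific box is modeling its quirks, but the core "make loud parts less loud" logic is two lines.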
There's a lot more to it than what I'm demonstrating here, but not
a whole lot more. Every single component of music can be broken down to its smallest part. We know this because we do this when we learn music. If you've ever taught music (break down concepts to their smallest parts), and you know how to code (translate those smallest concepts into something that a computer can interpret), then you could probably create one of these services yourself if you really, really wanted to.
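As a concrete example of "smallest parts": expanding the roman-numeral progression from my snippet above into actual notes is about a dozen lines. The lookup tables here only cover the two chords I used, purely for illustration:

```javascript
// Expand roman numerals into MIDI triads relative to a tonic.
var DEGREE = { "i": 0, "bVI": 8 };    // semitone offsets from the tonic
var MINOR_TRIAD = [0, 3, 7];
var MAJOR_TRIAD = [0, 4, 7];

function chordToMidi(numeral, tonicMidi) {
  var root = tonicMidi + DEGREE[numeral];
  // convention: lowercase numeral = minor chord, otherwise major
  var shape = numeral === numeral.toLowerCase() ? MINOR_TRIAD : MAJOR_TRIAD;
  return shape.map(function (s) { return root + s; });
}

var progression = ["i", "bVI"].map(function (n) { return chordToMidi(n, 62); });
// D minor: [62, 65, 69]; Bb major: [70, 74, 77]
```

Teaching breaks the concept down; code is just writing the broken-down version where a machine can read it.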
Honestly, I actually hope this gains ground--and it will--and while the transition will be painful, on a
global, every-industry scale, we'll be forced to reckon with the fact that most of the reasons to organize society under a capitalist form will be negated. I'm not preaching that this will automatically equal utopia, because you'd have to be pretty fucking stupid to look back at every other major junction in history and assume that everything will be peachy on the other end. There's a major chance that the trend could eventually result in extreme boot-on-face capitalism, extreme dystopia. Maybe it won't change much at all.
But regardless, mass automation
is inevitable, and we composers will have to deal with it at some point too--probably much sooner than you think. I've actually heard about Amper's plans, and they're big and coming fast, with money being poured into them by those who stand to benefit (i.e. companies who currently pay composers our "expensive rates"). And since there's a chance that (again, on a scale affecting literally everyone on the planet) we humans actually could figure out this whole "basically every job that people have in order to make money can be automated" thing, there's a
chance that, who knows, maybe we can leave the bullshit Law & Order procedural music that most of us could write in our sleep to the computers, and--assuming we're talking about one of the better possible timelines here--could instead focus on the art, leaving the algorithms and the computers to disrupt to their heart's content. That's a very rosy "post-work" future, and it's one of a number of possible timelines. And while I'm not super optimistic that we'll figure it out within my lifetime, I'm glad that this conversation is popping up pretty regularly now in pop culture. People are taking it seriously, and hopefully we can navigate it adequately. For now, I'd say I'm truly neutral. I could see automation going in a number of ways, and to be honest I expect we'll pass through all of them at some point, roughly worst to best with a lot of dips and valleys along the way. Broadly speaking, that's how humans have dealt with every other paradigm shift.