Artificial Intelligence as an assistant composer?

There is no way it is going to know the latest underground dance floor meme that Michael Bay will completely lose his sh*t over.
Ha, as an occasional DJ I've sometimes stood there picking the next track thinking that this is something software will soon be a lot better at than us humans - knowing the latest trends before they get big, and predicting what track the specific audience that specific evening will react positively to. That's just something that's pretty hard for us, and much easier for computers.
 
Totally agree with you on this @AlexanderSchiborr .

Modern chess software can beat Magnus Carlsen without problems; it plays moves that are simply not human, and that are based on raw calculating power. I read an article recently explaining how these algorithms work. Chess software is actually “making decisions” between different moves in a mechanical way: the software can tag every possible move with a number. The software will choose move A over move B because:

- the A value is 5 and the B value is 2,
- 5 is more than 2,
- and the software is designed to choose the highest value.

The engineer has intelligently established a set of rules to assign the higher number to the stronger move. But the software doesn't know what a number is, or that five is more than two, or that the queen is much more important than a pawn; it is the engineer who knows that. If he defines that 2 is more than 5, then the software will accept it without problems and will lose against Carlsen in an instant. So in the end there is only a dead machine working as expected.
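The "tag every move with a number" scheme described above can be sketched in a few lines. The moves and scores here are invented for illustration; real engines search millions of positions, but the final step is the same mechanical comparison:

```javascript
// Toy version of the "tag every move with a number" idea: each candidate
// move gets a score from a hand-written evaluation, and the engine simply
// picks the highest-scoring one. Moves and scores are invented.
const candidateMoves = [
  { name: "A", score: 5 },
  { name: "B", score: 2 },
];

function pickMove(moves) {
  // The machine never "knows" what 5 means; it only compares numbers.
  return moves.reduce((best, move) => (move.score > best.score ? move : best));
}

// pickMove(candidateMoves).name is "A", purely because 5 > 2
// and the rule says "take the highest".
```

Flip the rule to `<` and, exactly as described above, the same dead machine would happily pick the worst move every time.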

But music is, imo, something different. Creating music is about taking real decisions during the process, decisions that involve your consciousness, your emotional perceptions, your knowledge, your goals… A piece of software making music is forced to take “decisions” too. But how? I can't imagine software perceiving by itself that “this is more beautiful than that”. But I can imagine software with defined instructions like: “if a moment of surprise is needed, go from the dominant chord to the bVI chord instead of going to the tonic”. I think that in the future this could perhaps work for making correct/generic music, which eventually could replace some composers' jobs. But not for making something totally beyond the defined rules of the software.

Another thing: if there is someone out there who can design a mechanical way to measure the beauty of a melody that could eventually be encoded into an algorithm… guess what? Those people are musicians, and this site is full of them. And we know that there are no mechanical rules that, once applied, will automatically produce a beautiful melody.
Well said. I'm reminded of Searle's "Chinese Room" argument regarding the inability of AI to actually understand rather than merely map inputs to outputs.
 
knowing the latest trends before they get big, and predicting what track the specific audience that specific evening will react positively to. That's just something that's pretty hard for us, and much easier for computers.

Really? You think a computer will be able to pick up 'the vibe' in a room before an experienced human? I think that's unlikely. (eight years at the coal face in London's nightclubs)
 
Music is maths. Conceptually there is no reason whatsoever why AI couldn't be extremely useful for the 'grunt' work that I am sure everybody here does every day. Chord sequences, harmonisation, meter, timing, etc. However, the more you learn about music, the more you realise the infinite variations that are possible as those little musical decisions mount up. I'm very happy for something like Orb to resolve a chord sequence for me at the right tempo to hit 01.04.33. There is no way in my lifetime that it is going to know to add that little triplet stutter in the melody that makes the whole thing come alive. There is no way that it is going to know that the director is obsessed with cimbalom and how to construct complementary instrumentation. There is no way it is going to know the latest underground dance floor meme that Michael Bay will completely lose his sh*t over.

Will there be automated soundtracks for dross TV shows and YouTube channels? Of course! Technology will always remove the jobs where there is a financial incentive to favour output over quality. Who wants those jobs? If you treat music like a production line (yes I know, I know) technology will eventually 'productionize' it. Just like every other industry in the world.

Change is inevitable and it's pointless bitching about it. Embrace it, stay ahead of the curve and use it to your advantage. I really hope something like Orb works so that the distance between the musical ideas in my head and finally fitting them to a cue is reduced. I still think it is some years away, however. The demos on the Orb site are absolute garbage.
Good points. The key is using it in an innovative way and not letting convenience take the forefront of the actual writing.

As an aside, despite my own desire to be able to make convincing music without musicians, my end goal would always be to record as much playing as I'm able. I don't think I'm unique in this regard.

Really? You think a computer will be able to pick up 'the vibe' in a room before an experienced human? I think that's unlikely. (eight years at the coal face in London's nightclubs)
Indeed, I think this type of thinking is a deification of AI that reveals a lack of understanding about AI's capabilities. One can dream, sure, but predict?
 
Really? You think a computer will be able to pick up 'the vibe' in a room before an experienced human? I think that's unlikely. (eight years at the coal face in London's nightclubs)
(Replying to my own post - I'm clearly insane.)

Having said that - in the far-future I can imagine everybody in a nightclub being tagged, having their blood chemistry analysed, their personal computing implants interrogated and the aggregate 'vibe' being analysed and compared to historical data to provide a list of sound track options for the operator depending on the 'mood-direction' required. Of course the only nightclubs left by that time will be in the badlands of the Jupiter mining colonies.

I'll stop now...
 
As an aside, despite my own desire to be able to make convincing music without musicians, my end goal would always be to record as much playing as I'm able. I don't think I'm unique in this regard.

Couldn't agree more. We react in such deep ways to actual musicianship that I don't think it will be going away any time soon.
 
Couldn't agree more. We react in such deep ways to actual musicianship that I don't think it will be going away any time soon.
Indeed!

(Replying to my own post - I'm clearly insane.)

Having said that - in the far-future I can imagine everybody in a nightclub being tagged, having their blood chemistry analysed, their personal computing implants interrogated and the aggregate 'vibe' being analysed and compared to historical data to provide a list of sound track options for the operator depending on the 'mood-direction' required. Of course the only nightclubs left by that time will be in the badlands of the Jupiter mining colonies.

I'll stop now...
Of course, the implants will have been implicated in the development of rapidly advancing neurodegenerative disorders, but UNITECH will deny any such connection.
 
Most historians agree that this was the inciting incident for the great Jupiter uprisings and its eventual independence.
 
..and the eventual outlawing of invasive brain tech in most colonies...
 
The savvy music library is going to copy Amper Music and the like straightaway. There is definitely a place for this alongside the human compositions.

... once the compositions are better. They sound like crud at the moment
 
Do not worry about the AI; the path that was chosen (40 years ago) is the one that imitates, not the one that invents.
This path has been chosen because it gives quick results, but it is not really intelligence.

To illustrate this:
"Victor tells a funny story to a monkey; the monkey does not understand anything and does not react.
Victor's brother enters the room, and Victor tells him the same funny story. The brother bursts out laughing. The monkey imitates the brother and also laughs ..."

The AI as imagined by John McCarthy will probably never see the light of day ...

But in music, harmony, counterpoint, etc. are achievable because they are "techniques" ...
 
But emotion drives performance, melody creation etc. It's hard to ignore that, at least from an artistic standpoint, many people don't care about something not written or created by another person. There's nothing to personally connect with.
The only truly meaningful objective measurement to the efficacy of a piece is how it affects people emotionally.

There are two big, incorrect assumptions people always make regarding this topic.

1) The idea that an AI could not replicate pieces that create subjective feelings (Emotion) in humans because the machine itself lacks this capability, but the emotion you describe is not an objective experience of music.

2) The idea that people actually care about whether the "art" they enjoy is created by man or machine rather than caring about the quality of the piece and its appeal to the viewer.

To the first one, I would argue that emotion doesn't really drive melody creation. That's no doubt a controversial statement on forums of this area of interest, but it's the truth. Melody is the result of a linear succession of tones, and every pleasing tune ever composed consists of these tones following particular rules: rules of motion (and subversion of those expectations), repetition and variance, chord and non-chord tones. These rules are being followed whether the composer is consciously aware of it or not.

This is absolutely something an AI could learn and would be good at; all it needs to do is be fed these rules and examine tons of pieces, which it could do in seconds.

It's just a matter of time before an AI can easily master all of these concepts and generate them on its own. There is already notation software that can spot errors in voice-leading. Apply this as an auto-correct feature, add the basic "rules" of melody and harmony, have the AI compose multiple melodies at once and boom — counterpoint.
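As a toy illustration of the claim that melody follows rules, here is a sketch that generates a short line by preferring stepwise motion within a scale and resolving to the tonic. The specific rules and probabilities are my own invented simplification, not any real product's method:

```javascript
// Toy rule-based melody generator: walk a C major scale, mostly by step,
// occasionally by a leap of a third, always ending on the tonic.
// The rules and probabilities are a deliberate oversimplification.
const scale = ["C", "D", "E", "F", "G", "A", "B"];

function generateMelody(length, rng = Math.random) {
  let index = 0; // start on the tonic
  const melody = [scale[index]];
  for (let i = 1; i < length - 1; i++) {
    // Rule: prefer stepwise motion; leap a third 20% of the time.
    const interval = rng() < 0.8 ? 1 : 2;
    // Rule: go up or down with equal probability, clamped to the scale.
    const direction = rng() < 0.5 ? -1 : 1;
    index = Math.min(scale.length - 1, Math.max(0, index + direction * interval));
    melody.push(scale[index]);
  }
  melody.push(scale[0]); // Rule: resolve to the tonic.
  return melody;
}
```

Running two copies of this with a consonance check between them would be a crude step toward the auto-generated counterpoint mentioned above; the real question, as the rest of the thread argues, is whether the output is interesting rather than merely correct.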

As for point two, while it certainly has its detractors for being "computer music", are we all just going to pretend EDM, trailer music, etc. don't have a positively massive fanbase? 99% of trance music is exactly the same, and an AI should have no trouble creating it if someone was truly up to the task of making an AI that could do it. The number of people who are bothered by sampled music or entirely synthetic pieces is statistically insignificant.

Nobody cares if the music they're listening to on Spotify or in the theater is made with samples or synthesizers, and they're not going to care if it was composed by a computer either — they care that they like it.

John Williams-level compositions at the click of a button are coming in the next decades whether we like it or not.

Art is not safe from human laziness allowing humans to be replaced entirely.
 
Really? You think a computer will be able to pick up 'the vibe' in a room before an experienced human? I think that's unlikely. (eight years at the coal face in London's nightclubs)
Really. I just got to thinking about that one night, what am I really doing and how I could write requirements for software that could do the job better. It would not be trivial, but...

The number of people on the dance floor and their energy level (plus stuff like age, sex etc.) is pretty easy data to acquire, though things like how fashionably they're dressed is probably a bit harder, and I know that also goes into my thinking. Sure, I can still do that part better, even though a machine would also have an easier time getting current data on drink sales, where the actual money is. But, let's say, for just the picking up the vibe of the room part, I'm probably still better overall than any currently feasible software I can imagine.

But that's one room, which is a big limitation. A machine could access historical data, let's say up to last night, or even up to the minute, from this club and clubs all across the world, find clubs with similar profiles, and make a list of playlist suggestions. That I can't do and will never be able to do - the closest I can do is remember what worked at the same club earlier, talk to other DJs who played there, and check charts. That's not even close.

The last step is to put the first two together and mix tracks from those suggestions based on what's currently happening in the room, which, well, I'm pretty good at doing that on the fly, but again, something with data from clubs all over the world could do better.
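The three steps described above (read the room, pull suggestions from historical club data, rank candidates against the current vibe) could be caricatured like this; every field, weight, and number is invented for illustration:

```javascript
// Caricature of the pipeline above: combine live room data with historical
// club data to rank candidate tracks. All fields, weights and numbers are
// invented for illustration.
function scoreTrack(track, room) {
  // Historical fit: how well the track has done at clubs with a similar profile.
  const historical = track.playsAtSimilarClubs;
  // Live fit: how closely the track's energy matches the floor right now.
  const liveFit = 10 - Math.abs(track.energy - room.energy);
  return 0.5 * historical + 0.5 * liveFit;
}

function suggestNext(tracks, room) {
  // Highest combined score wins.
  return [...tracks].sort((a, b) => scoreTrack(b, room) - scoreTrack(a, room))[0];
}

const room = { energy: 7 }; // e.g. derived from crowd counts, drink sales, time of night
const tracks = [
  { name: "Track 1", energy: 3, playsAtSimilarClubs: 4 },
  { name: "Track 2", energy: 8, playsAtSimilarClubs: 6 },
];
// suggestNext(tracks, room) picks "Track 2": score 7.5 vs 5.0.
```

The hard part, of course, is not this ranking step but acquiring the inputs: turning a crowded dance floor into an honest `energy` number is exactly the "reading the vibe" problem debated above.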

But my day job is administering tools used by software developers. So I tend to think about this kind of stuff.
 
The whole system of automation will decimate the job market all over the place (as I've mentioned in another thread).

A landlord of a pub will stop hiring a DJ because of a program that'll pump out original dance music all night. He'll be happy with the money he's saved until he gets an email from the brewery telling him they don't need him any more because they'll be installing self-service drinks dispensers at the bar and an automated barrel-changing system in the cellar.

All the managers at the brewery will be pleased with the savings they've made for the company, until the shareholders contact them saying that their services are no longer needed as there's a computer algorithm that can make middle-management decisions with more efficiency and a lower margin of error than any human.

While they're cleaning out their offices, the warehouse staff are being informed that they're being replaced by robotic picking/packing machines, and the truck drivers are being let go in favour of driverless lorries.

Obviously all this won't happen overnight, but whole industries will be affected simultaneously and there's nothing really to fill the gap.

It's a real problem that's not given enough serious thought IMO. Music composition is just one small part of it...
 
There are two big, incorrect assumptions people always make regarding this topic.

1) The idea that an AI could not replicate pieces that create subjective feelings (Emotion) in humans because the machine itself lacks this capability, but the emotion you describe is not an objective experience of music.

2) The idea that people actually care about whether the "art" they enjoy is created by man or machine rather than caring about the quality of the piece and its appeal to the viewer.

To the first one, I would argue that emotion doesn't really drive melody creation. That's no doubt a controversial statement on forums of this area of interest, but it's the truth. Melody is the result of a linear succession of tones and every pleasing tune ever composed consists of these tones following particular rules of motion (and subversion of these expectations) repetition and variance, chord and non-chord tones to create these melodies and these rules are being followed whether the composer is consciously aware of it or not.
Sure. The mechanisms of a melody are largely rule-based, and what tends to be memorable and pleasing to the human ear can be said to follow certain predictable patterns. What I mean by emotion-driven is the idea of "what is the colour red" vs "what is it like to see the colour red" -- the context of the melody may be lost on many, but it's certainly not unimportant.

As for point two, while it certainly has its detractors for being "computer music", are we all just going to pretend EDM, trailer music, etc. doesn't have a positively massive fanbase? 99% of trance music is exactly the same and an AI should have no trouble creating if someone was truly up the task of making an AI that could do it. The number of people who are bothered by sampled music or entirely synthetic pieces are statistically insignificant.
True, but despite it having a fan base, I would hardly say any of it is worthy of the label of "artistically significant", or even interesting. The manipulation and context is what makes Amon Tobin's music considerably more interesting than random samples thrown together or the run-of-the-mill mashup DJ.

Nobody cares if the music they're listening to on spotify or in the theater is made with samples or synthesizers and they're not going to care if it was composed by a computer too — they care that they like it.
You make some good points, but I disagree with this. As for sound source, I think it's less important than what it sounds like to a listener. But as for nobody caring whether or not it was made by a person? Why will someone pay $6000 for a hand-made guitar when they can buy a production model of similar or better build quality, often with fewer small flaws, for significantly less? Why do we care about who can drive the fastest and most accurately, or lift the most weight, or shoot the straightest, or problem-solve the quickest, when machines have been able to analyze and do all of these things more competently than us for years? Why do people still attend live music events (arguing that more people spend time in clubs is a ridiculous point [not one I'm saying you're making, just in case it comes up]; a small fraction of the people in clubs even notice what music is playing, other than the deafening noise and underlying rhythm that's consistent and danceable)? Why do people still care about painted vs digital art? Why do people still care about who the best chess player is, despite AI having beaten world-renowned players? Why do a significant number of people care about live improvisation in music?

While I agree regarding AI being able to brute-force analyze music at ridiculous speeds, and being able to construct complex melodies and harmonies in the future (probably), I can't help but think that there will still be a significant place for human expression. There's absolutely a market for generic garbage, especially in music, but I think believing meaning is unimportant to humans is either a bias resulting from metaphysically assuming humans as purely mechanistic (an underlying assumption of this view being that humans make no connections to anything beyond deterministic neurochemistry), or it's a pessimistic view on human laziness (something I'm afraid of and unfortunately suspect myself). Of course, this isn't really touching the range of expression and interesting rulebreaking that happens in music, something much more difficult to replicate.

I also suspect many of these tech "innovations" are far more difficult to seamlessly and consistently implement in real life. Consider even something like human language and how inaccurate translation AI is (at least that which is available to the public). It works fine for short phrases, but quickly loses sense of context or nuance. Even videogame AI is nowhere near up to where people would suspect it should be by now, despite processing power no longer being much of a hindrance. I think writing interesting music will be a difficult task, something far more difficult than a brute-force response-analysis that happens with, say, AI chess.

Art is not safe from human laziness allowing humans to be replaced entirely.
This is something I personally dread. The irony is that the negative research always comes out after the invention (i.e. smartphones and the research that's come out, especially in regard to overall quality of life, neurological development in children, severe reductions in attention span and ability to focus in adults, etc).

But anyway, you're a replicant, so why would I listen to you? Do you dream of electric sheep?
 
Really. I just got to thinking about that one night, what am I really doing and how I could write requirements for software that could do the job better. It would not be trivial, but...

The number of people on the dance floor and their energy level (plus stuff like age, sex etc.) is pretty easy data to acquire, though things like how fashionably they're dressed is probably a bit harder, and I know that also goes into my thinking. Sure, I can still do that part better, even though a machine would also have an easier time getting current data on drink sales, where the actual money is. But, let's say, for just the picking up the vibe of the room part, I'm probably still better overall than any currently feasible software I can imagine.

But that's one room, which is a big limitation. A machine could access historical data, let's say up to last night, or even up to the minute, from this club and clubs all across the world, find clubs with similar profiles, and make a list of playlist suggestions. That I can't do and will never be able to do - the closest I can do is remember what worked at the same club earlier, talk to other DJs who played there, and check charts. That's not even close.

The last step is to put the first two together and mix tracks from those suggestions based on what's currently happening in the room, which, well, I'm pretty good at doing that on the fly, but again, something with data from clubs all over the world could do better.

But my day job is administering tools used by software developers. So I tend to think about this kind of stuff.
Ok but to be fair, what, really, is the "vibe" range in a dance club? I've spent incredibly little time in clubs and I'm sure I could "analyze" the vibe in most nightclubs without going into them.
The whole system of automation will decimate the job market all over the place (as I've mentioned in another thread).

A landlord of a pub will stop hiring a DJ because of a program that'll pump out original dance music all night. He'll be happy with the money he's saved until he gets an email from the brewery telling him they don't need him any more because they'll be installing self-service drinks dispensers at the bar and an automated barrel-changing system in the cellar.

All the managers at the brewery will be pleased with the savings they've made for the company, until the shareholders contact them saying that their services are no longer needed as there's a computer algorithm that can make middle-management decisions with more efficiency and a lower margin of error than any human.

While they're cleaning out their offices, the warehouse staff are being informed that they're being replaced by robotic picking/packing machines, and the truck drivers are being let go in favour of driverless lorries.

Obviously all this won't happen overnight, but whole industries will be affected simultaneously and there's nothing really to fill the gap.

It's a real problem that's not given enough serious thought IMO. Music composition is just one small part of it...
It really isn't given enough thought, because for some reason people view technological invention as a moral imperative taking precedence over the good of mankind. I really think Elon Musk is one of the very few tech geniuses who understands this. Even Jobs didn't allow his own products into his home -- what kind of person creates a household product they're afraid to let their family use?
 
The whole system of automation will decimate the job market all over the place (as I've mentioned in another thread).

A landlord of a pub will stop hiring a DJ because of a program that'll pump out original dance music all night. He'll be happy with the money he's saved until he gets an email from the brewery telling him they don't need him any more because they'll be installing self-service drinks dispensers at the bar and an automated barrel-changing system in the cellar.

All the managers at the brewery will be pleased with the savings they've made for the company, until the shareholders contact them saying that their services are no longer needed as there's a computer algorithm that can make middle-management decisions with more efficiency and a lower margin of error than any human.

While they're cleaning out their offices, the warehouse staff are being informed that they're being replaced by robotic picking/packing machines, and the truck drivers are being let go in favour of driverless lorries.

Obviously all this won't happen overnight, but whole industries will be affected simultaneously and there's nothing really to fill the gap.

It's a real problem that's not given enough serious thought IMO. Music composition is just one small part of it...

No matter how you slice it, this leads to a communist dystopia for us.

"We'll just have a universal basic income!"

Yeah, well, you can only redistribute wealth so long as there is wealth being generated to distribute. You don't have that under such a system, which discourages entrepreneurship and renders it impossible anyway.

Civilization is all about work, without it we have no civilization. It's working to build things, generate an economy, work to maintain those buildings, economies and innovations and our day-to-day interactions within this system that make civilization what it is and separates us from the animals.

The only hope I can see is that CEOs and corporations will realize that having boosted productivity and automation doesn't matter if no one has any money to buy your products because they have no work and their UBI is drained by utility companies that now charge the most they possibly can when everyone has a fixed income.

This happened at my local Wal-Mart. They used to have automated checkouts, but they got rid of them because they killed jobs that gave people the money to buy from the store and, like I said in my first point, were actually no more "convenient" than an employee.

We can still efficiently produce most things we consume without fully automating them or having them run by HAL-9000s.

If it comes down to having my Amazon order occasionally show up late or the kid at Wendy's screwing up my order from time to time vs living in dystopia of "convenience", I think I'll take that extra 10 minute wait in line, thanks.
 
I think it's pretty obvious that when people use emulations of musical instruments, no matter how advanced they are and how fantastic they have started to sound, they are nothing like the real deal. Why do people think the human being can just be emulated? It's ridiculous. Music isn't maths. Music is emotion and no computer or algorithm feels emotion. People always want something new and exciting and the only thing capable of that is human chaos (the human being). You can't program that.
 
This discussion is overall missing the point, I think...

First, stop bringing up art. These technologies are all about disrupting industries, i.e. by definition they are focused on replacing working composers. Art has absolutely nothing to do with this discussion because it's not like people will stop writing music for fun/enjoyment/art just because an algorithm can get paid to do it. I should be clear, though, that by "art" I mean the intention of the music, i.e. it is not being made for any specific purpose beyond that of the artist's/artists' (whatever it may be) and is not being commissioned. We're talking about music that is made in exchange for money; that's the only thing these technologies would have any reason to target.

Second, AI is a bit of a misnomer here. Obviously whatever capabilities the algorithm has will be decided by a human being. Humans are the ones writing the code, after all, and machine learning is still quite young. This is far better classified as "automation" as opposed to "artificial intelligence." I suppose if you want to get all semantics-y, this is technically "AI," but...yeah, at its core, it's automation.

As ChristianM said, this technology imitates. If it were inventing, then it'd truly be AI--machine learning--which is of course the goal and if it's possible to do at the desired level will inevitably come to pass, but at the moment, this technology's biggest strength is imitation. And that is hilariously easy for an algorithm to do. You all should know, because you already do it. It's called "learning your craft." It's literally what makes a professional musician a professional--we understand what musical parameters to tweak in order to gain the desired emotional response from the audience. It's a little cold to define it that way, but it's exactly what we do. Otherwise, none of us would be capable of composing music for dramatic projects. Doing it by "feel" just means your brain is wired in such a way that you can do it without explicitly thinking about it. Sort of how loads of people have to study their asses off in order to improvise in a jazz setting, but for some people like Jeff Beck, it's a natural thing for them and they already understand it. Their brains just already had that firmware installed! :P

I mean, take for instance orchestration. It's a deep and life-long art, but you can automate the meat of the process to where you have something "good enough" by having instruments with like ranges playing the same part. That'd take four seconds to code, and the vast majority of the people in the world wouldn't notice, especially considering that the vast majority of orchestrated music they hear is probably Beethoven and Mozart, and for all their genius, their orchestration was pretty tame and "vanilla" by later standards. So the audience would just hear flute doubling violin, french horn doubling viola or cello, etc, and they'd unconsciously go, "Yeah, that sounds like an orchestra."
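That "instruments with like ranges playing the same part" shortcut might look something like this; the instrument ranges are rough MIDI-note approximations of my own, not authoritative orchestration figures:

```javascript
// Toy version of the "double like ranges" shortcut: assign a written part
// to every instrument whose range contains it. Ranges are rough MIDI-note
// approximations of my own, not authoritative orchestration figures.
const instruments = [
  { name: "Flute",       low: 60, high: 96 },
  { name: "Violin",      low: 55, high: 100 },
  { name: "French Horn", low: 41, high: 77 },
  { name: "Cello",       low: 36, high: 76 },
];

function autoDouble(partLow, partHigh) {
  return instruments
    .filter(i => i.low <= partLow && partHigh <= i.high)
    .map(i => i.name);
}

// A melody spanning MIDI 62-84 lands on flute doubling violin,
// while a lower line around 48-60 lands on horn doubling cello.
```

It ignores everything a real orchestrator weighs (register colour, dynamics, balance, idiomatic writing), which is exactly the point: "good enough" doubling really is a four-second filter.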

Same with more composition-related stuff. If I were to write an algorithm (in rough JavaScript pseudo-code here) to define a Law & Order procedural track:

var genre = "procedural";

if (genre === "procedural") {
    var tempo = 112; // ideally we could randomize this within a given range, but that'd make the code more dense, so I'm leaving that out
    var key = "D minor"; // same deal here
    var signature = "4/4"; // as a string, so the interpreter can split it into beats per bar and note value
    var chordProgression = ["i", "bVI"]; // or ["i", "bVI/3"]; this is just an example, obviously there'd be more potential chord progressions to cycle through
    var harmonicRhythm = 2; // same deal here
}

Obviously you'd have to define that 112 is actually 112 bpm, that D minor contains these notes and not those, what the top 4 and the bottom 4 in 4/4 means, etc. And obviously this isn't complete--I'm just doing the easy lifting of defining the parameters, not the composition itself; you'd need to define sections of music, the order of the sections, what the instruments are, what patches they are, etc etc etc. And you'd of course have to tie this code to another application that can interpret it, can call the patches, can translate what you're writing as MIDI, stuff like that.

But that's easy stuff. Really easy. People were doing this thirty years ago--it just sounded like shit because producing music on a computer was in its infancy. The production level is still the bane of these services, but come on--you can easily analyze a mix with Ozone and, just like I did above, break down whatever processes during mixing resulted in a quote-unquote "mix that's good enough for the general audience" into a pretty small and easy algorithm. Because at the end of the day, anything in music can be quantified. You don't need to use an LA-2A--you just need your bass guitar to sound like it's being run through one, which we can already do. Stuff like that.

There's a lot more to it than what I'm demonstrating here, but not a whole lot more. Every single component of music can be broken down to its smallest part. We know this because we do this when we learn music. If you've ever taught music (break down concepts to their smallest parts), and you know how to code (translate those smallest concepts into something that a computer can interpret), then you could probably create one of these services yourself if you really, really wanted to.

Honestly, I actually hope this gains ground--and it will--and while the transition will be painful, on a global, every-industry scale, we'll be forced to reckon with the fact that most of the reasons to organize society under a capitalist form will be negated. I'm not preaching that this will automatically = utopia, because you'd have to be pretty fucking stupid to look back at every other major junction in history and assume that everything will be peachy on the other end. There's a major chance that the trend could eventually result in extreme boot-on-face capitalism, extreme dystopia. Maybe it won't change much at all.

But regardless, mass automation is inevitable, and we composers will have to deal with it at some point too--and probably much sooner than you think. I've actually heard about Amper's plans, and they're big and coming fast, and money is literally being poured into them from those who will benefit (i.e. companies who currently pay composers our "expensive rates").

And since there's a chance that (again, on an affecting literally everyone on the planet scale) we humans actually could figure this whole "basically every job that people have in order to make money can be automated" thing, there's a chance that, who knows, maybe we can leave the bullshit Law & Order procedural music that most of us could write in our sleep to the computers, and--assuming that we're talking about one of the better possible timelines here--could instead focus on the art, leaving the algorithms and the computers to disrupt to their heart's content. That's a very rosy "post-Work" future, and it's one of a number of possible timelines. And while I'm not super optimistic that we'll figure it out within my lifetime, I'm glad that this conversation is popping up pretty regularly now in pop culture. People are taking it seriously, and hopefully we can navigate it adequately. For now, I'd say I'm truly neutral. I could see automation going in a number of ways, and to be honest I expect we'll pass through all of them at some point, roughly worst to best with a lot of dips and valleys along the way. Broadly speaking that's how humans have dealt with every other paradigm shift.
 
I think it's pretty obvious that when people use emulations of musical instruments, no matter how advanced they are and how fantastic they have started to sound, they are nothing like the real deal. Why do people think the human being can just be emulated? It's ridiculous. Music isn't maths. Music is emotion and no computer or algorithm feels emotion. People always want something new and exciting and the only thing capable of that is human chaos (the human being). You can't program that.

No, music is formed of tiny component parts, and when put together, elicit emotion. The computer doesn't have to understand emotion--what does that even mean, anyways. People write the code, and they understand what the emotional goal is, and therefore can define the parameters as they see fit to reach said emotional goal.

So, yes, you absolutely can program that. Programming a car to drive by itself is a metric shit ton more complicated than programming a computer to write some music. Bring in machine learning--a technology being pioneered in part by, for example, automated cars that share their experiences and therefore learn from each other--and yeah, the algorithms will be writing some damn convincing music. It's really just a matter of time because the concepts are sound and the technology is either here already or very, very close.

Edit: to be really clear, though, I'm not talking about writing music to picture. That is a complicated decision-making process, which obviously would take far more time to figure out. Not impossible by any means, but is much, much farther off than writing music that will later be placed to picture.
 