oh wow oh wow. This is quite the topic. Ripe for quite strong opinions - showing that amongst us pro/semi-pro/hobbyist composers there is a vast amount of ground between our views. I'm hoping it doesn't descend into a dichotomy... the grey is far more interesting and potentially important.
Rather than going too deep into things all in one go... just a few little observations. I might get round to writing more later.
Regarding the copyright. There is no doubt. The piece linked a few pages back is infringing copyright according to the legal frameworks in Australia. I might even send this link to a musicologist in the UK to get his opinion as well from the UK / EU standpoint. Just because I'm interested.
EDIT: Of course it isn't. I was stupidly comparing that AIVA piece to an earlier version of the same AIVA piece - not Williams Rey. Label your files, folks. Let my embarrassment be a lesson to you!
I'm indeed interested enough to send it to a bunch of solicitors to see if any would like to create a test case. Why? Because the legal framework around these things is important. If the company wants to survive, they need to bend to the will of the law. And the law needs to look at what it thinks is important for society / composers / tech companies etc. I'm not sure the law (at least here) has had a chance to look at something like this.
It's somewhat difficult for the company given that, although they operate (and are legally set up) in a particular jurisdiction, their product is available for folk to use all around the world. And different regions have very different legal frameworks around this stuff - even on the fair use of the material that the AI is being trained on.
One could potentially lobby for a legislative framework which assigns composers a right to have their output NOT used to train an AI. Or the opposite could occur. This is not as far fetched as it sounds. There are all sorts of legal opinions / research going on around data usage / massed data usage (second derivative use / big data plays) which people far more intelligent than me are working on right now. There are whole new social sciences being created here in Australia (the 3Ai institute at ANU comes to mind, as do some very interesting projects from the law dept at Syd Uni) and projects looking at framing data usage and society's attitudes toward it. This may seem a long way from what is happening here, but this software is using data to train the AI. Society is only now starting to come to grips with what data use really means, and legal frameworks are being considered. This is going to have massive, far-reaching effects on shaping society. There are huge players involved. Big tech companies are lobbying hard to enable an "anything goes" type situation. Other folk are rebutting that actively.
We will see how this plays out in many of our lifetimes. Some of it will undoubtedly become clearer even in the next 2 or 3 years. GDPR in the EU is just a start. Govt regulation is being heavily debated in almost all western regions. Conferences are being held all around the world on AI use, data use etc. It's not a fait accompli. Even if there are many who don't care - who just want to see what happens - there are others who are deeply concerned about what these tech changes mean for society as a whole. There will be the inevitable political differences between the right and the left on this in terms of regulation, but we've already seen both sides coalesce around some of the issues (and essentially place themselves against the position of anarchists and libertarians and the like).
Edit: One final little thought. The cat is out of the bag in regards to AI being useful for many, many projects that we might not even have dreamed about 10 or 20 years ago. However, it is far from settled how we (society / law / governments / communities) will interact with it. How much we will allow. Where lines will be drawn. And this is very much happening in our neighbourhoods right now. Get involved if you want. Just look up things like the ODI, or the Open Data Conference, or Data Rights, or ...