I trained a neural net for co-creative composition in REAPER

Malandro

It writes MIDI to empty MIDI items in your time selection, based on the other MIDI in your time selection.

Here's a video showing how it works:



Here's where you can download it:


The neural net can be used in a variety of different ways. Some things I find it useful for include:
-suggesting a variation on a melody that I've written,
-suggesting a variation on a whole section of a song (around 8 measures seems to work best for this),
-suggesting a harmony for a melody that I've written, and
-suggesting a drum part and/or drum fill.

It doesn't always do a great job, but often at least part of what it comes up with is compelling (within a few rerolls). Since you interface with it directly from REAPER, its output can be cut and rearranged easily within your REAPER project.

You can download the neural net and code needed to run it on your computer for free from the link in the video description. The net was trained only on permissively-licensed MIDI files (think copyright-free classical / choral / folk stuff, although there are also some modern compositions in the training set that I had permission from the authors to use).

Among other things I'm a drummer, and a significant percentage of the drum-writing examples in the training dataset are actually my own drumming, so if you use it to write drum parts, you'll likely get parts similar to my style, which should work well with good drum VIs like Superior Drummer. I think it's pretty cool that I can put a robo-version of myself out there for others to collaborate with in this way. It can also write for all 128 standard MIDI instruments (indexed 0-127).
 
@Ivan Duch asked how to personalize the model by training it on his own MIDI files. Here's how:

Have Python 3.6+ as well as the following Python packages installed:
-numpy
-pytorch
-miditoolkit
-portion
-transformers

Your system will need an Nvidia video card with at least 6 GB of VRAM. Expect the following process to take a few hours of computational time.

Gather 50-250 of your own MIDI files that you would like the model to learn the "style" of.

1) Open constants.py for editing. Edit PATH_TO_TRAIN_MIDI in the NEURAL NET TRAINING SETTINGS section. Also edit the variable MAX_LEN according to how much VRAM your video card has.
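For reference, that part of constants.py might look roughly like this. PATH_TO_TRAIN_MIDI and MAX_LEN are the real variable names from the script; the values shown are placeholders you should adjust for your own system:

```python
# Hypothetical sketch of the NEURAL NET TRAINING SETTINGS section of constants.py.
# The variable names are real; the values are placeholders, not recommendations.
PATH_TO_TRAIN_MIDI = r'C:\midi\my_training_files'  # folder holding your .mid files
MAX_LEN = 512  # example value only; use a smaller value if your card has less VRAM
```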

2) Put your midi files in the folder PATH_TO_TRAIN_MIDI, as you defined in constants.py.

3) Run preprocess_midi.py. This will create large file(s) in the folder PATH_TO_PROCESSED_TRAIN_MIDI.

4) Open nn_training_functions.py for editing. Near the top, edit the value of N_FINETUNE_EXAMPLES_MULTIPLIER. The default value of 30 will probably be good if you have 50 midi files. Lower it if you have more midi files, aiming for roughly 1500/(# midi files). If you have more than 1500 midi files, just set it to 1.
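The rule of thumb in step 4 can be written out as a tiny helper. This is my own sketch of the arithmetic described above, not code from the project:

```python
def finetune_multiplier(n_midi_files: int) -> int:
    """Rule of thumb from step 4: aim for roughly 1500 total examples,
    i.e. multiplier ~ 1500 / (number of MIDI files), never below 1."""
    return max(1, round(1500 / n_midi_files))

# 50 files gives the default of 30; very large collections bottom out at 1.
print(finetune_multiplier(50))   # 30
print(finetune_multiplier(92))   # 16
```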

5) Open build_finetune_train_data.py for editing, and ensure that it says EPOCHS = range(51, 52) near the top. Then run this file.

6) Open finetune_model.py for editing, and ensure it says EPOCHS = range(51, 52) near the top. Then run this file.

7) You should now have a model at composer assistant\models\unjoined\infill\finetuned_epoch_51_0. Open composer_assistant_nn_server.py for editing and change line 66 from

model_path = os.path.join(cs.UNJOINED_PATH_TO_MODELS, str(task), 'finetuned_epoch_50_0', 'model')

to

model_path = os.path.join(cs.UNJOINED_PATH_TO_MODELS, str(task), 'finetuned_epoch_51_0', 'model')

That's it! I hope to eventually have a GUI or a Colab notebook or something that does all of this for you, but it's a ways down the road at this point.
 
Thanks a lot for sharing all of this. The job you've done with this is brilliant, to be honest. Pretty much the equivalent of Stable Diffusion, but for composers.

I have an AMD card. I ran some tests with Stable Diffusion on Linux using ROCm and it works pretty well. Would this be compatible with ROCm drivers as well?
 
Sorry, I have no experience with ROCm. Maybe it would work? I have no idea. If you try it, please let me know how it goes!
 
Thanks for posting! I get much better results with some harmony. Quite surprised how well this works in fact. I'd love to have this in Renoise.
 
Am I the only one these days who isn't in awe of AI for music? It makes pretty pictures, but everything musical I've heard just sounds God awful.

Not to knock your programming prowess, but in all honesty, by the time you flip through a few variants to find the one that's close to what you want and then edit the MIDI to make it really what you want, you could have programmed something better. Maybe not faster, but certainly better.
 
I didn't even get to training it with Hans Zimmer!
Please don't ever do! I think we need to start treating AI training on material that isn't available under permissive free licenses the same as software piracy. It's a good thing the OP understood this issue and only used appropriate data for training.
 
Please don't ever do! I think we need to start treating AI training on material that isn't available under permissive free licenses the same as software piracy. It's a good thing the OP understood this issue and only used appropriate data for training.
Hans won't be the only one getting pirated. I don't think that merely training would be a problem. Training and releasing the results could.
 
Am I the only one these days who isn't in awe of AI for music? It makes pretty pictures, but everything musical I've heard just sounds God awful.

Not to knock your programming prowess, but in all honesty, by the time you flip through a few variants to find the one that's close to what you want and then edit the MIDI to make it really what you want, you could have programmed something better. Maybe not faster, but certainly better.
That was my experience so far with this. I write faster when I just focus on what I hear in my head instead of curating what AI has to output. Not to mention the writing sounds cohesive and more organic instead of a Frankenstein of music snippets. And it's way more enjoyable as well. Curating AI-generated stuff is a nightmare.

That said!

I think this thing is decent at counterpoint and can be a tool for those times where you have a melody, a harmony and you just need filling material for the arrangement. Actually, sometimes I do that within Dorico using the generate notes out of Chord Symbols functionality.

Unlike other music generation tools, this thing can work with your own composition, meaning that you could make a sketch of a whole song, add the midi to the DAW, and use this tool to help with voice leading and orchestration. It's also decent at generating variations of a given theme.

I have to say it's the most interesting AI tool for music I've seen so far. I haven't integrated it into my workflow and I doubt I will because the main reason I write music for a living is that I enjoy it. AI tools kill a good part of the enjoyment for me.
 
I tried to preprocess some MIDI files, but I first get "92 files to process", followed shortly by "saving chunk 0, 0 files to process". Now I've got a reason to learn more Python.
 
Here, feed it a melody with chords, and you get the rest with a click of a button.
I don't know if I'd go quite that far with the description, especially since you would probably still need to humanize the output. It just outputs notes. It does not output a performance of those notes. I'm glad you're enthusiastic about it, though!
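For anyone wondering what "humanizing" would involve, here is a generic sketch, not part of the tool, with the note representation simplified to (start_tick, velocity) pairs. It jitters timing and velocity slightly so the output sounds less mechanical:

```python
import random

def humanize(notes, time_jitter=10, vel_jitter=8, seed=None):
    """Return new (start_tick, velocity) pairs with small random offsets,
    clamping velocity to the valid MIDI range 1-127 and start ticks to >= 0.
    A generic illustration; a real workflow would edit notes in the DAW."""
    rng = random.Random(seed)
    result = []
    for start, velocity in notes:
        new_start = max(0, start + rng.randint(-time_jitter, time_jitter))
        new_vel = min(127, max(1, velocity + rng.randint(-vel_jitter, vel_jitter)))
        result.append((new_start, new_vel))
    return result
```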

Not to knock your programming prowess but in all honesty by the time you flip through a few variants to find the one that's close to the one you want then edited the midi to make it really what you want, you could have programmed something better.
You know, I agree with you if you already know what you want. IMO this AI is more for inspiration.

I tried to preprocess some MIDI files, but I first get "92 files to process", followed shortly by "saving chunk 0, 0 files to process".
If you have a file called 0.txt in your PATH_TO_PROCESSED_TRAIN_MIDI (defined in constants.py) then you are good to proceed with the next step.
 
Training on 92 MIDI files chosen haphazardly from 6000 worked fine, and took maybe an hour. I've been collecting MIDI files I like for this very occasion. I'll try training on all 6000 (a few dozen failed, I think mostly due to some key signature error). I think this is great, far better than some closed-off software.
 
Very interesting! @Malandro I'm attempting to train a model but running into some errors when running "preprocess_midi.py". Do you have any idea what this error could be caused by and how to fix it? Perhaps I'm using an incompatible version of numpy?

Code:
Traceback (most recent call last):
  File "C:\Users\X\AppData\Roaming\REAPER\Scripts\composer assistant\preprocess_midi.py", line 44, in <module>
    for i, res in enumerate(P.imap_unordered(pre.preprocess_midi_to_save_dict, paths[st: end], chunksize=10)):
  File "C:\Users\X\AppData\Local\Programs\Python\Python39\lib\multiprocessing\pool.py", line 448, in <genexpr>
    return (item for chunk in result for item in chunk)
  File "C:\Users\X\AppData\Local\Programs\Python\Python39\lib\multiprocessing\pool.py", line 870, in next
    raise value
AttributeError: module 'numpy' has no attribute 'int'.
`np.int` was a deprecated alias for the builtin `int`. To avoid this error in existing code, use `int` by itself. Doing this will not modify any behavior and is safe. When replacing `np.int`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.
The aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at:
    https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
 
Now that you mention it, I ran into this issue recently myself.

Find your installation of miditoolkit. (On my system, it's at C:\Users\stuff\AppData\Local\Programs\Python\Python39\Lib\site-packages\miditoolkit.) Navigate to miditoolkit\midi\parser.py and open this file in a text editor. On line 205, change

Code:
current_instrument = np.zeros(16, dtype=np.int)

to

Code:
current_instrument = np.zeros(16, dtype=int)

and save the file in place. Make sure the level of indentation of that line stays the same.

Then restart your computer and try again. Let me know if that works.

The bug is in someone else's code. I'm just tellin ya how to fix it :)
 
I only have a vague understanding of what a neural net does. The infilling can get stuck generating the same notes. Would using multiple models, each trained on 50-250 MIDI files randomly picked from a larger pool, provide more varied 'solutions'? If one model is running in circles, switch to another one.
 
Thanks, it works after making that small change :)
 