
Prepare Song for Mixing

Hi

When composing cinematic orchestral music with Kontakt sample libraries, I end up with a huge number of Kontakt instances and instrument tracks full of MIDI note information.

How do you guys prepare a song for mixing?

When the composing part is finished and it's time to start mixing the song, everything kind of halts, because I'm not sure whether I should bounce everything to audio before I start the mixing process. As a composer, I'd like to keep everything as instrument tracks with MIDI note information, but as an audio engineer, I'd like to work with audio files when mixing. So maybe it would be best to have two separate projects of the same song: one with instrument tracks (for editing/composing) and another with audio tracks only (for mixing), which gives a clean slate for the mixing process.

I know it's a personal preference and a DAW-dependent workflow thing, but it would be nice to get some ideas and share some thoughts on this. I know I can bounce instrument tracks to audio along the way, but things tend to end up as a big mess of instrument and audio tracks, which makes me lose concentration and the overview of the song. So I'm just curious how you handle this; there aren't many tutorials covering this topic. To wrap it up in a couple of questions:

1. Do you think it's necessary, or are there any advantages, to bounce all instrument tracks to audio before mixing?
2. Does anyone keep two copies of the same song (one with instrument tracks and one with audio tracks)? Or is it just me overcomplicating things?

Thanks for reading.
 
1. Do you think it's necessary, or are there any advantages, to bounce all instrument tracks to audio before mixing?
I wouldn't say it's necessary, but it can be an advantage.

Some people like to work with audio tracks and get rid of all MIDI before mixing. Besides obvious technical reasons (bounced audio frees up resources), mixing with audio tracks only can be beneficial for your decision-making process. You simply can't go back and tweak your MIDI data or open up Kontakt and fiddle with the attack of your cello shorts or the room mics. You gotta work with what you got.

I myself don't bounce tracks unless I need the extra power. But that's personal preference.
 
I'm using Cubase, so YMMV. I have a composition template that doubles as my mixing template. Once the composition phase is done, I freeze tracks, which unloads Kontakt instances from memory and also freezes the round robins in place, since it generates an audio file behind the scenes. Yet you still see your MIDI information on the events. If you need to go back to MIDI, simply unfreeze.

It's the best of both worlds, minus one caveat: you can't batch freeze, so you have to do your tracks one by one. I recommend using an SSD for this, as it will speed up the freezing process (I usually have my project files on a spinning drive).
 
I mix using the MIDI tracks that have been frozen to audio.

I've been using Studio One more than Logic now (which had been my DAW since Logic Platinum 4). I love the way it freezes, because it lets you edit and move things around, and if you have to go back to MIDI, the changes/edits you made are reflected back in the MIDI. Going back and forth is very simple, and you don't lose anything along the way. If I'm doing anything orchestra-centric with articulations, I usually use Logic.

Before freezing though, I export all the MIDI tracks first, so that they are intact as MIDI. This way, in the future I'm covered if: 1) the project ever gets corrupted; 2) I've sold the VI used and later need to recreate the project; 3) I no longer use the DAW it was created in, or the DAW doesn't work on a new OS and I don't want to upgrade it.

Once the MIDI is rendered to audio tracks, I export those as well for safe keeping.

All of my tracks (regions, events, clips, whatever your DAW calls them) are named to incorporate the VI used and the preset I created for it, along with any relevant info. This way, if needed in the future, I will know what originally made up that sound, and because I also kept the corresponding audio track, I will know what it sounded like (e.g. if I sold that synth and had to recreate it with another). I also name the project with the title plus the key, BPM and sample rate.
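To make that naming convention concrete, here is a minimal Python sketch of a hypothetical name builder; the function names, separators and example values are my assumptions for illustration, not anyone's actual template.

```python
# Hypothetical helpers that build names from the metadata mentioned above
# (VI, preset, extra info; project title, key, BPM, sample rate).

def region_name(vi: str, preset: str, info: str = "") -> str:
    """Name a region/clip so it records which VI and preset made the sound."""
    parts = [vi, preset] + ([info] if info else [])
    return "-".join(p.replace(" ", "_") for p in parts)

def project_name(title: str, key: str, bpm: int, sample_rate_khz: int) -> str:
    """Name the project with title, key, tempo and sample rate."""
    return f"{title}_{key}_{bpm}bpm_{sample_rate_khz}kHz"

print(region_name("Kontakt5", "SA-trpts-muted-stac"))  # Kontakt5-SA-trpts-muted-stac
print(project_name("MyCue", "Dmin", 92, 48))           # MyCue_Dmin_92bpm_48kHz
```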
 
Thanks for some great advice. I use Studio One as well. I was not aware of how the Transform to Audio Track option actually behaves, but after some testing I can see how this could be the solution.
As already mentioned above, it seems I can use this option to have one project double as both composition and mixing template. It also keeps the routing to FX channels and busses if that routing has already been set up on the instrument track, which is great. And it can remove the instrument from the song as well, with the option to transform back to an instrument track if needed. Seems like a perfect hybrid solution. I guess I need some more experience before calling it perfect, but it looks promising. :)
 
I love that Transform to Audio feature in Studio One; its implementation is fantastic. It lets you keep working while in audio, and if you go back to MIDI, all the work you did since then is not lost. When I have to use Logic, I really miss that and can't wait to be back in S1. And I never thought I would ever feel that way, because for so many years I wanted nothing to do with any DAW other than Logic, I loved it so much.

S1 freezing makes it really easy to reduce the CPU load while still being able to change audio things and know that your MIDI will still follow. It's a great way to work. When I first started with Logic Platinum 4, freeze had not been invented yet, so I was making two projects: one for MIDI (creation) and one for audio (mix). I would find that I wanted to make some changes or add parts, but would end up doing them with new MIDI tracks in the audio version to save time, instead of reopening the MIDI project and exporting out. This made for a total mess and out-of-sync projects.

Another handy thing for you: in S1 you can simply drag your MIDI and audio parts into your song folder in the Browser > Files tab, so you don't have to go through an official export. Very quick. (Hold down Option while dragging the MIDI to save it as a MIDI file instead of a music loop.)

Before saving the MIDI files, I join the MIDI on the track into one continuous event (starting at bar 1) and Option-drag it to the song folder (into a new MIDI folder) to save it. I then hit undo so that the MIDI parts are split back to their original lengths (no longer starting at bar 1).
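If you ever need to do that kind of consolidation on an already-exported MIDI file outside the DAW, here is a rough sketch using the third-party mido library (my choice for illustration, not part of the workflow above); the file names are placeholders.

```python
# Sketch: flatten a multi-track MIDI export into a single type-0 file,
# roughly mirroring the "join into one continuous event" idea above.
import mido

def flatten_midi(src_path: str, dst_path: str) -> None:
    src = mido.MidiFile(src_path)
    flat = mido.MidiFile(type=0, ticks_per_beat=src.ticks_per_beat)
    # merge_tracks interleaves all tracks into one while preserving timing
    flat.tracks.append(mido.merge_tracks(src.tracks))
    flat.save(dst_path)

flatten_midi("strings_export.mid", "strings_export_flat.mid")  # placeholder paths
```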
 
Thanks again for the awesome tips. My initial question is hereby solved with Transform to Audio Track.
I hope it's OK to continue this thread with some mixing questions. At the moment I'm working on a song that I intend to use as a comp/mix template, so I need to gather some general mixing information as well. I'm already watching YouTube tutorials and there is a lot of information out there, so instead of asking general questions I want to ask specifically about how you deal with audio level/volume. Before transforming to audio, do you set all virtual instruments to a certain dB (in Kontakt, for example, the default is -6 dB)? Or do you normalize all audio events to a certain dB level after they have been transformed to audio? In other words, what is the best starting point before using the faders to balance the mix and automate?
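For reference on the normalization side of that question, here is a minimal sketch of the dB arithmetic behind peak-normalizing a bounced file to a target level such as -6 dBFS, using numpy and the third-party soundfile library; the target value and file names are placeholders, not a recommendation.

```python
# Sketch: measure peak level in dBFS and scale a bounced track to a target peak.
import numpy as np
import soundfile as sf

def peak_dbfs(audio: np.ndarray) -> float:
    """Peak level in dB relative to full scale (0 dBFS = |sample| of 1.0)."""
    peak = float(np.max(np.abs(audio)))
    return 20.0 * np.log10(peak) if peak > 0 else float("-inf")

def normalize_to(audio: np.ndarray, target_dbfs: float = -6.0) -> np.ndarray:
    """Scale the signal so its peak sits at target_dbfs."""
    gain_db = target_dbfs - peak_dbfs(audio)       # how many dB to move the peak
    return audio * (10.0 ** (gain_db / 20.0))      # convert dB to a linear factor

audio, sr = sf.read("celli_bounce.wav")            # placeholder file name
sf.write("celli_bounce_norm.wav", normalize_to(audio, -6.0), sr)
```

The sketch only shows the arithmetic; whether normalizing is a useful starting point at all is exactly what the question above is asking.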
 
1. Do you think it's necessary, or are there any advantages, to bounce all instrument tracks to audio before mixing?

Not an "advantage" but I try and mix my own tracks the way I would mix my client's tracks. So I always bounce the midi to audio because I request the same from others. The difference when mixing my own music is that I have control over the "performance" and arrangement of my tracks while I'm writing them. I focus on that during writing so during mixing, all I need to do is mix. All I'm trying to do is separate the writing and mixing as much as I can as it helps me get both done efficiently.

2. Does anyone keep two copies of the same song (one with instrument tracks and one with audio tracks)? Or is it just me overcomplicating things?

Sounds unnecessary to me.
 
If the composition is "done", mix the audio. It eliminates visual confusion when you have high track counts and the occasional mystery sound popping up. Unless you are dealing with "live audio" as well, you shouldn't have any level problems just bouncing what you hear. To paraphrase CharlieC: as long as there's no clipping, it's good.

If you need to do edits to the form/structure, it might be better to stay in MIDI until you have that locked down.

You can also export your MIDI into a separate file for future tweaks, as long as you are good about labeling your tracks with the instruments used (Kontakt5-SA-trpts-muted-stac, etc.). That will also let you create smaller backup files in a master folder. Just. In. Case.
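As a rough illustration of that "backup files into a master folder" idea, here is a small sketch that copies exported MIDI and audio into a dated folder; the folder layout, file patterns and paths are assumptions for the example.

```python
# Sketch: copy exported MIDI/audio into a dated backup folder. Just. In. Case.
import shutil
from datetime import date
from pathlib import Path

def backup_exports(song_dir: str, backup_root: str = "Backups") -> Path:
    src = Path(song_dir)
    dest = Path(backup_root) / f"{src.name}_{date.today():%Y-%m-%d}"
    dest.mkdir(parents=True, exist_ok=True)
    for pattern in ("*.mid", "*.wav"):             # placeholder file patterns
        for f in src.glob(pattern):
            shutil.copy2(f, dest / f.name)         # copy2 keeps timestamps
    return dest

backup_exports("MySong/Exports")                   # placeholder path
```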
 
With the orchestra spread across three server computers plus the DAW, I generate audio stems for each section, i.e. violins 1, violins 2, violas, celli, basses, and separately anything I might treat differently, such as a solo violin or another library's unusual sound (a very different playing technique to blend in). I do the same across the other orchestral sections; percussion often gets the most tracks.

Then I consolidate those tracks into fewer stems, such as high strings, low strings, etc., again with more tracks for percussion (snares/timpani/bass drum/melodic metals/larger mallets). I usually wind up with about 30-36 stereo stems and final-mix those (there may be a low synth to reinforce the contrabasses, maybe choir, a live performance, etc.). 30-36 is pretty manageable for me. I use mixer views of each major orchestral section, with any associated group/VCA/FX tracks, and shortcut-switch among them quickly to match up to my 9-fader hardware controller. I like this setup.

Remember that MIDI rarely reproduces the same way twice in a row, because of round robins and other factors. When mixing, the audio is dependable, and I still have access to the first, smaller stems if needed.
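To make the consolidation step concrete, here is a minimal sketch of summing section stems into a group stem as files (the post above does this inside the DAW; numpy/soundfile and the file names are assumptions for illustration). It assumes the stems share the same length, channel count and sample rate.

```python
# Sketch: sum section stems (e.g. violins 1/2, violas) into one "high strings"
# group stem. Assumes equal length, channels and sample rate for every file.
import numpy as np
import soundfile as sf

def consolidate(stem_paths: list[str], out_path: str) -> None:
    mixed, rate = None, None
    for path in stem_paths:
        audio, sr = sf.read(path)
        rate = sr
        mixed = audio if mixed is None else mixed + audio
    # write 32-bit float so summed peaks keep their headroom above 0 dBFS
    sf.write(out_path, mixed, rate, subtype="FLOAT")

consolidate(["violins1.wav", "violins2.wav", "violas.wav"], "high_strings.wav")
```

Summing the files sample by sample is what a unity-gain bus would do; any balancing still happens afterwards on the consolidated stems.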
 