
Question about mixing in audio vs. midi

MarcusMaximus

Senior Member
I have a slave setup with Logic on a Mac and VEPro on a PC. I've just finished a big orchestral piece and am about to mix it. I usually like to bounce all tracks to audio (using bounce in place) and mix with those, as this has a lot of advantages, like saving resources and enabling easy clip-gain editing. Previously this worked fine when I was doing everything within Logic, with no slave or VEPro. However, with the slave setup I'm now finding that the bounced tracks end up with problems, such as many sustained notes retaining only their attack, with the rest cut off. This is when using Hollywood Orchestra with Play 5, which seems to be a well-documented problem. Apparently Play 6 fixes this, but I haven't upgraded and am loath to do so mid-project, because the slave houses my template and I don't want to risk destabilizing it at this stage.

I haven't tried real-time or online bouncing yet, but I imagine that should work OK. The problem is it will take ages to do that for every track, and I don't want to group the tracks into stems at this stage because I still need to work with them individually. So I'm wondering about the best way to approach this: mix in midi, take the leap and upgrade Play to hopefully enable proper offline bouncing, or render every track to audio in real time? Or is there something else I'm missing?

Any thoughts or suggestions would be welcome. Thanks.
 
But surely if I bounce multiple tracks at once in real time they will come out as one consolidated track? I'm not aware of an option to do a 'bounce in place' on a per-track basis in real time; it appears to be an offline-only process, but I'll have to check that again.

In terms of updating Play, I absolutely will, but would you say it would be safe enough to do that at this point, i.e. mid-project? Can I rely on the update to keep intact all the settings, volume levels, etc. that I have painstakingly made?

Thanks for your input.
 
Hmm.

I try to mix quite a bit in midi in order to be able, later, to print at least some groupings. Those can be big (such as "all brass") or narrow (putting, say, trumpets, French horns, trombones, and low brass on separate tracks). Some people take it further and put shorts and longs on separate tracks -- I do that for strings, but not for brass. I have 8 or 10 splits set up for percussion, then a final stem for each family (strings, brass, vocals, percussion, synths, FX, etc.).

I don't know if that helps you or not.

Kind regards,

John
 
Thanks John. What you describe makes sense; however, the way I prefer to mix is with each individual track as audio rather than with any groupings, at least initially. To be honest, like a lot of people, I think I do quite a lot of the 'mixing' as I create the piece anyway, so I've probably done as much as I need to with the midi tracks at this stage (but isn't there always more tweaking I could do?). I have adapted Jake Jackson's approach from a Thinkspace course I did some time ago, so that's where my bias for audio tracks comes from, and it has worked pretty well so far. However, if I could have the same flexibility and ease editing the midi as I find with the audio, then I might well be able to mix the whole thing in midi, or at least up to the final stages. I'll have to experiment a bit more.

The slave setup has completely changed the way I work, because it is so much more reliable and has enabled my system to cope with large projects without having to freeze most of the tracks. The downside, however, is this inability to cope with offline bouncing in place.
 
I have a similar setup to yours (except for the usage of Play) and my DAW is Cubase but that shouldn't matter. I never print anything when mixing. I simply have the audio outputs from all of my MIDI instruments organized and grouped in my mix window as if they were audio tracks. I'll apply EQ, dynamics, FX, etc. to these channels as needed.

My groups output to 'stem' busses should I need stems (in Cubase I can do a batch bounce of these outputs). If not, I just leave these alone as they output to my mix buss.

Once I'm happy with the mix I bounce out whatever mixes I need (full mix, stems, etc.).

So regardless of what I need to deliver I never have to print individual MIDI tracks to an audio track.

Of course, I am not a mix engineer so I'm sure there is a better way. That said, this process has worked for me for years and I've always been able to quickly mix projects regardless of size.
 
Yes, I could do that, i.e. apply all the effects directly to the midi tracks, but I'm not sure my system would cope without choking up. I've already had to freeze a couple of tracks that use instruments that aren't on the slave; I'm also using two Logic Drummer tracks in the project, and if I have either of them selected during playback I tend to get system overloads (a Logic thing!). So part of the preference for audio is to free up my system so that I can apply reverb, delay, EQ, etc. to the tracks without issues, but it is also because I find manipulating the audio so much easier. For example, if a phrase on one instrument is too loud or soft, I can simply make that into a region and adjust the clip gain rather than having to go into the midi track and change the expression level or do volume automation, all of which is much more time-consuming.

However, it's definitely a case of each to their own when it comes to mixing. I'm sure there are many people who would never dream of rendering all the tracks, or perhaps would never have to due to the power of their system. Equally I'm sure there are many people who swear by mixing in audio only.
 
No, I've steered clear of that based on a lot of advice! I am using the single instance-per-instrument approach that has often been recommended for Logic users. Each instance contains all the articulations for that instrument. I always work decoupled and I use ARTzID to manage the articulations etc. It all works pretty seamlessly.

I just tried a realtime bounce on one track and it worked fine, so the problem is only with offline bouncing, which is how the bounce-in-place process works; there's no way to do that in real time. The realtime approach involves a few extra steps, such as placing the audio track back in the project, but that's no big deal I suppose. It's all quite time-consuming though, playing through the whole project in real time for each and every track - and there are a lot of them, as you can imagine! I haven't yet found a way to bounce multiple tracks in real time and have them show up as separate tracks, but I suppose it's probably down to sorting out the routing some more.
 
A couple of comments.

  1. The VEP multiport macro works a lot better now than it used to, after I fixed it. It gets a bad rap for two reasons. ONE, it does not fare well with ARTzID, which itself thrives on one track per VEP instance and doesn't really support the use of any multiport macros.

    And TWO, VSL released their multiport macro templates a few years back with bugs that made everyone frustrated, and never resolved them, instead saying "wait for AU3". The version that I fixed works quite well, but I would not recommend it if you are using ARTzID.

  2. Bouncing midi tracks to audio is always a good idea for any project at some point, regardless of CPU usage, because down the road, if you reopen an old project, you don't want to have to depend on your instrument tracks playing back exactly the same way as they did in 2001, or whenever. Bouncing to audio gives you something you can mix later, with no concerns about the software instrument having changed or become incompatible. Especially if you're using VEP: what if VEP, or the interaction, changes in some way? There are no promises your midi track will do what it did back then. I say once you have the midi track performance the way you want with the instrument library you plan to use, burn it to audio. Keep the midi around in case you need to update the performance or decide to use a different software instrument.

  3. Aside from the above, there can be some performance advantage to burning midi tracks to audio tracks, but my 2010 Mac Pro is easily able to play back 100+ tracks of VSL instruments directly from midi, with plenty of CPU headroom to breathe, so for me that's rarely a concern in terms of CPU, and it certainly doesn't justify the manual labor. But for point 2 above, it's still not a bad idea to do it eventually.

  4. Freezing is a very handy feature in LPX, but you pretty much have to stick to one track per VEP instance to do that -- one of the arguments for that approach.

  5. If you use any kind of environment instruments to route to your software instruments, as is the case with the VEP multiport templates, then you have to bounce in real time, and you get no argument from me: it's a PITA and time-consuming in Logic Pro today.
 
I think the only way to automate doing a real-time bounce on, say, 100 tracks would be to write some kind of Keyboard Maestro macro or something of that nature.
 
The other comment I want to make is about where to "mix". In the midi track, in the instrument or audio track, etc.

I personally think it's a good approach to automate the "performance" in the midi track. That is pretty much all your midi events and CC automation, EXCLUDING in most cases CC7, which I think should be left set to 127 at all times unless you're using it to balance out volume between articulations or something, and even then it will probably need very few changes. Set it and forget it on CC7. CC11 and other controllers used for "performance" dynamics all belong in the midi track. Also, if you're using a software instrument that has performance-oriented changes, such as filter sweeps, automate that in the midi or instrument track.

The actual mix you automate on the audio track. If you're not bouncing to an audio track, then you automate it on the same instrument channel you were using for the midi stuff mentioned above, except you're automating the channel fader (not CC7), pan, EQs, etc. as they exist on that instrument channel.

But if you bounce to audio, then you can put that kind of mix automation on the audio track and keep it conceptually separated from the midi performance.
 
Thanks Dewdman, will comment more on what you've written tomorrow as it's late at night here.

Of course it's easy enough to 'bounce' multiple tracks at once, as I've just discovered, or rather remembered! It's just a matter of setting up the routing so that the individual midi tracks' outputs go to their respective audio tracks' inputs, and then recording to those tracks. So I could theoretically record all the tracks in one real-time pass if I had them all set up, but I'll probably do it section by section (by orchestral section) to save on resources. Doable, but a bit of a pain to set up, so if anyone has a more elegant solution I'm all ears!
 
OK, so I set up enough audio tracks for the wind section and recorded directly to those. It involved a load more buses and aux tracks, but it seems to have worked fine, so I'm going to do the whole project that way, as I do want those audio tracks for mixing. I'd love to find a less time-consuming way to print the individual midi tracks to audio, but on balance it's probably better to do it in real time anyway. At least I've found a way to do it with multiple tracks at the same time.

I don't need the multiport solution, as the approach I use works fine, and I use ARTzID, so that's that really. I have used freezing in the past on whole projects just to get them to play back, but that is no longer an issue with the slave setup, except for a couple of Mac-based software instrument tracks which seem to choke up resources during playback. Yes, I do all my CC automation on the midi tracks. Again, ARTzID comes into its own here because it allows you to apply any CC automation to all articulations within any one instrument. Brilliant.
 
Oh dear, more problems! I've just checked some of the audio files I recorded from the midi tracks, and there's an oboe ostinato which sounds different in the audio version. The midi sounds smooth, with all the notes connected, whereas in the audio the notes sound more separate, as if the player were playing a sort of slurred staccato. In simpler terms, it's as if the notes are being cut off before they finish. This is what was happening with the offline bounce-in-place process, except much more dramatically, in that most of the sustained notes were being cut off. I've tried to bounce just that oboe track on its own in real time, but I end up with the same result.

I'd post an example of what I'm hearing, except I can't get a reliable recording of how the midi sounds - that's the problem! It seems that bouncing, or recording, individual midi tracks to their audio counterparts isn't working properly for some reason. This is obviously very problematic, as I seem to be unable to render either individual tracks or the whole project to audio properly, even in real time. It must be some issue between Play, VEPro and Logic, but I can't figure it out. Can anyone shed any light on this, by any chance?
 
That is very interesting, and I would definitely like to hear whatever you figure out, or to see an example project we could look at and try to reproduce the issue with here.
 
I think this sounds completely possible, probably having something to do with sloppy coding somewhere. As Dewdman said, the easiest way to check it out would be if someone had the same midi file you're using and the same library.

Not trying to hog your topic, but I have also encountered an issue in Cubase where offline-exporting BWW instrument tracks that use different midi channels for different articulations tends to leave a TON of hanging notes in the exported audio file, while the VST track plays back completely fine in real time when listening to the project in Cubase. I still haven't figured out how to tackle it, but in that sense I can understand your frustration and am also interested to find out what could be causing it for you.
 
Thanks guys. The thing is, this never happened when I was doing everything on the one machine: all audio, whether bounced in realtime or offline, sounded exactly the same as the original midi track. It's only since I've been using the slave/VEPro setup that this issue has arisen, and I'm only noticing it now because this is the first project on that setup that I've got to the point of mixing/bouncing, i.e. the first time I've had to render tracks to audio. Interestingly, the other tracks that I've checked are fine, as is the rest of that oboe track. It seems to be the fast ostinato figure that's throwing things off. I'll listen through the rest of the tracks to see how they turned out.

Perhaps this is related to the documented issue with Play 5, in which case the solution would be to update Play to version 6, which is supposed to have fixed these kinds of issues. However, that takes me back to the risk of screwing up the existing Play settings in my template mid-project. I've posted this question about safety on the Soundsonline forum, but so far no replies.
 