Mihkel Zilmer - Cinematic Music Production Tips

Mihkel Zilmer

Senior Member
Hello VI crowd!

Time for a bit of shameless promotion of my YouTube channel...
Some of you might remember my Cubase, VEPro & Lemur template video series from 2017.

Well, a little while back I started making another video series - a collection of short and (hopefully) sweet tips for cinematic music production.
Anything and everything to do with samples, audio plugins, gear, computers, workflow and so on.

I make these videos in my downtime between film and game gigs, which is why they tend to turn up at a rather erratic pace - thanks for your patience!

Also, if you have any questions about the topics I've covered, or any suggestions for other topics you'd like to hear more about, then let me know and I'll try to address them!

Thanks for watching! Here are the first 4 videos of the series; I'll post more as I upload them:

 

Alexandre

Member
Yes, thanks! I am very grateful for pros taking the time to make tutorials on Cubase... and even more so when they do it well!
 

Rob Elliott

Senior Member
Wonderful, Mihkel. I still appreciate your routing scheme (for stems). Just finished a 120-minute feature and cannot tell you how much time I saved using your ideas. THANKS A TON! Looking forward to checking out these new ones.

Edit: watching more of your production tips videos and WOW - these are so well laid out. Just excellent!
 
Mihkel Zilmer

Senior Member
Thanks, Rob!

New video is up. Not so much about production this time - more about how my template and setup have evolved since 2017, when I originally made the template videos. Details on how I route the multi-mic setup described in the multi-mic mixing video start around 3:07.

 

Brueland

New Member
Any chance you could share some of these projects or templates? Or would you recommend I just go ahead and set it up on my own, given that I'm using Cubase Pro 10 and not Nuendo? Alternatively, it would be awesome if you made a video on the exact differences in routing and setup. Just from watching the recent videos, I can't get a full grasp of what has evolved since 2017. Great work!
 
Mihkel Zilmer

Senior Member
Apart from the surround routing, I've already covered pretty much everything else in the latest video.
I do suggest building your own - there's no better way to wrap your head around a more complex setup like this, and more importantly, everybody has their own preferences. I'm more trying to provide new ways of thinking about things.

For surround routing, I'll definitely make a video at some point, but I'm getting pretty busy with a new film at the moment, so no promises. And since I've already recorded two more production tips videos on other topics, I'll edit those as a priority when I can.

I'll give you a brief overview of the routing for now. I'm using something that has been requested of me on the last couple of films I've worked on: instead of 5.1 files, stereo front and rear pairs. They wanted full control over when and how much the Center channel is used, and asked for stereo pairs rather than 4.0 for ease of use. So I am routing the Room & Ambient mics into "rear" stems. These stems have their own reverb and FX sends, separate from the main stems. When delivering stereo I simply route these rear stems plus their FX into the main stems; otherwise I keep them separate.
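If it helps to see that written out, here's the same routing as a rough sketch in code - the bus and stem names are placeholders I've made up for illustration; the real thing of course lives in the DAW's routing, not in code:

```python
# Rough sketch of the stem routing described above. All names are
# invented placeholders; the actual routing lives inside the DAW.

# Which stem group each mic signal feeds.
MIC_TO_STEM = {
    "close_mics":   "main_stems",
    "room_mics":    "rear_stems",   # room & ambient mics go to the rears
    "ambient_mics": "rear_stems",
}

# Each stem group has its own reverb/FX sends, kept separate.
STEM_SENDS = {
    "main_stems": ["main_reverb", "main_fx"],
    "rear_stems": ["rear_reverb", "rear_fx"],
}

def output_routing(stereo_delivery: bool) -> dict:
    """Where each stem group and its FX returns finally go."""
    if stereo_delivery:
        # Stereo delivery: fold the rear stems + their FX into the mains.
        return {
            "main_stems":  "master_out",
            "rear_stems":  "main_stems",
            "rear_reverb": "main_stems",
            "rear_fx":     "main_stems",
        }
    # Surround delivery: separate stereo front and rear pairs.
    return {
        "main_stems":  "front_pair_out",
        "rear_stems":  "rear_pair_out",
        "rear_reverb": "rear_pair_out",
        "rear_fx":     "rear_pair_out",
    }
```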
 

rlw

Rod Wilson
Mihkel,

Super impressive videos. I really appreciate your tips, template overview and particularly your willingness to share such insightful experience and knowledge. Thank you very much.
 
Mihkel Zilmer

Senior Member
You're most welcome! I'm happy to share my knowledge with the community.

And here's the next video - EQ: High-Pass Filter. This one is a little controversial, as many people blindly throw an HPF on every channel. I don't agree with that; instead I think: 1) less is more when it comes to processing audio, especially if we're talking about orchestral samples; and 2) nothing should be done by default while mixing - everything should have a good reason, and every tool should be used with a bit of caution.

 

kingy75

New Member
Hey Mihkel,

Your videos look super helpful, thanks so much. Looking forward to checking them out today.

I have a mixing question if I may.

All my work currently uses virtual instruments, and I've gain-staged all my tracks (in my initial project) so they're peaking at a maximum of around -18 dB. With these MIDI tracks, would you export them as audio tracks, import them into a new project, and begin mixing/adding EQ/FX from there?

Would you normalize your new audio tracks first in the new project?

I have a reference track for my current project but it would be good to have some guidelines for what kind of levels to aim for to avoid clipping, if that makes sense.

Thanks so much :)
 
Mihkel Zilmer

Senior Member
Thanks! Some free-form thoughts as a reply - excuse the slight lack of structure, I'm only on my first cup of coffee as I write this :)

I personally keep everything in the same project. The main reason is that I need to be able to go back and quickly address changes to the picture edit and/or any feedback I've received. In fact, I print to audio as soon as I am happy with the MIDI performance, to avoid any inconsistencies with MIDI playback (different round-robins for example). I print mic positions separately, so I can still work on the mic balance after I've printed to audio.

If you never need to go back and revisit the music itself, and like keeping things neat and tidy, then exporting and starting a new project can be a nice way to work. That's down to personal preference.

But I would say that sometimes during mixing, instead of reaching for EQ or tweaking volume automation, I get the best result from shaping the MIDI performance. So having the flexibility to go back and forth is still nice.

I never normalize any audio files. This would destroy the balance between different orchestral sections, a balance that I have worked hard to maintain with good orchestration and a mostly light-handed mixing approach.

Sounds like you have spent a lot of time on gain staging. How did you approach this? What is your goal?

Some people aim for a 'realistic' orchestral mix. What's actually 'realistic' is kind of open to interpretation. In the era of multi-micing and all kinds of signal processing, even a 'real' orchestral recording isn't a perfect representation of instrument balance, or of how the orchestra sounds in the room.

Others take a more creative approach to mixing, where you can turn things on their head. And why not, mixing is a creative process after all. You can make the softest sound the most dominant in the mix if you want. If it sounds good, and it gets you the emotional effect you are looking for, then it's all good.

If you are after some semblance of a realistic balance, then setting every instrument to peak at the same level is a really bad idea - a solo violin does not put out the same amount of volume as 12 horns. If you are in the latter group (the creative-mixing camp), then by all means go for it, if it somehow helps you mix faster or better. This is why I asked about your goal with gain staging.
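
To make that concrete with some made-up numbers - normalizing (or gain-staging) every track to the same peak applies a different gain to each one, which is exactly what wipes out the balance:

```python
# Made-up linear peak levels, purely to illustrate the point.
peaks = {"solo_violin": 0.04, "horns_x12": 0.70}  # ~25 dB apart

# Normalizing both to the same target peak (0.5, i.e. -6 dBFS)
# applies a very different gain to each track...
target = 0.5
gains = {name: target / peak for name, peak in peaks.items()}
print(gains)  # violin boosted 12.5x, horns attenuated to ~0.71x

# ...so afterwards both peak at exactly the same level, and the
# natural ~25 dB difference between the sections is gone.
```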

And finally, about levels - headroom is generally good. Individual tracks peaking at -18 dB should leave lots of space for mix bus processing and mastering. But there are other variables here, like dynamic range (which depends on the style of the music and the format of delivery). When the music is going to be released separately, I usually aim for the mix peaking around -6 dB before applying mastering processing. In the film (the original delivery) things are quite different, and peaks are usually much lower. But then it's the dubbing engineer's job to set those levels; I just need to give them deliveries with levels roughly matching the rest of the mix, and they will set the final levels.
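
If numbers help, here's the arithmetic behind those figures (just a quick sketch; 0 dBFS = digital full scale):

```python
def db_to_linear(db: float) -> float:
    """Convert a dBFS value to linear amplitude (0 dBFS = 1.0)."""
    return 10 ** (db / 20)

print(db_to_linear(-18))  # ~0.126 - a -18 dB track peak uses ~1/8 of full scale
print(db_to_linear(-6))   # ~0.501 - the pre-mastering mix peak, 6 dB of headroom
```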

Ok, that was a bit long-winded, hope some of it helps!
 

kingy75

New Member
Hey, thanks for your reply, super helpful.

Re: gain staging, I did that in my initial project, with the MIDI tracks as I went, before freezing the synths. Not sure if that's a sensible approach, but I'd heard from others that it was one way to do it. I'd also heard that the -18 dB peak level was a good idea.

I should say I'm quite experienced in MIDI programming, but less so with audio processing of exported/bounced MIDI tracks.

I prefer doing all my volume automation in the MIDI performance too, but I think I can improve my tracks by learning more about audio processing.

It seems simple, really: shape your MIDI tracks, bounce to audio, and there you go. But I always seem to run into issues where my finished tracks aren't as sonically polished (and definitely not as loud) as commercial/radio tracks. That's the reason I normalized these tracks before further audio processing this time - not sure whether that was a smart move or not.

I always run into trouble with one particular trumpet-modelling synth. It sounds great, but I can never get it as bright and punchy as I'd like without it going over 0 dB and/or hearing clipping. Perhaps normalizing is my enemy there? I'm just not sure how to get it loud enough to sound like a real brass section (jazz/big band style in this case).

Does any of that make sense? :)
 