
What is gain staging and is it important?

I've found that mixing became a lot easier for me when I started "gain-staging" everything correctly. Many VST-emulations of analogue gear "expect" a certain input level and sound best (subjectively of course) when you hit them within a specific range... Also, not having to pull your faders down so much gives you a lot more precise control over the mix, since it all tends to get fiddly with lower fader levels (naturally). So, since I try to pay attention to correct levelling before it hits the level-fader in the mixer, it all falls into place a lot easier for me.
 
How does it apply to an orchestral template that has so many more instruments than the average song and covers such a wide dynamic range?

I've always balanced my template using the top layer as my ceiling, but most of the time my music tends to sit at mp with the occasional ff. That almost always results in a much lower than optimal output level. I haven't tested this myself, but surely if you make all the tracks hit -6 dB max, the output must clip on a ff moment. On the other hand, when you don't do that, it gets nearly impossible to average -18 dB on every track. Of course you can always reduce the gain at the end using a gain plugin on the output, but if that's the case, what's the better option? Or should one use multiple templates depending on the overall dynamic level of the piece?
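To put a rough number on that clipping worry, here's a minimal numpy sketch (just an illustration with made-up values, and a worst case because the tracks are fully correlated):

import numpy as np

fs = 48000
t = np.arange(fs) / fs
# one track peaking at -6 dBFS (linear amplitude ~0.5)
track = 10 ** (-6 / 20) * np.sin(2 * np.pi * 220 * t)
# worst case: 20 such tracks playing exactly the same thing
mix = np.sum([track] * 20, axis=0)
print(20 * np.log10(np.max(np.abs(mix))))   # roughly +20 dBFS on the bus

Real orchestral parts are only partly correlated, so the sum is usually gentler than that, but a big tutti can clearly eat up the headroom.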
 
I've found that mixing became a lot easier for me when I started "gain-staging" everything correctly. Many VST-emulations of analogue gear "expect" a certain input level and sound best (subjectively of course) when you hit them within a specific range... Also, not having to pull your faders down so much gives you a lot more precise control over the mix, since it all tends to get fiddly with lower fader levels (naturally). So, since I try to pay attention to correct levelling before it hits the level-fader in the mixer, it all falls into place a lot easier for me.

This. Also true with certain FX plugins.
 
Yes. As the first insert. I play the track and let the autogain work.
Got VUmeter on the strength of your mention. Loaded a 55-track project which had taken quite some time to gain stage originally: track by track, finding the highest level and playing just that section a couple of times while watching the meter and tweaking the gain.

Printscreened the gains and reset them to 0. Took 5 minutes or so to insert VuMeter on all the tracks, hit Auto and play the mix through once, then compared to my original gain staging. Most were within 1 or 2 dB, a couple varied more but nothing off the charts. A couple were spot on. Plenty good enough to start mixing with.

So what was previously a long slog will now be easy peasy on my next large project. Well worth the US$7.34.
 
I use Logic and various orchestral and synth virtual instruments. Now for controlling track volume... could someone please explain the difference between:

1) using the virtual instrument's own volume control in a given track... and
2) using Logic's "gain" plugin on that same track... and
3) using Logic's mixer and the individual volume slider for that same track

My normal workflow is:

1) leave all of Logic's mixer volume faders at "0"
2) record instrument track (by regions) via controller keyboard.
3) adjust track volume within the instrument (as in Kontakt or Play or SINE volume control)
4) either during or after recording, use cc1 and cc11 for dynamic contrast within each region.
5) sometimes when needing larger volume changes, I'll use Logic's gain plugin
6) after panning, EQ, reverb, other plugins, etc... make final balancing adjustments via Logic's mixer volume sliders.
7) if needed (only after balancing with no clipping) add Logic's limiter plugin (set to -0.5 dB) to the final mix bus. I don't do this often, especially since much of my music has very soft passages that I do not want made louder.

Thanks
 
How does it apply to an orchestral template that has so many more instruments than the average song and covers such a wide dynamic range?

I've always balanced my template using the top layer as my ceiling, but most of the time my music tends to sit at mp with the occasional ff. That almost always results in a much lower than optimal output level. I haven't tested this myself, but surely if you make all the tracks hit -6 dB max, the output must clip on a ff moment. On the other hand, when you don't do that, it gets nearly impossible to average -18 dB on every track. Of course you can always reduce the gain at the end using a gain plugin on the output, but if that's the case, what's the better option? Or should one use multiple templates depending on the overall dynamic level of the piece?
Hey Mattia... As you know, orchestral music is far from being slammed like pop music... so you have nothing to worry about in terms of trying to get "hot" levels... From a mixing standpoint, it is really important to keep your MASTER bus level to no more than about 75%... this works with any DAW: keep it barely hitting yellow. Your individual tracks can certainly peak into the red from time to time... there is plenty of headroom and it's normal. In the analog world, we used to hit red because it sounded better... LOL. BUT the master bus was always reasonable. I would not worry about individual tracks, but rather your STEM tracks in terms of level... then that level into the master bus.

The -18 comes into play when you need to feed analog-type plugins. Most analog plugin emulations sound best when fed with -18 dBFS, which corresponds to 0 VU in the analog world, but it's good practice for digital as well. This is optimum for the plugin, NOT for your tracks. Your track output can certainly hit red when mixing. So as Akarin pointed out, use a VU meter as your first insert but set it to RMS... lower your SOURCE audio wave (clip), not the track level, to hit the meter at -18. You can also use a trim plugin BEFORE the meter plugin to get your levels... this might be easier. Truthfully, what I do is use one instance of a VU meter... once my level is set, I move it over to the next track... no need to keep the VU meter on every track once it's set, as this will not change unless you manually change the audio clip gain. It is certainly a bit of a pain to do but worth it... again, this is ONLY during mixing, once your tracks are rendered. If you don't render... you should. You will have much more flexibility to manipulate the audio than you do MIDI.
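If it helps, the math the meter and trim are doing is nothing fancy. Here's a minimal Python sketch (purely an illustration with a made-up clip, and ignoring the VU ballistics, which average over roughly 300 ms):

import numpy as np

def trim_to_target(clip, target_dbfs=-18.0):
    # steady-state RMS of the clip in dBFS (a full-scale sine reads about -3 dBFS RMS)
    rms_dbfs = 20 * np.log10(np.sqrt(np.mean(clip ** 2)))
    trim_db = target_dbfs - rms_dbfs        # gain to apply before the analog-style plugin
    return clip * 10 ** (trim_db / 20), trim_db

fs = 48000
clip = 0.8 * np.sin(2 * np.pi * 110 * np.arange(fs) / fs)   # hypothetical rendered clip
trimmed, trim_db = trim_to_target(clip)
print(round(trim_db, 1), "dB of trim to sit at -18 dBFS RMS")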

In terms of multiple templates: The principle of proper level still applies no matter what. The difference in dynamics becomes relevant in the mastering stage... how hard you are limiting. Orchestral/movie scores are very dynamic and have very little processing... trailer/hybrid tracks and scores like a Zimmer or Goransson track can certainly be heavier on the processing in the mastering stage. But anything BEFORE mastering should always fall within proper gain staging (usually -14 at this point)... this is of course for best sonics.

If you have a very transparent limiting plugin (like the Weiss MM-1), then it should be easier to get your levels up without sacrificing dynamics.

Hope this makes some kind of sense and answers your question. Sorry for the long-winded reply... you have helped me with BBCSO and answered some of my questions, just wanted to return the favor. :)
 
Firmly in the "gain staging is still important" crowd. Most of the reasons have been covered in this thread regarding analog-modeled plugins. I did want to add one more point I didn't see mentioned yet (apologies if overlooked)...

  • While it is true that new 32-bit interfaces can provide an insane amount of headroom and recovery potential, objectively, proper gain staging is still important to get the best sound out of the analog parts of the signal chain. You should view those 32-bit interfaces as tools to help recover from something unexpected that occurs even with proper gain staging in place... like recording dialogue and then there is a louder-than-expected siren or explosion in the background that would normally distort the audio being recorded. Or... say a singer gets really explosive with a certain part of the song that was set properly during demo takes but would clip in a traditional 24-bit signal flow. Use the 32-bit capabilities to recover a potentially great take.
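A minimal numpy sketch of that recovery idea (illustrative numbers only; the 24-bit path is simulated here with a hard clip at full scale):

import numpy as np

fs = 48000
take = 2.0 * np.sin(2 * np.pi * 440 * np.arange(fs) / fs)   # surprise peak at +6 dBFS

float_path = take.astype(np.float32)      # 32-bit float capture keeps values above 1.0
fixed_path = np.clip(take, -1.0, 1.0)     # traditional fixed-point path flattens them

recovered = 0.5 * float_path              # pull the float take down 6 dB afterwards
print(np.max(np.abs(recovered - 0.5 * take)))         # ~1e-7, just float32 rounding: the take is intact
print(np.max(np.abs(0.5 * fixed_path - 0.5 * take)))  # 0.5: the clipping is baked in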
 
Thanks @jaketanner, really appreciate it. I never render or hardly even mix my music, tbh. I kind of like to do it once, when building a new template, and then just make really minor adjustments. This thread has been helpful though, as I'd like to learn more about these technical things.
 
I gain-stage with the gain fader in Cubase (for recordings; most virtual instruments I simply mix via sub-mixes in VEP). You can open it from the MixConsole toolbar: click Racks and activate "Pre (Filters/Gain/Phase)". This adds the Pre section on top of the rack.

I simply set the gain level so the channel meter sits around -10 dB during playback.
This makes it easier to mix with the faders, most plugins (analog simulations, compressors) work with this input level - including the presets - and I have enough headroom on the mixing channels without digital clipping/distortion.
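The arithmetic behind the pre-gain setting is just the difference between the target and what the channel meter shows; a tiny sketch with made-up example numbers:

measured_peak_db = -3.0          # example: what the channel meter shows during playback
target_db = -10.0
pre_gain_db = target_db - measured_peak_db
print(pre_gain_db, "dB in the Pre gain field")   # -7.0 dB in this example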
 
Thanks @jaketanner, really appreciate it. I never render or hardly even mix my music, tbh. I kind of like to do it once, when building a new template, and then just make really minor adjustments. This thread has been helpful though, as I'd like to learn more about these technical things.
If you don't really mix, then don't sweat the levels. As long as your master is not peaking constantly, you're good. A trick I like to use: GROUP all your stems and lower them all evenly, keeping the balance between them... this way your master bus can stay at zero, as it should be.
 
I’m in the middle of building a new orchestral template and wondered if you guys use gain staging only in the mixing stage, or whether, for example, you utilise VU meters on the tracks? So basically have your gain staging pre-built into the template?

My idea was to use VU meters on the busses of each section. If I use VU meters on every track in a large template my CPU would choke, so that makes no sense to me. Is this correct? I find building a large template and balancing each section daunting enough without the added confusion of gain staging.

I watched a video recently where a guy used a VU meter on all tracks: first the VU meter, which I think was set to -18, then the necessary processing (mostly emulation plugins), and then another VU meter at the end of the chain, also set to -18. Is this correct? It seems quite confusing.

I’ll most likely use analog emulation plugins, so I know I will have to feed them a certain amount of level, and I won’t be using only VIs, so I know it’s necessary when recording instruments live. Can anyone make this clear for me?
 
Skip the VU on every track; put it only on the buses going to the stems. If you need to feed an analog plugin, then put the meter before the plugin, but then just shift the meter to another track when you need it... unless you are tracking your entire template at the same time, you don't need a VU on every track. Also, doesn't your DAW have various different meters for the tracks? I work in Pro Tools, and we have like 14 different variations of meters... LOL. But I still use the Waves WLM Plus to see LUFS... but check or set your DAW's meters first.
 
Skip the VU on every track; put it only on the buses going to the stems. If you need to feed an analog plugin, then put the meter before the plugin, but then just shift the meter to another track when you need it... unless you are tracking your entire template at the same time, you don't need a VU on every track. Also, doesn't your DAW have various different meters for the tracks? I work in Pro Tools, and we have like 14 different variations of meters... LOL. But I still use the Waves WLM Plus to see LUFS... but check or set your DAW's meters first.

Hey Jake, thanks for the advice, however I am a little confused about when you mentioned shifting the VU meter to another track after using it. Do you mean I put the VU meter first, then send in the required volume to the plug-in, and once I've achieved this, take the same VU meter and apply it to the next track?

Would this not defeat the purpose, as once I take the VU meter off the track the volume will automatically change? Also, do you set up a template with gain staging, or gain stage only while mixing?
 
Hey Jake, thanks for the advice, however I am a little confused about when you mentioned shifting the VU meter to another track after using it. Do you mean I put the VU meter first, then send in the required volume to the plug-in, and once I've achieved this, take the same VU meter and apply it to the next track?

Would this not defeat the purpose, as once I take the VU meter off the track the volume will automatically change? Also, do you set up a template with gain staging, or gain stage only while mixing?
Hi. The meter plug-in doesn’t do anything but show you how loud the signal is. If you’re talking about placing a TRIM as your first plug-in, that’s different, but a VU meter plug-in does not change the signal at all. So you would use the output of whatever is feeding your stem mixes to set the level. After you have it set, you can move the VU. I do not have the individual tracks gain-staged. That is pointless because it’s the stem tracks or audio you use to mix that need to be set for the plugins. At least for me... I mix professionally, and do not gain stage instrument tracks until I’m ready to mix audio. No real right or wrong way, just how I do it. But feel free to move the VU once the level is set.
 
Most DAWs have 64-bit floating point signal paths these days. IF (be careful of that word -- it's a tricky one) you are ONLY using the DAW mixer gains and staying inside the computer, you could:

* Add an audio file to a track that already peaks at 0 dB and somehow gain boost it by +100 dB
* Send it to a group and boost that group by +100 dB
* Send it to the master and reduce the master by 200 dB

...and there will be zero distortion. ZERO. The track is "clipping" by 100 dB, and the group is "clipping" by 200 dB according to your meters (not really true, but your meter will say so), but there is zero distortion.

If you are working with a (*LINEAR*) plug-in that processes in float (most do), and even better, one that keeps the signal path double-precision 64-bit float internally, you can insert 1, or 10, or 100 such plugs into the above track/group and the result is still fine. Try it with our Breeze or Precedence plugs. No problem. The result will be identical to keeping gains at 0 dB.

If all processing is done in float (and ideally 64-bit float), and is *linear*, and the MASTER does not go over 0, gain staging is basically irrelevant.
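If you want to convince yourself, here is that exact gain chain as a minimal numpy sketch (just an illustration in 64-bit float, not a DAW):

import numpy as np

x = np.sin(2 * np.pi * 1000 * np.arange(48000) / 48000)   # already peaks at 0 dB

track_gain  = 10 ** (100 / 20)    # +100 dB on the track
group_gain  = 10 ** (100 / 20)    # +100 dB on the group
master_gain = 10 ** (-200 / 20)   # -200 dB on the master

y = x * track_gain * group_gain * master_gain
print(np.max(np.abs(y - x)))      # on the order of 1e-16: no clipping, no distortion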

Here is when it becomes relevant:

1) Any plug-in in the chain is processing in fixed point. (Some Waves plug-ins used to. Not sure recently. There are likely others, but they are rare these days. These can/will clip if the signal exceeds 0 dB.)

2) Any plug-in in the chain uses any kind of NON-linear processing, i.e. wave-shaping, distortion, saturation, etc. (anything "analog modeled" likely falls into this group). E.g. the result of f(x) = tanh(g * x) is highly dependent on the value of g (gain).

In other words, a LINEAR process such as simple gain:

w = g * x;

can be undone with another (inverse) gain:

y = (1/ g) * w
y = (g /g) * x = 1 * x = x
y = x // the same!

but a non-linear process cannot be undone via a later gain:

w = tanh(g * x)
y = (1/ g) * w
y = tanh(g * x) / g
x != tanh(g * x) / g
y != x // NOT the same!

So for any non-linear transfer-function-based process the result will depend on the gain, and various analog models use such things and have expected ranges of optimal signal levels.

Note, however, most plugs that have any kind of non-linear processing have an input gain (or "Drive") and an output gain specifically to allow you to control exactly what levels are hitting the non-linear part of the process (there's a small sketch of this below the list).

3) You are integrating real-world physical processors that are going through fixed-point/integer DA/AD conversion.

4) You are integrating real-world physical processors that are connected digitally and the signal is 24bit (or any bit-depth really) FIXED-point/integer, such as standard AES/spdif/toslink etc.


The various recommendations to keep tracks peaking at around -12 dB or similar are because of these 4 considerations. Gain staging in this manner is most important if/when these apply to your workflow and tools.
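Here is the drive/output-gain idea from point 2 as a runnable numpy sketch (an illustration only, not any particular plug-in's code): the linear gain round-trips perfectly, while the tanh stage gives a genuinely different result depending on how hard you drive it, even with a compensating output trim.

import numpy as np

x = np.linspace(-1.0, 1.0, 5)                 # a few sample values

# linear gain: fully undone by the inverse gain
g = 4.0
print(np.max(np.abs((g * x) / g - x)))        # ~0, the same signal comes back

# non-linear stage: a later gain does NOT undo it, and the shape depends on the drive
for drive_db in (-18.0, 0.0, 12.0):
    drive = 10 ** (drive_db / 20)
    y = np.tanh(drive * x) / drive            # output trim compensating the drive level
    print(drive_db, np.round(y, 3))           # three different curves: level matters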
 
If you're rendering in place, the audio is still being drawn from the master bus, so I'm not sure why you would be baking in any distortion unless you're overloading the master.

As Kenny says, we're talking only about mixing, not recording. Recording input levels are still important.


... it is possible his DAW is writing 24-bit fixed/integer files for render-in-place. If you clip a fixed-point file it is exceptionally hard to undo it. There are some "unclip" DSP products/processes, but they are not perfect. For sure, simple gain adjustment post-clip will not do anything about the distortion from clipping. So I guess his temp files are fixed point. What DAW is it? Probably there is a preference to use float files for temp files somewhere in the DAW. Use that, and then no problem.

Additionally, note you can "pseudo-clip" even the master out of the DAW if you are rendering 32-bit or 64-bit float, and then gain reduce it somewhere else before it hits the DA converter. The float file itself has no problem at all holding levels over abs(1.0), i.e. 0 dB.

I.e. make your master peak at +12 dB. Render a 32-bit float file. Load it in iTunes and turn the gain down to 25% or less. Should be fine. Magic. ;)

I am NOT suggesting you do that. Just trying to point out that it is important to realize the relevance of float vs fixed-point files in these topics.
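If you want to see the file-level version of this, here's a minimal numpy sketch (the 24-bit fixed render is simulated by clipping and quantizing at full scale; the exact behavior of a given DAW or converter may differ):

import numpy as np

# a "master" that peaks at +12 dB, as in the example above
master = 10 ** (12 / 20) * np.sin(2 * np.pi * 440 * np.arange(48000) / 48000)

float_render = master.astype(np.float32)                       # 32-bit float: over-0-dB samples survive
fixed_render = np.round(np.clip(master, -1.0, 1.0) * (2**23 - 1)) / (2**23 - 1)  # simulated 24-bit fixed

turned_down = 0.25 * float_render                              # "turn the gain down to 25%" later
print(np.max(np.abs(turned_down - 0.25 * master)))             # tiny float32 rounding only
print(np.max(np.abs(0.25 * fixed_render - 0.25 * master)))     # ~0.75: the clipping is permanent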
 
Actually, gain staging is still useful in the digital world when it comes to mixing.

When the faders are at unity (and you've set the gain for each track to an approximation for each channel), you'll have more precise control over fine volume adjustments when increasing or decreasing the fader volume by fractions of a dB.
 