# How loud are your masters and why?



## CDB (Oct 4, 2017)

Hey folks,

Wanted to start a discussion about mastering, more specifically: what measurement do you use for your own tracks? LUFS, for example. Do you try to keep your masters consistent? Obviously an orchestral track is going to be louder than a piano piece, but what level do you aim for on average?

The trouble is, and always has been, the loudness wars. We are all competing with other composers, producers, engineers, etc. Do you go loud to match or stand out against your competitors? Or do you aim somewhere in between to maintain a good level of dynamics within your tracks? Personally, I'm tidying up my portfolio, I'm at the mastering stage, and I'm wondering about all of the questions above; it's a difficult call to make.


----------



## Replicant (Oct 4, 2017)

I stopped giving a damn about this a long time ago. I think the "loudness war" is also becoming less relevant given that streaming services and whatnot are implementing volume normalization.

The track is as loud or as quiet as it naturally winds up being, and the limiter catches anything that would go over 0 dB.


----------



## brett (Oct 4, 2017)

I agree it's somewhat irrelevant now for streaming services, so you are better off aiming for a great-sounding, dynamic master, irrespective of overall level (genre dependent, of course).

However, I am interested in average levels (LUFS if you wish) to be sent to the mix / dub stage

Again, I suspect it's not terribly important but as a courtesy to the mixer I'd like to hear people's opinions.


----------



## CDB (Oct 4, 2017)

Yeah, that's true about streaming services implementing restrictions; I suppose I was referring more to the tracks that you use as your portfolio, etc. Do you at least try to keep it consistent amongst your tracks?


----------



## charlieclouser (Oct 4, 2017)

I make my stems as loud as they can be without causing any clipping when all of the stems are summed together. I set limiters (Waves L3-LL Multimaximizer) on each stem's sub master, but no further limiting on the summed mix, and these limiters help control the errant spikes and provide a bit more of a solid sound. I use the same settings on all of the limiters, adjusting the thresholds so that the loudest stems (drums?) are getting anywhere from 3db to 12db of limiting depending on the sound I'm going for, and then trim their output ceilings so that when all of the stems are summed the result comes as close to normalized as possible. Typical settings would be: Threshold = -15db, Output Ceiling = -9db. Whenever I see that the summed mix is close to clipping, I open my screen set that shows all 14 (!) instances of L3-LL, adjust the Output Ceiling on the first instance, and then copy / paste those settings to all instances. 
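As a back-of-the-envelope check on those numbers (a sketch only, not anything from the actual sessions described above): seven stems each ceilinged at -9 dBFS could, in the absolute worst case where every stem hits its ceiling on the same sample, sum to well over full scale. That's why the summed meter still needs watching even with every stem limited:

```python
import math

def db_to_lin(db):
    """Convert a dB value to linear amplitude."""
    return 10 ** (db / 20)

def lin_to_db(amp):
    """Convert linear amplitude to dB."""
    return 20 * math.log10(amp)

# Seven stems, each limited to a -9 dBFS output ceiling.
n_stems = 7
ceiling_db = -9.0

# Worst case: all seven hit their ceiling at the exact same instant,
# so the amplitudes add coherently.
worst_case_db = lin_to_db(n_stems * db_to_lin(ceiling_db))
print(f"worst-case summed peak: {worst_case_db:+.1f} dBFS")  # about +7.9 dBFS
```

In practice transients across stems almost never line up sample-for-sample, so the real summed peak sits far below that worst case; the math just shows why the ceilings get trimmed down whenever the summed mix creeps toward clipping.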

I always adjust the Output Ceiling in 3db increments, never fine tuning to a value of -7.3db or whatever - always -6, -9, -12, etc. This is just to avoid endless fiddling. 

I always copy/paste the settings so that all limiters are set the same way - again, to avoid endless fiddling and maintain a balanced sound.

These limiters are always in place and active, from the first moment I start building a template, through selecting and designing sounds, to composing... all the time, right through to the printing of the mixes. This way I can hear how sounds that stack up within a stem, and across multiple stems, will be affected by hitting the limiters. I want to hear what the limiter is going to do when I'm auditioning different kick drums, stacking bass sounds, or whatever.

I pay no mind to LUFS, hearing "dosage meters", or anything other than absolute peak level. To me, that's all that matters - just don't clip. Since the re-recording mixers will be changing the absolute level of the music, and even the relative level between the stems, during their mixing process, I feel that none of the metering that relates to ear fatigue or anything else would be stuff I need to be concerned about. That's THEIR problem, not mine. My duty is to provide nice, healthy level and freedom from clipping. I know that the re-recording mixers will ALWAYS be reducing the level of the material I give them - which is, I feel, the way it should be. I never want them to feel the need to boost the level of my source material, fishing for more level out of a tiny, feeble little signal - and I want them to be free to leave all of the stems at 0 dB and not be worried that anything will clip when it's all summed together. This way they can do their "fader rides" on a single master fader that controls all of the music stems together, and not upset the balance between the stems.

That's it really. 

When it comes to further processing of my final stereo composite mixes, for release on CD / vinyl / streaming / whatever, well.... that's another story, and relates more to "mastering" in the conventional sense. For these tasks I take those original straight-fader, unity-gain straight sums of my stems, and put the hurt on with more conventional mastering processors. For this task I get pretty tweak-y, adding some eq and harmonic enhancement / saturation before hitting a "finalizing" or "maximizing" limiter. In this case I'm shooting for a FAT sound that gets limited by between 1db and 12db depending on the program material, and my desired ceiling is within 1db of full scale, with inter-sample peak detection turned ON to avoid nasty surprises as complex waveforms are reconstructed at the end-user's D>A converter. 
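Inter-sample peaks are easy to demonstrate with a toy signal (a sketch, nothing to do with Ozone's actual detector): a sine at a quarter of the sample rate, phase-shifted so every sample lands 45 degrees off the crest, reads about -3 dBFS on a sample-peak meter while the reconstructed waveform actually touches 0 dBFS. A crude true-peak estimate can be made by upsampling via FFT zero-padding, which is roughly what ISP meters approximate with oversampling:

```python
import numpy as np

N, fs = 480, 48000
n = np.arange(N)
# Sine at fs/4, phase-shifted so the crests fall exactly between samples.
x = np.sin(2 * np.pi * (fs / 4) * n / fs + np.pi / 4)

sample_peak_db = 20 * np.log10(np.max(np.abs(x)))  # ~ -3.0 dBFS

# 4x oversampling by zero-padding the spectrum (band-limited reconstruction).
up = 4
X = np.fft.fft(x)
Xp = np.zeros(N * up, dtype=complex)
half = N // 2
Xp[:half] = X[:half]      # positive frequencies
Xp[-half:] = X[-half:]    # negative frequencies
y = np.real(np.fft.ifft(Xp)) * up  # rescale for the longer IFFT

true_peak_db = 20 * np.log10(np.max(np.abs(y)))    # ~ 0.0 dBFS
print(f"sample peak {sample_peak_db:.2f} dBFS, true peak {true_peak_db:.2f} dBFS")
```

So a mix whose samples never exceed -0.3 dBFS can still reconstruct above full scale at the listener's converter, which is exactly what turning ISP detection ON is guarding against.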

If there is a CD / vinyl / streaming release of my scores, they're always sent out for "actual" mastering after I've done my worst with plugins, and I always provide my "dry" mixes, which are totally un-processed, un-limited straight sums of my original (limited) stems, as well as providing my "slammed" mixes that were "home mastered" by me, but in all cases so far the mastering engineer has used my "home mastered" versions as a starting point, adding some very slight (+/- 2db) eq and some level adjustment between tracks to avoid ear fatigue for the end listener. I just completed this task for the release of my score for Jigsaw as well as a Saw Anthology which spans all eight movies in the franchise, so I was dealing with mixes that spanned fourteen years of time, from 2003 mixes which were three stems limited with MasterX5 to 2017 mixes which were seven stems limited with Waves L3-LL, and I did the post-processing with Ozone v7. I was able to compare the dry mixes, to mixes that were home-mastered years ago with MasterX5, to today's mixes that I was mastering with Ozone, and Ozone was the big winner. MasterX5 used to be my absolute favorite plugin in the world, and it has a unique floating make-up gain that lets you "fire-and-forget" and seems to "float" the quiet bits to make them louder and then instantly get out of the way when the loud stuff hits - so it's perfect for "run-and-gun" finalizing of television scores on a crazy deadline. But, alas, MasterX5 only runs on the PowerCore DSP platform, which is a dead product that's no longer supported - so I had to find a replacement. I was surprised to find that Waves L3-LL Multimaximizer had the closest sound to MasterX5, but Ozone goes one step further if you're willing to put the time into tweaking. 
For "set and forget" I still use L3-LL on my stems, but for final "home mastering" I use Ozone and I automate the threshold of the maximizer module to "chase" the low-level passages and bring them up in level, which is basically the same result as automating the level of the signal going into the limiter.

Ozone is much more precise, and although the resulting waveform looks more "bricked" than what MasterX5 puts out, and this might be alarming, you just can't hear it working when it's only shaving 3db off of the peaks. Often there will be moments where a big drum hit on stem A coincides with a big full-orch Symphobia hit on stem E and causes a momentary spike in level that might be 6db above the other peaks in the mix - and Ozone will catch these and push them down with almost no audible artifacts - it makes stuff sound like I MEANT it to sound, and there is no sensation of clipping or compression at that moment. It's a freaking miracle worker.

I am, however, really looking forward to diving deep into the new Eventide Elevate mastering plugin, since it's a "multi-multi-multi-band" design with up to 26 frequency bands, each with its own look-ahead limiter and transient recovery controls. This might be even better than L3-LL or Ozone, and allow heavy limiting in one band (sub-bass for example) that doesn't affect adjacent frequencies (like the fundamental pitched range of low strings for example). With any luck and a strong enough CPU, I might even be able to use Elevate as my new processor on each stem's sub-master. I'll report in if my CPU bursts into flames when I try this.


----------



## John Busby (Oct 4, 2017)

@charlieclouser 
you're a wealth of knowledge man, thank you so much for posts like this!

quick question about Oz7 - when you automate the maximizer threshold, are we talkin like fine tuning automation i.e. 0.5 to like 1.5 db?


----------



## rvb (Oct 4, 2017)

charlieclouser said:


> I make my stems as loud as they can be without causing any clipping when all of the stems are summed together. I set limiters (Waves L3-LL Multimaximizer) on each stem's sub master, but no further limiting on the summed mix, and these limiters help control the errant spikes and provide a bit more of a solid sound. I use the same settings on all of the limiters, adjusting the thresholds so that the loudest stems (drums?) are getting anywhere from 3db to 12db of limiting depending on the sound I'm going for, and then trim their output ceilings so that when all of the stems are summed the result comes as close to normalized as possible. Typical settings would be: Threshold = -15db, Output Ceiling = -9db. Whenever I see that the summed mix is close to clipping, I open my screen set that shows all 14 (!) instances of L3-LL, adjust the Output Ceiling on the first instance, and then copy / paste those settings to all instances.
> 
> I always adjust the Output Ceiling in 3db increments, never fine tuning to a value of -7.3db or whatever - always -6, -9, -12, etc. This is just to avoid endless fiddling.
> 
> ...



Awesome post! Thanks for that! Just wanted to clarify: you are using an L3 on the drum bus, then another L3 on the bass bus, and another on the piano bus, etc., before hitting the home master limiter?


----------



## charlieclouser (Oct 4, 2017)

johnbusbymusic said:


> @charlieclouser
> you're a wealth of knowledge man, thank you so much for posts like this!
> 
> quick question about Oz7 - when you automate the maximizer threshold, are we talkin like fine tuning automation i.e. 0.5 to like 1.5 db?



Yeah, it's pretty small increments - but not THAT small. I'm automating that threshold across a range between -12db and -3db. So let's say a cue has a quiet, ambient, floaty section followed by a totally slamming industrial drums+bit crushed guitar section (typical for me!). I might start with the Maximizer threshold way down at -10db to give me that 10db boost for the quiet section - but with the incoming signal peaking at -12db there's no actual limiting taking place, just an overall boost in level for that section of 10db. Then when the loud stuff comes slamming in, and that section is already peaking at -2db, I'll do a super quick automation move (100ms slope or something) to raise the threshold to -4db in that instant just before the first loud hit, so that the slamming section is only getting a couple of db shaved off of the peaks. In the end, this is just like automating the level of the dry mix that's coming into the processor - but I don't like doing that. I prefer to leave the actual audio file and its playback level alone - fader at zero, etc. - that way the full-level signal is what's hitting any eq or harmonic saturation that occurs before the Maximizer. I don't want that dry signal jumping around in level as this might change how the saturator responds. So I hit all of the stuff in the chain with that full-level dry signal and then pretend that my Maximizer threshold control is like an upside-down volume fader - which it is, if you think about it. I try to not use increments smaller than 0.5db - just because I don't want to be fiddling with quarter-db increments all day.
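The "upside-down volume fader" point can be checked with a toy model. This idealized hard limiter (pure dB bookkeeping, not how Ozone's Maximizer actually behaves internally) clamps peaks at the threshold and applies make-up gain so the threshold maps to the output ceiling; pulling the threshold down by some amount then gives exactly the same output level as pushing the input up by that amount:

```python
def limit_db(peak_in_db, threshold_db, ceiling_db):
    """Idealized peak limiter, all values in dB: clamp at the threshold,
    then add make-up gain so the threshold lands at the output ceiling."""
    return min(peak_in_db, threshold_db) + (ceiling_db - threshold_db)

ceiling = -0.3

# Quiet floaty section peaking at -12 dBFS with the threshold pulled down to -10:
# the signal never touches the threshold, so it just gets a clean 9.7 dB boost.
print(limit_db(-12.0, -10.0, ceiling))  # -2.3

# Loud section peaking at -2 dBFS with the threshold raised to -4:
# only a couple of dB gets shaved off the peaks.
print(limit_db(-2.0, -4.0, ceiling))    # -0.3

# Equivalence: lowering the threshold by g dB == raising the input by g dB.
g = 6.0
for peak in (-20.0, -12.0, -2.0):
    assert abs(limit_db(peak, -10.0 - g, ceiling)
               - limit_db(peak + g, -10.0, ceiling)) < 1e-9
```

The practical difference, as described above, is what hits the EQ and saturation stages before the limiter: automating the threshold leaves the full-level dry signal feeding those processors, while automating input gain would change how they respond.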

Sometimes I might be doing some mild eq boost (like a +3db hi shelf at 6k or something) to add some sizzle, or some corrective notching out of troublesome resonances, BEFORE the saturator / maximizer, and this can occasionally raise the level of the incoming signal so that it's actually indicating clipping (or over-full-scale) signal coming into Ozone's Maximizer. Although I never can actually HEAR any clipping or distortion in these cases (thank you floating-point mix engine), I get scared when I see red lights. So in these cases I will use clip gain in the timeline to reduce the level of that entire file by 3db or something and then get that back from the maximizer. I'm pretty diligent about checking my metering at any and all points in the path between all of the plugins to prevent any overs.


----------



## John Busby (Oct 4, 2017)

I'm definitely gonna try this!
My usual approach is to just automate the master fader for my two-track before Ozone, but this makes more sense for controlling the coloring from the effects modules too, I would think.

You're awesome man!


----------



## charlieclouser (Oct 4, 2017)

rvb said:


> Awesome post! Thanks for that! Just wanted to clarify: you are using an L3 on the drum bus, then another L3 on the bass bus, and another on the piano bus, etc., before hitting the home master limiter?



Yep. One L3-LL Multimaximizer on each stem sub-master. So for seven stems in quad, that's fourteen L3-LL instances altogether. Negligible CPU hit, which is part of the reason I don't try to use fourteen instances of Ozone at once! Also, the "LL" in the plugin's name indicates "low latency" - these are special versions of some of the Waves limiters that have very low latency, so I can leave them on all the time and "play through" them even when I'm sequencing MIDI drum parts or whatever. I don't know the actual latency figure, but it "feels" like a lot less than the old MasterX5 on PowerCore, which had a fixed 10ms latency because of its look-ahead buffer, and probably an additional 2x the host buffer for the round-trip to the PowerCore card and back - but I'm not sure about that.
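For a rough sense of those latency figures (the buffer size here is a hypothetical example, since the post doesn't give one): a DSP card's round trip costs roughly two host buffers on top of any fixed look-ahead, so the old PowerCore path adds up quickly compared to a native low-latency plugin:

```python
def plugin_latency_ms(lookahead_ms=0.0, buffer_samples=0, round_trips=0,
                      sample_rate=44100):
    """Rough latency estimate: fixed look-ahead plus N host-buffer trips.
    (Simplified model; real figures depend on the driver and host.)"""
    return lookahead_ms + round_trips * buffer_samples / sample_rate * 1000

# MasterX5 on PowerCore: 10 ms look-ahead plus ~2 host buffers of,
# say, 256 samples each (hypothetical buffer size) at 44.1 kHz.
print(f"{plugin_latency_ms(10.0, 256, 2):.1f} ms")  # ~21.6 ms

# A hypothetical native limiter with a short 1.5 ms look-ahead and no card trip.
print(f"{plugin_latency_ms(1.5):.1f} ms")
```

Twenty-plus milliseconds is well into "feels laggy while playing MIDI drums" territory, which is consistent with preferring the low-latency native versions for always-on stem limiting.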

My stems are generally laid out like this in a seven stem session:

A = main drums
B = aux drum split and / or high percussion
C = bowed metal weirdness
D = synths and / or piano
E = strings
F = brass
G = wild card stem. Orch fx, choirs, any "bonus" elements.

When I'm actually working on the score and delivering stems, there's no additional "home master limiter". Nothing at all on the composite mix sub-master, where all of the stems are summed together. That final stereo processing I was talking about, where I use harmonic saturators and Ozone's Maximizer, is ONLY when I'm post-processing the final stereo mixes for release by themselves as a CD or whatever (and for delivery to my agents for their score library that they make demo reels from, etc.). Basically for "listening". The stuff delivered to the dub stage is ONLY limited on a per-stem basis. Then, once the smoke has cleared and the movie is in the can, I go back and take an empty Logic project and use that just for post-processing the final stereo mixes. These are just the front L+R pair peeled off of the summed composite mix of all the stems. 

I do not try to do a "fold-down" of the rear surrounds into the front pair - I always treat the front pair as the ones that matter, and if the rear pair goes away it's no biggie - there's nothing meaningful in the rear speakers that isn't also in the fronts in some form. Usually my rear pair is just an alternate reverb and delay pair to create a sense of quad space, with only some of the elements hitting those effects, and sometimes even a dry signal of a track is going to the rears and not just splashing those extra rear reverb and delays. But nothing ever goes to the rear and doesn't go to the front. If they want to create that effect on the dub stage, the mixers will have to reduce the level of the front pair on a given stem to let that stem's rear pair sing out all by itself - and they have done this a few times but it's not all the time. I'm also not recording real orchestra in 5.1 and trying to retain that life-like sonic image, and even with all of the fancy orchestral libraries I generally don't lay out all their surround mic positions to replicate the room at Air Lyndhurst or whatever - I pick whichever pair I like, send that to the fronts, and then use reverb plugins in the rears to get that sense of surround space. It's fake, but I like how it sounds.


----------



## charlieclouser (Oct 4, 2017)

One thing I forgot to mention about my "mastering sessions" in Logic: I don't put all of those plugins on an Aux or on the stereo master output object - I put them on an individual audio track. That way I can do a few handy things:

I set up a few different processing chains on adjacent audio tracks, and then drag the audio files from track to track to audition them through different plugins / settings. So, track 1 might have nothing on it so I can always hear an unprocessed signal without clicking all of those Bypass buttons - just drag the audio file to track 1 to hear it dry. Track 2 might have a basic L3-LL setting so I can hear its "plain jane" limiting and compare it to fancy-pants Ozone, which is on track 3. Track 3 will be Ozone with the Harmonic Enhancer off, or set to a very mild setting. Track 4 might be my high-shelf eq feeding into Ozone with more Harmonic Enhancer, etc.

Note: These "mastering sessions" are just completely empty Logic projects - no instrument objects, none of my usual complex set of stem sub-masters, no Auxes for effects, no VEP returns - just a few audio tracks feeding a single stereo output and nothing else at all. Keeps it simple. Since I'm not trying to overdub just one more high string line or whatever, I don't need all of my usual template stuff cluttering up the place.

I've always used this workflow to allow me to set up a bunch of different plugin chains and quickly compare them by just dragging an audio file from one track to another - way better than trying to juggle Bypass buttons. When I'm trying to decide on which processing chain I want to use for a mastering session, I might have 10 or more setups on adjacent tracks, and then once I decide on the basic set of plugins I want to use, I'll get rid of all the ones I was just checking out but didn't want to use, and then set up a few similar chains with different settings. So if one cue needs that high shelf, but another one doesn't, I can just drag that audio file to the appropriate track that has the eq on it. This way I don't have to automate the on/off of that eq for every cue.

You can expand on this technique and get pretty crazy, setting up a bunch of processing chains, quickly comparing them, and sticking your audio file on the track where it sounds best.

Then I set my cycle range to be exactly the length of the audio file in question and bounce in real time so I can hear it as it goes down.


----------



## JohnG (Oct 4, 2017)

@charlieclouser 

Thanks -- You are one nice person, Charlie, especially for someone who writes both music and prose with such intensity. I usually use an engineer, but we don't always have time / budget for that. I always learn something from your posts.

Kind regards,

John


----------



## jmauz (Oct 4, 2017)

Charlie - thanks so much for your posts. I wasn't getting anywhere with this goddamned cue and I needed to feel productive somehow. I owe you one. 

Now back to auditioning drones....sigh.


----------



## givemenoughrope (Oct 4, 2017)

charlieclouser said:


> I make my stems as loud as they can be



Charlie, I was the annoying guy at the end of that Spitfire event in K-Town several months back who cornered you and bugged you about this setup. And now it's what I do too. I'm glad all of this is posted in one spot now. Thanks, man!


----------



## Gerhard Westphalen (Oct 4, 2017)

When I'm mixing, I don't look at meters. Since my system is at a calibrated level, I know roughly where I'm at and don't need to worry about clipping. I rarely clip, as that would be really loud. Mixing film stuff I might hit near -2dBFS, and for more compressed stuff like pop, usually -6dBFS.

When I'm mastering, I generally only look at LUFS and how much my limiter is working. No RMS or peak metering. I'll generally hit -12 LUFS and from there get it to sound the best that it can. In certain cases it'll get pushed up to -8 LUFS, but that's because I want it to sound like that, not because I'm trying to make it louder. I know that it won't sound any louder, but I'll make it sound better than if it was only -12 LUFS, even though it'll play at the same level. I think I've had score stuff for album release be as low as -18 LUFS. I almost always limit at -1dBFS since that's what most streaming will do. I don't care to get those additional 0.7dB (room left for issues with creating MP3s).


----------



## Greg (Oct 4, 2017)

Eventide Elevate is absolutely fantastic. I just used it on a custom trailer cue and the loudness I could get without sounding like shit was amazing. Definitely replacing Ozone. Beware that it doesn't detect ISP. I put a PRO-L after it to take care of true peaks.


----------



## Gerhard Westphalen (Oct 4, 2017)

I've heard that the Junger Audio Level Magic is able to do some crazy loudness things while still sticking to standards, but I haven't tried it myself. It's pretty expensive and is a huge CPU hog. I saw it grind a Mac Pro to a halt with 4 instances; a MacBook Pro couldn't handle more than 1.


----------



## gsilbers (Oct 4, 2017)

CDB said:


> Hey folks,
> 
> Wanted to start a discussion about mastering, more specifically: what measurement do you use for your own tracks? LUFS, for example. Do you try to keep your masters consistent? Obviously an orchestral track is going to be louder than a piano piece, but what level do you aim for on average?
> 
> The trouble is, and always has been, the loudness wars. We are all competing with other composers, producers, engineers, etc. Do you go loud to match or stand out against your competitors? Or do you aim somewhere in between to maintain a good level of dynamics within your tracks? Personally, I'm tidying up my portfolio, I'm at the mastering stage, and I'm wondering about all of the questions above; it's a difficult call to make.



LUFS is mainly for your re-recording mixer. He will lower or gain up your music to get to those levels.

With that said, if you're doing a score for a film, you can try to do a quick mix of the dialog and/or FX so they reach around the -24 target level; that way you can have an easier time guesstimating the end result.
You could also use the DAW's LUFS meters and aim for -18 to -14 to make sure the music is "compressed" in the sense that it's not too hot or too soft, volume/mixing-wise.

Not many people are doing the loudness-war thing anymore, since the streaming companies (especially YouTube) started doing their own normalization. Now mixers are trying to get a good level that can "breathe", as opposed to that square wave everyone aimed at before - mainly because if you mix very loud/square-wavy, then YouTube just lowers the volume overall and it really sounds low. If it has dynamic levels, then it will try to keep that.

But it depends. Try to do A/B comparisons with commercial tracks; I use Magic A/B to quickly compare.


----------



## rayinstirling (Oct 5, 2017)

DMG Audio's Limitless.
Now there's a tool that can crush a mix (relatively cleanly) in the wrong hands.
Dynamic range? What dynamic range?


----------



## Blake Ewing (Oct 6, 2017)

Almost all music is now streamed, and almost all streaming companies are loudness normalizing using LUFS, not peak normalizing, so I think it is beneficial to mix with that in mind.

My masters are almost always -11 to -16 LUFS-Integrated (which is a formulaic average over the whole track) with an LRA (loudness range) of 8+ (usually much higher, since orchestral music is so dynamic). I also limit to between -0.3 and -1 dB True Peak (usually the former, but online streaming conversions can sometimes cause peaking if you go much higher).

The benefit to me of LUFS is that it is a very visual, practical way to know how "loud" your mix will sound regardless of where it's played (or whether you've calibrated your monitors, etc.), and it is more mathematically relevant than RMS for how loud it really sounds to humans.
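That point is really about LUFS being K-weighted and gated, but even plain unweighted RMS shows why peak level alone says little about loudness. A toy comparison (a sketch, not tied to any particular meter): two signals with identical sample peaks but very different average levels:

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs                      # one second of audio
sine = 0.5 * np.sin(2 * np.pi * 440 * t)    # sustained tone
clicks = np.zeros(fs)
clicks[::4800] = 0.5                        # ten sparse transients, same peak level

def rms_db(x):
    """Plain (unweighted) RMS level in dB."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

# Both signals have the same sample peak (0.5, i.e. -6 dBFS), but wildly
# different average levels - which tracks perceived loudness far better:
print(f"sine: {rms_db(sine):.1f} dB, clicks: {rms_db(clicks):.1f} dB")  # -9.0 vs -42.8
```

A peak meter calls these equally "hot"; any averaging meter (RMS, and LUFS even more so) correctly reports the sustained tone as vastly louder.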

I know Apple Music normalizes to -16 LUFS, and Spotify and TIDAL to -14 LUFS; others will follow.

To my knowledge, Spotify is the only streaming company that will raise levels to reach -14 if you're lower. But they all will lower levels if you're hotter. So, to me shooting for -14 if you're gonna be streaming the track is a good rule of thumb.
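That rule of thumb (per the post: every platform turns hot masters down, and only some turn quiet ones up) is simple enough to sketch. The target and the behavior flag here are assumptions for illustration; real services add album modes, listener settings, and limiting on boosts:

```python
def normalization_gain_db(track_lufs, target_lufs=-14.0, boosts_quiet=False):
    """Gain a loudness-normalizing service would apply to hit its target.
    All services turn hot tracks down; only some also turn quiet tracks up."""
    gain = target_lufs - track_lufs
    if gain > 0 and not boosts_quiet:
        return 0.0  # quiet track left alone on services that never boost
    return gain

print(normalization_gain_db(-8.0))                      # -6.0: slammed master turned down
print(normalization_gain_db(-18.0))                     #  0.0: quiet master left as-is
print(normalization_gain_db(-18.0, boosts_quiet=True))  # +4.0: boosted up to the target
```

The takeaway matches the post: mastering hotter than the target buys nothing on playback (the extra loudness is simply undone), while mastering around -14 means your track plays back at roughly the level you mixed it.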

Obviously, if your final delivery is headed elsewhere, the specifics are malleable.

Of course, above all, it should sound good when you're done or none of that matters I reckon!


----------



## Vakhtang (Oct 7, 2017)

Always around -7.5 / -6.5 LUFS (integrated) here for electronic stuff


----------



## CDB (Oct 18, 2017)

Vakhtang said:


> Always around -7.5 / -6.5 LUFS (integrated) here for electronic stuff


That seems pretty loud!?


----------



## CDB (Oct 18, 2017)

Wow Charlie, that's a wealth of great information there, thanks for that. Although I need to re-read it to understand exactly what you are describing. Just shows I should be on this forum more often!!


----------



## Mornats (Oct 18, 2017)

Blake Ewing said:


> To my knowledge, Spotify is the only streaming company that will raise levels to reach -14 if you're lower. But they all will lower levels if you're hotter. So, to me shooting for -14 if you're gonna be streaming the track is a good rule of thumb.



I read an article about Spotify and SoundCloud normalising at -14. As my only outlet is streaming my music (to very few people!) on SoundCloud, that's what I go for. Commercial tracks do seem louder than mine, but as this is my platform, I'll stick with that.


----------



## CDB (Oct 18, 2017)

Mornats said:


> I read an article about Spotify and SoundCloud normalising at -14. As my only outlet is streaming my music (to very few people!) on SoundCloud, that's what I go for. Commercial tracks do seem louder than mine, but as this is my platform, I'll stick with that.



Is that measured short term or long term?


----------



## Mornats (Oct 18, 2017)

Might be easier if I shared a screenshot of YouLean. It's the integrated one that I aim to get at -14LUFS.


----------



## robshrock (Oct 19, 2017)

charlieclouser said:


> Yep. One L3-LL Multimaximizer on each stem sub-master.



Hi Charlie,

Curious... Do you tweak the L3-LL settings (crossovers, release, etc.) per stem or leave everything consistent as you do with levels? Are you using the L3-LL as it comes up default or using a specific preset or one you've created?

Not trying to cop your actual settings; just wondering if it's a tweaky process that moves around based on the varying stem content.

Thx


----------



## synthpunk (Oct 19, 2017)

Another great Charlie Clouser masterclass.


----------



## inspiringaudio (Oct 20, 2017)

CDB said:


> Hey folks,
> 
> Wanted to start a discussion about mastering, more specifically: what measurement do you use for your own tracks? LUFS, for example. Do you try to keep your masters consistent? Obviously an orchestral track is going to be louder than a piano piece, but what level do you aim for on average?
> 
> The trouble is, and always has been, the loudness wars. We are all competing with other composers, producers, engineers, etc. Do you go loud to match or stand out against your competitors? Or do you aim somewhere in between to maintain a good level of dynamics within your tracks? Personally, I'm tidying up my portfolio, I'm at the mastering stage, and I'm wondering about all of the questions above; it's a difficult call to make.



Touching the red on the Dorrough meter.


----------



## charlieclouser (Oct 20, 2017)

robshrock said:


> Hi Charlie,
> 
> Curious... Do you tweak the L3-LL settings (crossovers, release, etc.) per stem or leave everything consistent as you do with levels? Are you using the L3-LL as it comes up default or using a specific preset or one you've created?
> 
> ...



I keep all of the settings identical for every instance of L3 across all of the stems - in fact, when I adjust anything on one stem's L3 I copy/paste those settings to all other instances. The main reason is time - it would take an extra hour per cue to analyze and decide what the heck to do! But also the various settings in L3 don't make such a huge change in the sound a lot of the time, unless you're operating at extremes. And there's no real magic secret to my settings - I'd put up a screen shot but my rig is turned off. The only settings I really tweaked were the crossover points between frequency bands; all the rest is pretty plain Jane. I think I'm using auto release and the shortest possible attack times, but I will check.


----------



## ghandizilla (Nov 6, 2017)

I'll try to apply a similar set of submaster limiters with bx_limiter. A very informative topic, thanks a lot!


----------



## ceemusic (Nov 6, 2017)

I'll meter various ways using K-14, VUs & audio refs. I've been going for -14 LUFS / -1 peak for the last 5 years. Of course it depends on the material & medium. I can use any limiter in my arsenal w/o any problems with overs.


----------



## Serg Halen (Nov 12, 2017)

Around -8 to -10 RMS.


----------



## Sekkle (Nov 13, 2017)

Thanks Charlie, this is really going to help me get some stems out. Appreciate you sharing your knowledge!


----------



## Aeonata (Nov 14, 2017)

Around -13 to -10.5 LUFS integrated. This is pretty much equivalent to "touching the red" of the K-12 scale in the loudest sections. I limit peaks at -0.5 dB.


----------

