# My tracks are too quiet...



## Agondonter (Apr 1, 2019)

Hi there! 

I have composed a couple of slow solo piano pieces which are intended to be played quietly, but the end result is far too quiet and I don't know what to do to make it louder. I am using The Giant piano library set to a softer tone quality. The RMS averages at -40 dB and the volume output from Kontakt is set at 0 dB. Thanks for any help!

Cheers,
Alex


----------



## KallumS (Apr 1, 2019)

You could try bouncing the audio and normalizing the result.


----------



## FriFlo (Apr 1, 2019)

If you still think it's too quiet after normalising the audio file, then tell us: too quiet compared to what? What is the musical style of your track, and do you have a reference in the same style you compare it to?


----------



## Jimmy Hellfire (Apr 1, 2019)

Why not just increase the gain of the channel? Cubase can do it natively; in DAWs that can't, any plugin that can increase gain will do. It doesn't get any simpler than that.


----------



## robgb (Apr 1, 2019)

Increase the gain, then apply gentle limiting on your master bus. Maybe more than one limiter, each with a subtle increase in gain, but not to the point where you lose all dynamic range. Trust your ears.


----------



## Dave Connor (Apr 1, 2019)

Even if the piano part is itself soft, you want to capture it (record it) in a way where it is printed at a healthy volume. That's why people want to see your waveform - to see whether you are capturing the audio information efficiently enough. If someone is singing softly - live on a microphone - they bring the mic closer to their mouth while the engineer who's recording makes sure the input from the mic is turned up enough to get a strong signal into the mixer. If either of these variables is off (mic too far from the source or input too low) you won't get proper signal strength, which causes problems in a few ways.


----------



## dzilizzi (Apr 1, 2019)

I've been having this issue with Kontakt as well. The audio files are almost flat. And it isn't actually supposed to be quiet. I ended up setting the volume at +6 dB and using Waves Maxim on the master bus. But any limiter/compressor would probably work. I did lose some of my dynamics, though. I need to go back and fix it.


----------



## Agondonter (Apr 1, 2019)

Thank you all for your replies! I boosted the gain by 5 dB and it did sound louder, but not by much. The master bus showed a peak of -10 at some point, I think, but generally stayed below -20. I listened to the track outside with my cheap earbuds on my phone, and at some points, especially near the end, it was barely audible.

Here is the .wav file:



Cheers,
Alex


----------



## Agondonter (Apr 1, 2019)

I am not sure if you can download it. If not, please let me know how to share it with you.


----------



## José Herring (Apr 1, 2019)

Sounds like you're trying to stick to the lower dynamic layers, I think, because the higher dynamics probably jump out at you - which means that the library itself could be at fault. Unnatural dynamic range in the samples themselves.

It's my main problem with piano libraries. The dynamic range gets so out of whack that the entire piano spans from barely audible to maxing out at 0 dB. That is of course impossibly hard to work with, and it leaves the actual usable dynamic range for any real non-pop-related work at around mp and below.

I'd be curious to find out if there is a piano library where they just left the dynamic range of the instrument intact and didn't mess with it.


----------



## AlexanderSchiborr (Apr 1, 2019)

Why not just keep the quietly composed music as it is? If it is too quiet, people can turn up their volume. I mean, what is the purpose of mutilating a piano piece which is intended to sound quiet in order to make it sound loud(er)? I will never understand that logic... sorry. Listen to Telarc recordings of Tchaikovsky pieces; the dynamic range is so great.


----------



## dzilizzi (Apr 1, 2019)

AlexanderSchiborr said:


> Why not just keep the quietly composed music as it is? If it is too quiet, people can turn up their volume. I mean, what is the purpose of mutilating a piano piece which is intended to sound quiet in order to make it sound loud(er)? I will never understand that logic... sorry. Listen to Telarc recordings of Tchaikovsky pieces; the dynamic range is so great.


I think it isn't about loudness so much as just being able to hear it on something that isn't a perfect sound system. For example, if I need to turn it past the halfway point on my audio interface to hear it on my studio computer, I can barely hear it over the speaker hiss on my desktop computer, if that makes sense. Which means on my phone, well, it's basically inaudible.


----------



## José Herring (Apr 1, 2019)

AlexanderSchiborr said:


> Why not just keep the quietly composed music as it is? If it is too quiet, people can turn up their volume. I mean, what is the purpose of mutilating a piano piece which is intended to sound quiet in order to make it sound loud(er)? I will never understand that logic... sorry. Listen to Telarc recordings of Tchaikovsky pieces; the dynamic range is so great.


This is what I was thinking at first. But then, as the piece progresses, it does seem like he is trying to build the dynamics up a bit and then holds back - so I figured he ran into problems with the louder dynamics being out of balance and restricted himself to the lower dynamics.


----------



## Nick Batzdorf (Apr 1, 2019)

April Fool's?


----------



## Agondonter (Apr 1, 2019)

Now I am even more confused... My aim is to be able to hear the sound on my phone with my earbuds without having to turn the volume up to max. Right now it is either barely audible or relatively loud (where the loud chords are).



Nick Batzdorf said:


> April Fool's?



It's a genuine inquiry. If you could help, it would be much appreciated.

Cheers,
Alex


----------



## Nick Batzdorf (Apr 1, 2019)

Turn it up and bounce it to disk again louder!


----------



## JamieLang (Apr 1, 2019)

You have about 10 dB of empty headroom in that track. As was pointed out, peak normalize. I would go further and suggest peak normalizing to -0.5 dBFS. It's just a safe spot. A tiny bit more won't make any difference UNLESS it's a bad difference played on a shit system. That will leave enough room that someone can make an mp3 of it without pushing it up over 0 dBFS.

It's ALSO very dark and woolly. There's a whole "perceived loudness" side to that... but first off, you need to get it to full scale to play it back on any consumer system.

Side note: you're recording at too low a level. You should be seeing -20-ish, give or take, on a VU meter. I calibrated mine DOWN to as low as it will go (-24 dBFS) and it doesn't even MOVE the needle on the loudest part in the middle. If you're confusing VU with peak, that's the next lesson for you... in the analog world, what you recorded would be buried in hiss. Just saying - lesson one in recording: let's get the levels set properly.
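The peak-normalization arithmetic suggested above can be sketched in a few lines of Python. This is only an illustration, not anything from Cubase; the 440 Hz test tone (peaking about 10 dB below full scale, like the track in question) and the function name are made up for the demo:

```python
import numpy as np

def gain_to_normalize(x, target_dbfs=-0.5):
    """Return the linear gain that brings the absolute peak of x
    (float samples, full scale = 1.0) up to target_dbfs."""
    peak = np.max(np.abs(x))
    target_lin = 10 ** (target_dbfs / 20)  # -0.5 dBFS is about 0.944
    return target_lin / peak

# a quiet test tone peaking about 10 dB below full scale
x = 0.316 * np.sin(2 * np.pi * 440 * np.arange(48000) / 48000)
y = x * gain_to_normalize(x)
print(round(float(20 * np.log10(np.max(np.abs(y)))), 2))  # -0.5
```

The normalized copy peaks at exactly -0.5 dBFS, leaving the safety margin described above for lossy encoding.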


----------



## Agondonter (Apr 2, 2019)

Thank you Jamie! Very helpful and informative post!

So, in order to peak normalize, I just have to increase the gain so that the master channel peaks at -0.5 dBFS, right?

Also, I don't have a VU plugin. Do you think VUMT from Klanghelm is a good one to start with?

Cheers,
Alex


----------



## AlexanderSchiborr (Apr 2, 2019)




----------



## Agondonter (Apr 2, 2019)

Never mind, I just used the built-in normalize function from Cubase. 



The normalized audio sounds much closer to my expectations as far as the loudness goes, but you are right that it does sound kind of dark.


----------



## JamieLang (Apr 2, 2019)

So, here's the thing: it was always that dark. I could hear that at the low volume... or at full scale.

VU meters (real and digital) are a little different... but the good news is, with the forgiving noise floor of digital, it's all really a ballpark setting. As long as the Klanghelm thing can calibrate to different levels... and the needle bounces nicely, it should be fine. Work for film should use their -22 dBFS = 0 VU... a majority of analog-modeling plugins use -18... I generally set mine to -20 to sort of hedge those bets. But again, ballpark - so it's not really THAT important which one you pick, so long as you pick one.

As a related note, recent Cubase has a K meter in the right panel. For all intents and purposes, that's a modern digital replacement for a VU - for ME, as someone who grew up with VU ballistics and just knows what they're "supposed to look like" on different kinds of material, I've never warmed to the K stuff... but it should provide a similar function, and might be easier since it's built in. Just know you're trying to get into the YELLOW area on it.


----------



## Agondonter (Apr 2, 2019)

JamieLang said:


> So, here's the thing: it was always that dark. I could hear that at the low volume... or at full scale.



What can I do to bring some light to the sound but retain the soft quality? Is this simply a matter of EQ, or does the recording also play a role in that? As far as the recording goes, I just enable Write mode, click the Record button in Cubase, and play the score.

Cheers,
Alex


----------



## ProfoundSilence (Apr 2, 2019)

Any EQ with a tilt and a wet/dry knob would work fine. Parallel EQ is another way to handle it (although that's not significantly different from what I suggested).


----------



## Nick Batzdorf (Apr 2, 2019)

Parallel EQ is what most people call a phaser.

Up is louder. I'm truly baffled why this thread is still going.


----------



## ProfoundSilence (Apr 2, 2019)

Nick Batzdorf said:


> Parallel EQ is what most people call a phaser.
> 
> Up is louder. I'm truly baffled why this thread is still going.



There is zero phase issue with parallel EQ, since you aren't moving the waveform out of alignment.

And dB is not the same as perceived loudness. That's why clipped sausage mixes don't sound as loud as professionally mixed tracks.


----------



## ProfoundSilence (Apr 2, 2019)

The only reason you might have phase issues with parallel EQ is if you're going out of the box - and not re-aligning afterwards to correct for the latency of physically travelling out of an interface, into a device, and back into the interface.


----------



## Nick Batzdorf (Apr 2, 2019)

If you really want a good mix, try coincident pair mono with omni mics - but you have to use a 1/4" TRS-to-220V converter to carry the signals.


----------



## JamieLang (Apr 2, 2019)

It's a sample, and you selected the "softer" tone quality. Sure - you can excite it or EQ it... but it would be better to fix it at the source, which in this case is a sample.

While you're getting your level right... try NOT the softest voicing... I don't know the sample - but often there are gradations... you probably don't want "hard", but if it's all the way at softest... just move it a little toward hard...


----------



## shawnsingh (Apr 3, 2019)

ProfoundSilence said:


> There is zero phase issue with parallel EQ, since you aren't moving the waveform out of alignment.


I'm not so sure about this...

Minimum phase EQ (which will change phase in a frequency dependent way) or linear phase (which will introduce additional echoes) - either way there is some smearing of the audio signal over time. If that gets summed with the original signal, seems like there could be phase issues no matter how you try to align their timing.

Maybe it's just not audible in many cases, or maybe it produces a pleasing result? Or maybe a wet/dry knob on an EQ just scales all the gain factors of all bands, and isn't really doing parallel versions of the signal at all.
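The phase question above can be checked numerically. The sketch below stands in a simple one-pole lowpass for the "EQ" path (an arbitrary choice for the demo, not any particular plugin) and compares the coherent dry-plus-filtered sum |1 + H| against the phase-free sum 1 + |H|. Wherever the filter shifts phase, the coherent sum dips below the phase-free one - i.e. the parallel path does interact in phase with the dry signal:

```python
import numpy as np

fs = 48000.0
fc = 1000.0  # cutoff of a one-pole lowpass standing in for the "EQ" band
a = 1 - np.exp(-2 * np.pi * fc / fs)

# frequency response of y[n] = a*x[n] + (1 - a)*y[n-1]
w = np.linspace(0.001, np.pi, 1000)  # rad/sample
H = a / (1 - (1 - a) * np.exp(-1j * w))

parallel = np.abs(1 + H)  # dry + filtered, summed coherently
naive = 1 + np.abs(H)     # what a phase-free sum would give

# the coherent sum never exceeds the phase-free sum, and dips
# below it wherever the filter shifts phase
assert np.all(parallel <= naive + 1e-12)
print(round(float(np.max(naive - parallel)), 2))
```

Whether that interaction is audible or objectionable is a separate question, as the post above notes - but it is measurably there.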


----------



## Agondonter (Apr 3, 2019)

JamieLang said:


> It's a sample, and you selected the "softer" tone quality. Sure - you can excite it or EQ it... but it would be better to fix it at the source, which in this case is a sample.



Will do that first, thanks! Perhaps the EQ settings from the sample also need some tweaking...

Thank you all for your help! I have learnt a lot from your replies no matter how self-explanatory my inquiries might be to some...

Cheers,
Alex


----------



## ProfoundSilence (Apr 3, 2019)

Or just buy the only EQ you'll ever need (FabFilter), which gives you multiple modes.

I can't imagine any sane audio engineer steering you clear of EQ to avoid phasing. Of course, you'd also be told to EQ at the instrument level rather than the master - but all in all, it'll be fine.

If you use more than one mic you're introducing phasing, which means all stereo recordings already have far more phasing than an EQ is going to introduce. I'm trying to imagine what you could possibly be listening to that doesn't make use of an EQ, or that is phase-free for that matter. Old mono one-mic recordings? Is that the benchmark?

Best advice is simply to try what people suggest, and if you like the sound of it, go with it. I intentionally introduce phasing by applying Haas delay to mono close mics, because of the psychoacoustic benefits of positioning. You would encounter some combing when it's crunched down to mono, but the only time that matters is when someone is listening to your music on a phone speaker (because earbuds are too expensive?).


----------



## Agondonter (Apr 3, 2019)

Thanks for the suggestion! I didn't know about Fabfilter EQ. I will look into it once I sort some of the more basic things out! 

As you said, right now all I am doing is trying out people's suggestions. I liked both the peak normalization and the gain methods to increase the volume. Now I am trying to bring some balance to the sound, but I am finding it difficult with The Giant library. Even though it is very customizable, the raw sound you have to work with is, in my opinion, not suitable for the kind of music I am trying to write...

Cheers,
Alex


----------



## ProfoundSilence (Apr 4, 2019)

Agondonter said:


> Thanks for the suggestion! I didn't know about Fabfilter EQ. I will look into it once I sort some of the more basic things out!
> 
> As you said, right now all I am doing is trying out people's suggestions. I liked both the peak normalization and the gain methods to increase the volume. Now I am trying to bring some balance to the sound, but I am finding it difficult with The Giant library. Even though it is very customizable, the raw sound you have to work with is, in my opinion, not suitable for the kind of music I am trying to write...
> 
> ...




I can tell you that FabFilter plugins are generally worth their weight in gold. When I first started, the idea of paying money for an EQ when a DAW usually has one stock seemed nuts to me. But FabFilter just makes extremely well-performing, well-featured, well-implemented tools for audio work. It's not some weird dirty analog-modelled EQ - it's just an EQ that, if you know what you actually WANT the EQ to do, can do it. EQ is probably the single most needed tool for audio work, and if you can get in and do exactly what you need right away, with a very streamlined UI, then you're saving yourself frustration and getting straight to results.

Problem area? Solo a band in like one click, reverse the gain, change the Q with your mouse wheel - done.

Plus, the analyzer is really nice, because you can learn a lot about where your instruments sit in the mix simply by studying it, either in real time or by just grabbing the spectrum.


----------



## Pudge (Apr 4, 2019)

Why not use something like MAutoVolume to automatically balance the level for you? Basically like using automation for volume rides. But much easier.


----------



## Pantonal (Apr 4, 2019)

Another way to increase volume would be to increase the MIDI volume level. I am not suggesting you increase velocity, because that would change the quiet dynamic level of the piece; increasing MIDI volume simply tells Kontakt (or Play) to output more amplitude. It's a useful factor in gain staging, and some sample libraries default to a fairly low setting. Normalizing does pretty much the same thing as long as you're working in the digital domain. Things are different when you're talking about recorded audio.
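For the curious, one common mapping from MIDI volume (CC7) to gain is the 40·log10(v/127) curve from the DLS spec - individual samplers, Kontakt included, may use a different taper, so treat this only as a ballpark illustration:

```python
import math

def cc7_to_db(value):
    """Approximate gain for a MIDI volume (CC7) value, using the
    40*log10(v/127) curve from the DLS spec. Real samplers may
    use a different taper - this is only a ballpark."""
    if value <= 0:
        return float("-inf")
    return 40 * math.log10(value / 127)

for v in (127, 100, 64, 32):
    print(v, round(cc7_to_db(v), 1))
```

Under this curve, halving the CC value costs roughly 12 dB - which is why a modest CC7 bump can make a surprisingly large difference to the printed level.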


----------



## Nick Batzdorf (Apr 4, 2019)

ProfoundSilence said:


> Or just buy the only EQ you'll ever need (FabFilter), which gives you multiple modes.
> 
> I can't imagine any sane audio engineer steering you clear of EQ to avoid phasing. Of course, you'd also be told to EQ at the instrument level rather than the master - but all in all, it'll be fine.
> 
> ...



Oy gevalt.

I'm kidding about coincident pair mono. For heaven's sake.

Nobody tells you to avoid EQ. What any engineer will tell you is that standard EQ will have some ringing at the edges of the passbands; that's how it works. Analog synths exploit that - it's called resonance. In smaller doses, as on standard EQ, ringing can add a subtle color, or it can sound like total shite - for example if you use too much on a sampled piano, sampled strings, or sometimes ringing cymbals.

(For reasons I don't understand, that's usually less true on real piano).

Linear phase EQ works much better for that - usually. Usually.

Next:

"Phasing" - the different time of arrival at the mics in a stereo recording... that's pretty much what stereo *is*! (Edit: Well, that plus amplitude differences.)

If I were ornery I'd even pick apart some of what you're saying about the short-delay positioning technique. You absolutely do have to be cognizant of phasing issues if it's going to be played in mono!

Turn up the freaking piano, or boost the MIDI velocities if you want. But parallel EQ is not a thing.


----------



## marclawsonmusic (Apr 4, 2019)

Pantonal said:


> Another way to increase volume would be to increase the MIDI volume level. I am not suggesting you increase velocity, because that would change the quiet dynamic level of the piece; increasing MIDI volume simply tells Kontakt (or Play) to output more amplitude. It's a useful factor in gain staging, and some sample libraries default to a fairly low setting. Normalizing does pretty much the same thing as long as you're working in the digital domain. Things are different when you're talking about recorded audio.



When I mix, I often start with a gain control on my mix bus. I use this to raise the volume of the track to a healthy level (depending on the piece) and then add other plugins after that for other processing.

I think Nick is right - turn it up. When you don't get enough signal by moving the fader - use a gain plugin.

I don't think normalization is a good idea if you are trying to make a pro mix.


----------



## colony nofi (Apr 5, 2019)

Hey @Nick Batzdorf - I really think OP is looking for genuine advice, not knowing a bunch of things a lot of us probably dismiss as assumed knowledge. There are no such things as stupid questions, and information that we all know can appear much simpler to us than to those that don't know (and may not know the right questions to ask.)

There's some good info in this thread already. Given that I'm procrastinating before editing a piece of music that I really don't want to edit, maybe I'll have a little go at helping out too. Also, I need coffee.

Let's talk about audio levels, gain, mix outputs, and listening.

The audio level that you work at while composing is (often) very different to the final delivery level that someone listens to on a phone, laptop, TV, movie theatre - whatever!

It's all about dynamic range, as well as some quirks of digital audio.

I'm going to talk only about digital audio here - there are big differences with analog audio, although some concepts remain the same.

Digital audio has an absolute, nothing-can-go-past-this* maximum peak level. Confusingly, this is measured as 0 dBFS. (There's a good reason for that, but it's probably outside the scope of this little post.)

This maximum refers to the loudest possible peak. The difference between the peak and measured silence is known as dynamic range. You've probably heard people talk about dynamic range and bit depth before. The bit depth ("format") that you are working at, or that your playback device uses, dictates how big a difference in sound level you can represent in the file.

Another way of thinking about it: CDs don't have a way of storing a single audio file that accurately represents both the sound of a jet taking off over your head (around 140 dB) and the sound of leaves rustling in the distance (around 10 dB). Indeed, that can't be done with many delivery formats at all - or even with much recording equipment. A CD has a dynamic range of 96 dB - which is like the difference between the hubbub in a library (40 dB) and a jet (140 dB), or a quiet studio environment (20 dB) and a chainsaw (120 dB). Top-of-the-line pro equipment has a dynamic range of less than 144 dB (24-bit) - it's actually closer to 21 or 22 bits in most cases - which is better than a chainsaw compared to the threshold of hearing (what we refer to as 0 dB - but that is a different measurement from the 0 dBFS maximum peak level; the FS means "full scale").
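Those bit-depth figures come straight from the theoretical dynamic range of linear PCM, 20·log10(2^bits), roughly 6 dB per bit - a quick sanity check:

```python
import math

def dynamic_range_db(bits):
    """Theoretical dynamic range of linear PCM: 20*log10(2**bits),
    i.e. about 6.02 dB per bit (noise shaping etc. ignored)."""
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # CD audio
print(round(dynamic_range_db(24), 1))  # 24-bit "pro" audio
```

That gives about 96 dB for 16-bit CD audio and about 144 dB for 24-bit, matching the numbers above (with real converters landing a few bits short of the 24-bit theory).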

(Another aside: digital audio samples the voltage of audio thousands of times a second - 44100 to be exact for CD audio, and other rates for other equipment (48000 on iPhones these days, if my memory serves me correctly!). Some equipment does this 88200 or 96000 times a second, or 192000 (!), or even 384000 (!!!), for all sorts of different reasons. The reasons behind those numbers are for another day...)

So - why all this talk of dynamic range?

Well - in all parts of the recording / sampling / creating process on a computer (or digital equipment) we need to know what our dynamic range is - as well as where our peak is. The peak is absolute. You can't go past it - so if you are recording a piano piece in a studio with microphones, you want to set up all parts of the recording chain to make sure that the absolute loudest part of the performance doesn't reach 0 dBFS, or else it will "peak". If audio levels go over 0, they "clip", and that makes a distorted sound. (Many engineers have used this sound to great effect, but not usually for piano. OK, maybe Jan Jelenic or Adrian Klumpes or... )

So we give the recording "headroom". A good engineer will know how loud an instrument is going to be, and set the gain structure on all their equipment so it doesn't overload any piece of the recording chain, AND won't reach 0 dBFS on the recording equipment.

This means that for a quiet piano piece, the recording may have peaks around -10dBFS, but for another piano piece, this time louder, our engineer may also set things so that the peaks are ALSO around -10dBFS. 

Hang on!!! (I've reached the 10000 character limit so I'm cutting here!)


----------



## colony nofi (Apr 5, 2019)

Continued....


The quiet piece will have peaks around the same level as a loud piece??! It can be tricky to get your head around, but this one little concept is really quite important to understand. (especially when we start to think about sample libraries that we use when composing - I'll get to this later!)

Both these recordings ALSO go through another mix (and master) stage before being played on Spotify or radio or wherever. This is because people like to listen to music without turning it up and down constantly, even if the piece is quiet or super loud. People like to hear things at approximately the same level all the time.
For radio / Spotify etc., this average level (compared to our 0 dBFS maximum) is quite loud. So a mix engineer and then a mastering engineer will do a tonne of things to make sure that the average listening level is at the intended level for the audience. It is a REALLY important part of the process, and the part that you are coming up against with your question. (EQ is also dealt with by mix AND mastering engineers.)

So here's our situation for Spotify / iTunes / radio etc.: the long-term (approximate) average level of a mix needs to be set at a particular point so that the perceived loudness is at around the same level as similar types of music. (Hip-hop will have a louder average than classical, but I'm trying not to get too complicated here!)
There are many, many ways to measure average level. In the past, VUs were used (and they were extremely useful for analog audio - and for many folk are still useful in digital audio!). We also have newer tools dedicated to digital audio that can help, like the loudness standard known as LUFS, which I personally think a lot more folk should use / know about!

There's even MORE reasons to start to try and understand all this when we start to think of different delivery formats. If your music is going on an ad, what level do you deliver it to the mix engineer at? There's more than one right answer to that - depending on a number of factors. But arguments around it come down to the fact that ads are delivered to different mediums at very different levels.

For TV in Europe, the final ad needs to measure -23 dB LUFS. In Australia, it needs to be -24 dB LUFS (and they are all quite strict about this!). Radio can be -1 dBFS; Spotify is now asking for -14 dB LUFS. Cinema needs another, different level entirely (measured sometimes by a Dolby method, other times just by what the ad distributor wants!).

For music deliveries, different folk want different amounts too. If I deliver a cue to a mix stage with 0 dBFS peaks and loudness around -12 dB LUFS, the rerecording engineer will probably pull her hair out. (That's VERY loud, and will have a very small dynamic range...) Yet that is sometimes what is delivered to play on radio.

I realise I've spoken a lot about different ways of measuring loudness without explaining them. It is worth looking up and understanding the difference between digital peak metering (dBFS), analog average metering (VU), digital loudness metering (dB LUFS), and other forms of metering as well (BBC meters, K metering etc.!). Waves has a nice primer - there are probably many better ones out there; it was just one of the first Google results that came up: https://www.waves.com/loudness-metering-explained

I guess the takeaway from all this is: when you listen to Spotify, or just a file on a phone or computer, there is a maximum PEAK level that can be played back (0 dBFS).
The closer the AVERAGE level is to that, the louder the music will be perceived to be.

ASIDE: Adjusting volume on a computer / phone / device.
If you have a piece of music with peaks of 0 dBFS playing back on a phone, yet it is playing back softly, it means that somewhere in the system a gain change is going on. This gain change can occur either digitally or in the analog domain - but these days it's normally digital.

What happens is this: your device has a maximum digital level. That's what you get when you turn the phone up to maximum. When this happens, your 0 dBFS peaks will be played back at 0 dBFS... and converted to analog from that level (headphones are analog devices!)
When you turn it down, it's still reading the audio file's peaks at 0 dBFS, but the level is being reduced by a simple math algorithm - say to -20 dBFS, or even much softer, depending on where the volume control is.

So say you deliver an audio file with peaks at -10 dBFS, and your volume on the phone is set to -20 dBFS. (You don't get to see that number - it might just be around half volume!) Your audio file will sound softer than one delivered with peaks at 0 dBFS (assuming it's the same piece of music, or just has a similar overall dynamic range!). Indeed, it will sound 10 dB softer - which is quite a lot!

I did mention earlier that I would talk about levels in sampling.

It's quite interesting. I spoke about recording engineers setting equipment levels to record an instrument - and even the song that is played on that instrument. With sample libraries, the sampling engineer doesn't know how the samples will be used. So she needs to record the samples so that the softest and loudest samples all fit within the required dynamic range. Some sample libraries compress the dynamic range too - but many don't much, or at all! This means that for a piano, the softest notes will play back at a very low level in your DAW indeed. I just tested one here - the softest note of a soft patch was below -50 dBFS peak, and the loudest was -3 dBFS. That's a BIG dynamic range - and, from memory, around the range of a concert piano. (I'm lucky enough to have a really lovely piano to record, and I've only ever had around a 35 dB range from my somewhat uncouth playing! I'm not that controlled! Ha!)

So then, play that back in Kontakt. Often library developers put the Kontakt fader at -6 dBFS by default. Why? So that if you play back a number of sounds together, they don't peak straight away. (When you add sounds together, the peak levels can increase: the measured level increases by 6 dB when two identical, coherent signals are added together.) So they're being slightly - but not TOO - conservative with levels, trying to make it easier for the user not to cause peaks.
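That 6 dB figure for coherent summing is easy to verify: adding an identical copy doubles the amplitude, and 20·log10(2) ≈ 6.02 dB. A quick check (with an arbitrary 440 Hz test tone):

```python
import numpy as np

t = np.arange(48000) / 48000.0
x = np.sin(2 * np.pi * 440 * t)  # one signal
double = x + x                   # two identical, coherent copies

gain_db = 20 * np.log10(np.max(np.abs(double)) / np.max(np.abs(x)))
print(round(float(gain_db), 2))  # 6.02
```

Uncorrelated signals sum less aggressively (about +3 dB in power on average), which is why the coherent case is the worst case a library developer has to leave headroom for.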

What this all means is that playing back some super-soft velocities on piano can mean very low average levels compared to digital zero (that's 0 dBFS!).

And this is where your mix / mastering decisions come into play - that others in this thread have started to talk about.

Normalising is just an algorithm that looks for your biggest peak (let's say it's -12 dBFS) and brings it up to another value (say 0 dBFS). It then applies a gain of 12 dB to the entire track (equally to EVERY sample), and the overall volume of your track is increased. There is nothing wrong with doing this if needed. It's a great tool. It's just gain. You NEED to change gain for different delivery formats.

But you can do more than this. You can slightly reduce the dynamic range using compressors. This can be done almost imperceptibly, or horribly (or for deliberate artistic effect), depending on how you use the compressor and exactly what the compressor is doing (a lot of the time they don't JUST compress... and there are tonnes of different models for compressing!). You can equalise to help balance the frequencies you hear (which WILL change your dynamic range slightly) - or to allow slightly more compression at times - or do so many different creative things.

Generally speaking, you balance for tone, then compress slightly, then bring the level as close to digital zero as possible, and then measure for loudness (say in LUFS!). Is the measured LUFS level what you want it to be? Great. If not, what decisions can you make to change it? Note: you can also automate the level of your whole performance - which is like a really SLOW compressor - in order to reduce dynamic range if needed. Once you get into mixing, there's a BUNCH of things you can do / try.
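As a toy illustration of the compression step - instantaneous attack/release, with made-up threshold and ratio values; real compressors smooth the gain over time, so this only shows the shape of the level math:

```python
import numpy as np

def compress(x, threshold_db=-24.0, ratio=3.0):
    """Toy peak compressor: samples above the threshold have their
    overshoot reduced by the ratio. No attack/release smoothing -
    this is only to show the level arithmetic."""
    level_db = 20 * np.log10(np.maximum(np.abs(x), 1e-9))
    over = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over * (1 - 1 / ratio)  # shrink the overshoot
    return x * 10 ** (gain_db / 20)

x = np.array([0.01, 0.1, 0.5, 1.0])  # about -40, -20, -6, 0 dBFS
y = compress(x)
print(np.round(20 * np.log10(y), 1))
```

The quiet sample below the threshold is untouched, while the loud ones are pulled down - shrinking the dynamic range so the whole signal can then be raised toward digital zero, exactly the sequence described above.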

So there you have it. I've skipped over many things, probably oversimplified some bits, and over-explained others. But hopefully it gives you a starting point to learn some of the concepts - and to ask more questions. Never stop asking. My old assistant - now our head mix engineer here - went from knowing not much about digital audio (even with a master's in film sound design) to going back to university this year to study acoustics part time. His questions keep coming - but damn, he's learning a tonne and doing some awesome work!

Cheers!


*I've made LOTS of assumptions and shortcuts all through this post. This is a fun one, though, since it is possible to measure above 0 dB: intersample peaks - the calculated level of the audio between samples - can go above 0 dB, though we try to avoid this as much as possible!


----------



## Nick Batzdorf (Apr 5, 2019)

colony nofi said:


> I really think OP is looking for genuine advice, not knowing a bunch of things a lot of us probably dismiss as assumed knowledge. There are no such things as stupid questions, and information that we all know can appear much simpler to us than to those that don't know (and may not know the right questions to ask.)



Right, but he (I think he's a he) got the answer he was looking for early on.

Your posts about level in general are very good, and far more than he was asking. I only got frustrated at the ludicrous posts in this thread!


----------



## Agondonter (Apr 6, 2019)

Wow, may I say how grateful I am that you took the time to explain all these concepts to me. Thank you so much! I know that what I am asking is very basic to most of you, but for me this simple matter has been a headache for weeks now. I will definitely go through your posts multiple times in order to absorb and understand everything you said.

So, if I understood it correctly, reducing the dynamic range makes the softer tones louder and the louder ones a bit quieter, right? So that even at an average volume both extremes can be heard comfortably. After fixing the problem with my track being very quiet (by normalizing, or by turning the Kontakt fader to 0 from -6 dBFS and also increasing the GAIN afterwards), my next issue was that the part where the piece was getting louder was too loud compared to the rest of the piece. The sample library I am using has a built-in function to increase or reduce the instrument's dynamic range. Is it advisable to adjust the dynamic range that way, or to do it as you said using compressors and EQ?

I did try using a compressor by the way and noticed a considerable increase in loudness, but my knowledge is still very basic as to how to use it correctly. I will have to experiment more with it.

Do you think it is wrong if I turn the Kontakt fader up to 0 dBFS? So far, I have not seen any clipping happening. Also, should I increase the GAIN, if I have to, before doing the EQ and adding the compressor, or afterwards? Sorry if my questions are redundant. Just trying to understand the whole thing a bit better! Thank you again!

Cheers,
Alex


----------



## Agondonter (Apr 6, 2019)

Nick Batzdorf said:


> Right, but he (I think he's a he) got the answer he was looking for early on.
> 
> Your posts about level in general are very good, and far more than he was asking. I only got frustrated at the ludicrous posts in this thread!



We all inevitably have to go through the learning process of asking questions and/or giving answers that sound stupid. For this reason, many people are afraid of asking questions, which I think is a pity. A lot of times, especially for beginners, the answer is right in front of them, but because it is something new to them, they simply cannot make the connection. For example, before asking this question, I didn't know what audio bouncing is and had never heard of it. I had heard of normalization, but had no idea what it does or how to use it. Now, some of these techniques and terminologies are a bit clearer to me, and I hope that any other beginners who happen to read this thread might also benefit.

All in all, thanks for bearing with me and for your help!

Cheers


----------



## shawnsingh (Apr 6, 2019)

Agondonter said:


> by turning the kontakt fader to 0 from -6 dBFS


Just want to double check whether this was a typo. 0 to -6 dBFS would be a volume decrease (i.e. it gets quieter). Turning the Kontakt fader from 0 to +6 dBFS would increase the volume.
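For anyone wondering what those fader numbers mean in linear terms: a dB change maps to an amplitude multiplier via 10^(dB/20). A quick sketch:

```python
def db_to_gain(db):
    """Convert a dB change into a linear amplitude multiplier."""
    return 10 ** (db / 20)

print(round(db_to_gain(6), 3))   # 1.995 - so +6 dB is roughly double the amplitude
print(round(db_to_gain(-6), 3))  # 0.501 - and -6 dB is roughly half
```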



Agondonter said:


> I noticed that the part where the piece was getting louder was too loud compared to the rest of the piece. The sample library I am using has a built-in function to increase or reduce the instrument's dynamic range. Is it advisable to adjust the dynamic range that way or do it as you said using compressors and EQ?



This is probably an artistic choice! It also depends on what the built-in dynamic range function is doing, which can help guide your decision. I can imagine it working in two different ways:

Maybe that function is just using underlying Kontakt FX to work like a typical audio dynamics compressor. In that case, it will work similarly to any compressor FX plugin, and the difference between the built-in version and your own compressor FX would just be the depth of control/options you have to configure it, versus the convenience of having it as part of the Kontakt instrument.

But the second possibility is that the built-in function works like a "MIDI dynamic range" feature, and this could be really useful for intimate piano pieces. Explanation: most virtual instruments are "velocity sensitive", i.e. the volume of each individual sample being played depends on MIDI note velocity. So if you increase this "MIDI dynamic range" option, the virtual instrument could play low MIDI velocities even quieter, and high MIDI velocities even louder. In other words, it's like you have a larger dramatic volume difference between the quiet MIDI velocities and the strong MIDI velocities. On the opposite end, if you reduce the MIDI dynamic range, then low and high MIDI velocities will have a more similar volume level.

The effect of an audio compressor versus the effect of a MIDI dynamic range control is very different. The audio compressor applies to all notes/sounds simultaneously, and it has no concept of separate notes. Take a piano note, for example, which can start loud and then gradually decay over the sustain. A compressor would have no idea that this is one piano note, and so it could do some kind of awkward volume increase that counteracts the piano's natural volume decay. That can be a desirable effect sometimes, but not always. On the other hand, a MIDI dynamics compressor can keep the natural sustain/decay of each piano note, but change the overall volume of each note completely separately. If the piano library you're using has recorded the different tone colors of the piano playing softly and loudly, then this kind of MIDI dynamic range compression could work really well to create an intimate sound.
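If you want to see what a "MIDI dynamic range" control might be doing under the hood, here's a hypothetical sketch - the formula is an illustration of the idea, not The Giant's actual implementation. It expands or compresses MIDI velocities around a pivot:

```python
def scale_velocity(velocity, amount, pivot=64):
    """Expand (amount > 1) or compress (amount < 1) MIDI velocities
    around a pivot point, clamped to the valid 1..127 range."""
    scaled = pivot + (velocity - pivot) * amount
    return max(1, min(127, round(scaled)))

# Expanding the range: soft notes get softer, loud notes louder
print(scale_velocity(40, 1.5), scale_velocity(100, 1.5))   # 28 118
# Compressing the range: both move toward the pivot
print(scale_velocity(40, 0.5), scale_velocity(100, 0.5))   # 52 82
```

Because this happens before any sample is triggered, the natural decay of each note is untouched - which is exactly the difference from an audio compressor described above.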


----------



## shawnsingh (Apr 6, 2019)

Agondonter said:


> Do you think it is wrong if I turn up the Kontakt fader to 0dBFS? So far, I have not seen any clipping happening. Also, should I increase the GAIN, if I have to, before doing the EQ and adding the compressor or afterwards? Sorry, if my questions are redundant. Just trying to understand the whole thing a bit better! Thank you again!



When working with virtual instruments and mixing, there will always be a thousand different places where you can adjust gain / fader / volume level. Some examples:
- the Kontakt instrument may have gain/volume knobs in the instrument's interface
- the Kontakt volume slider
- the channel fader on your DAW
- many FX plugins including compressors, EQs, etc, will have input gain and/or output gain knobs
- etc. etc.

So your question about where to change the volume mainly depends on three things:
(a) if you have any FX that change behavior depending on volume.
(b) your personal workflow
(c) if you are worried that the output of some software may cause clipping

About option (a): for example, if you have a compressor FX or a distortion FX, the output of those effects will change a lot depending on the input volume. So in those cases you will want to make sure you set the input volume to get the effect you want, and then re-adjust the output volume after the effect to get the level you need.
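Point (a) is easy to demonstrate: the same compressor settings produce very different results depending on the level you feed in. A deliberately crude per-sample compressor sketch (no attack/release smoothing, unlike any real plugin - purely to show the level dependence):

```python
import numpy as np

def compress(signal, threshold_db=-20.0, ratio=4.0):
    """Crude per-sample compressor: level above the threshold
    is reduced according to `ratio`. No attack/release smoothing."""
    level_db = 20 * np.log10(np.maximum(np.abs(signal), 1e-12))
    over_db = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over_db * (1 - 1 / ratio)
    return signal * 10 ** (gain_db / 20)

tone = 0.5 * np.sin(np.linspace(0, 200 * np.pi, 48000))
quiet_in = compress(0.1 * tone)  # entirely below threshold: untouched
hot_in = compress(4.0 * tone)    # far above threshold: heavily squashed
```

Same settings, very different outcomes - which is why you set the level going *into* a compressor first, then trim the output afterwards.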

Other than this FX detail, it shouldn't really matter where you adjust the volume, and you can do it wherever you feel is most useful - whether you want easy access to change the volume, or want it to be set-once-and-forget-it, hidden somewhere in your template, etc.

About option (c): 99% of the time this should not be an issue. We could discuss it if you're concerned about it though.


----------



## ceemusic (Apr 6, 2019)

Do you set your levels up before recording? Play your piano so the loudest peaks are around -6/-8 dB, then record & mix.

Then have it mastered so it fits in with the relative volume of the other tracks in the project or album, or based on where you plan on having it streamed or broadcast.


----------



## Nick Batzdorf (Apr 6, 2019)

Agondonter said:


> We all inevitably have to go through the learning process of asking questions and/or giving answers that sound stupid



We have to go through the process of asking questions that sound stupid, but giving stupid advice... not so sure about that.


----------



## Agondonter (Apr 7, 2019)

Thank you Shawn again for two very informative posts. The only thing I am reluctant to touch is the channel faders. I find myself only doing that if I am using prerecorded effects that are either too loud or too quiet.

I will experiment with both the built-in dynamic range and compression and see what works best. My goal is to achieve an intimate sound that doesn't lack brightness, if possible. I am not sure if The Giant is the correct piano library for that though... The piano by itself sounds very percussive and is overloaded with harmonics/overtones (probably due to the length of the strings).



ceemusic said:


> Do you set your levels up before recording? Play your piano so the loudest peaks are around -6/-8 db, record & mix.



I saw that being mentioned before and I am not sure how to do that. I use my stage piano, that currently doubles as a midi keyboard to record. My audio interface has a gain knob for input levels, but since I am not using a microphone I though it wasn't relevant. The piano (a Roland FP-7) has a volume knob and a balance knob, but both don't seem to affect the sound at all...

I am planning on uploading my tracks on Youtube for the time being. My knowledge is not sufficient enough for an album release.

Cheers,
Alex


----------



## Agondonter (Apr 7, 2019)

Nick Batzdorf said:


> We have to go through the process of asking questions that sound stupid, but giving stupid advice... not so sure about that.



I might be guilty of doing this in the past, too. Sometimes we think we know something and it turns out that we don't or that there is a better way. It happens all the time with interpersonal relationships. Life is all about failing and learning, making mistakes and learning from them (or not).

Cheers


----------



## shawnsingh (Apr 7, 2019)

Agondonter said:


> I saw that being mentioned before and I am not sure how to do that. I use my stage piano, that currently doubles as a midi keyboard to record. My audio interface has a gain knob for input levels, but since I am not using a microphone I though it wasn't relevant. The piano (a Roland FP-7) has a volume knob and a balance knob, but both don't seem to affect the sound at all...




You're right, that is not relevant in your case if you are using the keyboard only for MIDI. If there is no audio coming from the keyboard, and nothing being recorded from your audio interface, then those volume knobs will not affect the volume of what you are doing.

In your case, the piano sends MIDI to your computer and the MIDI eventually reaches Kontakt. The sound is ultimately generated by Kontakt.

Sorry if you already mentioned earlier, but are you using Kontakt as standalone software? Or are you using it as a plug-in in your DAW? Which DAW are you using?

What you want to look for is some kind of audio meter in your software which shows you the level of your audio signal. There is at least one in Kontakt, but if you are using a DAW, the meters in your DAW will have clearer dB units labeled and a clearer visualization. These dB meters will be somewhere in the mixer part of the DAW software.

Once you find the right meter that visualizes your audio, and once you find a reasonable fader that affects the volume, there's ceemusic's recommendation of setting levels so they peak at approx -8 dB - I think that was mentioned with the assumption that mastering will be able to increase the loudness further. But if you are self-producing without mastering, you'll want to consider the other advice too, about using a compressor/limiter. Cheers!


----------



## ceemusic (Apr 7, 2019)

Agondonter said:


> My audio interface has a gain knob for input levels, but since I am not using a microphone I though it wasn't relevant. The piano (a Roland FP-7) has a volume knob and a balance knob, but both don't seem to affect the sound at all...



Learn how your audio interface operates! YES - not only is the gain knob for input levels relevant, it's KEY!

Getting nominal levels set is the first & most important factor you need to address before going forward. If not, suggestions like normalizing might make the track louder, but the noise floor will come up with it.


----------



## shawnsingh (Apr 7, 2019)

ceemusic said:


> YES, the gain knob for input levels is very relevant.



But this only applies when actually capturing audio from the inputs of the audio interface. My understanding is that Agondonter is just using midi to control Kontakt with The Giant piano instrument loaded.

Agondonter, I think part of the difficulty in answering your question is that we don't know the exact setup you have. So you are getting a lot of great advice about a lot of different scenarios. Hopefully that's not causing too much confusion and you'll be able to figure out which advice is actually applicable in your case.


----------



## Pantonal (Apr 7, 2019)

marclawsonmusic said:


> When I mix, I often start with a gain control on my mix bus. I use this to raise the volume of the track to a healthy level (depending on the piece) and then add other plugins after that for other processing.
> 
> I think Nick is right - turn it up. When you don't get enough signal by moving the fader - use a gain plugin.
> 
> I don't think normalization is a good idea if you are trying to make a pro mix.


Are you implying I was suggesting normalization? I wasn't. I was suggesting increasing midi volume which is different from audio volume. It's a parameter to tell the sample player to play louder (or softer).


----------



## Agondonter (Apr 8, 2019)

shawnsingh said:


> Sorry if you already mentioned earlier, but are you using Kontakt as standalone software? Or are you using it as a plug-in in your DAW? Which DAW are you using?



Both. When I am trying things out, I use the standalone version. I then move to Sibelius (planning on changing to Dorico at some point) and write down my piece, using Kontakt as a plug-in, and then in Cubase 5 (I am planning to upgrade to 10 when there is a sale... hopefully soon) I either perform and record the piece or import the MIDI file.



shawnsingh said:


> What you want to look for are some kind of audio meters in your software which show you the level of your audio signal. At least there is one in kontakt, but if you are using a DAW, the visualization on the meters from your DAW will have more clear dB units labeled and have a clearer visual. These dB meters will the somewhere in the mixer part of the DAW software.
> 
> Once you find the right meter that visualized your audio, and once you find a reasonable fader that affects the volume, ceemusic's recommendation of setting levels so they peak at approx -8 dB - I think that was mentioned with the assumption that mastering will be able to increase the loudness further. But if you are self producing without mastering, you'll want to consider a the other advice too, about using a compressor/limiter. Cheers!



I have SPAN from Voxengo, Youlean Loudness Meter and VUMT Deluxe. I check the LUFS reading from Youlean and the VU reading from VUMT Deluxe, apart of course from the dBFS reading of the master channel. I am still unsure about the VU reading. I don't quite understand it, but to be honest I have also not taken the time to read the manual that came with the software.

I composed another piece for a single pad and two voices the other day and managed to bring the volume to a sufficient level by increasing the Kontakt fader and then the gain by 6 dB. Then I tried compressing the track using Supercharger from NI, but I noticed that the continuous, smooth sound of the pad became uneven, creating a crackling effect after the compression was applied (I just used the preset with the least impact). The built-in limiter from the DAW had no immediate effect other than increasing the volume output. Is it always advisable to use a compressor and a limiter?



shawnsingh said:


> But this only applies when actually capturing audio from the inputs of the audio interface. My understanding is that Agondonter is just using midi to control Kontakt with The Giant piano instrument loaded.
> 
> Agondonter, I think part of the difficulty in answering your question is that we don't know the exact setup you have . So you are getting a lot of great advice about a lot of different scenarios. Hopefully that's not causing to much confusion and you'll be able to figure out which advice is actually applicable in your case.



Yes, I am not capturing audio from the inputs of the audio interface. My mic is not even connected. I use my stage piano to record.

I have an old notebook that has Sibelius, Cubase 5 and several plug-ins and libraries installed. My audio interface is Komplete Audio 6 from NI. I also have a Korg nanoKONTROL 2 for automating, which I haven't used at all so far. I am still very confused about automation and how it works, but that is another story for another thread maybe. 

To record, I have a Shure mic and my old stage piano (a Roland FP-7). In the corner of my room sits a cello that is missing a string, and a megabass waterphone from the inventor himself.

Cheers


----------



## VinRice (Apr 8, 2019)

This is a very odd thread...


----------



## ed buller (Apr 9, 2019)

get this

it's free.

https://loudmax.blogspot.com/

Slide the top slider to the left until it's loud enough. For a piano you don't really want it to work as a limiter... just bringing up the level... but next time make sure you print at the correct level.

best


ed


----------



## labornvain (Apr 24, 2019)

To the OP: I'm sorry, but someone should have told you immediately to just do a Google search for "gain staging". It's critically important to understand, and there is a plethora of tutorials on the web on how to do it.

Long story short, there are two kinds of volume adjustments for a Kontakt instrument.

One is the performance dynamics which should always be controlled by the mod wheel, CC11, and/or velocity. These controls are used to simulate the natural dynamics of a performer, and they often use methods other than just turning the volume up or down. So if you need volume changes in the performance, these will be the most realistic.

The other kind of volume control is where gain staging comes in. And for this you use either the instrument's master volume, or Kontakt's master volume, which is at the top of the Kontakt interface and may have to be unhidden.

The way to gain stage Kontakt is simple. Set its channel fader in your DAW to zero. Then, using Kontakt's master volume, adjust it so that, at the performance's loudest point, it peaks at around -12 dB.

Why -12 dB? For one, this is the happy spot for most insert plugins you might want to use. This isn't written in stone, and some plugins might need to be a little hotter, like saturation plugins.

But generally speaking, -12 dB is a good starting point, and it should leave plenty of headroom in the master bus after all of your tracks are assembled.

The main thing to remember is that all gain staging should be done before the signal hits the inserts. This way you're not clipping your plugins. Unless of course that is the desired effect.

On a personal note, I generally frown on using the channel's fader to control volume. Many Kontakt libraries do some quite lovely things with their internal dynamic controls, like switching layers or changing timbre.

Also, some libraries have their own built-in reverb and it sounds really unnatural to change the volume of a reverb return when you change the volume of the instrument that's feeding it.

So when you change the volume or automate the output of a Kontakt instrument that has its own built-in reverb send, you're essentially automating the reverb return on that instrument, which is whacked. Or really cool, depending on how you use it.

General rule of thumb is that if your reverb is meant to simulate a real natural space, then that reverb's output should be left alone.

So what I do whenever I start a new Kontakt track is I'll turn up the mod wheel or CC11 to the maximum volume that I'm going to use, which is not always 100%, then I'll gain stage to -12 dB using Kontakt's master volume, then I'll record my part.

When mixing, if I need to tweak the volume, including adjusting the dynamic range, instead of using compression or gain automation I'll try to do so using the instrument's built-in dynamic controls. Something not loud enough? Crank up its velocity levels or turn up CC11.

Cubase even has a cool feature that allows you to "compress" the dynamic range of a part by reducing the difference between the highest velocity level and the lowest velocity level. Very useful.

So I rarely ever use fader automation and pretty much only use compression as a tonal effect, not to fix dynamic problems.


----------



## shomynik (Apr 24, 2019)

It seems there is some confusion between analog and digital audio here, and I would like to share some things I learned over the years - maybe someone will find it useful. And please correct me if I'm wrong.

There is no "efficient, healthy" capturing of audio from a sample library. The OP is not recording anything; there is no "gain" in terms of analog amplification of a signal, and there is no changing the signal-to-noise ratio, which was fixed once those samples were recorded. The samples are 24-bit; if your signal is around -40 dB, just apply clean digital gain, aka PT Clip Gain. Cubase has an equivalent as well, or there are numerous free and paid gain plugins you can insert. BUT it has to be pure digital gain in order not to modify the sound (the Logic Pro Gain that sits in the panel on the left is NOT clean - it simulates analog desk saturation, and the more you add, the more it modifies the sound; I'm not sure if that can be turned off, but it's a negative in my book of DAWs). Also, volume/gain in a DAW are all the same thing - level-changing algorithms - so whether you change it in Kontakt, in a plugin, or with your DAW fader, if they are clean they are all doing the same thing (they are probably different algorithms, as there is probably more than one way to accomplish a level change, but that's not relevant here).

Our DAWs compute in 32-bit floating point, which means they have a dynamic range of around 1500 dB. Nothing is lost if you bounce (32-bit or 24-bit) your instrument at a low level; you can change that level up and down in your DAW with no consequences (as long as you don't bounce above 0 dB on the digital scale, as that is the top). I could send you my piano track (24-bit WAV) at -40 dB, and you could add gain to it in your DAW and get the exact same thing I had before I lowered it.
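That claim is easy to verify in code: attenuate a float signal by 40 dB, restore it, and you get the original back to within float rounding - far below anything audible. A sketch, assuming NumPy (float32, like a 32-bit float bounce):

```python
import numpy as np

# A quiet bounce is not "damaged": attenuate by 40 dB, then restore,
# and the result matches the original to within float32 rounding error.
rng = np.random.default_rng(0)
original = rng.uniform(-1, 1, 48000).astype(np.float32)

down = original * np.float32(10 ** (-40 / 20))   # the "-40 dB piano track"
restored = down * np.float32(10 ** (40 / 20))    # gain added back in the DAW

max_error = float(np.max(np.abs(restored - original)))
```

With fixed-point recording chains this would cost you signal-to-noise; with float math the error stays around the seventh decimal place.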

We do have to pay attention to gain staging when introducing other processes (plugins) which are balanced to work with certain signal levels. And of course when we want to reach a certain level with our final track. But digital audio has simplified many things by introducing so much headroom, and as far as I know, many rules from the analog age don't apply anymore.

But of course, when we record live instruments using mics and preamps, signal levels are crucial.

Cheers.

Milos


----------



## bill5 (Apr 24, 2019)

> There are no such things as stupid questions


Of course there are, but this wasn't one of them (and I don't recall seeing one on this site). Generally though, yeah, there are plenty.


----------



## Agondonter (Apr 25, 2019)

@labornvain and @shomynik thank you so much for your added clarifications. Since I started the thread, my understanding has improved significantly and you have just added a lot to it. I am now researching more about Gain Staging and digital gain.

@labornvain I used to increase the gain from the mixer in Cubase, but after your suggestion I am going to do it using the Kontakt master volume. As for the reverb, I recently bought VVV and Blackhole, so I turn the library's reverb off and use one of the two. Reverb is another area where I need to learn a whole lot. Right now I am just using the presets, with the exception of the Blackhole reverb, where I managed to create a nice "natural sounding" reverb for the piano sound I mostly use.

@shomynik I am researching digital gain right now. What is your suggested amount of headroom, by the way? I mostly write ambient, minimalistic music. So far, I have never experienced clipping even with heavily increasing the gain.

Cheers,
Alex


----------



## shomynik (Apr 25, 2019)

Agondonter said:


> @labornvain and @shomynik thank you so much for your added clarifications. Since I started the thread, my understanding has improved significantly and you have just added a lot to it. I am now researching more about Gain Staging and digital gain.
> 
> @labornvain I used to increase the gain from the mixer in Cubase, but after your suggestion I am going to do that using the Kontakt Master volume. As for the reverb, I recently bought VVV and Blackhole, so I turn the library's reverb off and use one of the two. Reverb is another area I need to learn a whole lot. Right now I am just using the presets with the exception of the Blackhole reverb where I managed to create a nice "natural sounding" reverb for the piano sound I mostly use.
> 
> ...



No worries Agondonter, glad you found it helpful.

In 32-bit floating point there is no clipping while you are working in a clean DAW that doesn't have any analog simulations going on. Cubase is such a DAW. The red channel flashing is just an indicator that you have peaks over 0 dB on the digital scale, and you have to worry about that only when bouncing/exporting, in order to avoid nasty digital clipping. That is very easily done - you can even lower your stereo master fader at the end until the clipping is gone.

As for recommended levels, you got very good advice in this thread: many plugins are balanced to work from -12 dB to -16 dB, so it would be great if you could maintain that level all the way through. But to make things easier, with no consequences, you only need to think about levels when:

- you use your plugins: you can adjust the input gain of that plugin or, if the plugin doesn't have an input control, you can insert a separate gain plugin before it, or use any other level control before the plugin.

- you bounce/export to an audio file: nothing should go over 0 dB on the digital scale (which is the scale on the Cubase faders). Again, for this you can use plugins on the master channel - either level controls in your existing plugins (EQs, comps, limiters, etc.) or a separate clean gain plugin.
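As a sketch of why clipping only matters at export: inside the float mixer, peaks over 1.0 (0 dBFS) survive intact and can be pulled back down later, but converting to fixed point (e.g. 16-bit for a WAV) flattens anything still over full scale. Illustrative code assuming NumPy (a real DAW export may also dither):

```python
import numpy as np

# Inside the float mixer, peaks over 1.0 (0 dBFS) survive intact...
bus = np.array([0.5, 1.4, -1.7], dtype=np.float32)

# ...so pulling the master down 6 dB before export avoids clipping:
master = bus * np.float32(10 ** (-6 / 20))

def to_16bit(x):
    """Fixed-point export: anything still over full scale gets clipped."""
    return (np.clip(x, -1.0, 1.0) * 32767).astype(np.int16)

clipped = to_16bit(bus)     # the 1.4 and -1.7 peaks are flattened to full scale
clean = to_16bit(master)    # every peak preserved
```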

Also, you got tons of great info regarding final track levels, as well as many ways to get to a desired one.

Here is a good pdf on digital audio, it's a very easy read:

http://www.popmusic.dk/download/pdf/levels-in-digital-audio.pdf

Milos


----------



## Agondonter (Apr 27, 2019)

shomynik said:


> No worries Agondonter, glad you found it helpfull.
> 
> In 32 bit floating point there is no clipping while you are working in a clean daw that doesn't have any analog simulations going on. Cubase is such a DAW. The red channel flashing is just an indicator that you have peaks over 0db digital scale, and you have to worry about that only when bouncing/exporting in order to avoid nasty digital clipping. That is very easily done, you can even lower you stereo master fader in the end until the clipping is gone.
> 
> ...




Thank you Milos! The pdf is indeed very easy to understand and informative! 

Just for clarification: no matter what genre I am writing in, the final track should leave a headroom of 0.3 decibels, right?

Cheers


----------



## shomynik (Apr 27, 2019)

Agondonter said:


> Just for clarification, no matter what genre I am writing in the final track should leave a headroom of 0.3 decibels, right?
> 
> Cheers


Absolutely!


----------



## Agondonter (Apr 28, 2019)

Hmmm, I am having a problem with gain staging. I am writing something new, and I am at the point where I solo each track, play it where it peaks, and adjust the master output from Kontakt so that the VU meter reaches close to 0. When I then play all the instruments together, they are very imbalanced. The bass sounds overwhelming, the violas can hardly be heard, etc. What I also noticed is that the VU meter usually peaks at the end of the note, where the reverb happens.

Something else that troubles me is the CC11 and CC7 MIDI messages. I am composing in Sibelius, where I input the MIDI messages, and then I import the MIDI file into Cubase. I tried using only CC11, but the crescendi and diminuendi were really underwhelming and I couldn't balance the instruments with one another. Then I tried using CC11 for the initial attack and CC7 to do all the cresc. and dim., and it worked much better, but the volume in Kontakt also changes with every CC7 change... Is this correct?

I am so lost right now...


----------



## Agondonter (Apr 28, 2019)

Here is the track without any changes, other than some minor tweaking of Kontakt's Master Output for each instrument so that they don't overpower each other...


----------

