
My tracks are too quiet...

So, here's the thing. It was always that dark. I could hear that at low volume....or at full scale.

VU meters (real and digital) are a little different....but the good news is, with the forgiving noise floor of digital, it's all really a ballpark setting. As long as the Klanghelm thing can calibrate to different levels...and the needle bounces nicely, it should be fine. Work for film should use their -22 dBFS = 0 VU...a majority of analog-modeling plug-ins use -18....I generally set mine to -20 to sort of hedge those bets. But, again, ballpark, so it's not really THAT important which one you pick, so long as you pick one.
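
To make the calibration point concrete: a VU-style reading is just the measured level offset by whichever reference you calibrated to, which is why the exact choice is a ballpark call. A minimal sketch (the function name is made up):

```python
# Hypothetical helper: map a dBFS level to a VU reading, given the
# calibration point that makes reference_dbfs read as 0 VU.

def dbfs_to_vu(level_dbfs: float, reference_dbfs: float = -20.0) -> float:
    return level_dbfs - reference_dbfs

# The same -20 dBFS signal reads differently under each calibration:
print(dbfs_to_vu(-20.0, reference_dbfs=-20.0))  # 0.0  (my hedge-the-bets setting)
print(dbfs_to_vu(-20.0, reference_dbfs=-22.0))  # 2.0  (film-style calibration)
print(dbfs_to_vu(-20.0, reference_dbfs=-18.0))  # -2.0 (analog-modeling plug-ins)
```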

As a related note, recent Cubase has a K-meter in the right panel. For all intents and purposes, that's a modern digital replacement for a VU. For ME, as someone who grew up with VU ballistics and just knows what they're "supposed to look like" on different kinds of material, I've never warmed to the K stuff....but it should provide a similar function, and might be easier since it's built in. Just know you're trying to get into the YELLOW area on it.
 
So, here's the thing. It was always that dark. I could hear that at low volume....or at full scale.

What can I do to bring some light to the sound but retain the soft quality? Is this simply a matter of EQ, or does the recording also play a role? As far as the recording goes, I just enable Write mode, click the Record button in Cubase, and play the score.

Cheers,
Alex
 
Any EQ with a tilt and a wet/dry knob would work fine. Parallel EQ is another way to handle it (although that's not significantly different from what I suggested).
 
Parallel EQ is what most people call a phaser.

Up is louder. I'm truly baffled why this thread is still going.

There are zero phase issues with parallel EQ, since you aren't moving the waveform out of alignment.

And dB ≠ perceived loudness. That's why clipped sausage mixes don't sound as loud as professionally mixed tracks.
 
The only reason you might have phase issues with parallel EQ is if you're going out of the box and not re-aligning afterward to correct for the latency of physically travelling out of an interface, into a device, and back into the interface.
 
If you really want a good mix, try coincident pair mono with omni mics - but you have to use a 1/4" TRS-to-220V converter to carry the signals.
 
It's a sample, and you selected the "softer" tone quality. Sure, you can excite it or EQ it....but it would be better to fix it at the source, which in this case is the sample.

While you're getting your level right....mess with NOT the softest voicing....I don't know the sample, but often there are gradations....you probably don't want "hard," but if it's set all the way to softest....just move it a little toward hard....
 
There are zero phase issues with parallel EQ, since you aren't moving the waveform out of alignment.
I'm not so sure about this...

Minimum-phase EQ will change phase in a frequency-dependent way; linear-phase EQ avoids that shift but introduces pre-ringing. Either way there is some smearing of the audio signal over time. If that gets summed with the original signal, it seems like there could be phase issues no matter how you try to align their timing.

Maybe it's just not audible in many cases, or maybe it produces a pleasing result? Or maybe a wet/dry knob on an EQ just scales all the gain factors of all bands, and isn't really doing parallel versions of the signal at all.
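
For the curious, here's a quick experiment (a sketch assuming numpy/scipy, not a definitive answer) that bears this out: blend a unity dry path with a minimum-phase low-passed copy, and the summed response near the cutoff differs from what a phase-ignoring average of the two magnitude responses would predict, i.e. the two paths really do interact in phase:

```python
import numpy as np
from scipy import signal

fs = 48000
# Minimum-phase 2nd-order low-pass at 1 kHz, standing in for one EQ band.
b, a = signal.butter(2, 1000, btype='low', fs=fs)

w, h_wet = signal.freqz(b, a, worN=4096, fs=fs)
h_dry = np.ones_like(h_wet)            # unity-gain dry path
h_sum = (h_dry + h_wet) / 2            # 50/50 parallel blend, phase included

actual_db = 20 * np.log10(np.abs(h_sum))
naive_db = 20 * np.log10((np.abs(h_dry) + np.abs(h_wet)) / 2)  # phase ignored

i = np.argmin(np.abs(w - 1000))        # look at the cutoff frequency
print(f"at 1 kHz: true blend {actual_db[i]:.1f} dB, phase-less blend {naive_db[i]:.1f} dB")
# Prints roughly -4.3 dB vs -1.4 dB: the filter's phase rotation pulls the
# parallel sum down faster than a pure magnitude blend would suggest.
```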
 
It's a sample, and you selected the "softer" tone quality. Sure, you can excite it or EQ it....but it would be better to fix it at the source, which in this case is the sample.

Will do that first, thanks! Perhaps the EQ settings from the sample also need some tweaking...

Thank you all for your help! I have learnt a lot from your replies no matter how self-explanatory my inquiries might be to some...

Cheers,
Alex
 
Or just buy the only EQ you'll ever need (FabFilter), which gives you multiple modes.

I can't imagine any sane audio engineer steering you clear of EQ to avoid phasing. Of course you'd also be told to EQ at the instrument level rather than the master, but all in all, it'll be fine.

If you use more than one mic, you're introducing phasing, which means all stereo recordings have far more phasing than an EQ is going to introduce. I'm trying to imagine what you could possibly be listening to that doesn't make use of an EQ, or that is phase-free for that matter. Old mono one-mic recordings? Is that the benchmark?

Best advice is simply trying what people suggest, and if you like the sound of it, go with it. I intentionally introduce phasing by applying Haas to mono close mics, because of the psychoacoustic benefits of positioning. You would encounter some combing when it's crunched down to mono, but the only time that matters is when someone is listening to your music on a phone speaker (because earbuds are too expensive?).
 
Thanks for the suggestion! I didn't know about the FabFilter EQ. I will look into it once I sort some of the more basic things out!

As you said, right now all I'm doing is trying people's suggestions out. I liked both the peak-normalization and the Gain methods to increase the volume. Now I am trying to bring some balance to the sound, but I am finding it difficult with The Giant library. Even though it is very customizable, the raw sound you have to work with is, in my opinion, not suitable for the kind of music I am trying to write...

Cheers,
Alex
 
Thanks for the suggestion! I didn't know about the FabFilter EQ. I will look into it once I sort some of the more basic things out!

As you said, right now all I'm doing is trying people's suggestions out. I liked both the peak-normalization and the Gain methods to increase the volume. Now I am trying to bring some balance to the sound, but I am finding it difficult with The Giant library. Even though it is very customizable, the raw sound you have to work with is, in my opinion, not suitable for the kind of music I am trying to write...

Cheers,
Alex


I can tell you that FabFilter plugins are generally worth their weight in gold. When I first started, the idea of paying money for an EQ when a DAW usually has one stock was nuts to me. But FabFilter makes extremely well-performing, well-featured, well-implemented tools for audio work. It's not some weird, dirty, analog-modelled EQ; it's just an EQ that, if you know what you actually WANT the EQ to do, can do it. EQ is probably the single most needed tool for audio work, and if you can get in and do exactly what you need right away, with a very streamlined UI, then you're saving yourself frustration and getting straight to results.

Problem area? Solo a band in one click, reverse the gain, change the Q with your mouse wheel - done.

Plus, the analyzer is really nice, because you can learn a lot about where your instruments sit in the mix simply by studying it, either in real time or by grabbing the spectrum.
 
Why not use something like MAutoVolume to automatically balance the level for you? Basically like using automation for volume rides. But much easier.
 
Another way to increase volume would be to increase the MIDI volume level. I am not suggesting you increase velocity, because that would change the quiet dynamic level of the piece; increasing MIDI volume simply tells Kontakt (or Play) to output more amplitude. It's a useful factor in gain staging, and some sample libraries default to a fairly low setting. Normalizing does pretty much the same thing as long as you're working in the digital domain. Things are different when you're talking recorded audio.
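
If it helps to see the distinction in code form, here's a minimal sketch using the third-party mido library (the default output port and channel are assumptions): CC 7 raises the channel's output level while the low velocity still selects the soft sample layers.

```python
import mido

out = mido.open_output()  # default system MIDI output (an assumption)

# Tell the instrument to output more amplitude (CC 7 = channel volume)...
out.send(mido.Message('control_change', channel=0, control=7, value=110))
# ...while the performance itself stays quiet (low velocity = soft samples).
out.send(mido.Message('note_on', channel=0, note=60, velocity=30))
out.send(mido.Message('note_off', channel=0, note=60))
```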
 
Or just buy the only EQ you'll ever need (FabFilter), which gives you multiple modes.

I can't imagine any sane audio engineer steering you clear of EQ to avoid phasing. Of course you'd also be told to EQ at the instrument level rather than the master, but all in all, it'll be fine.

If you use more than one mic, you're introducing phasing, which means all stereo recordings have far more phasing than an EQ is going to introduce. I'm trying to imagine what you could possibly be listening to that doesn't make use of an EQ, or that is phase-free for that matter. Old mono one-mic recordings? Is that the benchmark?

Best advice is simply trying what people suggest, and if you like the sound of it, go with it. I intentionally introduce phasing by applying Haas to mono close mics, because of the psychoacoustic benefits of positioning. You would encounter some combing when it's crunched down to mono, but the only time that matters is when someone is listening to your music on a phone speaker (because earbuds are too expensive?).

Oy gevalt.

I'm kidding about coincident pair mono. For heaven's sake.

Nobody tells you to avoid EQ. What any engineer will tell you is that standard EQ will have some ringing at the edges of the passbands; that's how it works. Analog synths exploit that - it's called Resonance. In smaller doses, like on standard EQ, ringing can add a subtle color or it can sound like total shite - for example if you use too much on a sampled piano, sampled strings, and sometimes ringing cymbals.

(For reasons I don't understand, that's usually less true on real piano).

Linear phase EQ works much better for that - usually. Usually.

Next:

"Phasing" - the different time of arrival at the mics in a stereo recording... that's pretty much what stereo *is*! (Edit: Well, that plus amplitude differences.)

If I were ornery I'd even pick apart some of what you're saying about the short-delay positioning technique. You absolutely do have to be cognizant of phasing issues if it's going to be played in mono!

Turn up the freaking piano, or boost the MIDI velocities if you want. But parallel EQ is not a thing.
 
Another way to increase volume would be to increase the MIDI volume level. I am not suggesting you increase velocity, because that would change the quiet dynamic level of the piece; increasing MIDI volume simply tells Kontakt (or Play) to output more amplitude. It's a useful factor in gain staging, and some sample libraries default to a fairly low setting. Normalizing does pretty much the same thing as long as you're working in the digital domain. Things are different when you're talking recorded audio.

When I mix, I often start with a gain control on my mix bus. I use this to raise the volume of the track to a healthy level (depending on the piece) and then add other plugins after that for other processing.

I think Nick is right - turn it up. When you don't get enough signal by moving the fader - use a gain plugin.

I don't think normalization is a good idea if you are trying to make a pro mix.
 
Hey @Nick Batzdorf - I really think OP is looking for genuine advice; he just doesn't know a bunch of things a lot of us probably dismiss as assumed knowledge. There are no such things as stupid questions, and information that we all know can appear much simpler to us than to those who don't know (and may not know the right questions to ask.)

There's some good info in this thread already. Given I'm procrastinating before trying to edit a piece of music that I really don't want to, maybe I'll have a little go at helping out too. Also, I need coffee.

Let's talk about audio level, gain, mix outputs and listening.

The audio level that you work at while composing is (often) very different to the final delivery level that someone listens to on a phone, laptop or TV, or in a movie theatre - whatever!

It's all about dynamic range, as well as some quirks of digital audio.

I'm going to talk only about digital audio here - there are big differences with analog audio, although some concepts remain the same.

Digital audio has an absolute, nothing can go past this* maximum peak level. This is confusingly measured as 0dBFS. (There's a good reason for that, but probably outside of the scope of this little post.)

This maximum refers to the loudest possible peak. The difference between the peak and measured silence is known as dynamic range. You've probably heard people talk about dynamic range and bit depth before. The "format" or bit depth that you are working at, or of the device you are playing back from, dictates how big a difference in sound level you can represent in the file.

Another way of thinking about it: CDs don't have a way of storing a single audio file that accurately represents both the sound of a jet taking off over your head (around 140dB) and the sound of leaves rustling in the distance (around 10dB). Indeed, that can't be done with many delivery formats at all - or even with much recording equipment. A CD has a dynamic range of 96dB - which is like the difference between the hubbub in a library (40dB) and a jet (140dB) - or a quiet studio environment (20dB) and a chainsaw (120dB). Top-of-the-line pro equipment has a dynamic range of less than 144dB (24-bit) - it's actually closer to 21 or 22 bits in most cases - which is better than a chainsaw compared to the threshold of hearing (what we refer to as 0dB - but that is a different measurement to the 0dBFS maximum peak level - the FS means "full scale").
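
Those 96dB and 144dB figures fall straight out of the bit depth: each extra bit doubles the number of representable levels, adding about 6.02dB of range. A quick check:

```python
import math

for bits in (16, 24):
    dynamic_range_db = 20 * math.log10(2 ** bits)
    print(f"{bits}-bit: ~{dynamic_range_db:.1f} dB dynamic range")
# 16-bit: ~96.3 dB   (CD)
# 24-bit: ~144.5 dB  (pro equipment, in theory)
```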

(Another aside: digital audio samples the voltage of audio thousands of times a second - 44,100 to be exact for CD audio, and other amounts for other equipment (48,000 on iPhones these days, if my memory serves me correctly!). Some equipment does this 88,200 or 96,000 times a second, or 192,000 (!) or even 384,000 (!!!), for all sorts of different reasons. Knowing the reason behind these numbers is for another day....)

So - why all this talk of dynamic range?

Well - in all parts of the recording / sampling / creating process on a computer (or digital equipment) we need to know what our dynamic range is - as well as where our peak is. The peak is absolute. You can't go past it - so if you are recording a piano piece in a studio with microphones, you want to set up all parts of the recording chain to make sure that the absolute loudest part of the performance doesn't go to 0dBFS, else it will "peak". If audio levels go over 0, they "clip" and it makes a distorted sound. (Many engineers have used this sound to great effect, but not usually for piano. Ok, maybe Jan Jelinek or Adrian Klumpes or.... ;) )
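
In float terms, that "clip" is literally a hard ceiling on the sample values; here's a tiny illustrative sketch of what happens to a sine that's pushed 6dB over full scale:

```python
import numpy as np

t = np.linspace(0, 1, 48000, endpoint=False)
sine = np.sin(2 * np.pi * 440 * t)

too_hot = sine * 2.0                   # 6 dB over full scale
clipped = np.clip(too_hot, -1.0, 1.0)  # everything past the ceiling is flattened

# The flattened tops turn the sine into a squarish, distorted wave.
print(f"peak before: {np.max(np.abs(too_hot)):.2f}, after: {np.max(np.abs(clipped)):.2f}")
```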

So we give the recording "headroom". A good engineer will know how loud an instrument is going to be, and set the gain structure on all his equipment so it doesn't overload any piece of the recording chain, AND won't go to 0dBFS on his recording equipment.

This means that for a quiet piano piece, the recording may have peaks around -10dBFS, but for another piano piece, this time louder, our engineer may also set things so that the peaks are ALSO around -10dBFS.

Hang on!!! (I've reached the 10000 character limit so I'm cutting here!)
 
Continued....


The quiet piece will have peaks around the same level as a loud piece??! It can be tricky to get your head around, but this one little concept is really quite important to understand. (especially when we start to think about sample libraries that we use when composing - I'll get to this later!)

Both these recordings ALSO go through another mix (and master) stage before being played on Spotify or the radio or wherever. This is because people like to listen to music without turning it up and down constantly, even if one piece is quiet and another is super loud. People like to hear things at approximately the same level all the time.
For radio / Spotify etc., this average level (compared to our 0dBFS maximum) is quite loud. So a mix engineer and then a mastering engineer will do a tonne of things to make sure that the average listening level is at the intended level for the audience. It is a REALLY important part of the process, and the part that you are coming up against with your question. (EQ can also be dealt with by mix AND mastering engineers.)

So here's our situation for Spotify / iTunes / radio etc. The long-term (approximate) average level of a mix needs to be set at a particular point so that the perceived loudness is at around the same level as similar types of music. (Hip-hop will have a louder average than classical, but I'm trying not to get too complicated here!)
There are many, many ways to measure average level. In the past, VUs were used (and were extremely useful for analog audio - and for many folk, still useful in digital audio!). We have newer tools dedicated to digital audio that can also help, like a loudness standard known as LUFS, which I personally think a lot more folk should use / know about!

There are even MORE reasons to try to understand all this when we start to think of different delivery formats. If your music is going on an ad, what level do you deliver it to the mix engineer at? There's more than one right answer to that, depending on a number of factors. But arguments around it come down to the fact that ads are delivered to different mediums at very different levels.

For TV in Europe, the final ad needs to measure -23dB LUFS. In Australia, it needs to be -24dB LUFS (and they are all quite strict about this!). Radio can be -1dBFS, Spotify is now asking for -14dB LUFS. Cinema needs another different level entirely (measured sometimes by a Dolby method, other times just by what the ad distributor wants!).

For music deliveries, different folk want different amounts too. If I deliver a cue to a mix stage with 0dBFS peaks and loudness around -12dB LUFS, the rerecording engineer will probably pull her hair out. (That's VERY loud, and will have a very small dynamic range...) Yet that is sometimes what is delivered to play on radio.

I realise I've spoken a lot about different ways of measuring loudness without explaining them. It is worth looking up and understanding the difference between digital peak metering (dBFS), analog average metering (VU), digital loudness metering (dB LUFS) and other forms of metering as well (BBC meters, K metering etc.!). Waves has a nice primer - there are probably many better ones out there; it was just one of the first Google searches that came up! https://www.waves.com/loudness-metering-explained

I guess the takeaway from all this is: when you listen to Spotify, or just a file on a phone or computer, there is a maximum PEAK level that can be played back (0dBFS).
The closer the AVERAGE level is to this amount, the louder the music will be perceived to be.

ASIDE: Adjusting volume on a computer / phone / device.
If you have a piece of music playing back with peaks of 0dBFS on a phone, yet it is playing back softly, it means that somewhere in the system there is a gain change going on. This gain change can occur either digitally or in the analog domain - but these days it's normally digital.

What happens is this. Your device has a maximum digital level - that's when you turn the phone up to maximum. When this happens, your 0dBFS peaks will be played back at 0dBFS... and be converted to analog from that level (headphones are analog devices!)
When you turn it down, the device is still reading the audio file's peaks at 0dBFS, but the level is being reduced by a simple math algorithm - say to -20dBFS, or even much softer, depending on where the volume control is at.

So say you deliver an audio file with peaks at -10dBFS, and your volume on the phone is set to -20dBFS. (You don't get to see that number - it might just be around half volume!) Your audio file will sound softer than one delivered with peaks at 0dBFS (assuming that it's the same piece of music, or just has a similar overall dynamic range!). Indeed it will sound 10dB softer - which is quite a lot!
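
The whole aside boils down to adding dB offsets, and converting a dB offset to a linear scale factor with 10^(dB/20). A small illustrative sketch:

```python
def db_to_linear(db: float) -> float:
    return 10 ** (db / 20)

file_a_peak_db = 0.0     # mastered right up to full scale
file_b_peak_db = -10.0   # delivered 10 dB lower
phone_volume_db = -20.0  # the hidden gain at "about half volume"

print(file_a_peak_db + phone_volume_db)  # -20 dBFS at the output
print(file_b_peak_db + phone_volume_db)  # -30 dBFS: 10 dB softer, as above
print(db_to_linear(-10.0))               # ~0.316: the factor the samples are scaled by
```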

I did mention earlier that I would talk about levels in sampling.

It's quite interesting. I spoke about recording engineers setting equipment levels to record an instrument - and even the song that is played on that instrument. With sample libraries, the sampling engineer doesn't know how the samples will be used. So she needs to record the samples so that the softest and loudest samples all fit within the required dynamic range. Some sample libraries compress the dynamic range too - but many don't too much, or at all! This means that for a piano, the softest notes will play back at a very low level in your DAW indeed. I just tested one here - and the softest note of a soft patch was below -50dBFS peak. The loudest was -3dBFS. That's a BIG dynamic range, and is from memory around the range of a concert piano. (I'm lucky enough to have a really lovely piano to record, and I've only ever had around a 35dB range from my somewhat uncouth playing! I'm not that controlled! Ha!)

So then, play that back in Kontakt. Often library developers put the Kontakt fader at -6dBFS by default. Why? So that if you play back a number of sounds together, they don't peak straight away. (When you add sounds together, the peak levels can increase. The measured level increases by 6dB when two identical, coherent signals are added together.) So they're being slightly - but not TOO - conservative with levels, trying to make it easier for a user not to cause peaks.
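
That "+6dB for two identical signals" is easy to verify (a sketch assuming numpy): doubling the amplitude is exactly a 6.02dB increase in peak level.

```python
import numpy as np

t = np.linspace(0, 0.1, 4800, endpoint=False)
voice = 0.5 * np.sin(2 * np.pi * 440 * t)  # one voice, peaking at 0.5

peak_one = np.max(np.abs(voice))
peak_two = np.max(np.abs(voice + voice))   # two identical, coherent voices

print(20 * np.log10(peak_two / peak_one))  # 6.02 dB
```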

What this all means is that playing back some super-soft velocities on piano can mean very low average levels compared to digital zero (that's 0dBFS!).

And this is where your mix / mastering decisions come into play - that others in this thread have started to talk about.

Normalising is just an algorithm that looks for your biggest peak (let's say it's -12dBFS) and brings it up to another value (say 0dBFS). So it applies a gain of 12dB to the entire track (equally to EVERY sample) and the overall volume of your track is increased. There is nothing wrong with doing this if needed. It's a great tool. It's just gain. You NEED to change gain for different delivery formats.
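
In code, peak normalisation is exactly that (a minimal sketch assuming numpy and mono float audio):

```python
import numpy as np

def normalize_peak(audio: np.ndarray, target_dbfs: float = 0.0) -> np.ndarray:
    """Find the biggest peak and apply one gain to every sample."""
    peak = np.max(np.abs(audio))
    target_linear = 10 ** (target_dbfs / 20)
    return audio * (target_linear / peak)

# A track peaking at -12 dBFS (about 0.251 linear) gets +12 dB everywhere:
quiet = np.array([0.1, -0.251, 0.2])
loud = normalize_peak(quiet)
print(np.max(np.abs(loud)))  # 1.0
```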

But you can do more than this. You can slightly reduce the dynamic range using compressors. This can be done almost imperceptibly, or horribly (or for deliberate artistic effect), depending on how you use the compressor and exactly what the compressor is doing (they don't JUST compress, a lot of the time... and there are tonnes of different models for compressing!). You can equalise to help balance the frequencies you hear (which WILL change your dynamic range slightly) - or to allow slightly more compression at times - or so many different creative things.

Generally speaking, you balance for tone, then compress slightly, then bring the level as close to digital zero as possible, and then measure for loudness (say in LUFS!). Is the measured LUFS level what you want it to be? Great. If not, what decisions can you make to change it? Note: you can also automate the level of your whole performance - which is like a really SLOW compressor - in order to reduce dynamic range if needed. Once you get into mixing, there's a BUNCH of things you can do / try.
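
The final "measure for loudness" step can even be done in code; here's a sketch using the third-party pyloudnorm package (the API shown is my understanding of it and may differ between versions):

```python
import numpy as np
import pyloudnorm as pyln

rate = 48000
t = np.linspace(0, 5, 5 * rate, endpoint=False)
mix = 0.25 * np.sin(2 * np.pi * 440 * t)   # stand-in for your bounced mix

meter = pyln.Meter(rate)                   # ITU-R BS.1770 loudness meter
loudness = meter.integrated_loudness(mix)
print(f"integrated loudness: {loudness:.1f} LUFS")

# If the measurement misses the target, the fix is once again just gain:
target_lufs = -14.0                        # e.g. Spotify's stated target
adjusted = mix * 10 ** ((target_lufs - loudness) / 20)
```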

So there you have it. I've skipped over many things, probably oversimplified some bits, over-explained others. But hopefully it gives you a starting point to learn some of the concepts. And to ask more questions. Never stop asking. My old assistant - now our head mix engineer here - went from knowing not much about digital audio (even with a master's in film sound design) to going back to university this year to study acoustics part time. His questions keep coming - but damn, he's learning a tonne and doing some awesome work!

Cheers!


*I've made LOTS of assumptions and shortcuts all through this post. This is a fun one though, since it is possible to measure above 0dBFS - intersample peaks (the calculated level of the audio between samples) can go above 0dBFS, though we try to avoid this as much as possible!
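
Intersample peaks are easy to demonstrate (a sketch assuming numpy/scipy): a sine whose samples all sit exactly at full scale can still reconstruct above 0dBFS between those samples, which oversampling approximates as a "true peak" measurement.

```python
import numpy as np
from scipy.signal import resample_poly

fs = 44100
t = np.arange(4096) / fs
# A sine at fs/4 with a phase offset, so the samples straddle the crests.
x = np.sin(2 * np.pi * 11025 * t + np.pi / 4)
x /= np.max(np.abs(x))                 # sample peaks now sit exactly at 0 dBFS

oversampled = resample_poly(x, 4, 1)   # 4x oversampling approximates reconstruction
true_peak_db = 20 * np.log10(np.max(np.abs(oversampled)))
print(f"true peak: {true_peak_db:+.2f} dBTP")  # about +3 dB over the sample peaks
```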
 
I really think OP is looking for genuine advice; he just doesn't know a bunch of things a lot of us probably dismiss as assumed knowledge. There are no such things as stupid questions, and information that we all know can appear much simpler to us than to those who don't know (and may not know the right questions to ask.)

Right, but he (I think he's a he) got the answer he was looking for early on.

Your posts about level in general are very good, and far more than he was asking. I only got frustrated at the ludicrous posts in this thread!
 