# If you EQ out everything below fundamental freq, do you lose room character?



## ka00 (Sep 3, 2021)

Recently I've been EQ'ing out everything below the fundamental frequency of an instrument's lowest possible note. It seems to make sense to do, as often there is some energy down below that note.

But, does doing this eliminate some of the room's built-in ambience or reverb?

Does anyone know if it's advisable to lose it or keep it in?

Thanks


----------



## RonOrchComp (Sep 3, 2021)

Interesting question. Esp this part:

_But, does doing this eliminate some of the room's built-in ambience or reverb?_

My first thought is no - ie, the ambience does not go below the fundamental frequency of the instrument's lowest note.

Then I am thinking - maybe it can?


----------



## ka00 (Sep 3, 2021)

RonOrchComp said:


> Interesting question. Esp this part:
> 
> _But, does doing this eliminate some of the room's built-in ambience or reverb?_
> 
> ...


Yeah, I wonder. If reverberation is all about reflection and absorption of the original sound, and the original sound does not contain energy lower than its fundamental frequency, then any energy below that would probably not be due to the original sound.

But, if a trumpet blares into a pane of glass or wall or something, would that surface not shake from top to bottom, moving air in the room and creating long, low frequencies of its own? And so in being "shaken" by the trumpet, would it not move the air around at frequencies lower than the trumpet's notes?

I don't really know what I'm talking about.


----------



## Trash Panda (Sep 3, 2021)

If you can hear the body of the instrument thin out at all, you’re cutting too high or too sharply. If you can’t, then you’re not losing anything meaningful. The room character is baked into the sound itself unless they were recorded in an anechoic chamber.


----------



## RonOrchComp (Sep 3, 2021)

ka00 said:


> I don't really know what I'm talking about.


Disagree!

You are asking the right questions, so although you may not have all of the facts (nor do I), you have a good grasp of what *may* be going on. That's a good starting point.


----------



## Robin Thompson (Sep 3, 2021)

Academically this is a fascinating question, and I can't wait to hear from someone more knowledgeable. In practical terms of how to do your EQing however, I'd say the answer is pretty simple: if your ears think it sounds fine, it's fine.


----------



## CT (Sep 3, 2021)

It will eliminate low room noise and rumble (and whatever is introduced via the signal path) from that particular track. In my probably unpopular opinion, which you might be wise to ignore, that will remove some of the sense of naturalness from the sound, especially when you do it to everything, unless you then add in a separate room tone track underneath all of those high-passed instrument tracks.

I've messed around with this idea a bit, comparing by ear and watching what's going on via analyzers... from what I can tell, many, maybe even most, orchestral recordings are fairly noisy and have uncut, very low end rumble. In some cases they are even noisier than mock-ups of the same piece done with a bunch of non-denoised samples getting stacked on top of each other.

Given how much is already working against a natural sound when using virtual instruments, this is one thing I'd rather be untouched as much as possible. Remove a chair creak or whatever if you must, but more extensive denoising, whether done by the developer or by users cutting low end rumble, is, according to the Mike T philosophy, a turn off.

One thing I can do without, though, is the aforementioned noise introduced by something in the signal path (a console, tape machine etc.). I'd rather not have that stuff dipping in and out with the samples themselves (nor the direct effect that gear can have on the sound), since that's not how it actually works; if you want to emulate steady analog noise it's easy enough to add in via a plugin or two, and that will reflect reality much more... pretty esoteric thing to worry about, but that's me.

Of course, real room tone doesn't fluctuate based on when samples are playing either, which is why it can be good to put a little of that in even if you don't high-pass everything, so you don't get the effect of room noise suddenly thinning out or disappearing when the arrangement suddenly gets sparse, for example.

Again, bear in mind these are the ravings of a nobody who knows nothing.


----------



## paularthur (Sep 3, 2021)

With Piano, I've lost some of that key pressing, hammer noise. =(


----------



## RSK (Sep 3, 2021)

It is common in recording to filter out below 100 Hz or so for instruments that have no important information below that frequency. This has less to do with the instrument in question and more to do with the microphone picking up extraneous frequencies; doing so can clean up the low end considerably and get rid of low end "soup" in a recording.

However, cutting everything below the fundamental frequency is taking a good principle to an extreme. As mentioned, you will lose things you didn't intend to.


----------



## CT (Sep 3, 2021)

RSK said:


> It is common in recording to filter out below 100hz or so for instruments that have no important information below that frequency


What type of recording, though? Maybe in "the studio," but when you've got an orchestra in a room you cannot be so surgical about what gets cut. Close mics can be cleaned up individually if necessary, but you can't decide you're going to cut anything below the violins' fundamental from the tree signal.

Obviously what you choose to do in the virtual realm depends on any number of things, but it's worth considering what is and isn't possible in the real world and what is _really_ gained, and lost, by violating that.


----------



## RSK (Sep 4, 2021)

Michaelt (aka Mike T) said:


> What type of recording, though? Maybe in "the studio," but when you've got an orchestra in a room you can not be so surgical about what gets cut. Close mics can be cleaned up individually if necessary, but you can't decide you're going to cut anything below the violins' fundamental from the tree signal.
> 
> Obviously what you choose to do in the virtual realm depends on any number of things, but it's worth considering what is and isn't possible in the real world and what is _really_ gained, and lost, by violating that.


Yes, close mics only. It would be counter-productive to cut anything from the Decca tree.

But again, I'd advise against going any higher than 100-150 Hz. Not sure where this whole "fundamental frequency" thing came from.


----------



## ka00 (Sep 4, 2021)

Sseltenrych said:


> Are you talking about a live recording with background noise, or a VST instrument which already has had a high pass filter applied to the samples?


I'm asking about VST instruments. I think some have had high pass filters applied, and some might not have.


Sseltenrych said:


> What kind of instrument?
> Cello, or flute?


Any and all.



Sseltenrych said:


> What kind of room is it?


Trackdown Scoring Stage, Air Lyndhurst Hall, Teldex, etc. I'm curious about general principles or rules of thumb to apply to various situations while setting up my template composed of a variety of libraries recorded in various places. 



Sseltenrych said:


> What does the EQ graph look like as the instrument is playing?
> Can you see the gradual down-slope of valuable sound below your fundamental frequency, and is your high pass filter cutting into this sound?
> What Q value are you using?
> 6dB per octave or 24dB?


Here's an open G on violins 1 from CSS in the Room mic, without any EQ. Normally, I would cut around 100 Hz pretty sharply, say 24 dB per octave. But I am questioning whether that should only be done on a close mic.

I'm guessing that of the frequencies shown here, everything below 150Hz or so is not reproducing the sound of the violins, but is potentially reproducing some aspect of how the violins are interacting with the room. That's what I'm unsure about cutting because when you add up that energy from the whole orchestra, it apparently muddies your bass. Which made sense to me.








Sseltenrych said:


> Don't forget about the subharmonics, or undertones:
> 
> 
> 
> ...


Subharmonics... interesting. I will have to look into that.



Sseltenrych said:


> In my observations, if the fundamental frequency is 100 Hz (the peak shown on the EQ graph), then you don't want to start your cut at that frequency, especially not with a high-Q HP filter.


The fundamental frequency in this specific graph that I've attached, as I understand it, would be the peak centred around 200 Hz. So, in this case I would normally try to EQ out everything below the point where it would start affecting that.



Sseltenrych said:


> Trash Panda makes a valid point.
> However, that assumes you have normal hearing ability in that range.
> 
> It also depends on whether you're trying to clean up a muddy mix or it's a solo instrument.
> ...


----------



## ka00 (Sep 4, 2021)

paularthur said:


> With Piano, I've lost some of that key pressing, hammer noise. =(


Yes, definitely. There are "peripheral" sounds like that, which I think can add to the realism. Like air blowing sounds from wind instruments that don't actually contribute to the "notes" being played.


----------



## ka00 (Sep 4, 2021)

Michaelt (aka Mike T) said:


> It will eliminate low room noise (and whatever is introduced via the signal path)/rumble from that particular track, which, in my probably unpopular opinion that you might be wise to ignore, will remove some of the sense of naturalness from the sound, especially when you do that to everything, unless you then add in a separate room tone track underneath all of those high-passed instrument tracks.
> 
> I've messed around with this idea a bit, comparing by ear and watching what's going on via analyzers... from what I can tell, many, maybe even most, orchestral recordings are fairly noisy and have uncut, very low end rumble. In some cases they are even noisier than mock-ups of the same piece done with a bunch of non-denoised samples getting stacked on top of each other.
> 
> ...


I like this philosophy. It is an aesthetic choice for sure. "Imperfections" that add to the realism of the recording can be important in certain types of renditions. And yes, I also think it's important for there to be consistency in those imperfections. I will experiment with adding a room tone track. As for how much rumble to cut, I'm still trying to figure out how it will affect things down the line when all instruments are summed.


----------



## ka00 (Sep 4, 2021)

RSK said:


> It is common in recording to filter out below 100hz or so for instruments that have no important information below that frequency. This has less to do with the instrument in question and more to do with the microphone picking up extraneous frequencies; doing so can clean up low end considerably and get rid of low end "soup" in a recording.
> 
> However, cutting everything below the fundamental frequency is taking a good principle to an extreme. As mentioned, you will lose things you didn't intend to.


Yeah, this seems like a sensible approach: be less surgical and more gentle, reducing as opposed to eliminating.


----------



## ka00 (Sep 4, 2021)

Michaelt (aka Mike T) said:


> What type of recording, though? Maybe in "the studio," but when you've got an orchestra in a room you can not be so surgical about what gets cut. Close mics can be cleaned up individually if necessary, but you can't decide you're going to cut anything below the violins' fundamental from the tree signal.
> 
> Obviously what you choose to do in the virtual realm depends on any number of things, but it's worth considering what is and isn't possible in the real world and what is _really_ gained, and lost, by violating that.


Yes, so this appears to be a good rule of thumb: cut more from the close, less from the tree and room. And I take it your philosophy is to cut less, not more.


----------



## ka00 (Sep 4, 2021)

Sseltenrych said:


> Here is the double bass from EWHO lowest note and with a hall reverb.
> With EQ HP at 100Hz (high q value)
> View attachment Bass with EQ.mp3
> 
> ...


Oh for sure. I would personally never cut a double bass's low frequencies. I was just EQ'ing a flute and saw all that low end information in the graph, and thought to ask this question.


----------



## ka00 (Sep 4, 2021)

Sseltenrych said:


> I hadn't included a more aesthetic option:
> HP filter set at 50Hz for the Instrument and 130Hz for the reverb.
> This preserves the timbre of the instrument and cuts out the muddy reverb.
> View attachment Bass with EQ at 50 Hz.mp3


I like this tip, thanks! I haven't thought to do a high pass on reverbs, but that makes sense to try.


----------



## labornvain (Sep 4, 2021)

RSK said:


> Yes, close mics only. It would be counter-productive to cut anything from the Decca tree.
> 
> But again, I'd advise against going any higher than 100-150Hz. Not sure where this whole "fundamental frequency" thing came from.


Sometimes it's helpful to think in terms of harmonics instead of frequencies when using EQ. For example, the fundamental of a low E on a bass is about 41 Hz.

If you set your HPF much above that, you will not only impact the overall sound quality of your mix, but the music itself.

For example, say you have a track in the key of E that ends with a big dramatic cadence resolving to the tonic, so you bring in a bass ensemble for dramatic effect when it hits that low E.

But you EQ'd out the 41 Hz fundamental, and now your basses sound thin and wimpy and are getting their asses kicked by the clarinets.

HPFs are incredibly powerful tools for getting a good mix. But they're also extremely dangerous. Use with caution.
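To put numbers on that harmonics-first way of thinking, here's a tiny Python sketch (equal temperament assumed; `midi_to_hz` is just an illustrative helper, not from any library):

```python
def midi_to_hz(note):
    """Equal-temperament frequency of a MIDI note number (A4 = 69 = 440 Hz)."""
    return 440.0 * 2 ** ((note - 69) / 12)

# E1 (MIDI 28), the low E of a bass, and its first few harmonics:
e1 = midi_to_hz(28)
print([round(e1 * k, 1) for k in (1, 2, 3, 4)])  # [41.2, 82.4, 123.6, 164.8]
```

An HPF at, say, 80 Hz would remove the ~41 Hz fundamental entirely and leave only the harmonics, which is exactly the thin-bass scenario described above.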


----------



## darkogav (Sep 4, 2021)

Yes, especially to control low end mud and rumble.


----------



## DoubleTap (Sep 4, 2021)

ka00 said:


> As for how much rumble to cut, I'm still trying to figure out how it will affect things down the line when all instruments are summed.


I am far from an expert, but surely that is the reason for cutting low frequencies. An instrument on its own does not need to be de-rumbled, but the sum of all the rumble of every instrument in a mix adds up to much more unwanted noise than a natural, single track recording.

Whether or not that applies to a band or an ensemble being recorded live while playing at the same time, I don't know - I suppose it depends how many mics and tracks there are, since multiple tracks will still ultimately sum. A live front-of-house engineer would probably be able to say.

Either way, mixing seems to involve doing many things to sound that are quite artificial, like adding saturation to bring out a sound without increasing the gain. So there are plenty of things that would sound unnatural in isolation but sound more natural in a mix with the effects.
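The summing intuition can be put in rough numbers. Assuming each track carries uncorrelated rumble at about the same level (a simplification), the combined noise floor rises by 10·log10(N) dB for N tracks:

```python
import math

def summed_noise_rise_db(n_tracks):
    """Rise of the combined noise floor, in dB, when n uncorrelated
    noise sources of equal level are summed (their powers add)."""
    return 10 * math.log10(n_tracks)

# Rumble that is harmless on one track sits ~15 dB higher across 32 tracks.
print(round(summed_noise_rise_db(32), 1))  # 15.1
```

So rumble that sits 60 dB down on a solo track can end up only ~45 dB down in a large template, which is where the "soup" comes from.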


----------



## ka00 (Sep 4, 2021)

Here's the lowest note on a Piccolo using a room mic. How high would you high pass this? What sort of slope? Anyone can answer!

EDIT:
Personally, after the feedback from this thread, I will probably high pass at 200 Hz, with different slopes according to the mic: the close mic at 48 dB per octave for something pretty surgical, the tree at 12 dB per octave for something gradual, and the room at 6 dB per octave for an even more gradual slope.
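For a rough feel of what those slopes mean, here's a sketch assuming ideal Butterworth high-pass curves (real EQ plugin filters differ in detail, and the helper name is made up):

```python
import math

def butterworth_hp_attenuation_db(f, fc, slope_db_per_oct):
    """Attenuation in dB at frequency f for an ideal Butterworth
    high-pass with cutoff fc; order n = slope / 6 dB per octave."""
    n = slope_db_per_oct / 6
    return 10 * math.log10(1 + (fc / f) ** (2 * n))

# One octave below a 200 Hz cutoff:
for slope in (48, 12, 6):
    db = butterworth_hp_attenuation_db(100, 200, slope)
    print(f"{slope:2d} dB/oct: {db:.1f} dB down at 100 Hz")
```

One octave down, the 48 dB/octave filter is already ~48 dB into the noise floor, while the 6 dB/octave slope has only reached about 7 dB, which is why gentle slopes sound more like a tilt than a cut.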


----------



## RSK (Sep 4, 2021)

ka00 said:


> I like this tip, thanks! I haven't thought to do a high pass on reverbs, but that makes sense to try.


Many reverbs have a control for it, so you don't have to use a separate plugin.


----------



## JohnG (Sep 4, 2021)

I think this reverb discussion is possibly the most helpful part so far. You really want to narrow the band of what gets sent to the reverb.

Fortunately, as @RSK points out, almost all reverbs will have a low cut and a high cut. Sometimes they are startlingly narrow, like: 

1. 60 Hz-2.5 kHz, or

2. 65 Hz-4.5 kHz.

You sometimes see reverb send from an electric guitar EQ'd to be narrower even than these.

I think for many people, narrowing the EQ band on the reverb send is a more important setting than trying to EQ every instrument.


----------



## FireGS (Sep 4, 2021)

@Dietz would probably know something about this..


----------



## NoamL (Sep 4, 2021)

ka00 said:


> Here's the lowest note on a Piccolo using a room mic. How high would you high pass this? What sort of slope? Anyone can answer!
> 
> EDIT:
> Personally, after the feedback from this thread, I will probably high pass at 200 Hz, with different slopes according to the mic. With the Close mic at 48dB per octave for something pretty surgical, the Tree at 12dB per octave for something gradual, and the room at 6dB per octave for an even more gradual slope.


try *LOW*-passing this signal at 500 Hz with 48 dB/octave. Can you hear anything?


----------



## Drundfunk (Sep 4, 2021)

Sseltenrych said:


> But if it's a piccolo solo, maybe just leave it alone?
> I'm not the audio engineer for Hans Zimmer or anything!
> My advice is just based on common sense. 😉


Sometimes I wish Alan would frequent this forum and answer all our mixing questions. I personally cut those low-freq bumps (only with high freq instruments ofc), because I don't want to stack room noise. EQ'ing reverb is always a good idea imo. Helps with the mud and washiness in general. Now, if someone could tell me why I am a dumbfuck for doing that, I'd be glad.


----------



## Soundbed (Sep 4, 2021)

Cutting out low frequencies below frequency X isn’t always necessary.

Sometimes it harms the beauty and richness of a potential mix. 

I have stopped hi-passing (low-cut) on most sources.

Instead I use dynamics compression on the bands that need it. 

If you don’t have or don't prefer compression, then use a gentle shelf instead of cutting everything. 

This preserves the fullness captured but also tames it so it doesn’t steal energy or distract. 

If you do decide to low cut, please consider using 6 dB per octave filters rather than the typical 12 dB per octave (or steeper) settings.

Sharper filter curves (esp many at the same freq across many tracks) are likely to introduce unwanted issues some people can hear.

There are exceptions but the above are some general mixing guidelines I’ve acquired from top mixing pros.


----------



## fakemaxwell (Sep 4, 2021)

ka00 said:


> Here's the lowest note on a Piccolo using a room mic. How high would you high pass this? What sort of slope? Anyone can answer!
> 
> EDIT:
> Personally, after the feedback from this thread, I will probably high pass at 200 Hz, with different slopes according to the mic. With the Close mic at 48dB per octave for something pretty surgical, the Tree at 12dB per octave for something gradual, and the room at 6dB per octave for an even more gradual slope.


How would anybody know what EQ to set without hearing the audio?


----------



## ka00 (Sep 4, 2021)

NoamL said:


> try *LOW *passing this signal at 500 hz with 48db/8ve. Can you hear anything?


I will try this as soon as I’m back home. I tried something similar last night and had to boost what was in the bottom by a really significant amount (like 30 dB if I recall) before I heard anything. And it was just rumble.

But I will try with the settings you suggested.


----------



## ka00 (Sep 4, 2021)

JohnG said:


> I think this reverb discussion is possibly the most helpful part so far. You really want to narrow the band of what gets sent to the reverb.
> 
> Fortunately, as @RSK points out, almost all reverbs will have a low cut and a high cut. Sometimes they are startlingly narrow, like:
> 
> ...


Thanks for this advice. I will explore doing this for sure.


----------



## ka00 (Sep 4, 2021)

Drundfunk said:


> I don't want to stack room noise.


Yeah, that seems like a good reason.


----------



## ka00 (Sep 4, 2021)

Soundbed said:


> Because sharper filter curves (esp many at the same freq across many tracks) are likely to introduce unwanted issues some people can hear.


That’s interesting. I hadn’t heard that. I will see if I can find out more about it. Thanks


----------



## ka00 (Sep 4, 2021)

fakemaxwell said:


> How would anybody know what EQ to set without hearing the audio?


I didn’t actually ask how you would EQ this in general, but specifically if you had an approach to high passing it based on this visualization of the frequencies, which with experience, a person could decipher. But it is also valid to only trust your ears if that’s your approach.


----------



## vitocorleone123 (Sep 4, 2021)

Side note: If you click the tiny keyboard icon on the bottom left of ProQ3 you can see the notes mapped to the frequencies.


----------



## CT (Sep 4, 2021)

ka00 said:


> And I take it your philosophy is to cut less, not more.


I don't do it at all anymore. If I need a super tight hybrid sound or whatever, sure, but otherwise it's on my "over-engineer at your own risk" list unless the result is patently more noisy/muddled than a real recording would be.


----------



## RSK (Sep 4, 2021)

Michaelt (aka Mike T) said:


> I don't do it at all anymore. If I need a super tight hybrid sound or whatever, sure, but otherwise it's on my "over-engineer at your own risk" list unless the result is patently more noisy/muddled than a real recording would be.


Truth.

If you're recording a live orchestra, don't filter anything from the tree or the outriggers and only judiciously from the close mics as needed. If you're working with a sampled orchestra, there's probably no reason to do it at all.


----------



## ka00 (Sep 4, 2021)

NoamL said:


> try *LOW *passing this signal at 500 hz with 48db/8ve. Can you hear anything?


I just tried it. Yes! I can hear remnants of the D note, but I have to boost it by about 50 dB to hear it. Which would imply that a cut that high would affect the signal a little bit. As I sweep, I see that I have to drop the low pass down to 160 Hz before any trace of the D is gone.

Down there, after adding 50 dB of gain, there's only rumble, though it sometimes has a tone as I play up the keyboard, especially higher up. Maybe those are subharmonics? But again, super quiet.


----------



## jcrosby (Sep 4, 2021)

DoubleTap said:


> An instrument on its own does not need to be de-rumbled, but the sum of all the rumble of every instrument in a mix adds up to much more unwanted noise than a natural, single track recording.


For example - each note has its own decay tail of non-tonal garbage. With every lingering decay hanging around while other notes are playing, you basically wind up with way more subsonic rumble than you would in a live recording.

Now multiply that noise by the number of other decay tails from other notes simultaneously playing in other sections, etc...

Also consider the importance of a reverb tail in instruments down in the sub range, and how this noise might mask or muddy up the clarity of those important rich low end tails. All of that summed-up non-tonal garbage doesn't bring anything constructive to a mix.

I wouldn't cut anything from instruments with tonal information down in the sub range, but I don't understand why people think removing non-tonal subsonic garbage is over-engineering (as one person described it)... As long as you cut far enough outside of the entire tonal range of an instrument, the only thing you're getting rid of is useless noise that ultimately interferes with the clarity of other instruments, natural reverb tails, etc.

I wouldn't use steep filters though. These cause severe phase shifts that can interfere with other instruments. 12 dB/octave's fine, 6 dB if you want to be more conservative, and for a less severe approach you can use a shelf where possible (instruments in upper ranges for example, where the shelf doesn't cut into the tonal range of the instrument...)


----------



## Bluemount Score (Sep 4, 2021)

Never thought about it that way. Interesting to bring this thought up. I usually cut pretty much right below the lowest fundamental, not too sharp.


----------



## ka00 (Sep 4, 2021)

jcrosby said:


> For example - Each note has its own decay tail of non-tonal garbage. For every lingering decay that hangs around while other notes are playing you basically wind up with way more rumbly sub sonic rumble than you would in a live recording.
> 
> Now multiply that noise by the number of other decay tails from other notes simultaneously playing in other sections, etc...
> 
> ...


Great insights and tips. Thank you!


----------



## NoamL (Sep 4, 2021)

ka00 said:


> I just tried it. Yes! I can hear remnants of the D note, but I have to boost it by about 50 dB to hear it. Which would imply that a cut that high would affect the signal a little bit. As I sweep, I see that I have to drop the low pass down to 160 Hz before any trace of the D is gone.
> 
> Down there, after adding 50dB of gain, there's only rumble, but it has a tone sometimes as I play up the keyboard, especially when I go higher up the keyboard. Maybe those are subharmonics? But again, super quiet.


heh, your 2nd paragraph is what I was getting at. When you low-passed low enough to avoid the piccolo fundamental, you couldn't really hear anything, right?

The rumble is more than 50 dB quieter than the piccolo (hence why you had to add 50 dB to hear anything). A 50 dB difference in level means the quieter signal is roughly 1/300th of the amplitude, since the ratio is 10^(dB/20), or about 1/100,000th of the power (10^(dB/10)).

So long story short, I don't see a need to cut anything here. If you can't actually hear what you're cutting, then I would be more worried about the EQ introducing problems than solving them. In particular, all this stuff about phase, which I don't fully understand but have heard warnings about from IRL engineers.

Try finding a bad sample with some room rumble, floor resonance from chairs shifting, passing truck, etc - and look at how that rumble plays out in Q3 compared to this piccolo sample. A sample with audible rumble often peaks at like -5 or -10 dB.
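For reference, the standard conversions are amplitude ratio = 10^(dB/20) and power ratio = 10^(dB/10); a quick illustrative snippet:

```python
def db_to_amplitude_ratio(db):
    """Linear amplitude ratio corresponding to a level difference in dB."""
    return 10 ** (db / 20)

def db_to_power_ratio(db):
    """Linear power ratio corresponding to a level difference in dB."""
    return 10 ** (db / 10)

# A signal 50 dB below another:
print(1 / db_to_amplitude_ratio(50))  # ~0.00316 of the amplitude
print(1 / db_to_power_ratio(50))      # 1/100,000 of the power
```

So the rumble under that piccolo sits at a few thousandths of the note's amplitude, which is why it only became audible after a huge boost.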


----------



## ka00 (Sep 4, 2021)

NoamL said:


> If you can't actually hear what you're cutting then I would be more worried about the EQ introducing problems than solving them.


Thanks, Noam! When you say introducing problems, do you mean phase shifts, like @jcrosby and @Soundbed mentioned? Are there other potential problems I should read up on?


----------



## NoamL (Sep 4, 2021)

Yes! just edited my post to say that...


----------



## CT (Sep 4, 2021)

jcrosby said:


> I wouldn't cut anything from instruments with tonal information down in the sub range, but I don't understand why people think removing non-tonal sub sonic garbage is over-engineering (as one person described it)


Hi, yes I'm that "one person." I think my reasoning was made pretty clear in what I said. It may not be for everyone, but it makes sense, and sounds right, to me.


----------



## Soundbed (Sep 4, 2021)

ka00 said:


> That’s interesting. I hadn’t heard that. I will see if I can find out more about it. Thanks


general statements for mixing audio:

standard eq can introduce phase changes (not at the target frequency but around it, higher and lower).

steeper eq curves are more likely to introduce larger phase shifts.

gentler eq curves introduce less severe phase shifts.

stacking eq curves across multiple channels at the same center frequency (which might have been alluded to earlier in this thread) may increase the subjectively negative effect of unwanted phase shifting.

linear phase eq can introduce pre-ringing. this also reduces the impact of some transient material (for better or worse).

getting a pleasing combination of phase change relationships without unwanted pre-ringing is usually the goal.

some people cannot hear all this beeswax and may never care about it.
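To make the first two points concrete: even an ideal analog first-order (6 dB/octave) high-pass keeps shifting phase well above its cutoff. A minimal sketch (idealized math; actual plugin implementations vary):

```python
import math

def hp1_phase_deg(f, fc):
    """Phase lead in degrees of an ideal first-order high-pass
    with cutoff fc, evaluated at frequency f."""
    return math.degrees(math.atan(fc / f))

# A 100 Hz high-pass is still bending phase an octave or two above the cut:
for f in (100, 200, 400, 1000):
    print(f"{f:5d} Hz: {hp1_phase_deg(f, 100):4.1f} degrees of lead")
```

Steeper filters accumulate proportionally more shift, which is one reason many sharp cuts at the same frequency across many tracks can add up to something audible.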


----------



## CT (Sep 4, 2021)

All right, just to explain my reasoning a little more visually….

Here are a few screenshots of what’s going on in real recordings where there’s only tonal content in the upper half of the keyboard or so, generally above the bottom G of the violins. The venues include Davies Symphony Hall, Abbey Road, and the Sony scoring stage, with the matching calibre of engineers you would expect.
















Here is a screenshot of very large clusters being played (producing an even denser sound than found in the above recordings), in the same general range, on three stacked sampled string patches, with 3 mics active at full blast on all of them.






Here are sampled celli, again with three mics turned all the way up, playing the same line shown in the third example of the first group. Quite a radical difference there.






What I get from comparisons like this... 

1) orchestral recordings, even done by Shawn Murphy at Sony, are not without a lot of noise

2) VI developers are probably already doing a bit of tinkering to minimize this effect, if the first sample-based example can remain so near, or even below, what's seen in the real recordings

3) to take it any further by high-passing everything does strike me as, yes, over-engineering

I do understand the reasoning that, if every sample is bringing its own noise to the mix, the result will be a mess not at all comparable to the natural noise captured in a real recording, so every track should be cleaned up as far as possible. In practice though, that strikes me as being a solution in search of a problem. I've spent a lot of time looking at music I consider well-engineered like this, and I always compare my own mixes to whatever knowledge is accumulated from that. I've yet to see or hear anything radically errant that makes me reach for a HP.


----------



## jcrosby (Sep 5, 2021)

Michaelt said:


> All right, just to explain my reasoning a little more visually….
> 
> Here are a few screenshots of what’s going on in real recordings where there’s only tonal content in the upper half of the keyboard or so, generally above the bottom G of the violins. The venues include Davies Symphony Hall, Abbey Road, and the Sony scoring stage, with the matching calibre of engineers you would expect.
> 
> ...


Sorry for drawing attention to your approach. If the end result is that whoever listens to a piece of music is satisfied - then no one person or approach is _right_/_wrong_... This is rule no. 1 and ultimately matters most...

That said, I do think it's worth asking why throwing away noise - noise that is fully outside the harmonic range an instrument is capable of producing on a recording - is considered a potentially wild approach. To me it seems more rooted in superstition than logic....


----------



## CT (Sep 5, 2021)

jcrosby said:


> Sorry for drawing attention to your approach; (if the end result is that whoever listens to a piece of music is satisfied - then no one person or approach is _right_/_wrong_... etc)
> 
> That said I do think it's worth raising the question about why throwing away noise - noise that is fully outside of the harmonic range the instrument is capable of producing - is something like a potentially wild approach?


I don't think it's wild, but I'm more inclined to wonder why I should do a thing instead of why I shouldn't do it. So, short of something really ugly showing up in a mix that could be solved by further low end cleanup, I just don't see the point.


----------



## jcrosby (Sep 5, 2021)

Michaelt said:


> I don't think it's wild, but I'm more inclined to wonder why I should do a thing instead of why I shouldn't do it. So, short of something really ugly showing up in a mix that could be solved by further low end cleanup, I just don't see the point.


You don't have to do a thing. It's simply some food for thought.


----------



## Saxer (Sep 5, 2021)

Some samples have really audible sub-bass bumps, especially those from Air Studios or the older Cinematic Strings 2. It's a good idea to low-cut those. If I don't hear any disturbing rumble, I don't low-cut.


----------



## CT (Sep 5, 2021)

jcrosby said:


> You don't have to do a thing. It's simply some food for thought.


Yep. I've thought about it a lot.


----------



## Faruh Al-Baghdadi (Sep 5, 2021)

I assume the OP uses libraries from different rooms/halls - one of the main coherence killers in ITB production - yet is asking whether he'll kill "the tone" with a reduction in the low end 💀

In short:
Just use a combination of cut and shelf filters on both ends (lows and highs). It's a very common technique. And do it by ear, not by formula. There are literally no rules, just context and a few basic principles (I'll mention them below).


Explanation:
The purpose of all this is to make the sound convincing, not "real" or "to preserve the room tone". Coherence is one of the primary things that makes a sound convincing. All the instruments in a project share the same room and mics (that's the ideal case, where everything was recorded in the same place, with the same recording chains and at the same levels - in practice we combine instruments with different mic setups and chains, which adds even more potential problems), and this creates a build-up effect that is especially noticeable in the low end. And of course, the bigger the sections, the stronger this effect becomes. I assume that for this very reason sample-library developers apply some filtering to the recorded samples, but you can't get rid of it completely at that stage (for a million reasons).
This build-up effect creates a mess in the sound, as each instrument fights for the same small area. Then you reach for compression, where that 0-40 Hz energy triggers the threshold too early/too hard, and you can't work out why no settings feel right; in the end you either push the sidechain to the point where it shifts the frequency balance, crush some frequencies anyway, or give up on the idea of compression. The same goes for every other effect down the signal chain - saturators/distortions, modulation effects, time-based effects and so on.
For this very reason we use 1) subtractive EQ on every channel, with cuts and shelves (even cutting the first 10-20 Hz already adds air to your space, and a shelf reduction of even 0.5 dB around 20-150 Hz gives you still more control and clarity - and nobody will kill us for trying 1 dB, or maybe even a few); 2) group compression, to glue things together and, if you want, bring some of that ~20-150 Hz tone back in a more natural way, by controlling the transients of the whole group, which makes it sound much more coherent.

Long story short: keep gain staging and the build-up effect in mind every time you make such decisions at channel/track level. This is also why you should never EQ in solo - the only reasons to EQ with solo on are when you've heard a glitch and want to find and cut it, or when, after your general EQ pass, you want to add some precision.
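To put rough numbers on that gentle-shelf idea, here is a minimal sketch of a low-shelf biquad using the well-known RBJ Audio EQ Cookbook formulas (the 48 kHz rate, 120 Hz corner, 1 dB cut and Q of 0.707 are illustrative assumptions, not recommendations):

```python
import cmath
import math


def low_shelf_coeffs(fs, f0, gain_db, q=0.707):
    """RBJ Audio EQ Cookbook low-shelf biquad, returned as (b, a) with a[0] == 1."""
    a_lin = 10 ** (gain_db / 40.0)              # sqrt of the linear shelf gain
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    cosw = math.cos(w0)
    sq = 2 * math.sqrt(a_lin) * alpha
    b = [a_lin * ((a_lin + 1) - (a_lin - 1) * cosw + sq),
         2 * a_lin * ((a_lin - 1) - (a_lin + 1) * cosw),
         a_lin * ((a_lin + 1) - (a_lin - 1) * cosw - sq)]
    a0 = (a_lin + 1) + (a_lin - 1) * cosw + sq
    a = [1.0,
         -2 * ((a_lin - 1) + (a_lin + 1) * cosw) / a0,
         ((a_lin + 1) + (a_lin - 1) * cosw - sq) / a0]
    return [x / a0 for x in b], a


def gain_db_at(b, a, fs, freq):
    """Magnitude response of the biquad at `freq`, in dB (z here plays z^-1)."""
    z = cmath.exp(-1j * 2 * math.pi * freq / fs)
    h = (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
    return 20 * math.log10(abs(h))


b, a = low_shelf_coeffs(48000, 120.0, -1.0)       # gentle 1 dB low shelf at 120 Hz
print(round(gain_db_at(b, a, 48000, 0.0), 3))     # -1.0: the full shelf cut at DC
print(round(gain_db_at(b, a, 48000, 5000.0), 3))  # essentially 0 far above the shelf
```

At DC the cut equals the full shelf gain and the response returns to unity well above the corner - which is why a shelf this shallow is nearly inaudible in solo yet still relieves low-end build-up when it's applied across many tracks.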

Speaking of EQ'ing reverbs:
1) Always have a digital EQ before the reverb, since reverbs don't always have an input EQ and/or their EQs are too primitive (I highly recommend trying Equilibrium - the most transparent EQ plugin out there, with a lot of cool options for every stage of production, from channel EQ to mastering).
2) To soften a reverb, instead of an EQ after the reverb, try a de-esser (try different plugins, as they all tend to have a different character).
3) After those 2-3 effects in the chain, try different enhancers (saturators, colouring EQs and so on) - a gentle amount of those on a reverb that's been cleaned of mess can produce some beautiful results.

Tbh I can't list all the options available to us for reverberation, as there are lots of methods and approaches and it would take me a whole day to write them all down. So feel free to find courses and books about it - it's an amazing subject and I'm sure all of you will love it. Much less boring than... compression.

Anyway, mixing is an art. Mixing engineers don't earn their bread for nothing. Mixing requires a different mindset, toolset, templates and so on. Putting a few reverbs here and there and an EQ on 5-6 buses is not mixing.


----------



## HM_Music (Sep 5, 2021)

This, by the way, is a topic that has been bothering me a lot lately. I now want to finish writing the album and put it together in 2 (or 3) versions, cut differently, then compare them.
I'm writing this in the hope that maybe someone has similar examples, or a video of a complete composition with multiple versions of the approach.
Theory with frequency discussions and EQ charts is good, but I want something more tangible, to hear and understand more specifically how to do it better.

For now I stick to cutting out unwanted signal, because when you sum many tracks it affects the clarity of the mix. This is especially true when there's no good monitoring for low frequencies. I have 5" monitors and can't hear the lows well enough; headphones and Realphones help a little, but not completely. I seem to be working the bass at random.

But when it comes to instruments whose fundamentals sit in the audible spectrum, cutting right at the fundamental sounds bad even with a 12 dB/oct slope.
Theoretically it makes room for other instruments, but in practice the atmosphere disappears.

What's bugging me:
Take for example a cello or contrabass from the CSS library. Say it's a cello. With a 12 dB/oct cut at 50 Hz I hear a lot of useful signal being removed, and switching to 6 dB/oct I hear even more - and that's only at 50 Hz. The brickwall curve worked best; with it I only heard useful signal being removed at 60 Hz.
Of course that affects the phase - I'm not good at this subject, and I certainly don't do that - but below 60 Hz I still feel low-frequency vibrations. Potentially such vibrations could add up and interfere with low-frequency percussion or FX.
Or take the 2nd violins: in CSS everything below the lowest note's ~170 Hz is noise. I don't know if it's room vibration or what, and I wouldn't call it a bad signal, but in a mix with many instruments this signal is probably less important. A 12 or 6 dB/oct cut has to sit much lower, yet with a 12 dB/oct slope at 90 Hz it still seems I'm removing a lot of useful signal - BUT the level of the cut noise is higher than the level of the cut violin.

Maybe I should take the lowest note the instrument actually plays in the piece as the basis and set the EQ from that. That seems very convenient, especially with libraries, where you can see the lowest note in the MIDI and play it while adjusting.
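That "lowest MIDI note" idea is easy to automate. A small sketch, using the standard equal-temperament convention (A4 = MIDI note 69 = 440 Hz); the 3-semitone safety margin is just an assumption to keep the filter slope off the note itself - tune it by ear:

```python
def midi_to_hz(note: int, a4_hz: float = 440.0) -> float:
    """Equal-temperament frequency of a MIDI note (A4 = note 69)."""
    return a4_hz * 2 ** ((note - 69) / 12)


def suggested_hpf_hz(lowest_midi_note: int, margin_semitones: float = 3.0) -> float:
    """Place the high-pass cutoff a few semitones below the lowest
    fundamental so the filter slope doesn't eat into the note itself."""
    return midi_to_hz(lowest_midi_note) / 2 ** (margin_semitones / 12)


print(round(midi_to_hz(36), 1))        # 65.4 (C2, a common lowest cello note)
print(round(suggested_hpf_hz(36), 1))  # 55.0 (three semitones below C2 = A1)
```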

P.S. You shouldn't take my word for any of this; I'm just trying to understand.


----------



## ka00 (Sep 5, 2021)

Sseltenrych said:


> You could look up Aliasing and phase smear as two artifacts from EQ.
> Oversampling and linear phase options combat these, and are both available in your fancy EQ plugin 😜
> 
> I have learned much about these things for free from In The Mix Youtube channel, where these things are explained simply and concisely.
> ...



Excellent videos. Thanks, Sseltenrych!


Soundbed said:


> general statements for mixing audio:
> 
> standard eq can introduce phase changes (not at the target frequency but around it, higher and lower).
> 
> ...


Ah, great. Very nice to have the points in mind while setting up my mix. Thanks, Soundbed!


----------



## HM_Music (Sep 5, 2021)

Sseltenrych said:


> It may surprise you how low the fundamental frequency of the low notes are:
> 
> 
> Frequencies of Musical Notes, A4 = 440 Hz
> ...


Right - that's why I gave the example of a cello at ~60 Hz, from the CSS library that many people have.
So what do others do with this - provided, of course, that that very bottom note is actually played in the piece?


----------



## Soundbed (Sep 5, 2021)

HM_Music said:


> This, by the way, is a topic that has been bothering me a lot lately. I now want to finish writing the album and put it together in 2 versions (or 3), cutting it differently, then compare these versions.
> I'm writing this in the hope that maybe someone has similar examples or a video, of a complete composition where there would be multiple versions of the approach.
> Theory with a discussion of frequencies and EQ charts is good. But I want something more tangible to hear and understand more specifically how to do it better.
> 
> ...


My approach these days is that one should not cut lows unless there is a specific corrective reason.

Put simply: the advice to “hipass everything” was bad mixing advice. 

However I arrived at this through the experience of others I trusted based on THEIR professional experience. 

As I said earlier there are other choices than cutting if you are not correcting something. 

Gentle shelves and / or multiband compression help tame lows and sub freqs without potentially harming a mix. 

If you search for the phrase hipass everything or high pass everything you will find plenty of people debunking this “myth” … or trying to correct poor mixing advice.

You don’t have to take my word for it. 

Also this is a dorky shirt:


----------



## Trash Panda (Sep 5, 2021)

There is a reason the first thing audio engineers do in a mixing session is balance track volume levels and panning, not EQ/compression/etc.

The latter are tools to address specific issues. If your contrabass and other low end sections are lacking clarity, high passing the junk out of other sections can help gain that clarity.

On the other hand, if it ain’t broke, don’t fix it.


----------



## Soundbed (Sep 5, 2021)

Trash Panda said:


> If your contrabass and other low end sections are lacking clarity, high passing the junk out of other sections can help gain that clarity.


There are other ways to accomplish this. It might be that something could use a dip or resonance suppression anywhere between 80-300Hz.


----------



## HM_Music (Sep 5, 2021)

Soundbed said:


> My approach these days is that one should not cut lows unless there is a specific corrective reason.
> 
> Put simply: the advice to “hipass everything” was bad mixing advice.
> 
> ...


Nice T-shirt, I want one for myself. It amuses me a lot.

I recently bought FabFilter Pro-MB for this purpose as well, but haven't had a chance to try it out yet. Mostly I was thinking of using it via sidechain, to make room for low-frequency percussion or FX (booms, hits).
Then the next question is how to tune the compressor - or more precisely, how many dB should be removed by the EQ versus the compressor.
Actually, yes, I also think that lately the high pass isn't really needed; when there's a normal balance of instruments, everything sounds fine without any equalization.
By the way, googling "high pass everything" doesn't turn up much, but I found a video.



It would be interesting to see or read what people like Alan Meyerson use when working with low frequencies.

Looks like I really need to get a better handle on all this.


----------



## HM_Music (Sep 5, 2021)

Here's something else.

Marc Jovani:
You can *cut* a LOT of *low frequencies from your non-bass instruments*. Use a frequency analyzer, which is built into most EQs (I use FabFilter), to see how much you can cut. You will be surprised. Sometimes my High Pass Filter on violins goes right up to 400-600 Hz. Still, use your ear. Don't just trust visuals.





Composing Part IV – EQ & Reverb (cinematiccomposing.com)


----------



## JohnG (Sep 5, 2021)

I don't trust any frequency analyzer or other meter/math-based solution for this (mostly) non-problem.

Use your ears.


----------



## thesteelydane (Sep 5, 2021)

There's an interesting acoustic phenomenon that we string players have used to practice intonation for hundreds of years, particularly double stops: any time you play two notes perfectly in tune, you will hear a third note equal to the frequency of the upper note minus the frequency of the lower note. If, for example, you play a perfect fifth, you will hear a note an octave below the bottom note of the fifth. This third note is always very quiet, but if you play perfectly in tune with just intonation and listen for it, it's very easy to hear. It's called the Tartini tone, named after the violinist who first noticed it.

I have always wondered whether this also applies to orchestras - in other words, there must be a lot of notes floating around way below all the fundamentals. And surely these very low and quiet frequencies must excite the room in some way?

Then again, I'm not sure whether this phenomenon takes place when the sound waves intermingle in the air, or whether it's produced in the resonating body of the instrument itself. Just something I've been thinking about for many years.
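The arithmetic behind the Tartini tone is easy to check. For a just perfect fifth (frequency ratio 3:2) the first-order difference tone lands exactly an octave below the lower note - a trivial sketch, with A3 = 220 Hz chosen as an example:

```python
def difference_tone(f_low: float, f_high: float) -> float:
    """First-order Tartini (difference) tone: f_high - f_low."""
    return f_high - f_low


# A just perfect fifth above A3 (220 Hz): upper note at 220 * 3/2 = 330 Hz
f_low, f_high = 220.0, 220.0 * 3 / 2
print(difference_tone(f_low, f_high))  # 110.0 -> A2, an octave below the lower note
```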


----------



## Faruh Al-Baghdadi (Sep 5, 2021)

Soundbed said:


> My approach these days is that one should not cut lows unless there is a specific corrective reason.
> 
> Put simply: the advice to “hipass everything” was bad mixing advice.
> 
> ...


This is also a good point, and the reason everybody says "use your ears" - when the pros told us to "HP everything", they didn't mean "put a 256 dB/oct cut up to 40 Hz on every track", but rather "even a gentle cut at the very bottom, plus a shelf with a slight low-end reduction, will give you a cleaner mix without harming it".

Problems start when we take someone's advice blindly and literally.


----------



## CT (Sep 5, 2021)

thesteelydane said:


> There's an interesting acoustic phenomenon that us string players have used to practice intonation for hundreds of years, particularly double stops: Any time you play two notes perfectly in tune, you will hear a third note equal to the frequency of the upper note minus the frequency of the lower note. If for example you play a perfect fifth, you will hear the bottom note of the fifth and octave below. The 3rd note is always very quiet, but if you play perfectly in tune with just intonation and listen for it, it's very easy to hear. It's called the Tartini tone, named after the violinist who first noticed it.
> 
> I have always wondered if this would also apply to orchestras - in other words, there must be a lot of notes floating around way below all the fundamentals. And surely these very low and quiet frequencies must excite the room in some way?
> 
> Then again I'm not sure if this is a phenomenon that takes place when the sound waves intermingle in the air or if they are produced in the resonating body of the instrument itself. just something I've been thinking about for many years.


No clue how strongly the effect might apply to orchestras, but the same principle is used in organs when there isn't space for the largest ranks of pipes. They're substituted with "resultant" ranks, which play fifths to simulate the effect of hearing the root of the fifth an octave lower. In any case I don't think this would really enter into how things work in the VI world. Interesting stuff though!


----------



## MartinH. (Sep 5, 2021)

thesteelydane said:


> There's an interesting acoustic phenomenon that us string players have used to practice intonation for hundreds of years, particularly double stops: Any time you play two notes perfectly in tune, you will hear a third note equal to the frequency of the upper note minus the frequency of the lower note. If for example you play a perfect fifth, you will hear the bottom note of the fifth and octave below. The 3rd note is always very quiet, but if you play perfectly in tune with just intonation and listen for it, it's very easy to hear. It's called the Tartini tone, named after the violinist who first noticed it.
> 
> I have always wondered if this would also apply to orchestras - in other words, there must be a lot of notes floating around way below all the fundamentals. And surely these very low and quiet frequencies must excite the room in some way?
> 
> Then again I'm not sure if this is a phenomenon that takes place when the sound waves intermingle in the air or if they are produced in the resonating body of the instrument itself. just something I've been thinking about for many years.



I've heard about this in the context of guitars and was wondering about this as well. If you ever find out if this is something that only happens on the real instrument or also happens when you play those intervals on sampled instruments where the notes were sampled individually, please let me know!


----------



## germancomponist (Sep 7, 2021)




----------



## Soundbed (Sep 7, 2021)

germancomponist said:


>



that's a college near me


----------



## Markus Kohlprath (Sep 7, 2021)

germancomponist said:


>



This is pretty impressive. In the second example you don't even hear the higher tone anymore; the deep Tartini tone overpowers it in my perception.
The question is whether a low-cut filter on the individual tones would prevent the effect. Probably not, since it's a phenomenon created in our ears, if I understand it correctly - and if so, it shouldn't matter whether the signal is filtered out below the fundamental.


----------



## confusedsheep (Sep 7, 2021)

personally i would not obsess over the details too much... imho the whole processing and adjusting thing can rapidly get out of hand and become a bottomless rabbithole to disappear into (quite similar to sample library hoarding or synth and fx plugin collecting... ) instead of doing music (i lost musicians who were on a holy quest to create the ultimate hi-hat sound. i am afraid, several years later, they are still filtering and adjusting. i dimly remember a rather funny anecdote from years long past... there was a rather successful pop music act, and everyone and their cat was marvelling about the amazing drum sound... much later someone discovered it was stock preset triton drum kits  )...

as a sheep i would keep it rather simple. volume, pan position, a bit of reverb... if this sounds good do some more detail work. as little as possible, actually i believe that less is more in terms of processing. and trust your ears is probably at the same time the best (and also the most difficult) advice that was given.

the whole highpass/eq-everything idea comes with a certain sound ideal... it works for modern, popular music that has to be as loud as possible, where sounds need to be separated in order to maximize impact.

e.g. one trick to make a dominant rhythmic part stand out in modern music is to sidechain. works quite well for house music. but would sidechaining your contra bassoons be a good idea to give them more ooomppfff? but wait maybe sidechaining contra bassoons will be the next big thing... forget about braaams...it will be all about earth shattering contra bassoons...you heard it here first! 

in a more acoustic orchestral context blending is actually not a bad thing at all - and a large dynamic range is a good thing. of course if one discovers some low grumbling on a track, highpassing is quite nice. but highpassing everything might not be helpful - a little might go a long way. 

actually if there is too much build up in a certain frequency range, instead of starting to process (which can be still done after all) it might be helpful to look at the composition and see why it adds up, maybe there is just too much going on at once, maybe all that is needed is to separate the instruments over more octaves...

sorry...what was the original question again?


----------



## germancomponist (Sep 7, 2021)

Markus Kohlprath said:


> This is pretty impressive. In the second example you don't even hear the higher tone anymore. The deep tartini tone overpowers it in my perception.
> The question is if a low cut filter on the individual tones would prevent the effect. Probably not since it's a phenomenon created in our ears if I understand it right and thus for it shouldn't matter if the signal is filtered out below the fundamental.


Experiment with samples...


----------



## germancomponist (Sep 7, 2021)

confusedsheep said:


> personally i would not obsess over the details too much... imho the whole processing and adjusting thing can rapidily get out hand and become a bottomless rabbithole to disappear into (quite similar to sample library hoarding or synth and fx plugin collecting... ) instead of doing music (i lost musicians who were on a holy quest to create the ultimate hi-hat sound. i am afraid, several years later, they are still filtering and adjusting. i dimly remember a rather funny anectode from years long past... there was a rather succesful pop music act, and everyone and their cat was marvelling about the amazing drum sound... much later someone made the discovery that it were stock preset triton drum kits  )...
> 
> as a sheep i would keep it rather simple. volume, pan position, a bit of reverb... if this sounds good do some more detail work. as little as possible, actually i believe that less is more in terms of processing. and trust your ears is probably at the same time the best (and also the most difficult) advice that was given.
> 
> ...


If you have 50 instruments in the arrangement, all playing at the same time and all fighting for the same frequencies, then with good EQs you can actually make it sound very clear and just great - provided you have good ears and equipment.


----------



## Trash Panda (Sep 7, 2021)

Likely controversial opinion: those at either extreme - "process everything" or "do no processing" - should be disregarded. Extremes are rarely good in music production or mixing.


----------



## vitocorleone123 (Sep 7, 2021)

Non-professional here...

High and low cuts that aren't linear phase, at least in the EQs I have, produce phase shift. Linear-phase EQs introduce pre-ringing instead. A reasonable compromise is to use not a cut but a shelf, even one of 20 or 30 dB: some phase impact, but no nulls, and linear phase is likely unnecessary, so the ringing can be avoided.

Gentle (6 or maybe 12 dB/oct) high/low cuts definitely have less impact on phase, but can still produce nulls depending on the frequency where they start.

I learned this from a Dan W video


----------

