# Equalizing & air absorption



## lee (Jan 19, 2007)

Are there any scientific approaches to this? What filters/equalizers do you use, how and why? I know it depends a lot on the samples and their recorded acoustics. But let's take VSL for example.

Yes, using your ears is best, I know. And listening to reference recordings, yup. But what about mathematics, attenuations and Q? :smile: 

/Johnny


----------



## kid-surf (Jan 19, 2007)

Meaning muddy mixes?

You're right. It's all about one's ears. What are your ears getting their information from? In other words, get the best monitors you can possibly afford. Now you've got a fighting chance.

Throw the mathematics out the window... artists use a little thing called _intuition_. 

Mixing is an art. Not a science.

Akin to ---- > *Q:* How do I write music that means something? *A:* I dunno, you just do.


----------



## lee (Jan 24, 2007)

Anyone want to give a starting point, like X dB/oct, LPF, or -X dB at corner frequency X?

Thanx, 

/Johnny

Btw, I found this, might be interesting to others too: http://www.regonaudio.com/Records%20and%20Reality.html


----------



## synergy543 (Jan 24, 2007)

Hi Lee, 

There are formulas for sound absorption in the air, but it's fairly complex and there really is no one simple formula. There's a lot going on, and it's dependent upon frequency, humidity, air pressure, wind, and many other factors. So for the time being, the intuitive approach kid suggests is probably best.

One example of the complexity of sound absorption in air is the sound of thunder. You know what it sounds like up close (cover your kid's ears!). It's like a giant convolution starter pistol. Now think of what it sounds like when it's far off in the distance in the middle of a big storm. It sounds like it's being run through a wall full of dynamic Mu-tron Phasors and Moog filters. I think that helps explain the degree of complexity. Plus, in closed environments, acoustic absorption would be another huge interactive factor.
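[A rough sketch of the frequency dependence Greg describes. This is a deliberately simplified toy model (absorption growing with the square of frequency, as in classical Stokes–Kirchhoff absorption), not the real ISO 9613-1 calculation, which also accounts for temperature, humidity and pressure; the coefficient is invented for illustration.]

```python
def air_absorption_db(freq_hz, distance_m, coeff_db_per_m_at_1khz=0.005):
    """Toy model: attenuation in dB grows with frequency squared and distance.

    coeff_db_per_m_at_1khz is an illustrative made-up constant; a real
    atmospheric-absorption coefficient depends on humidity, temperature
    and pressure (see ISO 9613-1).
    """
    return coeff_db_per_m_at_1khz * (freq_hz / 1000.0) ** 2 * distance_m

# Over 100 m, a 10 kHz component loses vastly more energy than a 1 kHz one,
# which is why distant thunder sounds like it went through a low-pass filter:
for f in (1000, 4000, 10000):
    print(f, "Hz:", round(air_absorption_db(f, 100.0), 2), "dB")
```

Even this crude sketch shows why distance reads as "darker": the loss is not a flat shelf but a curve that steepens rapidly with frequency.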

Greg


----------



## lee (Jan 24, 2007)

Hi synergy,

yup, you're absolutely right. However, since my last name is Stubborn, I've also found this Air Absorption Calculator! (Scroll down to the bottom):

http://www.doctorproaudio.com/doctor/ca ... res_en.htm

Oh, and fortunately the wind strength in a concert hall or studio is usually quite low. :wink:


----------



## synthetic (Jan 24, 2007)

To move things further away, I tend to roll off the highs and lows using shelving EQ. You can also use a highpass filter around 60-100Hz on instruments without very low frequencies (many brass and wind instruments) to clear up mud in your mix. Cutting at around 6k can also help to decrease presence (i.e. when trying to make VSL samples sound like they're not sitting in your lap). Try not to use the solo button very often; that doesn't help you in the final mix. 
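[The highpass described above can be sketched as a standard biquad using the well-known RBJ Audio EQ Cookbook formulas. The 80 Hz corner and Q of 0.707 are just example values in the 60-100 Hz range synthetic mentions.]

```python
import math

def highpass_biquad(f0, fs, q=0.707):
    """RBJ audio-EQ-cookbook highpass coefficients, normalized so a0 = 1."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    cosw = math.cos(w0)
    b0 = (1 + cosw) / 2
    b1 = -(1 + cosw)
    b2 = (1 + cosw) / 2
    a0 = 1 + alpha
    a1 = -2 * cosw
    a2 = 1 - alpha
    return [c / a0 for c in (b0, b1, b2, a1, a2)]

def process(samples, f0=80.0, fs=44100.0):
    """Direct-form I filtering with the coefficients above."""
    b0, b1, b2, a1, a2 = highpass_biquad(f0, fs)
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

# Low-end rumble (here modeled as DC) is removed; high-frequency
# content (an alternating signal near Nyquist) passes essentially unchanged.
dc = process([1.0] * 44100)
nyq = process([1.0, -1.0] * 22050)
```

The same coefficient recipe (with different b-terms) covers the shelving filters mentioned above; the cookbook lists them all.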

Some more mix tips: 

http://www.mixthis.com/bcpageframe2.html

(Really fun site from one of the world's top rock mixers. Some good tips on this page.)


----------



## lee (Jan 25, 2007)

Thanx for your tips, synthetic! You mean cutting at 6 kHz using a shelving EQ?

The link you gave seemed to have more politics than mixing advice. :wink: 

I'm thinking about which equalizer to use. The only commercial ones I have are the native ones in Cubase SL3 and K2.11. Are there any freeware EQs that would be more suitable for this task than the ones in SL3/K2.11? I'm almost afraid to ask this about freeware, since there are many pros in this forum who use expensive and superior plugins. :oops: 

/Johnny


----------



## kid-surf (Jan 25, 2007)

Johnny --


*Here is a tip:* Learn your frequencies. Spend 15 minutes a day for a month scrolling a sine wave generator through your monitors. As you do, try to guess which frequency is being presented. At the end of 30 days you'll have a better understanding of where your frequencies lie on your monitors. 
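[The sine-wave drill above is easy to script. A minimal sketch: pick a random frequency on a log scale (matching how we hear pitch), generate the tone as raw samples, play it through whatever player or DAW you like, guess, then reveal. The function names and ranges are my own choices.]

```python
import math
import random

def sine_tone(freq_hz, seconds=2.0, sr=44100, amp=0.5):
    """Generate one test tone as a list of float samples in [-amp, amp]."""
    n = int(seconds * sr)
    return [amp * math.sin(2 * math.pi * freq_hz * i / sr) for i in range(n)]

def quiz_frequency(lo=100, hi=10000):
    """Pick a random frequency on a log scale, since pitch perception
    is roughly logarithmic (each octave is a doubling)."""
    return round(math.exp(random.uniform(math.log(lo), math.log(hi))))

freq = quiz_frequency()
tone = sine_tone(freq)   # play this buffer, make your guess, then print freq
```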

As you are doing that -- Study EQ and what it does. 

You eventually need to get to the point where you can point to your monitors (w/o audio passing) and tell yourself what frequency is represented in that part of the monitor (going from 20 kHz to 20 Hz). What does this do? Well, now when you hear a sound coming from the monitor, you know better which frequencies need attention. And since you've studied EQ, you know what tool will work best for the task.

From that point everything is intuitive... meaning years later you can point to your monitors and know the frequency. (That simply means you know your monitors; you still need to learn how to mix well.)

You need to start by learning the basics of engineering. Once you have the basics down you'll no longer need to ask what to use. Then it becomes about sculpting in a way that pleases your ear. Much trial and error awaits.

So that is why settings and tips about certain frequencies are nothing but a band-aid that really isn't going to mean much in the long run. There's no shortcut to learning how to mix.

What's the saying about teaching someone 'how' to fish -vs- catching the fish for them?

Cheers,
J


----------



## Nick Batzdorf (Jan 25, 2007)

Things that move back do lose HF content, true, but it's the close-up detail you want to lose, not necessarily brightness. In other words, a close-miked violin with lots of bow rosin detail isn't going to sound right if it's supposed to be back in a section at the other end of a hall.

The real way to get distance is to use delays, or better yet early reflection programs, and then preferably in Altiverb. Sound travels roughly a foot per millisecond at sea level (actually it's 1.1 feet, but who cares), so if you add a 10ms delay you're moving the sound back ten feet. Try it - it works.
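[Nick's foot-per-millisecond rule is just speed of sound arithmetic; here it is as a sketch, assuming 343 m/s (about 20 °C at sea level). The function names are mine.]

```python
def distance_to_delay_ms(distance_m, speed_of_sound=343.0):
    """Extra travel time, in ms, for a source pushed back by distance_m."""
    return distance_m / speed_of_sound * 1000.0

def delay_in_samples(distance_m, sample_rate=48000):
    """The same delay expressed in samples at a given sample rate."""
    return round(distance_to_delay_ms(distance_m) / 1000.0 * sample_rate)

# Pushing a source back ~3.4 m (about 11 ft) adds roughly 10 ms of delay,
# matching the "a foot per millisecond" rule of thumb.
print(distance_to_delay_ms(3.43), "ms =", delay_in_samples(3.43), "samples")
```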

Volume and wetness in the reverb send also affect the perceived distance.


----------



## lux (Jan 26, 2007)

kid-surf @ Thu Jan 25 said:


> What's the saying about teaching someone 'how' to fish -vs- catching the fish for them?
> 
> Cheers,
> J



Kid,

From my personal point of view, Johnny's question is quite legitimate. That's what this forum is for. Especially if we talk about engineering, frequencies and such.

Perhaps I would agree with you more if he'd posted a generic musical question like "how could I arrange and sound like John Williams?"... but asking which hot freqs to cut looks like a good and interesting topic here. 

I've watched EQs and frequency analyzers for years now and I'm still wondering what the hell is going on there :shock: :wink: 

Luca


----------



## kid-surf (Jan 26, 2007)

lux @ Fri Jan 26 said:


> kid-surf @ Thu Jan 25 said:
> 
> 
> > What's the saying about teaching someone 'how' to fish -vs- catching the fish for them?
> ...



It is a legitimate question. That's why I took the time to answer him in a serious manner. :razz:

My tip was very much sincere. I remember back in the day when I was interning at a studio, the first thing I did was bring my notebook and scribble down the EQ settings on the board (after I took out the night's garbage). The engineers laughed at me. Later on I figured out why they were laughing.

It's the same reason I don't use mix templates today, gig to gig. And why I don't 'need' to check my mixes outside the studio. I learned how to fish. (not that I'm the only one who knows how to fish... metaphorically speaking)


----------



## lee (Jan 26, 2007)

I'm very grateful for your tips, all of you. And I admit to being a little impatient, which is a part of my personality, I'm afraid. But I'm not against trial and error! It's more that I'm after something to start from. Indeed, you've already given me valuable advice, and I intend to practice a lot, until I know what I like, what sounds realistic/unreal, and when to use the different methods.

Kind of OT: Let's see what factors we have here for achieving what we want, in no particular order.

The composition

The choice of instruments

The pan and width

The reverb(s), reflections and (pre)delays

The equalizing (for achieving depth and/or different timbre)

The editing of samples, for a bigger palette of sounds/articulations

The layering (which could be part of the composition, but also for achieving different timbre/sound)

The tempo

The dynamic processing


Anything I've forgotten?

/Johnny


----------



## Peter Emanuel Roos (Jan 26, 2007)

I want to suggest seeing reverb as having two components (generally accepted by the major players in this field):

- early (distinct) reflections, in the first 0 - 150 msec
- reverb tail, starting when the distinct reflections/echos turn into a kind of "wash" sound

I consider predelay to be part of the early reflections phase (= time to first ER).

ERs are extremely important for localisation/placement and are processed by our brains in a completely different manner than the reverb tail.
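[Peter's split (predelay, then distinct early reflections, then a wash) can be illustrated with a toy multi-tap delay. The tap times and gains below are invented for illustration; the first tap doubles as the predelay (time to first ER), and a real tail would follow the last distinct tap, e.g. from a convolution reverb.]

```python
def early_reflections(dry, sr=44100,
                      taps_ms=(18.0, 27.0, 41.0, 66.0),
                      gains=(0.6, 0.45, 0.35, 0.25)):
    """Mix a few discrete echoes (early reflections) onto the dry signal.

    taps_ms / gains are illustrative values, not measured hall data.
    Returns a list long enough to hold the last tap.
    """
    max_delay = int(taps_ms[-1] / 1000.0 * sr)
    out = list(dry) + [0.0] * max_delay
    for t_ms, g in zip(taps_ms, gains):
        d = int(t_ms / 1000.0 * sr)
        for i, x in enumerate(dry):
            out[i + d] += g * x
    return out

# Feeding an impulse shows the tap structure directly: one spike for the
# direct sound, then one per reflection, spread over the first ~66 ms.
ir = early_reflections([1.0])
```

Because ERs and tail are processed differently by the brain, keeping them as separate, separately adjustable stages (as above) is exactly what makes placement controllable.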


----------



## Nick Batzdorf (Jan 26, 2007)

By the way, I didn't mean Altiverb specifically in my post above, I meant any convolution reverb - the predelays in a convolution reverb rather than using standard delays.


----------



## lee (Jan 26, 2007)

Your idea sounds really good, Peter! Maybe it could be a part of vi-control.net? Anyway, your knowledge (and others' too) is invaluable for beginners like me, and I appreciate it a lot! Kind of comforting to meet people like you and forums like this, in contrast to a lot of the porn and destructive forums/chats that you can find on the net.

/Johnny


----------



## lee (Jan 26, 2007)

Nick: So you agree with Mr Kaufmann about predelay and depth? Or did you mean the direct sound?

OK, I'll go to bed now... 

Good night!

/Johnny


----------



## lee (Jan 27, 2007)

Hehe, this is why I think all this is kind of complicated. Interesting thoughts, Mr Fairhurst and Mr Batzdorf.

However, assuming you use the other factors in my list (mentioned before) when trying to achieve depth, maybe the predelay factor doesn't make such a big difference. Meaning both Mr Kaufmann's and Mr Roos's results sound satisfactory.

Peter, I know you have good theoretical arguments, but can you actually hear that Kaufmann's songs use his approach to predelay?

I'm having a thought about this. What if you treat the reverb as if the ER and the tail are two different things? (Well, I know you already do, but here's what I mean.) The predelay for the ER should increase the closer you get to the sound source, but the "predelay" (the timing) for the tail should increase for the far-away sounds. I will try this out and try to post examples if I get the time. 

/Johnny


----------



## synergy543 (Jan 27, 2007)

It's the relative levels that are important. The closer you get to an instrument, the louder the direct sound becomes relative to the ambience (ERs + reverb). And the reverse is also true. This is the same concept as near-field monitoring.
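[The relative-level idea above can be made concrete: in free field the direct sound follows the inverse-distance law, dropping about 6 dB per doubling of distance, while the diffuse reverb level in a room stays roughly constant. So the dry/wet balance encodes distance. A sketch, with the reference distance and reverb level invented for illustration:]

```python
import math

def direct_level_db(distance_m, ref_distance_m=1.0):
    """Inverse-distance law: level falls ~6 dB per doubling of distance."""
    return 20.0 * math.log10(ref_distance_m / distance_m)

def dry_wet_ratio_db(distance_m, reverb_level_db=-18.0):
    """Direct-to-reverberant ratio, assuming the diffuse reverb level
    is roughly constant across the room (an idealization)."""
    return direct_level_db(distance_m) - reverb_level_db

# Each doubling of distance costs the direct sound another ~6 dB,
# so the mix gets progressively "wetter" as the source moves back:
for d in (1, 2, 4, 8, 16):
    print(d, "m:", round(direct_level_db(d), 2), "dB direct")
```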


----------



## lee (Jan 27, 2007)

Synergy543: Yup, the dry/wet level, that is the ultimate tool. That's what I've been using so far, but it sounds interesting to try the predelay as well.


----------



## JohnnyMarks (Jan 27, 2007)

It's been some time, but one of my recollections from reading a Griesinger paper was that sidewall ERs are far more influential in helping the listener locate the sound in the X-Y plane than those coming from the ceiling/floor/front/back, meaning that for this purpose the latter can be largely disregarded. Peter?

EDIT:
Well, I'm reading Griesinger now and it seems he has some new findings...


----------



## synergy543 (Jan 27, 2007)

lee @ Sat Jan 27 said:


> Synergy543: Yup, the dry/wet level, that is the ultimate tool. That's what I've been using so far, but it sounds interesting to try the predelay as well.


The relationship of ERs to later ambience is also important. As you back away from an instrument, the ERs will become more prominent as the volume of the direct signal drops.


----------



## Nick Batzdorf (Jan 27, 2007)

I agree with Peter, actually, but I go by the sound of the instruments when setting the predelay, not the positioning. Strings sound better with a longer predelay, while horns want a short one.


----------



## david robinson (Jan 27, 2007)

hi, all you've got to do, and i do quite regularly, is get an acceptable mix of your virtual orchestra, then send it via a good pair of full-range speakers into a room bigger than 20x30x40 ft and record the result. simple and very accurate, if you've got the engineering experience.
you people waste a lot of time trying to make static sample libs sound convincing.
they never will.


----------



## Patrick de Caumette (Jan 27, 2007)

Peter Roos @ Fri Jan 26 said:


> Beat's idea:
> Instruments in the back (brass & percussion): more predelay
> Instruments in the front: less predelay
> 
> ...



Hi Peter,

Don't you mean:
Instruments in the back (brass & percussion): ERs get to the listener quicker
Instruments in the front: ERs get to the listener later?
This makes sense...


----------

