# Multiple reverbs and spaces question



## David Lee-Michaels (Aug 31, 2022)

Hi all
I hear a lot about mix engineers using multiple reverbs but I'm a little confused as to what they mean by that. I have a little experience with reverb and use just one sparingly on the more prominent sounds or instruments in a mix, like lead vocals, lead guitar or strings. But I've heard of people using multiple reverbs. For those that do that, do you use different types of reverbs depending on the genre and instrument or do you layer multiple types of the same reverb with different delays/tails on a single track? I would think that doing that would bloat a mix and make it too muddy. Also my understanding is that the point of reverb is to place the sounds in a 'physical' space. If different types of reverbs are being used doesn't that do the opposite? Some clarification would be great, thanks.


----------



## Jdiggity1 (Aug 31, 2022)

David Lee-Michaels said:


> ...do you use different types of reverbs depending on the genre and instrument or do you layer multiple types of the same reverb with different delays/tails on a single track?


I tend to use different reverbs for different purposes, but not necessarily for different instruments/tracks.
In my world (film scores and orchestral music/mockups), reverb is primarily used for:

*Creating/emulating space*
This is where the focus is more on the reflections of a sound, and not so much the tail. There are many ways to approach this, but the end result is either having your instrument/s pushed back a little further, or sounding like they're in a different room/space/hall, without adding a long tail. The idea with this is to match the rest of the ensemble where applicable for a consistent sense of space, or to create varying levels of depth for a 3D image.
Doesn't always need to be applied to all tracks. Will depend on the samples used or space you recorded in.
This is done either as an insert or as a send, depending on the plugins and approach you use.
A common example is to use Eventide's SP2016 and set the slider to "front". Or it may be to use a convolution reverb with presets dedicated to real spaces, etc.

*Effect*
This is more about the tail, adding lushness and shimmer, etc. It comes into play when you have a soloist that you want to stand out from the rest, a Thomas Newman woodwind noodle, ethereal vocals, etc. Tail lengths run anywhere from 3s onwards, and sometimes much, much longer... think Blackhole by Eventide, or Valhalla Shimmer, Raum, etc.
Usually an engineer will have their effect reverbs set up as an FX bus that you can send/route your soloist tracks to.

*Filling in the gaps / Glue / Cohesion*
This is also an effect when it comes down to it, but it's less about highlighting soloists and more about simply filling in the space between notes, affecting the overall dryness or wetness purely to make it sound more pleasing.
Generally achieved with a digital hall reverb, somewhere in the 2-3s range.
Can be set up as FX buses that you route all of your tracks to at various levels as needed, or even just as an insert on the mix bus if you're happy for everything to receive the same treatment.


Then there are techniques such as stacking two different reverbs to make a thicker, more lush reverb, which can do anything from simply achieving a wider sound to masking some of the "digital-ness" of reverbs, creating more of an "organic soup", as Alan Meyerson would say.

Basically, all options are on the table, but the important bit is knowing what you're trying to achieve before you start slapping on plugins and presets.


----------



## David Lee-Michaels (Aug 31, 2022)

Jdiggity1 said:


> I tend to use different reverbs for different purposes, but not necessarily for different instruments/tracks.
> In my world (film scores and orchestral music/mockups), reverb is primarily used for:
> 
> *Creating/emulating space*
> ...


Ok, so if I understand correctly, the idea of placing your instruments/VIs in a space is not so much a 'rule' but depends on the genre and goal?


----------



## David Lee-Michaels (Aug 31, 2022)

Jdiggity1 said:


> I tend to use different reverbs for different purposes, but not necessarily for different instruments/tracks.
> In my world (film scores and orchestral music/mockups), reverb is primarily used for:
> 
> *Creating/emulating space*
> ...


To give a bit of context to my question, and for general advice: I've arranged a cover of a song in which I have taken just the vocals and arranged them with VI strings, drums and synths to create a kind of hybrid classical/trailer-style piece. The vocals already have their own reverb, but everything else is fairly dry and comes from various synth plugins and VIs, so I'm trying to create cohesion between the various tracks and essentially place them in a 'space'. I'm not sure how to approach reverb for a project where everything is recorded in different studios with different sonic qualities.


----------



## Jdiggity1 (Aug 31, 2022)

David Lee-Michaels said:


> Ok so if I understand correctly The idea of placing your instruments/vi's in a space is not so much a 'rule' but is dependent on the genre and goal?


Yes, but also highly dependent on the samples being used to begin with.
It's part of the reason why it's a good idea to start with samples that were recorded in a good space and offer multiple mic positions. You can usually achieve the desired space that way without resorting to reverbs.

Let's say you're using Spitfire Symphonic Strings and Cinematic Studio Brass in the same piece.
SSS is a much bigger sound/space by default, so you'll want to add your verbs and positioning treatments to CSB in order to push it back into a similar space as SSS. You might not need to do anything to SSS for this stage.
Then you apply your general reverb to both sections (or the whole mix) in order to add some level of cohesion and glue/"gap-filling".


----------



## David Lee-Michaels (Aug 31, 2022)

Jdiggity1 said:


> Yes, but also highly dependent on the samples being used to begin with.
> It's part of the reason why it's a good idea to start with samples that were recorded in a good space, and offers multiple mic positions. You can usually achieve the desired space that way without resorting to reverbs.
> 
> Let's say you're using Spitfire Symphonic Strings and Cinematic Studio Brass in the same piece.
> ...


Ah ok, that basically answered the question regarding my second post about the cover song I was writing


----------



## Beat Kaufmann (Sep 1, 2022)

Hello David Lee-Michaels
Here is a practical example in Part 1 of *this video*.
It shows how instruments are placed in different room depths (with a 1st reverb) and how a reverb tail is placed over everything at the end. It is what Jdiggity1 explained with words.

The advantage of this reverb concept is that all instruments get the same amount of reverb tail. Nevertheless, some instruments are deeper in the room. 
If you were to add reverb and depth to each instrument at the same time (with a single reverb), the rear instruments would get more of everything and the front instruments (solos) would sound practically "dry". This technique makes sense especially in mixes involving orchestras, because in those cases you should ideally work with quite different room depths.

The video linked above is about reverb tails. Part 1 first covers what reverb tails are needed for - that's the part for you.

All the best
Beat
----------------------------------
Here is a video on creating depth (with little or no tail)


----------



## tc9000 (Sep 1, 2022)

*(video: a 6-year-old Alex Moukala tutorial)*

----------



## Beat Kaufmann (Sep 1, 2022)

Since tc9000 says in his video that, with the reverb concept I presented, you have to use one reverb for each instrument, I have to correct him. In practice, one sets up 3 - 4 different reverb depths (e.g. 3 bus channels with depth 1 / depth 2 / depth 3). The violins are then routed through bus 1, the woodwinds through bus 2, and brass and percussion through bus 3. Together with the reverb tail on the output, I then only need 4 reverb instances, which any computer today can easily handle... and the results are much more transparent than with his send version.
If you listen to the results of the send technique with only one reverb (depth and tail at the same time) in tc9000's video, you will notice how much everything "drowns" in reverb, especially the distant instruments - as I mentioned in my first post.

Which reverb concept you want to work with depends on personal taste. Personally, I like very different depths of space combined with as much transparency as possible (Example 1 / Example 2 / Example 3) - I can't achieve that with the send version.
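
Expressed as a plain-Python routing sketch (names purely illustrative, no specific DAW implied), the concept above is just a lookup from track to depth bus, with every path ending in one shared tail reverb:

```python
# Three depth buses, each carrying its own "placement" reverb, all feeding
# one shared tail reverb on the output. Instrument grouping is illustrative.
depth_buses = {
    "bus_depth1": ["violins_1", "violins_2"],
    "bus_depth2": ["flutes", "oboes", "clarinets"],
    "bus_depth3": ["horns", "trombones", "timpani", "percussion"],
}

def signal_chain(track):
    """Return the chain of nodes a track's signal passes through."""
    for bus, members in depth_buses.items():
        if track in members:
            return [track, bus, f"placement_reverb_{bus}", "output", "tail_reverb"]
    return [track, "output", "tail_reverb"]

# 3 placement reverbs + 1 shared tail = only 4 reverb instances in total
reverb_instances = len(depth_buses) + 1
```

However many tracks you add, the reverb count stays fixed, which is the efficiency (and transparency) argument being made here.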

Beat


----------



## tc9000 (Sep 1, 2022)

Oooohhh hahahah: @Beat Kaufmann I'm really sorry - I was being lazy and just posted a 6-year-old Alex Moukala tutorial. Meanwhile, an actual expert is posting real, heartfelt, carefully thought out responses! Kudos to you, sir!

@Beat Kaufmann: apologies - I hope you can forgive my impudence! @David Lee-Michaels - ignore me and listen to Beat!


----------



## NoamL (Sep 1, 2022)

tc9000 said:


> Oooohhh hahahah: @Beat Kaufmann I'm really sorry I was being lazy and just posted a 6-year old Alex Moukala tutorial . Meanwhile, an actual expert is posting real, heartfelt carefully thought out responses! Kudos to you, sir!
> 
> @Beat Kaufmann: apologies - I hope you can forgive my impudence! @David Lee-Michaels - ignore me and listen to Beat!


Alex Moukala may be self taught but he gets results that he & his listeners enjoy.


----------



## tc9000 (Sep 1, 2022)

NoamL said:


> Alex Moukala may be self taught but he gets results that he & his listeners enjoy.


I am the no. 1 Alex Moukala fan! I _love_ Alex's vids and have learned so much from him!


----------



## TonalDynamics (Sep 1, 2022)

David Lee-Michaels said:


> Hi all
> I hear a lot about mix engineers using multiple reverbs but I'm a little confused as to what they mean by that. I have a little experience with reverb and use just one sparingly on the more prominent sounds or instruments in a mix, like lead vocals, lead guitar or strings. But I've heard of people using multiple reverbs. For those that do that, do you use different types of reverbs depending on the genre and instrument or do you layer multiple types of the same reverb with different delays/tails on a single track? I would think that doing that would bloat a mix and make it too muddy. Also my understanding is that the point of reverb is to place the sounds in a 'physical' space. If different types of reverbs are being used doesn't that do the opposite? Some clarification would be great, thanks.


The general, blanket answer is that different types of reverbs can have vastly different frequency responses, and thus will sound better or worse for a given instrument or bus of instruments.

As mentioned by others it's also a great way to 'glue' some things together (along with compression).

Many times you will have some stuff that sounds too obnoxious or distracting if it is too forward in the mix, but when blended with other instruments with 'verb it can add some nice textural atmosphere.

'Verb is all about experimentation! There are no rights or wrongs, only what sounds good and what sounds crap, so get in there and play around with it


----------



## Trash Panda (Sep 1, 2022)

There are so many variables involved it’s hard to give a blanket answer or trust anyone who tells you there is the one way to approach all scenarios with any effect (even if it’s Jake Jackson with his MUST HAVE TWO REVERBS advice).

The waters get even muddier if you’re adding a band to an orchestra.

Do you want the band at the front or to sound like they’re interspersed within the orchestra?

Do you even want the band to sound like they're in a bigger venue or keep them to a dry studio sound?

How dry or wet are your samples? Are they recorded in situ from a tree mic or all close mic'd and centered?

What kind of venue feel are you aiming for?

For example, if I'm working with a library recorded in Teldex, Trackdown or AIR, it probably doesn't need a "room" reverb, because it already has plenty of room information baked in. Even "dry" libraries like Audio Imperia's Nucleus/Jaeger/Areia line have lots of room information baked in, even if there is not much of a tail involved.

In some cases, some libraries like those recorded in AIR or Abbey Road have so much natural tail built into their sound that I don't even really need a tail reverb for them either.

Anyways, this is a lot of words to say that it all depends on the context of what you're using and what you're trying to get to. If you can answer those questions, you can get much higher quality, more relevant answers.


----------



## The Gost (Sep 1, 2022)

David Lee-Michaels said:


> Hi all
> I hear a lot about mix engineers using multiple reverbs but I'm a little confused as to what they mean by that. I have a little experience with reverb and use just one sparingly on the more prominent sounds or instruments in a mix, like lead vocals, lead guitar or strings. But I've heard of people using multiple reverbs. For those that do that, do you use different types of reverbs depending on the genre and instrument or do you layer multiple types of the same reverb with different delays/tails on a single track? I would think that doing that would bloat a mix and make it too muddy. Also my understanding is that the point of reverb is to place the sounds in a 'physical' space. If different types of reverbs are being used doesn't that do the opposite? Some clarification would be great, thanks.


Hi, another point of view, if you don't just work in an "orchestral style", this person has a lot of good videos....


----------



## Nick Batzdorf (Sep 1, 2022)

A minus B monitoring is the answer.

Take one of your favorite records and split it onto two tracks.

Reverse the polarity of one of them and monitor in mono.

You're left with just the difference between the two channels - the reverbs and effects panned to the sides - which makes it much easier to hear what's going on with them.
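
A quick numpy sketch of why this works (synthetic signals, purely illustrative): anything identical in both channels cancels completely, leaving only the side content, which is where stereo reverbs and effects live.

```python
import numpy as np

# Hypothetical stereo mix: a shared "mid" part plus reverb that differs per side.
rng = np.random.default_rng(0)
mid = rng.standard_normal(1000)           # dry content, identical in L and R
verb_l = 0.3 * rng.standard_normal(1000)  # left-side reverb/effects
verb_r = 0.3 * rng.standard_normal(1000)  # right-side reverb/effects
left = mid + verb_l
right = mid + verb_r

# "A minus B": flip the polarity of one channel and sum to mono.
a_minus_b = left - right                  # the shared mid cancels out

# What remains is only the side information: the reverbs and effects.
residual = a_minus_b - (verb_l - verb_r)  # ~0: nothing of the mid survives
```

The same trick is why mono-ing a polarity-flipped copy of a record lets you audition its reverbs in isolation.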

This is Dave Moulton's explanation, possibly from when I was at Recording (if not, he wrote about it at the time):



Moulton Laboratories :: Stereo Reconsidered: A+B/A-B: Another Way of Mixing


----------



## liquidlino (Sep 1, 2022)

David Lee-Michaels said:


> To give a bit of context to my question and as general advice, I've arranged a cover of a song in which I have taken just the vocals and arranged it with vi strings, drums and synths to create a kind of hybrid classical/trailer style piece. The vocals already have their own reverb but everything else is fairly dry and come from various synth plugins and vi's so I'm trying to create cohesion between the various tracks and essentially place them in a 'space'. So I'm not sure how to approach reverb for a project where everything is recorded in different studio's with different sonic qualities.


A helpful reverb video from Cory:


----------



## David Lee-Michaels (Sep 2, 2022)

Beat Kaufmann said:


> Hello David Lee-Michaels
> Here is a practical example in Part 1 of *this video*.
> It shows how instruments are placed in different room depths (with a 1st reverb) and how a reverb tail is placed over everything at the end. It is what Jdiggity1 explained with words.
> 
> ...


Thanks I'll have a look at this.


----------



## David Lee-Michaels (Sep 2, 2022)

Nick Batzdorf said:


> A minus B monitoring is the answer.
> 
> Take one of your favorite records and split it onto two tracks.
> 
> ...


Thanks I didn't know you could do that.


----------



## Peter Emanuel Roos (Sep 2, 2022)

Trash Panda said:


> ...
> 
> In some cases, some libraries like those recorded in AIR or Abbey Road have so much natural tail built into their sound that I don't even really need a tail reverb for them either.
> 
> ...


I agree, but still:

The one time I was in the Abbey Road One control room, during recordings for orchestral library music, there were two Bricasti's connected and switched on. I have no idea why (especially during recording), but maybe they also wanted to be able to check the mix with just some more reverb?

Mind you, halls this size are still not as reverberant as "real" concert halls (and rightly so). I bet they still add a tad of algorithmic reverb tail to mixes.


----------



## Nick Batzdorf (Sep 2, 2022)

David Lee-Michaels said:


> Thanks I didn't know you could do that.



I spent hours in nerd heaven doing that back in the day.

By the way, I should have clarified - the reason you split the mix onto two tracks is so the L and R channels are separated (since you're going to reverse the polarity of one side).

That's probably obvious, because you're going to hear nothing if both tracks contain the same thing and you reverse one!


----------



## David Lee-Michaels (Sep 3, 2022)

Beat Kaufmann said:


> Hello David Lee-Michaels
> Here is a practical example in Part 1 of *this video*.
> It shows how instruments are placed in different room depths (with a 1st reverb) and how a reverb tail is placed over everything at the end. It is what Jdiggity1 explained with words.
> 
> ...


This was a great video, simple to understand, thanks


----------



## David Lee-Michaels (Sep 5, 2022)

Beat Kaufmann said:


> Hello David Lee-Michaels
> Here is a practical example in Part 1 of *this video*.
> It shows how instruments are placed in different room depths (with a 1st reverb) and how a reverb tail is placed over everything at the end. It is what Jdiggity1 explained with words.
> 
> ...


Just wanted to let you know I tried this concept and it works really well. I still have a lot to learn about applying reverb, but this is the first time I've been able to do it without everything sounding like mush. I gave you a like and a sub on your YT channel


----------



## ilamatteo (Sep 6, 2022)

Beat Kaufmann said:


> Hello David Lee-Michaels
> Here is a practical example in Part 1 of *this video*.
> It shows how instruments are placed in different room depths (with a 1st reverb) and how a reverb tail is placed over everything at the end. It is what Jdiggity1 explained with words.
> 
> ...


Does your method work on wet libraries? I mean, it would be nice to still give the samples a cohesive room sound and enhanced depth.


----------



## Peter Emanuel Roos (Sep 6, 2022)

What we often call "wet" samples are those with at least "baked in" early reflections (0 - 100 msec) and some amount of reverb tail (say with audible lengths up to 0.5 - 1.2 sec).

To avoid muddiness try to add "tail-only" reverbs to such samples, or use a plugin where you can control the levels of ERs and tails.
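
One way to picture the "tail-only" idea in code (a generic sketch, not any particular plugin - the 80 ms split point and 10 ms crossfade are arbitrary illustrative numbers): chop an impulse response so only the tail part is kept.

```python
import numpy as np

def split_ir(ir, fs, er_ms=80.0, xfade_ms=10.0):
    """Split an impulse response into an early-reflection part and a
    tail-only part (with a short crossfade), so the tail can be used on
    its own for already-'wet' samples. Defaults are just illustrative."""
    cut = int(fs * er_ms / 1000.0)
    xf = int(fs * xfade_ms / 1000.0)
    tail = ir.astype(float).copy()
    tail[:cut] = 0.0                                  # drop the ER region
    tail[cut:cut + xf] *= np.linspace(0.0, 1.0, xf)   # fade the tail in
    er = ir - tail                                    # parts sum back to the IR
    return er, tail
```

Convolving wet samples with only `tail` adds length without stacking a second set of early reflections on top of the baked-in ones.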

(yes, I am plugging my plugin  )


----------



## jcrosby (Sep 6, 2022)

ilamatteo said:


> Does your method work on wet libraries? I'm mean, it would be nice to still give samples a cohesive room sound and an enhanced depth.


ERs are typically something you add to dry sample libraries. Adding ERs to a 'wet' library can blur or muddy the image of a library whose sense of depth you already like. As has already been mentioned, for 'wet' libraries you typically want to add tail only.

Separate 'wet' libraries can share the same reverb tail; this will create the gluing effect you're after. You can also add a subtle amount of compression after the reverb - if you go lightly with it, this can _enhance_ the gluing effect. Or you can route all of the strings and their reverb to a dedicated string bus and add some subtle compression on the bus (same idea, but it tends to create a more cohesive gluing effect).


----------



## RudyS (Sep 17, 2022)

Beat Kaufmann said:


> Hello David Lee-Michaels
> Here is a practical example in Part 1 of *this video*.
> It shows how instruments are placed in different room depths (with a 1st reverb) and how a reverb tail is placed over everything at the end. It is what Jdiggity1 explained with words.
> 
> ...


This is a really good video. Thank you for this!


----------



## topijokinen (Nov 28, 2022)

This was an amazing thread - learned a lot! I have the Seventh Heaven and Valhalla Room reverbs. In your experience, what works best for early reflections and tail? Would you EQ them? I quickly tried with Seventh Heaven and noticed that I need to add quite a lot of ER to make it sound like it's in a big hall, but then it also starts to get muddy.


----------



## Peter Emanuel Roos (Nov 28, 2022)

jcrosby said:


> ERs are typically something you add to dry sample libraries.


With respect, I dare to disagree with you on what you call Dry sample libraries - and want to explain an opposite reasoning.

Dry sample libraries typically do not lack early reflections, they lack reverb tails.

Otherwise they would be "anechoic" recordings, which often sound ugly and unusable when heard "as is" (demos on the home page of my site).

The word anechoic refers to the lack of distinguishable echoes in the recordings - something you can see if you zoom in on the waveforms of very short, transient-rich sounds like a clave, triangle, snare hit, etc.

Reverb tails, by definition, do not contain echoes: all the previous, early echoes have merged into a chaotic/random waveform.

Recordings like those from the original VSL "Silent Stage" are not anechoic; they contain reflections from the floor and the nearby walls (it was a much smaller studio than their current Synchron hall). The Silent Stage was designed to have very short reverb tails, but not to suppress early reflections, which are essential for providing positioning information.

So, early reflections are present in "dry libraries". If you add more ERs from other sources, you are essentially making "mud" through the comb filtering that early reflections typically cause (and that is wanted from a single set of ERs). The combined positioning information from multiple sources will likely no longer make sense.
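
The comb-filtering point can be made concrete with a few lines of numpy (a toy model, not a real room: one direct path plus a single extra reflection at a made-up 5 ms delay):

```python
import numpy as np

fs = 48000                  # sample rate
g = 0.8                     # level of the added reflection
D = int(fs * 0.005)         # the extra "early reflection" arrives 5 ms late

# Impulse response of "direct sound + one delayed copy"
h = np.zeros(D + 1)
h[0], h[D] = 1.0, g

# Magnitude response |1 + g*exp(-j*2*pi*f*D/fs)|: a comb
n = 1 << 16
H = np.abs(np.fft.rfft(h, n=n))
freqs = np.fft.rfftfreq(n, d=1.0 / fs)

# Peaks every fs/D = 200 Hz, deep notches in between at 100, 300, 500 Hz, ...
notch = H[np.argmin(np.abs(freqs - 100.0))]   # close to 1 - g = 0.2
peak = H[np.argmin(np.abs(freqs - 200.0))]    # close to 1 + g = 1.8
```

A second, uncorrelated set of ERs adds another comb at different spacings, which is exactly the spectral "mud" being described.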

A good solution is to lower the amount of early reflections in the part of the signal that you can control: the reverb. You cannot remove ERs from the source material.

Hoping this adds some insight into how our auditory perception works and how samples are recorded.

Kind regards,

Peter


----------



## Obi-Wan Spaghetti (Nov 28, 2022)

Peter Emanuel Roos said:


> With respect, I dare to disagree with you on what you call Dry sample libraries - and want to explain an opposite reasoning.
> 
> Dry sample libraries typically do not lack early reflections, they lack reverb tails.
> 
> ...


Very interesting.


----------



## Beat Kaufmann (Nov 29, 2022)

topijokinen said:


> This was an amazing thread. Learn a lot! I have Seventh Heaven and Valhalla Room reverbs. In your experience what works best for early reflections and tail? Would you eq them? I quickly tried with seventh heaven and noticed that I need to add quite a lot of er to make it sound like it on a big hall but then it also starts to get muddy.


You should try to get the size of the room impression by having e.g. brass instruments, percussion, choir etc. sound far away, without much tail. If you achieve this, you can then dose the tail so that the mix does not end up a muddy mess.
Example with Timpani (without Tail) / Example with Timpani (with separate Tail)
Also: if you achieve all the room depths tail-less, it is enough to apply one tail over everything at the end. Then the mix will not get too thick, either.
Watch the first 4 minutes of the following video... It shows exactly what I just said.



All the best
Beat


----------



## juliandoe (Nov 29, 2022)

The way I work with reverbs is:
1. I turn off all the reverb from all the sample libraries.
2. With digital delays and/or small rooms, I try to match the dry libraries with the wet ones. This doesn't mean I apply reverbs to all the instruments; very often a dry instrument blends naturally with a wet one, especially instruments of the same family or section.
3. I use a convolution-based small hall to glue the instruments together even more. I choose the space based on the sonic characteristics of the wet libraries; this way, I'm not altering the sound of the space.
4. I use an algorithmic reverb (usually chambers or plates) to emphasize the tails of the previous verbs. Basically, the attack length of the convolution becomes the pre-delay time of the algorithmic reverb.
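
The last step - matching the algorithmic verb's pre-delay to the convolution's attack - could be estimated from the IR itself. This is a hypothetical helper, not anything from the workflow above; the 50% envelope threshold is an arbitrary illustrative choice.

```python
import numpy as np

def predelay_from_ir(ir, fs, threshold=0.5):
    """Estimate a convolution IR's 'attack': the time until its envelope
    first reaches `threshold` * peak. That attack length can then be used
    as the pre-delay of an algorithmic reverb, so its tail starts roughly
    where the convolution's attack ends."""
    env = np.abs(np.asarray(ir, dtype=float))
    idx = int(np.argmax(env >= threshold * env.max()))
    return 1000.0 * idx / fs   # pre-delay in milliseconds
```

In practice you would of course set this by ear; the sketch only shows the timing relationship between the two reverbs.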

By no means is this a must, but it's how I'm working with reverbs right now, and I'm always open to embracing new ideas if they improve my results or my workflow.

I hope this is helpful


----------



## Dewdman42 (Dec 8, 2022)

Peter Emanuel Roos said:


> With respect, I dare to disagree with you on what you call Dry sample libraries - and want to explain an opposite reasoning.
> 
> Dry sample libraries typically do not lack early reflections, they lack reverb tails.
> 
> ...



So I hear what you are saying; however, there are other reflections besides the very shortest ones from the floor around the close mic in the Silent Stage environment. When trying to make "dry" samples sound like they are on a bigger stage such as Synchron or Teldex, more ERs must be added - also for purposes of depth control.

What you seem to suggest is that this cannot be done with the VSL Silent Stage and other "dry" libraries, since the unavoidable very close reflections would conflict with those from products such as MIR Pro and your new Teldex product, and it would not be acceptable to only mix reverb tails in with VSL "dry" libraries. Please clarify what you mean and how various dry libraries can be used effectively with your new product or MIR Pro. I would think a little pre-delay would probably avoid the phase issues, with the Silent Stage providing the earliest ERs and the reverb product handling not only the tails but also the later ERs.

None of my libraries were recorded in an anechoic chamber.


----------



## Nick Batzdorf (Dec 8, 2022)

Dewdman42 said:


> there are other reflections besides the very shortest ones from the floor around the close mic and the silent stage environment


The conventional model is a 7-path one.

And now I'll have to see whether I can figure them out:

floor, ceiling, left and right walls, front wall, rear wall, and I guess the seventh must be the direct path


----------



## Dewdman42 (Dec 8, 2022)

What a bizarre response. In any case the question was for Peter.


----------



## Nick Batzdorf (Dec 8, 2022)

From Dave Moulton's Total Recording (I edited it and was partners in the publishing company back in the day):


----------



## Nick Batzdorf (Dec 8, 2022)

Dewdman42 said:


> What a bizarre response. In any case the question was for Peter.


Then my answer wasn't for you, it was for people who don't feel the need to insult me for no reason.


----------



## Dewdman42 (Dec 8, 2022)

I still find it bizarre and unrelated to my question which you quoted, but it was certainly not meant as an insult. 

How do you feel VSL and other non-anechoic libraries can be used with MIR Pro and the new Berlin Teldex product without conflicting ERs?


----------



## Nick Batzdorf (Dec 8, 2022)

Dewdman42 said:


> How do you feel vsl and other non anechoic libraries can be used with mirpro and the new Berlin teldex product without conflicting ER’s?


I don't have either, but conflicting ERs aren't usually an issue. And that isn't true only of orchestral music - how many film scores have been recorded on scoring stages with Lexicon 480s added?

You can turn down the ERs in most reverbs anyway, but MIR isn't really a reverb unit, it's a mixing engine.


----------



## Dewdman42 (Dec 8, 2022)

Well anyway that is what Peter implied and I am asking him for clarification on his comments.


----------



## Peter Emanuel Roos (Dec 9, 2022)

I hesitate to step in, because I am afraid this will descend into confusion over what each of us defines as dry, wet, ERs, etc.

My intention was: if samples already have sufficient positioning information (from science, this should relate to ERs), focus on the tail part.

Where tails start is a point of debate and definition.

About the first generation VSL libraries and where/how they were recorded:


> With an ambience of 0.8 seconds, it’s neither a “dry”, nor a “wet” environment, and it provides well-balanced reflections that the instruments’ sound can evolve and the musicians can hear themselves well.


(from https://www.vsl.co.at/en/AboutUs/Silent_Stage)

Thanks @Nick Batzdorf for the very informative illustration!


----------



## Dewdman42 (Dec 9, 2022)

Right but you didn’t answer my question about how vsl and other non anechoic dry libraries can effectively be used with tools such as mirpro and Berlin studio which add more ER’s, which you previously stated would conflict to create mud

If I understand correctly, the awesome-sounding audio demos for Berlin Studio used source tracks recorded in an anechoic environment, but none of my sample libraries are anechoic. This has raised general questions for me about using typical non-anechoic "dry" libraries at all, if their ERs inherently present problems for products such as MIR Pro and Berlin Studio, or for other approaches where more ERs will be intentionally introduced. That is the whole point of using dry libraries to begin with.


----------



## Peter Emanuel Roos (Dec 9, 2022)

Man, what's up?

Why are you repeating "non anechoic dry libraries"? 

Is there any "anechoic dry library" in this world?

Why this nitpicking? Is this some kind of trial?


----------



## Dewdman42 (Dec 9, 2022)

Oh, I'm sorry - I did not mean to put anyone on trial or upset you. I was just asking a sincere question. I agree libraries are not anechoic; that's why I didn't understand your previous comments about anechoic recordings and potential mud without them! Obviously I am the one confused, and I remain so.


----------



## re-peat (Dec 9, 2022)

There's a kind of threshold below which the presence of ERs isn't troublesome, Dewd. In the pre-Synchron VSL recordings, those first reflections, while not entirely absent (that would be physically impossible since, for starters, air is not a vacuum), stay well below that threshold, which leads us to perceive those samples as almost completely dry. Scientifically speaking, they aren't, but we hear them, and treat them, as such. For all intents and purposes, they're dry.

A good example of libraries which are, bizarrely, also often described (even by their developer) as dry but which aren't dry _at all_, are the Spitfire Studio Series. In these libraries, the presence of the ERs far exceeds that threshold - which is why I've always called them pretty wet libraries (even if they don't sound wet in an AIR Lyndhurst-y sort of way) - and I'm also of the opinion that in this case the room is, in fact, present to such a degree that it tends to conflict with more expansive spatialization. Not so much because of the mud risk (that's a different problem), but simply because of the incompatibility between two very different spatial presences. People do it, of course - add a hall reverb to the Studio Series to make it appear as if these libraries were recorded in a much bigger space (in order to be able to combine them with genuinely spacious libraries) - and if they like the results, good for them, but I always hear a conflict between the baked-in small-studio sound on the one hand, and the suggestion of a much larger space added by the hall reverb on the other.



----------



## Dewdman42 (Dec 9, 2022)

Well, that has been my opinion also; I was just attempting to get a better understanding of the things Peter said earlier. His point about ERs in the samples combining with ERs from a reverb to create mud is logical to me, so I am trying to understand if and how we should be finessing dry libraries through spatial placement in light of that.


----------



## Obi-Wan Spaghetti (Dec 9, 2022)

There might be 0.8 seconds of ambience, but if the mic is so close that all you hear is the instrument, then there might as well have been no ambience in the first place. Anyway, not an issue anymore with reverbs like MIR and BS, I guess.


----------



## Dewdman42 (Dec 9, 2022)

I'm pretty sure that in VSL's case they took all of this into consideration as carefully and thoughtfully as possible, both in how they captured the dry samples on the Silent Stage and in how MIR Pro reacts to them, as MIR Pro was designed specifically to complement them. In practical use I have personally had good results with this combination, but you know, the quest to learn orchestral mixing is ongoing, and I'm always on alert in case there is some detail I could do better.

Peter commented that comb filtering might result when ERs from the samples are combined with ERs from an after-the-fact IR. In particular, I think ERs from the floor would be hard to keep out of close mics. 0.8 secs is actually a really long time in terms of ER content, but depending on the Silent Stage's acoustic treatment, perhaps the more distant ERs were attenuated enough not to muddy up the IR content. I also suspect that the more distant ERs would be unlikely to arrive at exactly the same time after the direct sound as the IR's distant ERs, due to different room dimensions. But the ER from the floor could be quite close in time to any captured IRs, and thus might be a potential issue, as Peter suggested earlier.

As I said in my first question to Peter, perhaps we should use pre-delay, or perhaps MIR Pro is already doing something to account for the presence of those floor ERs in the samples. I do know that in the case of MIR Pro much of the data was painstakingly massaged by hand for various reasons - maybe this is one of them? @Dietz


----------



## Dietz (Dec 9, 2022)

@Dewdman42 - MIR uses Ambisonics recordings (nowadays in 3rd order) of impulses sent from a single position in eight directions, recorded from up to four distances, so perfect alignment is indeed a must. In addition, all remnants of the direct signal are removed to avoid phasing with the input signal, which is also positioned by MIR (as you know  ...).

Other than that, however, there are no artificial changes to the sound of the Venue, except for typical "cosmetic" tweaks to improve coherence, such as eliminating an erratic slap-back from a single direction or occasional resonances or LF-booms.


----------



## Peter Emanuel Roos (Dec 9, 2022)

This is really a delicate topic, that can easily lead to confusions. I will do my best to first count to 10, sit on my hands, etc.


----------



## osterdamus (Dec 11, 2022)

Interesting topic, trying to process the different strategies.

Between this video...


Beat Kaufmann said:


> Hello David Lee-Michaels
> Here is a practical example in Part 1 of *this video*.


... and this video...



The Gost said:


> Hi, another point of view, if you don't just work in an "orchestral style", this person has a lot of good videos....



Would it be correct to conclude that they essentially represent the same overall strategy (using different delay settings (and panning) to achieve room placement and a _single_ reverb for the room tail), with the difference that in the former video instruments are bundled into channels representing a "section" (in which all instruments share the same delay setting), while in the latter a delay is set per instrument?

Edit: I notice that in the latter video, the dry signal of the drums, bass, etc. is routed to the delay, which is routed to the output and the reverb. The original dry sound doesn't seem to go to the output? Maybe I just misread the DAW's user interface...


----------

