# What's up with reverb for "tails"?



## BachRules (Jun 3, 2014)

In various threads here, I've seen people speak about reverb for "tails" (of orchestra samples) as a specific consideration. What is special about "tails" when applying reverb? For me, reverb selection has been a lot of trial and error, without really knowing what I'm doing. Hoping to improve on that. Thanks.


----------



## dgburns (Jun 3, 2014)

I'll be nice and try to help.

"Tails" refers to the late part of the reverb, as opposed to the early-reflection part, which occurs right after the sound is heard. A lot of guys around here like to treat them as separate items.
Using a convolution reverb such as Altiverb for the early-reflection part is popular, as impulse-response reverb can help place the sound in a room. Then you set up an algorithmic reverb to do the late decay part, because it has more randomness in the decay, and you can program how the reverb dies out to your taste. This decay part is what we around here call the "tail".
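
In code terms, the split looks something like this toy numpy sketch (nothing to do with any particular plugin; the delay times, gains, and the single comb filter are made-up illustrative values). A short convolution stage supplies a few discrete early reflections, and a feedback comb filter stands in for the programmable algorithmic tail:

```python
import numpy as np

sr = 44100
dry = np.zeros(sr)   # one second of audio
dry[0] = 1.0         # a single click stands in for the instrument

# Early reflections: convolution with a short (50 ms), hand-made IR
# containing a few discrete echoes. A real workflow would load an
# impulse response of an actual room instead.
er_ir = np.zeros(int(0.05 * sr))
for delay_ms, gain in [(11, 0.6), (17, 0.5), (23, 0.45), (31, 0.35)]:
    er_ir[int(delay_ms * sr / 1000)] = gain
er = np.convolve(dry, er_ir)[: len(dry)]

# Tail: a toy algorithmic reverb (a single feedback comb filter).
# The decay is directly programmable via the feedback gain, which is
# the reason algorithmic reverbs get used for tails.
def comb_tail(x, delay_samples, feedback):
    y = np.zeros_like(x)
    for n in range(len(x)):
        y[n] = x[n]
        if n >= delay_samples:
            y[n] += feedback * y[n - delay_samples]
    return y

tail = comb_tail(er, int(0.037 * sr), 0.75)
wet = 0.7 * dry + 0.2 * er + 0.1 * tail
```

Raising the `feedback` value stretches the tail; swapping `er_ir` swaps the "room". That independence is the whole appeal of treating the two stages separately.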

There's much debate about the best personal choices. Personally, I just prefer algorithmic reverb for both early and late parts.

So there's my best stab at help. Please don't troll me for trying to help.


----------



## The Darris (Jun 3, 2014)

There are basically two important factors when discussing reverb. You have the "early reflections" (ER) and the "reverb tail" (RT/RT60).

Early reflections are basically the sound that hits the listener after it has bounced off of a few surfaces such as the walls and ceiling. This sound hits the listener AFTER the direct sound.

The reverb tail is basically the length of time (typically in seconds) that it takes for all the sound and early reflections to dissipate in the space.

With that being said, you have the two 'typical' reverbs used when working with samples: algorithmic and convolution. Convolution allows you to pick a space that already has a set reverb tail (RT60), and with that you can position your main sound source via panning and pre-delay.

Algorithmic reverbs are basically space designers. They allow you to customize every aspect to 'shape' the space you want. 

In my workflow, I utilize both. I use an algorithmic reverb to emulate the space that my samples with the longest natural RT60 were recorded in; in my case, that's Spitfire Audio. Once I have every library sitting in that space, I need to shape the sound overall to get everyone sounding in the same room. That is where my convolution reverb comes in. I apply it to all my sections and boom, they are all sounding in the same space.

Of course, this takes a lot of practice, so I encourage you to check out Peter Alexander's Visual Orchestration series to learn the principles of this concept. Good luck.

Cheers,

Chris


----------



## BachRules (Jun 4, 2014)

dgburns @ Tue Jun 03 said:


> please don't troll me for trying to help.


And you please don’t molest any children, dgburns. As if I’ve ever “trolled” anyone “for trying to help”? Which specific ASCII-sequences did I type which made you want to attack me? Or was it nothing I typed, and you’re just a jabbering imbecile prone to attacking people?



The Darris @ Tue Jun 03 said:


> There are basically two important factors when discussing reverb. You have the "early reflections (ER)" and the 'reverb tail (RT/RT60)."
> 
> Early reflections are basically the the sound that hits the listener after it has bounced off of few surfaces such as the walls and ceiling. This sound hits the listener AFTER the direct sound.
> 
> ...


I had misunderstood earlier references to "tails"; I'd thought people were using that word in reference to the release segments of samples, and that they were using different reverb on the samples' release segments, but now I understand better. Thanks for clarifying. I listened to the reverb in a little of your music on SoundCloud and it sounds good to me. So the basic ideas are that algorithmic reverb is useful for getting reverb-tail lengths to match up, since algo reverb offers adjustable decay time. And convo is good for imparting early reflections characteristic of a real space.


----------



## The Darris (Jun 4, 2014)

BachRules @ Wed Jun 04 said:


> So the basic ideas are that algorithmic reverb is useful for getting reverb-tail lengths to match up, since algo reverb offers adjustable decay-time. And convo is good for imparting early reflections characteristic of a real space.



That is the essence of it, in my opinion. The fact remains that sample libraries are all produced in different spaces, which all have a different RT60. Some have a very long one, like Spitfire's libraries, and others have zero, like Sample Modelling's instruments. The key is to remember this: you can't take away the tails from the samples, only add to them.

Again, this takes a lot of time and practice. I am still learning the ropes myself.

Cheers,

Chris


----------



## Nick Batzdorf (Jun 4, 2014)

Just to be precise about terminology, the early reflections - the first seven paths from the room surfaces - are what tell the ear about the space a sound is in. They usually fall within the first 50 milliseconds of the sound: the direct sound, plus the sound off both side walls, the ceiling, the floor, the front, and the rear.

After that the secondary bounces are much more closely spaced, and that's the tail. RT60 is the time it takes the reverb to drop 60dB. (You have to pick a point to measure the reverb time, because it continues for ages at a very low level.)
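
Nick's RT60 definition can be made concrete in code. A common way to estimate it from an impulse response is Schroeder backward integration of the energy, fitting the decay slope over a limited dB range and extrapolating to -60 dB. A sketch with a synthetic IR (the 1.8 s figure and the -5/-35 dB fit range are arbitrary choices for the demo):

```python
import numpy as np

sr = 44100
rt60_true = 1.8  # an arbitrary decay time for the synthetic IR below

# Synthetic impulse response: exponentially decaying noise.
# Dropping 60 dB in rt60_true seconds means the amplitude envelope
# falls by a factor of 10**-3 over that time.
t = np.arange(int(2.5 * sr)) / sr
rng = np.random.default_rng(0)
ir = rng.standard_normal(len(t)) * 10 ** (-3 * t / rt60_true)

# Schroeder backward integration gives a smooth energy-decay curve.
edc = np.cumsum(ir[::-1] ** 2)[::-1]
edc_db = 10 * np.log10(edc / edc[0])

# Fit the slope between -5 dB and -35 dB and extrapolate to -60 dB,
# since the very end of the decay disappears into the noise floor
# ("you have to pick a point to measure the reverb time").
mask = (edc_db <= -5) & (edc_db >= -35)
slope, intercept = np.polyfit(t[mask], edc_db[mask], 1)
rt60_est = -60.0 / slope
```

The extrapolation step is exactly the "pick a point to measure" caveat above: nobody measures all the way down to -60 dB in a real room.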

Anyway, the important point is that you can position sounds using discrete early reflection programs, and then send them all to a shared tail. The ER programs have no tail and the tail program has no ER.


----------



## RiffWraith (Jun 4, 2014)

Can someone please explain something to me?

If you are sitting in, let's say a concert hall, and you listen to some players play, you are hearing not only the direct sound (the source), but also the sound bouncing off of the walls, ceiling and floor. _The same_ walls, ceiling and floor. Regardless of whether or not you subscribe to the theory that this reverberation can be divided into two discrete elements (and there are those who do not), why would you want one reverb unit handling the early refl., and another handling the tails? Wouldn't you want the reverberation to "be in the same space"? You _can_ use an algo for one and a convo for another... that combo may sound good to your ears, but that doesn't mean it's an accurate depiction of a real-world scenario.

If you have EWQL Hollywood Strings, and SF's Mural, and assuming VI and VII are available to you from both libs, would you use VI from HS, and VII from Mural? Well, you can... that combo may sound good to your ears, but that doesn't mean it's an accurate depiction of a real-world scenario.

So why use two different reverbs to emulate - or enhance - one space?


----------



## The Darris (Jun 4, 2014)

RiffWraith @ Wed Jun 04 said:


> So why use two different reverbs to emulate - or enhance - one space?



It is subjective if you put it that way. I believe in the method I explained, briefly, because it is a solid way to get all of your different libraries into the same room. After all, my different sections were all recorded in different spaces, which is why I use the two reverbs: an algo to match all of my non-SF libraries to the "Air" sound, and the convolution to get the entire orchestra into one room.

To each their own, though. You may personally not like that approach, but different approaches suit different styles. I like the sound I am getting. It works for me, but like I said before, this is all subjective.


----------



## Peter Alexander (Jun 4, 2014)

Jeffrey - great questions. I'm answering based on a year of empirical testing and learning from Ernest Cholakis as he worked out the RT60s for the major orchestral sample libraries.

Separate from the technical definition of ERs, what ERs do for samples is add more room to the sample. For example, the RT60 of the Vienna libraries as a whole is significantly less than 1.0s.

Starting with FORTI/SERTI, where I had independent ERs to work with, I consistently tested an entire range of these IRs with flute, violins, etc. The end result was that the instruments or sections became fuller when longer ERs were added.

Or put differently, each instrument/section filled out more. 

I then tested this with other programs from Ernest (like the Hollywood Sound Impulse Response Collection), QL Spaces, Spat and other programs.

With Spat, we found that even when the reverb was turned off, it still applied the Early Reflections. This gave options like using Spat with reverb on, or turning it off and applying a different overall reverb tail to the whole mix, be it Bricasti or B2.

Three libraries had RT60s almost exactly the same. You would think those would have been easier to get into the same room, and on one hand they were. But the next observation was that a singular issue in getting everyone into the same room is determining how the samples were processed in post-production.

It wasn't like A reverb was THE solution. You need a tool kit to determine which is best for the job, again, based on what you own.


----------



## clarkus (Jun 4, 2014)

While recognizing that this is a deep area of study, and any short answer is going to be reductive, can someone tell me, mechanically, how you accomplish this marriage of algorithmic and convolution reverbs? I am having trouble picturing this. Do you run a given instrument / track through both reverbs in series?

I understand that libraries like Spitfire, having been recorded in a hall, may need little or no treatment.

But there is an inference here that where reverb is needed, these two types of reverb are being used, each to provide what the other cannot. How?


----------



## Nick Batzdorf (Jun 4, 2014)

> Regardless of whether or not you subscribe to the theory that this reverberation can be divided into two discrete elements (and there are those who do not), why would you want one reverb unit handling the early refl., and another handling the tails? Wouldn't you want the reverberation to "be in the same space"?



That's interesting. I've never heard arguments saying you can't divide the reverb into early reflections and the tail, but that doesn't mean there aren't any.

Anyway, I have a couple of answers. The answer to your question is that the ER is what tells your brain about the space. If you think about the 7-path model, the psychoacoustics make sense. The echoes are farther apart and therefore more distinct than the tail build-up.

The second answer is that you can use several different reverb programs - different spaces - in a mix, and the ear thinks nothing of it. Think about a pop mix that might have a plate on the lead voc, a room on the drums, an in-tempo reverb on the snare, an overall room on everything, a spring reverb on the guitar amp...and it sounds perfectly normal.


----------



## The Darris (Jun 4, 2014)

clarkus @ Wed Jun 04 said:


> While recognizing that this is a deep area of study, and any short answer is going to be reductive, can someone tell me, mechanically, how you accomplish this marriage of algorithmic and convolution reverbs? I am having trouble picturing this. Do you run a given instrument / track through both reverbs in series?
> 
> I understand that reverbs like Spitfire, having been recorded in a hall, may need little or no treatment.
> 
> But there is an inference here that where reverb is needed, these two types of reverb are being used, each to provide what the other cannot. How?



Again, speaking in terms of how I go about orchestral reverb: I apply algos to my non-SF libraries using settings that match best to Air Lyndhurst. This gets those libraries sounding close to my SF ones. The convolution is put on the entire mix (by sections: high strings, low strings, high brass, low brass, etc.). This gets the overall final tone sounding in the same space.

I am not saying that you need two reverbs to get the best sound; it is subjective to your project. This is just one technique I learned through Visual Orchestration 2 and 3 that I feel yields the best results. There are other approaches that work well too, but by learning one and really sitting down to listen to your samples and compare your different reverbs, you are essentially ear training, which also leads to better mixes.


----------



## Simplesly (Jun 5, 2014)

With regard to keeping the ERs and the tail in the same space, does anyone here send a little bit of the ER bus to the tail bus? In other words, I send signal from each of my instrument groups to my ER busses (one each for front, mid, and rear distances), then I send a little off of each of those busses to my tail bus, which is also getting a send from each of the aforementioned instrument groups.
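
For what it's worth, that routing reduces to some simple send arithmetic. Here's a numpy sketch with placeholder "reverbs" (the send levels, delays, and gains are all made up, and one shared ER bus stands in for the three distance busses, for brevity):

```python
import numpy as np

sr = 48000

def er_reverb(x):
    # Placeholder for an early-reflections-only processor:
    # one crude 10 ms "bounce".
    return 0.5 * np.roll(x, 480)

def tail_reverb(x):
    # Placeholder for a tail-only processor: diffuse energy later on.
    return 0.3 * np.roll(x, 4800)

groups = {  # mono noise stands in for the instrument groups
    "strings": np.random.default_rng(1).standard_normal(sr),
    "brass": np.random.default_rng(2).standard_normal(sr),
}
er_send, tail_send, er_to_tail = 0.4, 0.3, 0.15

# Each group feeds the ER bus...
er_bus_out = er_reverb(sum(er_send * sig for sig in groups.values()))

# ...and the tail bus hears the direct group sends PLUS a little of
# the processed ER bus: the "send a bit of the ER bus to the tail
# bus" idea from the post above.
tail_bus_in = (sum(tail_send * sig for sig in groups.values())
               + er_to_tail * er_bus_out)
tail_bus_out = tail_reverb(tail_bus_in)

mix = sum(groups.values()) + er_bus_out + tail_bus_out
```

The `er_to_tail` send is the only thing that distinguishes this from plain parallel busses: with it, the tail reverberates the reflections as well as the dry sound, which is closer to what a real room does.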

I recently started doing this, but I honestly don't know if it's any better. Guess I'll just roll with it and see.


----------



## trumpoz (Jun 5, 2014)

> With regard to keeping the ERs and the tail in the same space, does anyone here send a little bit of ER bus to the tail bus?



I actually use VSS for early reflections as an insert on each channel and then send to an FX channel with Spaces, so the processed sound with ERs is also sent to the tails. I keep telling myself that in theory it should work, and I'm finding that in reality it does quite well.


----------



## waveheavy (Jun 5, 2014)

Did someone mention the length of reverb tails? If an instrument playing quarter notes has a much longer reverb tail, the sound will 'wash', and the instrument loses some clarity and definition.

Here's a reverb method for mixing I learned from Fab Dupont, along with some explanation. Maybe you all can use it.

Years ago, tracks were recorded farther away from the mic, so more of the room's ambiance was recorded. Studios built echo chambers to simulate reverb effects for that reason. Now things are usually recorded close-miked, so not much of the room's ambiance gets recorded, and mixers use plugins and such to simulate reverb instead.

Fab uses 3 reverbs. 

The first verb is a very short one, only used to move the recording back away from the mic a few feet, with no room ambiance. Anything recorded 'dry' gets this first verb.

Then for instruments he wants farther back in the sound stage he uses another short verb, which adds a bit more of the room sound.

After both first and second verbs, he adds an EQ that cuts the highs down to around 6kHz, with a dip around 3.5kHz (the telephone frequency), and a low cut up to around 250Hz.

The reason for cutting those highs with the first two verbs is that air absorbs high frequencies faster than low frequencies, so when instruments are farther back in the sound stage, we hear less of their high-frequency content. If all the high-frequency content is left untouched, it will conflict with how we naturally hear reverberation in a room (disregarding amplitude of the instrument).

For instruments that Fab wants to remain closer to the listener's ear in placement, i.e., more up front, he adds a hall verb. The EQ after it is set to slightly boost (around 1 dB) from 6kHz up using a shelf. Then the same dip at 3.5kHz, and a low cut up to around 300-350Hz.

The object of the hall verb is to simulate more of the 'height' of reverberations off the ceiling. It also simulates how more of the high frequencies survive the short trip from instruments that are closer, while slightly reducing the low-frequency build-up.

The ultimate object is to balance these to achieve the kind of room sound you want. Just adding more reverb to move something farther back is not enough if you want to actually simulate how we hear frequencies.
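
If it helps to see the EQ part of this in code, here's a scipy sketch of the "far" verb EQ as described (low cut up to ~250 Hz, a dip near 3.5 kHz, highs rolled off above ~6 kHz). The filter orders and the notch Q are my guesses, not Fab's settings, and `iirnotch` cuts deeper than the gentle dip described; a peaking EQ with a small negative gain would be closer to the real thing:

```python
import numpy as np
from scipy import signal

sr = 48000

def far_verb_eq(x):
    """Post-reverb EQ for the 'far' verbs: low cut ~250 Hz,
    dip near 3.5 kHz, highs rolled off above ~6 kHz."""
    b, a = signal.butter(2, 250 / (sr / 2), "highpass")
    x = signal.lfilter(b, a, x)
    b, a = signal.iirnotch(3500, Q=2.0, fs=sr)
    x = signal.lfilter(b, a, x)
    b, a = signal.butter(2, 6000 / (sr / 2), "lowpass")
    return signal.lfilter(b, a, x)

# Run white noise through it so the spectral tilt is easy to check.
noise = np.random.default_rng(0).standard_normal(4 * sr)
shaped = far_verb_eq(noise)
f, psd = signal.welch(shaped, fs=sr, nperseg=4096)

def band(f0):  # power near a given frequency
    return psd[np.argmin(np.abs(f - f0))]
```

Plotting `psd` against `f` shows the darkened, "distant" spectrum: mids intact, lows and highs pulled down, which is the whole point of EQing the far verbs.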

But of course one can set reverbs how they want; if it sounds good, it is good.


----------



## Simplesly (Jun 5, 2014)

Waveheavy,

I like to keep tails as short as possible for that reason. As soon as the arrangement gets dense, having too long a tail creates muddiness. 

The Fab technique sounds interesting, though I'd love to hear a little more in detail how it's done.


----------



## clarkus (Jun 5, 2014)

If you ask me (and you didn't) that's pretty detailed!

I notice in Waveheavy's relation of this technique, there's no mention of what sort of reverb is used. I imagine that's because this technique predates the introduction of convolution. I wonder how its introduction changes things.

Does anyone know if MIR, which attempts to place your chosen instrument on a stage in a place of your choosing, achieves its effects in this way? In other words, with EQ being one of the changing parameters, to create the illusion that an instrument is closer or farther back?


----------



## waveheavy (Jun 5, 2014)

I think that is what the EQ treatment in MIR may well be doing.

But I also believe they've tried to do it from a purist standpoint, when that may not always be the best fit for every composition. With mixing, there are things that have to be done at times to the audio that would make most people cringe, thinking you're destroying the audio (like EQ carving). But in reality, the process of recording adds certain artificial elements to the audio that were not present in the natural sound of the instrument and environment. The sample library creators have done some of that work for us already, but not all of it.


----------



## Simplesly (Jun 5, 2014)

clarkus @ Thu Jun 05 said:


> If you ask me (and you didn't) that's pretty detailed!
> 
> I notice in Waveheavy's relation of this technique, there's no mention of what sort of reverb is used. I imagine it's because this technique predates the introduction of convolution. I wonder how its introduction changes things.
> 
> Does anyone know if MIR, which attempts to place your chosen instrument on a stage in a place of your choosing, achieves its effects in this way? in other words, with EQ being one of the changing parameters, to create the illusion that an instrument is closer or farther back?



Type of reverb would be one question... Also, how many dB to boost or cut in the EQ, the amount of wet/dry balance for the near verb vs. far verb, send or insert?

I have been obsessed of late with creating a lush and big sound with a lot of depth, without using too "large" a room. I don't want long tails, but I do want a nice soundstage, where I can close my eyes and visualize the placement of the sections.


----------



## waveheavy (Jun 5, 2014)

Simplesly @ 5/6/2014 said:


> Waveheavy,
> 
> I like to keep tails as short as possible for that reason. As soon as the arrangement gets dense, having too long a tail creates muddiness.
> 
> The Fab technique sounds interesting, though I'd love to hear a little more in detail how it's done.



What I use is the UAD EMT 140 for the first two short reverbs, and then either the D-verb in Pro Tools, or the Waves IR-L with Bricasti samples for the Hall. Depends on what I'm mixing. I set up the reverbs each on its own independent effects buss, each with its own EQ after them, and then create sends to those from the instruments I want to treat.


----------



## waveheavy (Jun 5, 2014)

Simplesly @ 5/6/2014 said:


> Type of reverb would be one question... Also, how many db to boost cut in the EQ, amount of wet/dry balance for the near verb vs far verb, send or insert?
> 
> I have been obsessed of late with creating a lush and big sound with a lot of depth, without using too "large" a room. I don't want long tails, but I do want a nice soundstage, where I can close my eyes and visualize the placement of the sections.



This is really something you have to use your ears with, instead of relying on a formula. The EQ carving actually allows you to use more of the reverb wetness without it overpowering. That's one of the secrets of a good reverb simulation. 

The first short reverb you actually should not be able to recognize in the mix. But solo one instrument and remove it, and you should hear the instrument moving up front towards you. Bring the second reverb up until you start to hear it, then back off a little. With the hall you want it to dominate, but still just bring its level up until you begin to hear it, then back off some. You can use a chamber, a medium or small hall, whatever you want. No shackles. Experiment.


----------



## RiffWraith (Jun 5, 2014)

The Darris @ Thu Jun 05 said:


> RiffWraith @ Wed Jun 04 said:
> 
> 
> > So why use two different reverbs to emulate - or enhance - one space?
> ...



It is subjective - sure. Most of this stuff is. If it works and it sounds good, then who cares what you used and how you used it.

You make a point of getting all of your different libraries into the same room. With you there, and that can be a bit of a challenge at times. And I, for one, cater to the idea that one verb for everything is not necessarily the way to go; that the priority should be getting each section (or instrument) sounding good on its own, first and foremost. But if that's the goal, is using two different verbs on the same section really the way to go?


----------



## RiffWraith (Jun 5, 2014)

Nick Batzdorf @ Thu Jun 05 said:


> That's interesting. I've never heard arguments saying you can't divide the reverb into early reflections and the tail, but that doesn't mean there aren't any.



Lexicon, for one. I will see if I can find an article or resource.



Nick Batzdorf @ Thu Jun 05 said:


> Anyway, I have a couple of answers. The answer to your question is that the ER is what tells your brain about the space.



Right. But does your brain take a crap at some point, and not realize that the tails belong to the same space?



Nick Batzdorf @ Thu Jun 05 said:


> The second answer is that you can use several different reverb programs - different spaces - in a mix, and the ear thinks nothing of it. Think about a pop mix that might have a plate on the lead voc, a room on the drums, an in-tempo reverb on the snare, an overall room on everything, a spring reverb on the guitar amp...and it sounds perfectly normal.



This is true. But your example is different, in that each instrument has its own verb. Here, we are talking about (for example) using a convo for the ER, and then an algo for the tails on the same instruments... are we not?


----------



## BachRules (Jun 5, 2014)

RiffWraith @ Wed Jun 04 said:


> Can someone please explain something to me?
> 
> ... why would you want one reverb unit handling the early refl., and another handling the tails? Wouldn't you want the reverberation to "be in the same space"? You _can_ use an algo for one and a convo for another... that combo may sound good to your ears, but that doesn't mean it's an accurate depiction of a real-world scenario....
> 
> So why use two different reverbs to emulate - or enhance - one space?


I'll take a stab at this, even though I probably know the least of anyone here about using reverb effectively. The reason to use two reverbs is that you need an algo to give you full control over your decay time (you might have a hundred IRs covering virtually all decay times, but they're going to come with their own ERs, which you might not want); and then you need a convo for ERs because, I'm surmising, algos generally don't offer ERs as convincing as convolution's. If there are other reasons to use two verbs instead of one, I'd like to hear about them.


----------



## Nick Batzdorf (Jun 5, 2014)

You can use convo tails independently of the ERs the same way, although I personally have never used two different processors for the ERs and the tails - and therefore don't have an opinion.

Generically, the main reason to use separate ERs for different things in a sampled orchestral mock-up is to position them individually, and/or for clarity. A bass drum booming in a solo violin's reverb program could sound bad.

Yet another reason is to use different predelays. Lengthening the predelay on strings, for example, can make them more powerful (because you hear the bite before the reverb wash).
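
A tiny numpy sketch of what that predelay move does (the click, the smear standing in for the reverb, and the 5/80 ms times are all made up for illustration):

```python
import numpy as np

sr = 48000

def with_predelay(dry, wet, predelay_ms):
    """Delay only the wet (reverb) signal; the dry attack is untouched."""
    n = int(predelay_ms * sr / 1000)
    delayed = np.concatenate([np.zeros(n), wet])[: len(wet)]
    return dry + delayed

# A percussive "bite": a single click.
dry = np.zeros(sr // 2)
dry[0] = 1.0
# Crude stand-in for a reverb wash: the click smeared over 50 ms.
wet = 0.5 * np.convolve(dry, np.ones(2400) / 2400)[: len(dry)]

short_pd = with_predelay(dry, wet, 5)   # wash arrives almost immediately
long_pd = with_predelay(dry, wet, 80)   # 80 ms of clean attack first
```

With the longer predelay, the opening milliseconds contain nothing but the dry transient, which is why the "bite" reads as more powerful before the wash arrives.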



> Right. But does your brain take a crap at some point, and not realize that the tails belong to the same space?



Within reason, not really, because the delays are so close together that it's a big wash at that point. Tails aren't all the same, of course - a plate tail sounds quite different from a church. But for example this is VSL's concept: ERs from the stage, add your own reverb.

(They also have a hybrid reverb with convo ER and synthetic tail that I keep raving about, although Dietz pointed out that they have a newer one now.)

I keep thinking of a way to bring up the old Roland D-50 synth, which spliced sampled attack transients onto a synthesized tail, but I can't find a good segue.

Anyway, for me the main thing is to find a reverb program that sticks to the sound. That has to do with the programming as much as the processor, I think.


----------



## Mahlon (Jun 6, 2014)

Nick Batzdorf @ Thu Jun 05 said:


> They also have a hybrid reverb with convo ER and synthetic tail that I keep raving about, although Dietz pointed out that they have a newer one now.)



Which one is he talking about?

Mahlon


----------



## Nick Batzdorf (Jun 6, 2014)

Maybe the one in MIR Pro, MIRacle? I'm not sure.

The VSL plug-ins I have installed are the 32-bit versions, i.e. I still have the same Hybrid Reverb processor (which is just great).


----------



## waveheavy (Jun 6, 2014)

Folks do realize that an artificial pulse must be created in the acoustic space for the mic to pick up the room reflections in order to create the impulse responses for a convolution reverb, right? And that part of that artificial pulse ends up added to your instrument when you apply the convo?

So what is VSL doing? With MIR aren't they actually using the real instrument to reproduce the real space? That's hard to beat in my opinion.


----------



## iaink (Jun 6, 2014)

Nick Batzdorf @ Fri Jun 06 said:


> But for example this is VSL's concept: ERs from the stage, add your own reverb.



Do you mean MIR - is there a need to add a tail to MIR?


----------



## alanb (Jun 12, 2014)

Although I primarily use VSL's Hybrid Reverb nowadays, my favorite reverb implementation — by far — remains TASCAM's long-deleted GigaPulse. 

http://tascam.com/content/images/universal/products/505/main.jpg

First you selected your room/space, then you chose your L and R mic emulations from a fairly extensive list, then you got to place the L and R mics where you wanted them, and finally you got to place your sound source wherever you wanted it to sit within the room/space (as it relates to each of the mics).

It kills me that no one else has set their GUI up this way... I wonder whether there's a patent preventing it.....


----------



## milesito (Jun 12, 2014)

Does anyone have any good orchestral SoundCloud examples of songs with a "good" reverb tail? Can you specify which reverb is being used? I would love to hear what you guys think is good so I can calibrate my ears/expectations.


----------



## Nick Batzdorf (Jun 12, 2014)

> Do you mean MIR - is there a need to add a tail to MIR?



No, I mean their Silent Stage studio. It's at the center of their recording technique - they designed a studio that sounds like a concert stage's early reflections, but then it has a very short reverb time. The idea is that you add your own tail.

And most of VSL doesn't sound good until you do that - which is a feature, not a bug.


----------



## klawire (Jun 12, 2014)

alanb @ Fri Jun 13 said:


> It kills me that no one else has set their GUI up this way... I wonder whether there's a patent preventing it.....


This last sentence caught my eye and I felt an urgent need to contribute to this thread with some off-topic information. :D According to a corporate lawyer who was my business law professor, patents can't protect ideas; they only protect an expression of an idea. Technical patents protect the technical implementation, and design patents protect the detailed design of a product. Even if Tascam had patented their design or implementation of the GUI, which is highly doubtful, it could easily be circumvented by implementing or designing it a little differently. To conclude, you could create a new reverb with a similar-looking GUI, as long as it isn't a clear copy and doesn't use the same software structure.

And to make sure that this post isn't completely off topic: I usually use an impulse response (or a digital reverb) for ER, but the tail is always an impulse response. I've tried some digital verbs for the tail but didn't really like them. It could be because the reverbs I've tried weren't very good, or it could be that I just didn't know how to use them well. In any case, I like the results I get with multiple (2 to 4) layers of different convolution reverbs. And I don't really know what's conceptually different about the tails compared to ERs, other than the use: you can easily move the instruments away from their current position with ERs and EQ, without a load of tail muddying the mix up.


----------



## alanb (Jun 12, 2014)

klawire @ Fri Jun 13 said:


> alanb @ Fri Jun 13 said:
> 
> 
> > It kills me that no one else has set their GUI up this way... I wonder whether there's a patent preventing it.....
> ...



IA(also?)AL, and I feel comparably compelled to point out that: 

_(i)_ there are more than enough poorly-drafted and/or overbroad patents granted every year to ensure that much more is 'protected' than should be;

_(ii)_ the threat of expensive litigation over a barely-colorable patent held by a sufficiently-monied conglomerate will usually be as strong a deterrent as the threat of expensive litigation over a rock-solid patent; and 

(iii) I was being deliberately simplistic when I referred to the 'GUI'. 35 U.S. Code §101 states that "[w]hoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor...." When I refer to GP's GUI, what I'm really describing could very well be the product of inherently patentable 'process' elements.

In the spirit of keeping my commentary on-topic, I will revise my prior statement thusly:

*It kills me that no one else has implemented comparable functionality in their reverb applications. I wonder whether there's a patent preventing it.....* :mrgreen:


----------



## The Darris (Jun 13, 2014)

RiffWraith @ Thu Jun 05 said:


> And I for one, cater to the idea that one verb for everything is not necessarily the way to go; that the priority should be getting each section (or instrument) sounding good on it's own, first and foremost. But if that's the goal, is using two different verbs on the same section really the way to go?



I agree, getting each section to sound good on its own is a must, but for me, they need to not only sound 'individually' good, they need to fit with everyone else (VIs) around them. From the testing and experimenting I have done, I have found the best result in an orchestral setting with my samples is the use of an algo on my driest libraries to match them to the SF libraries, then pushing each individual section to its own instance of Spaces. I also use VSS on any stereo instruments to help place them even more. So to answer your last question: for me, it is the way to go for my orchestral template. Other styles are something I hadn't really worked on until recently. I quickly found that this process gave less than desirable results, so I mostly just mixed each instrument separately to sound good and then worked on the final mix after the fact.


----------



## muk (Jun 13, 2014)

Alan, maybe Silverspike R2 has what you are looking for. I'm not entirely sure (don't have it), but it looks like you can place L and R mic freely in a room, and also the source. No different mic emulations though, from what I gathered. And it's algorithmic.


----------



## clarkus (Jun 13, 2014)

Nick Batzdorf, maybe you can enlighten me. I've been batting away at this question & I can't seem to connect with anyone.

Are the folks on this thread talking about using two reverbs in series, one to provide the early reflection and the other the tail? 

I mean (this is my question #1) does one run the signal through one reverb & then the next?

I assume that's the protocol, but as I don't really know I keep imagining there is some unnamed practice that allows these two reverbs (or their effects on the signal) to be married. 

if it is as I describe it here (the two reverbs are simply in series), I'm wondering (my next question): Can Convolution reverbs providing ER be set to provide NO tail, thus leaving that job to reverb #2 ? This would seem to be implicit, but, again, I want to make sure I follow what's being done here.

I'm also wondering - and I sense a few other people are, too - (last question) how do we come out ahead? I mean a good reverb unit can provide an authentic simulation of ER and also the resulting tail, yes?

Sorry to be thick, but this is so damned interesting and I am SO in the dark.


----------



## trumpoz (Jun 13, 2014)

Clarkus - some reverbs allow for ER only and tail only. I just checked Reverence (the stock convolution reverb in Cubase 5), and I'm able to change the mix of ER/tail so it can be ER only or tail only.


----------



## clarkus (Jun 14, 2014)

Mm. None of the reverbs I currently own seem to provide for this.

But thanks for shedding some light.


----------



## The Darris (Jun 14, 2014)

clarkus @ Sat Jun 14 said:


> Mm. None of the reverbs I currently own seem to provide for this.
> 
> But thanks for shedding some light.



If you have the option to control the volume of either tail or ER then you can do this. Just turn it all the way down to 'turn it off' so to speak.
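In code terms, "turning one half off" is just muting one time region of the impulse response. Here is a toy numpy sketch of splitting an IR into ER and tail parts; the 48 kHz rate, 80 ms split point, and 10 ms crossfade are my own illustrative assumptions, not anything a particular plugin exposes:

```python
import numpy as np

SR = 48_000   # sample rate; 48 kHz assumed throughout

def split_ir(ir, sr=SR, er_ms=80, fade_ms=10):
    """Split an impulse response into an ER part and a tail part,
    with a short complementary crossfade at the boundary."""
    split = int(sr * er_ms / 1000)
    fade = int(sr * fade_ms / 1000)
    ramp = np.linspace(1.0, 0.0, fade)
    er_part = ir.astype(float)
    tail_part = ir.astype(float)
    er_part[split:split + fade] *= ramp          # fade the ER out...
    er_part[split + fade:] = 0.0
    tail_part[split:split + fade] *= ramp[::-1]  # ...as the tail fades in
    tail_part[:split] = 0.0
    return er_part, tail_part

# Because the crossfade is complementary, ER + tail rebuilds the original IR
ir = np.random.randn(SR)                         # dummy 1-second IR
er_part, tail_part = split_ir(ir)
assert np.allclose(er_part + tail_part, ir)
```

Loading only `er_part` into a convolution reverb gives you ER with no tail, which is what the "turn it all the way down" trick approximates.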


----------



## Mahlon (Jun 14, 2014)

Clarkus,

Reverb can be used both ways. One setup is to run two reverbs (or more if you have different ER reverbs for different sections) as you say, in sequence -- the first reverb for ER and the next, tail. Another way: you can use sends from each instrument channel to grab a bit of the ER from a reverb and then blend that with a send which grabs a bit of the tail. Or yet another way: you can use sends for the ER, and then run all the ERs through a final tail reverb at the end.

I think there was a recent discussion about using Sends going through a final reverb tail.

The above are just standard approaches. Probably the most common.
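The routing options above can be written out as a few lines of toy Python, with plain gains standing in for real reverbs; all function names and send levels here are purely illustrative:

```python
import numpy as np

def mix_serial(dry, er_fx, tail_fx, send=0.3):
    """Serial: the ER reverb's output feeds the tail reverb."""
    return dry + tail_fx(er_fx(dry * send))

def mix_parallel(dry, er_fx, tail_fx, er_send=0.3, tail_send=0.2):
    """Parallel: separate sends to an ER bus and a tail bus, summed at the mix."""
    return dry + er_fx(dry * er_send) + tail_fx(dry * tail_send)

# Stand-ins for real reverb processors: simple gains (purely illustrative)
er = lambda x: x * 0.5
tail = lambda x: x * 0.8

sig = np.ones(4)
assert np.allclose(mix_serial(sig, er, tail), 1.12)    # 1 + 0.3*0.5*0.8
assert np.allclose(mix_parallel(sig, er, tail), 1.31)  # 1 + 0.15 + 0.16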

Some convolution reverbs allow you to separate the ER from the tail. Most algo reverbs would allow you to do this too, I would think.

Which DAW and reverbs are you using at the moment?

Mahlon


----------



## Simplesly (Jun 14, 2014)

Nick Batzdorf @ Thu Jun 12 said:


> > Do you mean MIR - is there a need to add a tail to MIR?
> 
> 
> 
> ...



Nick, I never knew that this was a feature of the VSL silent stage - also it hasn't been my experience that VSL samples sound good with just a tail and no ER. When I strip away the reverb and pan them to taste, what I end up with is instruments that sound like they were recorded in a really nice sounding school band rehearsal room. Not a lot of depth or room placement info. Add in just a tail and it sounds artificial. Add in the early reflections and bam, you've got a room. 

The best sound I've been able to get is by sending to my reverbs pre-fader. It just sounds more lush and 'wet' that way, without having overpowering reverb levels.

Wondering if there are any tutorials on adding reverb to VSL samples (and other really dry libraries)...


----------



## Nick Batzdorf (Jun 14, 2014)

Simplesly, I wrote the wrong thing. Brain misfiring. Most people add reverb to the VSL programs, not just tails. But that is the concept: the sound of the stage without the hall.

Pre- and post-fader sounds the same, of course; the difference is what happens when you move the fader! If you want the instrument's level in the reverb send submix to go up and down with the fader, you set it post-fader; if you want it to remain constant, you set it pre-fader.
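Nick's distinction can be written out as simple arithmetic. This is a toy model; `send_level` and its arguments are my own illustrative names, not anything from a particular DAW:

```python
def send_level(source, fader, send, pre_fader):
    """Signal level feeding the reverb bus from one channel (toy model)."""
    if pre_fader:
        return source * send           # fader position is ignored
    return source * fader * send       # reverb feed follows the fader

# Pull the fader to half: the post-fader reverb feed halves, the pre-fader one doesn't
assert send_level(1.0, 0.5, 0.4, pre_fader=True) == 0.4
assert send_level(1.0, 0.5, 0.4, pre_fader=False) == 0.2
```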

***

I just read this:



> Folks do realize that an artificial pulse must be created in the acoustic space for the mic to pick up the room reflections in order to create the impulse responses in a convolution reverb? And part of that artificial pulse is being added to your instrument when you apply the convo?



No! The signal used to create the impulse - usually a sine wave sweep - doesn't end up in the IR. It is deconvolved out of the recorded response before you use it, leaving only the room's response!
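A toy numpy demonstration of why the excitation signal doesn't survive into the IR. A white-noise burst stands in for the sine sweep, and I assume circular convolution with no measurement noise; the recorded room response is the excitation convolved with the room's impulse response, and dividing the two spectra recovers the IR alone:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2048
excitation = rng.standard_normal(n)              # stand-in for the sweep
true_ir = np.zeros(n)
true_ir[[0, 100, 400]] = [1.0, 0.5, 0.25]        # direct sound plus two echoes

# "Recording": excitation played in the room = excitation (*) room IR
recorded = np.fft.ifft(np.fft.fft(excitation) * np.fft.fft(true_ir)).real

# Deconvolution: divide out the excitation's spectrum
recovered = np.fft.ifft(np.fft.fft(recorded) / np.fft.fft(excitation)).real

assert np.allclose(recovered, true_ir, atol=1e-8)   # the sweep is gone
```

Real IR capture (e.g. the Farina sweep method) is more elaborate, but the principle is the same: what you load into the convolution reverb is the room alone, not sweep-plus-room.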


----------



## clarkus (Jun 14, 2014)

Thanks for taking an interest, Mahlon -

I am using LogicProX on a MacBook Pro. 

Was using SpaceDesigner exclusively (stock with Logic) when I first began with Logic six mo's ago. 

I'm currently using a demo version of the B2 Reverb from 2C Audio. I should say that though the B2 sounds great, some of the more CPU-intensive reverbs it provides seem to only allow (on my computer) a single instance. So I'm up against that. 

In any case, when I poke around the parameters of the B2 and Logic's Space Designer, either one, I don't see "ER" or "Tails" as options that can be turned on or off. Maybe I'm not poking in the right places? This is about the B2, in case you're not familiar with it. A pretty recent product. They have a demo ...

http://www.2caudio.com/products/b2#_overview


----------



## Simplesly (Jun 14, 2014)

Nick Batzdorf @ Sat Jun 14 said:


> Pre- and post-fader sounds the same, of course; the difference is what happens when you move the fader! If you want the instrument's level in the reverb send submix to go up and down with the fader, you set it post-fader; if you want it to remain constant, you set it pre-fader.



Right - so what I'm saying is the levels hitting the send are whatever they are (in my case) coming from Vienna, which I try not to mess with too much, save for very minor eq to get rid of irritating frequencies. The pre fader send allows me to set relative levels of all my sections to create depth in the mix, while maintaining a constant level hitting the reverb submix. When mixing VSL, the most important thing is to eliminate as much of the phony dry sounding signal as possible, and this technique helps. 

If you send post fader, you also drop the level of your reverb send on instruments that you set farther back in the room by moving the faders down. If there is a better, simpler way, I'm all ears, but what I'm doing sounds pretty good (to me) :D


----------



## Simplesly (Jun 14, 2014)

clarkus @ Sat Jun 14 said:


> Thanks for taking an interest, Mahlon -
> 
> I am using LogicProX on a MacBook Pro.
> 
> ...



Clarkus,

To get rid of the tail in space designer you need to manually mess with the volume envelope of the IR (move the dots) until you just have an early reflection. You basically need to use your ears.
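What that envelope edit amounts to, in code terms, is truncating the IR shortly after the early reflections and fading to silence. A rough numpy sketch; the 80 ms cutoff and 15 ms fade are guesses you would tune by ear, exactly as Simplesly says:

```python
import numpy as np

SR = 48_000   # sample rate, assumed

def keep_er_only(ir, sr=SR, er_ms=80, fade_ms=15):
    """Truncate an impulse response after the early reflections,
    with a short fade-out so the cut doesn't click."""
    end = int(sr * er_ms / 1000)
    fade = int(sr * fade_ms / 1000)
    out = ir[:end + fade].astype(float)
    out[end:] *= np.linspace(1.0, 0.0, fade)   # fade the start of the tail to zero
    return out

ir = np.ones(SR)                 # dummy 1-second IR
er_only = keep_er_only(ir)
assert len(er_only) == 4560      # 80 ms + 15 ms of fade at 48 kHz
assert er_only[-1] == 0.0        # ends in silence
```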


----------



## Mahlon (Jun 14, 2014)

hi Clarkus,

They may not be listed as ER and TAIL. I don't have B2. But you can emulate a tail or ER with it depending on the parameters you set. That would be the same with any algorithmic reverb I believe. If B2 is too power hungry, you might want to look at Breeze. I do have that plug-in and it's good with lots of great presets from Den.

Spaces is probably one of the best-sounding, easiest reverbs to use all around. It's convolution, and I use it for both ER and tail. There's no law that says you have to use convo for ER and algo for tail. Spaces does both beautifully. It's cheap (especially on sale), not too power hungry and sounds great. True, you don't have as many parameters to tweak in Spaces compared to VSL Hybrid or B2 or Breeze, but you really don't need to. It just kinda works. Throw on an Acme .5 or .8 preset for ER, and a bit of the Hamburg Cathedral preset (back off of it some or it's too wet) for tail, and Bob's yer uncle. So I find myself using Spaces when not tweaking Hybrid.

Mahlon


----------



## Hannes_F (Jun 15, 2014)

clarkus @ Sun Jun 15 said:


> I'm currently using a demo version of the B2 Reverb from 2C Audio. I should say that though the B2 sounds great, some of the more CPU-intensive reverbs it provides seem to only allow (on my computer) a single instance. So I'm up against that.
> 
> In any case, when I poke around the parameters of the B2 and on Logic's SpaceDesigner, either one, I don't see "ER" or "Tails" as options that can be turned on or off. Maybe I'm not poking on the right places? This is about the B2, in case you're not familiar with it. A pretty recent product. They have a demo ...
> 
> http://www.2caudio.com/products/b2#_overview



I have and love and use B2 ... but it is not the best example of reverb for dividing between ER and Reverb tail. You can use workarounds like using very late and sparse ERs but you can't switch them off.

So ... this is the way of using B2 that works for me: I put _one_ instance of B2 as an insert on a sum bus and mix into that reverb. And based on that, if any of the instruments or groups are still too close, I selectively add ERs to them (with a different software).

Same with QL Spaces.


----------



## thebob (Jun 15, 2014)

sometimes, simply turning the oversampling down to x1 lets you use many more instances, while still delivering a great sound! 


the thread made me want to look again at what Sebastian did with his expansion "air x verb" and how he handled tails vs ER... but it seems from Google that the thread can only be seen by mods now, and the SoundCloud tracks and other discussion places about it have disappeared. What happened? >8o


----------



## iaink (Nov 21, 2014)

Nick Batzdorf @ Thu Jun 12 said:


> No, I mean their Silent Stage studio. It's at the center of their recording technique - they designed a studio that sounds like a concert stage's early reflections, but then it has a very short reverb time. The idea is that you add your own tail.
> 
> And most of VSL doesn't sound good until you do that - which is a feature, not a bug.



I thought their sound is dry ... in that you are meant to add early reflections and tail. That's why their Hybrid reverb is built with early reflections and tail in one tool?


----------

