# Why is it common for people to use 2 different reverbs?



## Daniel Petras

I have not experimented with this; perhaps I should. I have often seen people do this in tutorials. What effect or advantage does this have over simply using one reverb?


----------



## tomaslobosk

I've seen guys use different reverbs to handle early reflections and late reflections separately, but I still think less is more.
I always use one reverb with little or no early reflections, since most of the time the room position is given by the samples themselves.


----------



## Maxime Luft

No reverb, SF's T/O/A mics are enough.


----------



## Replicant

I've seen it done in rock music, where you want the snare to have this big, open reverb, but with a fast decay. Anything else is usually run into a reverb of the same space, but with a longer decay. Not exactly "natural" sounding, but could be desirable in some styles.

In most cases though, it's probably to compensate for something that is lacking.

This is something I've come to realize after years of watching YouTube and other tutorials: 9/10 "tricks", for lack of a better term, are just a means of compensating for poor source material. A great example is drum production. You often see guys layer samples, parallel compress them, throw harmonic exciters on them, etc., and at the end you're like "Wow, that sounds so much better!" However, it only sounds better because the original was garbage.

Usually, that souped-up version pales in comparison to what just getting it right at the source would sound like, and if you were to apply these "tricks" to already good-sounding tracks, you'd wind up with something that is not only more difficult to mix, but actually sounds _worse_.


----------



## karelpsota

Here's what I've learned from watching scoring mixing engineers.

The first reverb is to make the instrument sit in a realistic room. Usually convolution. Altiverb's Fox Scoring Stage is a good place to start.
The second reverb is for embellishing. Usually longer tails with algorithmic (or outboard) reverb. Lexicon and Bricasti come to mind.

Also, Shawn Murphy (Williams and Powell's engineer) told me this a while back:
_*Q*: How do you approach reverb for dense orchestral sections?_

_*Shawn*: We use multiple reverbs typically. We use a short dense reverb to sort of fill the spaces between the instruments, and we use a longer reverb to create a tail. We use it judiciously for film and music because we don't want the reverb to overcome the direct sound in terms of competition with sound effects and dialogue on the screen. We want the picture to represent the music on the screen accurately._

Source: https://www.reddit.com/r/StarWars/comments/3w97d3/were_pat_sullivan_dann_michael_thompson_and_shawn/


----------



## synthpunk

One way to construct a subtle and complex ambience in a mix is to combine two different approaches to reverb. Going about this in an informed, deliberate way will result in a much more refined and appropriate sound than by simply stacking two different reverb algorithms (either in parallel or – heaven forbid – serial).

One way to approach it is to think about foreground and background. Often using a single reverb results in an ambience that sits primarily in the foreground (resulting in a shallower mix) or in the background (resulting in a relatively dry foreground). Using two reverbs might allow a mix the benefit of both the foreground ambience (for softness and blurriness) and background ambience (for depth and spaciousness). One way to do this is to use a plate for the foreground ambience and a hall for the background ambience. This will be most coherent if foreground sounds are mainly (if not exclusively) sent to the plate, and background sounds are mainly (if not exclusively) sent to the hall. This approach is useful if the mix calls for a lush ambience with a three-dimensional quality to it.

Another approach is to combine short and long reverbs. This can be appropriate if the song calls for a long deep ambience, but there’s no middle ground between too dry and too lush for some sounds. This way, some textural background sounds and feature sounds would use the long reverb and other sounds (particularly more percussive/articulate sounds) would use the short reverb. A hall or plate would be suitable for the long reverb, and a room or shorter plate might be suited to the short reverb. For a more unnatural sound, use a thick modulated hall for the long reverb and a non-linear reverb for the short reverb. This approach is useful for complex mixes that don’t need to have a particularly realistic acoustic sound, such as electronic music and ‘studio’ music.


----------



## D-Mott

Different reverbs sound better on different sounds.

Like, would I use QL Spaces on a synth sound? ehh nar. I'd rather use an algo for that, especially if the sound required a long tail. I like chorusy tails on synths.

I have two reverbs that are great at long tails: Audio Damage Eos and Valhalla Shimmer. For me, Eos sounds better on piano than Shimmer does. Certain synth sounds sound better when run through Shimmer than Eos.

Vocals sound great when run through Valhalla Vintage verb in the random hall.

LX 480L. Great verb for vocals/anything dry. Anything that has a decay time between 1 and 5 seconds. Plus, that thing can really send instruments back in the mix just by increasing the mix knob. The verb glues to the sound and actually makes it sound like it's further away, rather than just being more wet. Valhalla Vintage, for example, does not have that kind of glue for me.

Oh, and I have not found any verb that sounds as good on Hollywood Strings as QL Spaces' Hamburg Cathedral.


----------



## lysander

Interesting posts, thanks for the discussion!


----------



## Guy Rowland

In terms of samples, unless you go for a full spatial-control plugin a la MIR or SPAT, you have three different controls for spatial information:

1. The Pan Pot. Pretty straightforward unless you're working in surround - from hard left to hard right.
2. Early Reflections (ER). Probably the least understood, I like to think of it as distance from the listener. Provides a softer diffusing bloom.
3. Tail. The overall space of the room, what we often think of as just reverb.

If someone is playing the clarinet 3 feet from you in a big live concert hall, you'd still very clearly hear the sound of the hall as well as someone standing TOO DAMN CLOSE WITH A CLARINET. You'd be hearing the direct sound (let's say they're right in your face so panned centrally), and the hall (the tail), but no bloom (ER) because they are TOO DAMN CLOSE WITH A CLARINET. Now let's say they've mercifully backed off 30 feet and a little to the left. The direct sound is now more diffuse and comes from a little over there. So pan - yes, tail - yes, but it won't sound right because it still is a very close sound, not sounding like 30 feet away, and the tail in the room doesn't fully help you. That's where you need the Early Reflection to give it that bloom.

The reason why this is so important with samples is that unless you buy every library from SuperDev (TM), all consistently and immaculately recorded in a beautiful space, you'll have samples that were recorded in different spaces using different techniques. Even the close mics of an ambient library have tail in them, as with the close clarinet example above - the tail will be relatively quieter than on the stage mics, but still there. A very dry library - VSL, Sample Modelling et al - will need ER for pretty much everything if you want to blend it with libraries that were recorded in live spaces. Conversely, you'll likely need no ER at all for an ambient library.

So yes - for simulating sample libraries in a real space, you typically need at least 2 reverbs, one for ER and one for Tail. Of course it all gets as complex as you want - predelays etc. - and this is where the likes of MIR and VSS come in. But I find a very simple method works surprisingly well - just use three controls: pan (left to right), aux send to ER (front to back) and aux send to Tail (room) - with both reverbs sent post-fader, and don't let anyone tell you otherwise or your mix will run away from you. If you need to generate multiple stems at once, you'll need 2x reverbs for each stem - strings, brass etc. - otherwise 2 will do tolerably well for the whole mix.
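For what it's worth, the three-control method above (pan, aux send to ER, aux send to tail) can be sketched as a toy send model in a few lines of Python. This is purely illustrative: the function name, the equal-power pan law and the parameter names are my own assumptions, not any particular DAW's routing.

```python
import numpy as np

def place(source, pan=0.0, er_send=0.0, tail_send=0.0, fader=1.0):
    """Toy model of the three spatial controls:
    pan       : -1 (hard left) .. +1 (hard right)
    er_send   : post-fader send level to the ER bus (front/back)
    tail_send : post-fader send level to the tail bus (room)
    Returns (dry stereo signal, ER bus feed, tail bus feed)."""
    post = source * fader                      # channel fader first: sends are post-fader
    theta = (pan + 1.0) * np.pi / 4.0          # equal-power pan law
    dry = np.stack([post * np.cos(theta),      # left channel
                    post * np.sin(theta)])     # right channel
    return dry, post * er_send, post * tail_send

# A dry-library instrument, 30 feet back and a little to the left:
sig = np.ones(8)
dry, to_er, to_tail = place(sig, pan=-0.3, er_send=0.8, tail_send=0.4)
```

Because both sends are post-fader, pulling the channel fader down pulls the ER and tail feeds down with it, which is what keeps the mix from running away.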


----------



## SillyMidOn

Guy Rowland said:


> If someone is playing the clarinet 3 feet from you in a big live concert hall, you'd still very clearly hear the sound of the hall as well as someone standing TOO DAMN CLOSE WITH A CLARINET.



That is funny...


----------



## Jimmy Hellfire

SillyMidOn said:


> That is funny...



Reminded me of:


----------



## Baron Greuner

I use just one most of the time. Lexicon, and generally that would be Random Hall. I tend to use as little reverb as possible, to the point where it's actually too dry.

This has been put up before, but this is quite a good YouTube video on the so-called Abbey Road Reverb Trick.


----------



## AlexandraMusic

So, after adding convolution sends etc., would you also add the 'tail' as individual sends, or is it only one applied at the end, on the output/master?


----------



## jonathanprice

D-Mott said:


> Like, would I use QL Spaces on a synth sound? ehh nar. I'd rather use an algo for that, especially if the sound required a long tail. I like chorusy tails on synths.



Same, but the Jerry Goldsmith fan in me has been tempted to emulate the way he would put the synth player (with speakers) in the room with the orchestra, in an attempt to get the synth to blend naturally. I have mixed feelings about how well that worked for him, but every time I pull up a reverb for a synth, I think hmm...


----------



## re-peat

AlexandraMusic said:


> So, after adding convolution sends etc would you also add the 'tail' as individual sends or is it only one applied at the end, on the output/master?




Alexandra,

It's not compulsory to split up ER's and tails and rely for each on a different reverb, you know. Certainly not if you have a good reverb that has separate controls for both. (And any good reverb *has* controls for both.)

Yes, many people swear by the technique of using convolution reverbs for ER's and algorithmic reverbs for the tails, but that's just an aesthetic preference. (If even that, because when it comes to reverb, it seems to me that too many people wander around in their mixes in an advanced state of confusion and not-quite-knowing-what-to-do-ness, and then simply end up doing what they read other people are doing, without really knowing why.)

But there's no law or wisdom that says that you *have* to use different reverbs for tails and ER's. Again, if you have a good reverb, simply send some amount of your source track(s) into it, make all the required adjustments regarding tail and ER's (and the balance between them) inside your reverb, and you're done. Beautifully simple.

If I use two (or more) reverbs in a mix, it is never to have one for ER's and another one for tails, but either to get a richer, more complex reverb sound, or to give one of the instruments something extra reverb-wise, or to compensate for spatial incompatibilities among the sounds (samples) I chose to work with.

_


----------



## pixel

re-peat said:


> If I use two (or more) reverbs in a mix, it is never to have one for ER's and another one for tails, but either to get a richer, more complex reverb sound, or to give one of the instruments something extra reverb-wise, or to compensate for spatial incompatibilities among the sounds (samples) I chose to work with.



+1

Two reverbs are common in pop/rock music, where it's not necessary to make everything sound like it's in one room. E.g.: drums need to be dominant and percussive, so a long, big reverb is an enemy and a short 'room' type of reverb is useful.
Another thing is that instruments in pop/rock are recorded very dry (as dry as possible), so ER's are added to create a position in space (distance from the listener). It's not that obvious with the wet samples used in a virtual orchestra (let's face it, most of them are wet even when only close mics are used). Sometimes adding ER's to samples that already carry recorded positional information (they're wet) can make even more of a mess and render the perceived position in space unreadable.


----------



## Jeast

I use 8Dio strings a lot myself. The problem with those is that they are panned dead in the middle. Normally I would use the Far or Mix mics, but when you start panning the string sections into place, you also pan the room information. That's why I use the close mics, a room verb and then a long tail (algo).


----------



## pixel

Jeast said:


> Myself I use 8dio strings a lot. The problem with those is that they are panned dead in the middle. Normally I would use the Far or Mix mic's, but because when you start panning the string sections into place, you also pan the room information. Thats why I use the close mics, room verb and then a long tail (algo).



It's an advantage for me. I'm using them most of the time as layers (close mics), so it's good to be able to easily pan them to the position of my master string samples. I have never used them with the Far/Mix mics.


----------



## AlexandraMusic

re-peat said:


> Alexandra,
> 
> It's not compulsory to split up ER's and tails and rely for each on a different reverb, you know. Certainly not if you have a good reverb that has separate controls for both. (And any good reverb *has* controls for both.)
> 
> Yes, many people swear by the technique of using convolution reverbs for ER's and algorithmic reverbs for the tails, but that's just an aesthetic preference. (If even that, because when it comes to reverb, it seems to me that too many people wander around in their mixes in an advanced state of confusion and not-quite-knowing-what-to-do-ness, and then simply end up doing what they read other people are doing, without really knowing why.)
> 
> But there's no law or wisdom that says that you *have* to use different reverbs for tails and ER's. Again, if you have a good reverb, simply send some amount of your source track(s) into it, make all the required adjustments regarding tail and ER's (and the balance between them) inside your reverb, and you're done. Beautifully simple.
> 
> If I use two (or more) reverbs in a mix, it is never to have one for ER's and another one for tails, but either to get a richer, more complex reverb sound, or to give one of the instruments something extra reverb-wise, or to compensate for spatial incompatibilities among the sounds (samples) I chose to work with.
> 
> _



Thank you for taking the time to reply. Indeed, I know there is no one-size-fits-all approach, and from my research I can see that many people do things in different ways for different reasons. At the moment I'm really just playing around to see what results I get. I usually just add a send from QL Spaces to each instrument, and I also use VSS 2. Overall I will just trust my ears and whatever works for what I'm working on!


----------



## Guy Rowland

On 2 different MODELS of reverb, I agree with Piet that it doesn't really matter, with the proviso that some verbs don't in fact have ER (e.g. EW Spaces). Personally I think at least 95% of the algorithmic/convolution debates are redundant. I don't know of any regular reverbs that keep their ER and Tail chains totally separate, so again, unless you're looking at SPAT or something, two instances is by far the easiest and most controllable way to go.


----------



## Daniel Petras

karelpsota said:


> From what I learned by watching scoring mixing engineers.
> 
> The first reverb is to make the instrument sit in a realistic room. Usually convolution. Altiverb's Fox Scoring Stage is a good place to start.
> The second reverb is for embellishing. Usually longer tails with algorithmic (or outboard) reverb. Lexicon and Bricasti come to mind.
> 
> Also, Shawn Murphy (Williams and Powell's engineer) told me this a while back:
> _*Q*: How do you approach reverb for dense orchestral sections?_
> 
> _*Shawn*: We use multiple reverbs typically. We use a short dense reverb to sort of fill the spaces between the instruments, and we use a longer reverb to create a tail. We use it judiciously for film and music because we don't want the reverb to overcome the direct sound in terms of competition with sound effects and dialogue on the screen. We want the picture to represent the music on the screen accurately._
> 
> Source: https://www.reddit.com/r/StarWars/comments/3w97d3/were_pat_sullivan_dann_michael_thompson_and_shawn/



Wow, thanks for that source!


----------



## re-peat

Guy Rowland said:


> (...) I don't know of any regular reverbs that have their ER and Tail chains totally separate (...)




Why should they, or would you want them to be totally separate, Guy? They’re both part of the same thing: simulating a room’s response. It actually makes more sense, I would think, for them _not_ to be totally separate and be generated by one and the same engine, rather than have different devices for each of them.

In a reverb like the EA Phoenix, for example, you can set the levels of the ER’s and the tail separately, you can adjust the attack, time and slope of the ER’s independently, and there are a handful of parameters specifically for the tail as well. That’s more than enough for me with regard to having separate control over both stages of the reverberation. And all conveniently in one place.

Besides, and not without importance: if you stick to one reverb for both, settings for predelay, overall colour and other shared (or inter-dependent) characteristics will be entirely consistent for both as well.

I don’t see how using two reverbs for a job that can be done to perfection by a single reverb is easier or more controllable.

_


----------



## Ashermusic

I have been using two reverbs, but not because of the whole ER/tails thing. I have used a convo with an algorithmic verb (Spaces & UAD Plate 140) for orchestral stuff, because it sounds great to my ears. I mostly use the UAD Lexicon for pop stuff.

But I think from here on out it is mostly going to be Adaptiverb for the orchestral stuff. I just need to get more skilled with it.


----------



## Guy Rowland

re-peat said:


> Why should they, or would you want them to be totally separate, Guy? They’re both part of the same thing: simulating a room’s response. It actually makes more sense, I would think, for them _not_ to be totally separate and be generated by one and the same engine, rather than have different devices for each of them.
> 
> In a reverb like the EA Phoenix, for example, you can set the levels of the ER’s and the tail separately, you can adjust the attack, time and slope of the ER’s independently, and there are a handful of parameters specifically for the tail as well. That’s more than enough for me with regard to having separate control over both stages of the reverberation. And all conveniently in one place.
> 
> Besides, but not without importance: if you stick to one reverb for both, settings for predelay, overal colour and other shared (or inter-dependent) characteristics will be entirely consistent for both as well.
> 
> I don’t see in how using two reverbs for a job that can be done to perfection by a single reverb, is easier or more controllable.
> 
> _



The problem with using 1 is that it would need to be set differently for every instrument, so 400x instances, as far as I can tell. A semi-ambient library might need some tail and no ER, a dry one a good amount of both, a very ambient one perhaps none. I've found pretty much every library I own needs a different combination; even patches within a library vary. 400x reverb instances would sound better, but I'd need a computer from the Met Office to run it, and they seem keen on using theirs.

I'm not arguing that my approach is the most sonically pure, but I think it's the best compromise on keeping resources low and blending completely different libraries enough to fool the ear.


----------



## jamwerks

Jeast said:


> Myself I use 8dio strings a lot. The problem with those is that they are panned dead in the middle..., but because when you start panning the string sections into place, you also pan the room information...


Yeah, I agree. That's the one big problem for me with 8Dio's sampling methods. When you pan the V1's over to the left, it's like you've also just moved the right wall over to center stage!

Putting a tail on that of course helps, but it's still not as 3D as if it had been recorded in place!


----------



## Jeast

jamwerks said:


> Yeah, I agree. That's the one big problem for me with 8dio sampling méthodes. When you pan the V1's over to the left, it's like you then also just moved the right wall over to center stage!
> 
> Putting a tail on that of course helps, but it's still not as 3D as if it had been recorded in place!


Exactly! I must say that using a Spaces true stereo scoring stage (short room) + an algo tail fixes a lot!


----------



## afterlight82

If you do a 2-reverb setup with ER and tail, you can always send a little of the ER's return to the tail verb, post-fader. The main thing is to set the levels by EAR and not by diagram. I'd always ride the tail verb too; you can easily clear up a muddy passage by backing it off a few dB and then coming back up in quieter/sparser passages.

In surround, we do it a little differently if combining stereo reverbs (e.g. Bricasti/Lex 480) to make a surround image, but there you're looking to have the rears carry a slightly longer tail, and you might have two different-length "tails" going on. Not always, though. There are many valid approaches. Don't discount delays either; they're hugely important.

Also, just because IRs are "real" rooms doesn't mean they always sound good; they can cause phase issues and can act almost like an unwanted EQ. A good tip is to EQ on the send before the reverb plugin: dial out a little highs or lows, but not so strongly that you can hear the filter/EQ resonance taking effect. (The problem I have with the 'reverb trick' above is that it's already built into many algorithms/plugins in some way, shape and form, so it's good to be able to check out what the chain of the plugin is and know how to use the internal EQ on your reverb.)
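As an illustration of EQ-ing the send before the reverb, here is a toy version in Python: a one-pole low-pass stands in for a gentle high cut, and exponentially decaying noise stands in for a real IR. Both are assumptions made for the sketch, not anyone's actual chain.

```python
import numpy as np

rng = np.random.default_rng(1)
sr = 48_000
tail_len = sr // 4  # a short 0.25 s synthetic tail

# Synthetic impulse response: decaying noise, a crude stand-in for a room IR
ir = rng.standard_normal(tail_len) * np.exp(-np.arange(tail_len) / (0.05 * sr))

def one_pole_lp(x, alpha=0.3):
    """Gentle one-pole low-pass: 'dial out a little highs' on the send."""
    y = np.empty_like(x)
    acc = 0.0
    for i, v in enumerate(x):
        acc += alpha * (v - acc)
        y[i] = acc
    return y

send = rng.standard_normal(2048)            # signal feeding the reverb send
wet = np.convolve(one_pole_lp(send), ir)    # EQ happens *before* the reverb
```

The point of the ordering is that the filter shapes what excites the reverb, so the tail itself comes out darker, rather than filtering an already-generated tail.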


----------



## jonathanprice

I use the Spaces reverbs. I'll use the instrument-specific presets for each section or specialty instrument (harp, tuba, etc.). These are great for placing the instrument in the "S. Cal. Hall" space. Then I dial back the reverb so it's a little drier than I might prefer, and I'll add the "Full Orc" preset to all the sections/instruments, just enough to glue them together.

The ultimate reason for this is that it sounds right to my ear, but I can also explain what I think I'm trying to approximate, which is wave interference. You could record the different sections of the orchestra separately, mix those together, and it won't sound the same as if you'd recorded the full orchestra in one take. And I mean acoustically, not just that they're not listening to each other in the same take. It's because the sound waves from (for example) the brass are crashing into the sound waves from the strings and changing them slightly, the way ripples in water can interfere with each other.

This might not affect the ER's a whole lot, but it will certainly affect the tails. If I were to only use the instrument-specific reverbs, the tails would be pristine reflections of only that instrument/section, and even if you add the different section tails together, it's not the same sound as an interfered wave hitting a back wall. So, for Spaces, I use the "Full Orc" to get closer to that sound. It's not true wave interference, but it's closer to what I hear when I'm in a real room.


----------



## re-peat

Guy Rowland said:


> (...) A semi-ambient library might need some tail and no ER, a dry one a good amount of both, a very ambient one perhaps none. I've found pretty much every library I own needs a different combination, even patches within a library vary. (...)




Ah, I see. Yes, you have a point there. Although the situation you’re describing — lots of different libraries with lots of different spatial characteristics and/or needs in one orchestral mock-up — is one I try to avoid as much as possible, because that formula, in my view, spells sonic doom anyhow, no matter how many or how few reverbs you decide to work with.

My virtual orchestra has to consist of at least 75%-85% libraries of an identical, similar or compatible spatial stamp, for which I then would need either none, or one reverb. And anything else that gets invited, receives whatever treatment it requires.

(All of the above applies strictly to pseudo-realistic mock-orchestral productions though. If I do something else with samples, modelled instruments or synths, I won’t say no to using two, three, four or even 10 reverbs on a single source, if that is what I feel it needs.)

_


----------



## jamwerks

re-peat said:


> My virtual orchestra has to consist of at least 75%-85% libraries of an identical, similar or compatible spatial stamp...


Along these lines, my ears are happier when I use the same library for the whole section. So WW's from A, Brass from B, Perc from C, & Strings from D, seems to work for me.


----------



## Hannes_F

"Ideal room": exponential decay -> straight line in log -> one reverb
Reality: Coupled rooms -> bent line in log -> two reverbs
I stack them all the time. I love it.
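Hannes's point can be checked numerically: a single exponential decay gives a straight log-envelope (constant dB-per-second slope), while the sum of two decays from weakly coupled rooms gives a bent one. A small sketch, with decay times made up purely for illustration:

```python
import numpy as np

t = np.linspace(0, 3, 301)  # 3 seconds, 10 ms steps

# "Ideal room": one exponential decay -> straight line in log
single = np.exp(-t / 0.5)

# Coupled rooms: a weakly coupled space adds a second, slower decay
coupled = np.exp(-t / 0.5) + 0.1 * np.exp(-t / 1.5)

def slope(env, i):
    """Local slope of the log-envelope, in dB per time step."""
    db = 20 * np.log10(env)
    return db[i + 1] - db[i]

# single: same slope early and late (straight line in log)
# coupled: decay flattens once the long tail takes over (bent line)
early_s, late_s = slope(single, 10), slope(single, 250)
early_c, late_c = slope(coupled, 10), slope(coupled, 250)
```

In the coupled case the late slope is noticeably shallower than the early one, which is exactly the behaviour a short reverb stacked with a long, quieter one reproduces.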


----------



## Nick Batzdorf

What Synthpunk says.

The answer to the original question is highly dependent on the context. Acoustic music - orchestral, whatever - is one thing. You're just using reverb to create a space.

On the other hand, "pop" production is a total suspension of disbelief - which is what makes it such a highly evolved art in the first place. Reverb isn't just used to create a space, it's an effect.

If you want to hear this in detail, try difference listening: split a stereo file into two mono ones, pan them both to the center, and reverse the polarity of one channel. It's really interesting, because it brings out the effects without the middle.

The ear is fully capable of buying things in lots of different spaces at the same time. It would be weird for the lead vocal to be in the same reverb as the bass drum, for example; the lead vocal wants to be in the foreground, and the BD probably wants to be working with the bass. The snare reverb might be timed to the song or, as Synthpunk says, nonlinear.


----------



## Silence-is-Golden

Hannes_F said:


> "Ideal room": exponential decay -> straight line in log -> one reverb
> Reality: Coupled rooms -> bent line in log -> two reverbs
> I stack them all the time. I love it.


Hi Hannes,

To understand what you indicate: what do you mean by "bent line in log"? As in a logarithm?
And I also presume that with 2 reverbs you chose the two-types-of-reverb way: algo and convo?

For now I find I can reach good results with EAreverb 2 as my one and only reverb (apart from MIRx Teldex with the VSL libs), and EAreverb's POS mode helps with positioning (instead of VSS2) if the need calls for it.
Further, I use (also something I got from you) Baron Greuner's mixing tip regarding EQ-ing the reverb.

PS, as a side note: isn't the picture you used "mirrored"? I see the basses on the left and the harps on the right.


----------



## Hannes_F

Silence-is-Golden said:


> To understand what you indicate: what do you mean with "bent line in log". As in a logarithm?


Yes, I was just lazy in typing.



> And I also presume with 2 reverbs you chose the 2 types of reverb way: algo-and convo?


No, just two reverbs, a short one and a long one.



> PS: as a side note: is the picture you used not "mirrored". I see the basses left and the Harps right?


Lazy me again; I just grabbed any concert hall in order to show that real halls consist of several sub-rooms. If they are strongly coupled, there will be only one decay time. However, if they are weakly coupled, the system has different decay times (more or less).






I happen to have a similar situation in my studio. No matter how much I dampened the bass with traps, there was always a longer decay underneath. Until I found out that the studio is acoustically coupled to the staircase ... and that has a longer decay. Easy once you got the idea, eh? 

Everything can be overdone, so there is no need to follow this religiously. But it may explain in a rational way why some engineers love to use combinations of reverbs (I know I do).


----------



## Nick Batzdorf

Are we talking about acoustic music, especially orchestral spaces?

If so, I think someone mentioned that the "traditional" film score sound is the natural studio reverb + a thick Lexicon hall. Not being Shawn Murphy or James Guthrie or one of those guys, I can't tell you with certainty what's in their minds, but my educated guess is that it's simply because the studio doesn't sound big enough on its own.

And of course all bets are off when you're dealing with samples, because they're all recorded in their own spaces to start with. What Hannes just drew is the VSL Silent Stage approach: record in a stage-sized room with short reverb (the small room), then add the hall tail or whatever you want to make it a big room.


----------



## Silence-is-Golden

Hannes_F said:


> Until I found out that the studio is acoustically coupled to the staircase ... and that has a longer decay. Easy once you got the idea, eh?


Hahaha, then you must have been equally happy you didn't choose to live in the Empire State Building...

What a tail that must be!


----------



## Daniel Petras

synthpunk said:


> One way to construct a subtle and complex ambience in a mix is to combine two different approaches to reverb. Going about this in an informed, deliberate way will result in a much more refined and appropriate sound than by simply stacking two different reverb algorithms (either in parallel or – heaven forbid – serial).
> 
> One way to approach it is to think about foreground and background. Often using a single reverb results in an ambience that sits primarily in the foreground (resulting in a shallower mix) or in the background (resulting in a relatively dry foreground). Using two reverbs might allow a mix the benefit of both the foreground ambience (for softness and blurriness) and background ambience (for depth and spaciousness). One way to do this is to use a plate for the foreground ambience and a hall for the background ambience. This will be most coherent if foreground sounds are mainly (if not exclusively) sent to the plate, and background sounds are mainly (if not exclusively) sent to the hall. This approach is useful if the mix calls for a lush ambience with a three-dimensional quality to it.
> 
> Another approach is to combine short and long reverbs. This can be appropriate if the song calls for a long deep ambience, but there’s no middle ground between too dry and too lush for some sounds. This way, some textural background sounds and feature sounds would use the long reverb and other sounds (particularly more percussive/articulate sounds) would use the short reverb. A hall or plate would be suitable for the long reverb, and a room or shorter plate might be suited to the short reverb. For a more unnatural sound, use a thick modulated hall for the long reverb and a non-linear reverb for the short reverb. This approach is useful for complex mixes that don’t need to have a particularly realistic acoustic sound, such as electronic music and ‘studio’ music.



To address your first paragraph: 

Is it a popular choice, then, in an orchestral setting to apply a plate reverb to instruments sitting at the front of the orchestra and a hall setting to instruments at the back of the orchestra, or are you just referring here to the ambient texture itself?

To your second paragraph: 

When talking about short reverbs for percussion sounds, wouldn't this cause these sounds to sit closer in the mix? How can one push a short percussive sound back in the mix (referring again to orchestral percussion) while still keeping it short and percussive?


----------



## Daniel Petras

Guy Rowland said:


> In terms of samples, unless you go for a full spatial control plugin a la MIR or SPAT, you have three different controls for spatial information
> 
> 1. The Pan Pot. Pretty straightforward unless you're working in surround - from hard left to hard right.
> 2. Early Reflections (ER). Probably the least understood, I like to think of it as distance from the listener. Provides a softer diffusing bloom.
> 3. Tail. The overall space of the room, what we often think of as just reverb.
> 
> If someone is playing the clarinet 3 feet from you in a big live concert hall, you'd still very clearly hear the sound of the hall as well as someone standing TOO DAMN CLOSE WITH A CLARINET. You'd be hearing the direct sound (let's say they're right in your face so panned centrally), and the hall (the tail), but no bloom (ER) because they are TOO DAMN CLOSE WITH A CLARINET. Now let's say they've mercifully backed off 30 feet and a little to the left. The direct sound is now more diffuse and comes from a little over there. So pan - yes, tail - yes, but it won't sound right because it still is a very close sound, not sounding like 30 feet away, and the tail in the room doesn't fully help you. That's where you need the Early Reflection to give it that bloom.
> 
> The reason why this is so important with samples is that unless you buy every library from SuperDev (TM) which is all consistently and immaculately recorded in a beautiful space, then you'll have samples that are recorded in different spaces using different techniques. Even the close mics of an ambient library have tail in them, as with the close clarinet example above - the tail will be quieter relatively than stage mics, but still there. A very dry library - VSL, Sample Modelling et al - will need ER for pretty much everything if you want to blend it with libraries that were recorded in live spaces. Likewise, you'll likely need no ER at all for an ambient library.
> 
> So yes - for simulating sample libraries in a real space, you typically need at least 2 reverbs, one for ER and one for Tail. Of course it all gets as complex as you want, predelays etc, and this is where the likes of MIR and VSS come in. But I find a very simple method works surprisingly well - just use three controls - pan (left to right), aux send to ER (front to back) and aux send to Tail (room - and both reverbs sent post and don't let anyone tell you otherwise or your mix will run away from you). If you need to generate multiple stems at once, you'll need 2x reverbs for each stem - strings, brass etc - otherwise 2 will do tolerably well for the whole mix.



Does one generally use two different reverbs to control the ERs and the tail, or are these parameters generally on the same reverb plugin, just set differently according to your needs?


----------



## Guy Rowland

Sonorityscape said:


> Does one generally use two different reverbs to control the ERs and the tail, or are these parameters generally on the same reverb plugin, just set differently according to your needs?



I use the same reverb and just different IRs / settings. But some prefer different reverbs.


----------



## gsilbers

Baron Greuner said:


> I use just one most of the time. Lexicon, and generally that would be Random Hall. I tend to use as little reverb as possible to the point where it's actually too dry.
> 
> This has been put up before, but this is quite a good YouTube article on the so-called Abbey Road Reverb Trick.




For a sec there at the end I thought it was Harrison Ford.
Neat trick.


----------



## JohnG

Guy Rowland said:


> you'll likely need no ER at all for an ambient library



I agree with Guy here. The assumption that you need two reverbs seems to have grown up in the days when drier libraries were more common than they are today.


----------



## Daniel Petras

Here's a video on YouTube pertaining to ERs and reverb tails - thought it might be helpful for some. Looks like it's been linked to this forum before:


----------



## re-peat

Sonorityscape said:


> Looks like it's been linked to this forum before:


It has. Many times. Personally — strictly, totally and entirely personally, that is — I think it's a video that is best ignored. For reasons explained here and here.

_


----------



## Guy Rowland

+1 to Piet - with the best will in the world, that video should be taken down. Gawd knows how much grief it must have caused.


----------



## Baron Greuner

Yes. Take it away and have it destroyed! What?


----------



## Peter Emanuel Roos

karelpsota said:


> From what I learned by watching scoring mixing engineers.
> 
> The first reverb is to make the instrument sit in a realistic room. Usually convolution. Altiverb's Fox Scoring Stage is a good place to start.
> The second reverb is for embellishing. Usually longer tails with algorithmic (or outboard) reverb. Lexicon and Bricasti come to mind.
> 
> Also, Shawn Murphy (Williams and Powell's engineer) told me this a while back:
> _
> *Q*: How do you approach reverb for dense orchestral sections?_
> 
> _*Shawn*: We use multiple reverbs typically. We use a short dense reverb to sort of fill the spaces between the instruments, and we use a longer reverb to create a tail. We use it judiciously for film and music because we don't want the reverb to overcome the direct sound in terms of competition with sound effects and dialogue on the screen. We want the picture to represent the music on the screen accurately._
> 
> Source: https://www.reddit.com/r/StarWars/comments/3w97d3/were_pat_sullivan_dann_michael_thompson_and_shawn/


This is the same approach Dennis Sands uses, as he explained during his mixing for composers course this summer in Vienna (a super event!)


----------



## jamwerks

Peter Emanuel Roos said:


> This is the same approach Dennis Sands uses, as he explained during his mixing for composers course this summer in Vienna (a super event!)


I assume in that case he might have been talking about using VSL stuff, so yeah room then tail.

Did he say anything about what he does when treating "already-in-a-room" (wet) libraries?


----------



## Peter Emanuel Roos

Nope, it was not about VSL samples in particular. He mostly explained his mixing approaches with cues from Danny Elfman and others, with the entire mixing session available in 5.1, soloing tracks and letting us hear "into" his work.

Here is a text that I was already typing in Evernote:

I think it is important to be aware of the difference between "room" and "reverb tail" and how to combine them.
Studios for orchestral recordings typically have a great ambient sound, but not the very long reverb tail that we associate with concert recordings. This is the "room" part, with lots of early and late reflections and a tail RT of up to 1.8-2.2 seconds.

If you want to make this longer, you should add a longer reverb without adding many more early reflections!

I have been in quite a few good studios (Abbey Road, Air Lyndhurst, Vienna Synchron, Teldex Berlin, Smecky in Prague) and I can tell you there are always Lexicons, TC Electronics and Bricastis on hand to add a subtle longer reverb tail, but hardly ever for more "room".

When using sample libraries, you should train yourself to be able to tell how much "room" they contain (switch off release samples for that).

A good example of a score with a lot of room but hardly any long reverb tail is Michael Kamen's Robin Hood:



You might want to use this score to experiment with adding a longer reverb without messing up the ambience part (use the CD).

When using "wet" samples and a send to a reverb FX channel, try to lower the amount of ERs in that reverb, or experiment with a predelay of 50-100 msec. My main point here is: don't add too many early reflections, as they will really mess up the positioning of the instruments/samples. The early-reflections range runs from 0 to around 150 msec; take special care in this range, because this is where positioning and distance are defined.

Another "trick" that might interest you: use automation on the reverb (tail) send. In musically denser parts, lower the level; this will reduce the effect of a constant thick reverb tail obscuring the more important room sound and the many notes.

And: use several reverb busses, at least for the main sections in the orchestra. More pre-delay for the closer sections, less for the sections further away.
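The routing described above (per-section tail busses with more predelay for the closer sections, plus riding the tail send down in dense passages) can be sketched as a toy calculation. The numbers and function names here are my own invention, not from any DAW or from Dennis Sands' course:

```python
def tail_predelay_ms(distance_m, max_predelay_ms=100.0, room_depth_m=20.0):
    """Closer sections get more predelay: their direct sound reaches the
    listener well before the hall's reflections, which the predelay imitates."""
    closeness = max(0.0, 1.0 - distance_m / room_depth_m)  # 1.0 = right up front
    return round(max_predelay_ms * closeness, 1)

def tail_send_db(base_db, note_density, max_reduction_db=6.0):
    """Automate the tail send down as the writing gets denser, so a thick
    tail never obscures the room sound. note_density runs 0.0-1.0."""
    density = min(1.0, max(0.0, note_density))
    return base_db - max_reduction_db * density

# Made-up stage distances in metres, just to show the direction of the effect.
sections = {"violins": 4.0, "horns": 10.0, "percussion": 16.0}
for name, dist in sections.items():
    print(name, tail_predelay_ms(dist), "ms predelay")

print(tail_send_db(-12.0, 0.9))  # dense tutti: tail send pulled down
```

Linear scaling is the crudest possible choice; the point is only that closer sections get more predelay and busy passages get less tail.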


----------



## Daniel Petras

@re-peat @Guy Rowland @Baron Greuner 

What's unclear to me is how to create different layers of depth when there is only one aux send to an ER reverb (the method described earlier in this thread for creating ERs). Is this done by changing the send level for different instruments? I don't understand how one ER setting can create different depths for a multitude of different instruments.


----------



## Peter Emanuel Roos

Answering this question properly would require me to write a long tutorial on how mixers work. I really recommend that you research online how to route within a mixer, with multiple FX channels, group channels, etc. I'm sorry I cannot help you further with this question, but I am considering making some vlogs on the subject. Research and study! All the best


----------



## tack

I feel like this whole subject has been disastrously over-complicated.

Here's what I wish I could have told myself when I first started:

- If you have a dry instrument, insert a reverb that does positioning (and therefore ERs) after the VSTi on the FX chain. All the better if it does tails too, like EAReverb 2 or SPAT. No sends -- you're done.
- If you have a wet instrument that already has room ambience and you want to soup it up, send to a reverb bus that does only tails or, if you want more control, insert a new instance on the FX chain after the VSTi and tweak the reverb to taste. Don't add ERs.

IMO, anything you do beyond that is for performance reasons. If you don't have performance problems, why complicate it?
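tack's two rules of thumb can be written out as a tiny decision table. This is just a paraphrase in code; the labels are my own wording:

```python
def reverb_plan(has_room_ambience):
    """tack's rule of thumb, paraphrased: dry instruments get a positioning
    reverb (ERs, ideally tails too) inserted right after the VSTi; wet
    instruments only get a tail, with no extra ERs."""
    if has_room_ambience:
        return {"chain": "send to tail-only reverb bus", "ers": "none"}
    return {"chain": "insert positioning reverb after VSTi", "ers": "from the insert"}

# A dry VSL-style patch vs an ambient hall-recorded patch:
print(reverb_plan(False)["chain"])
print(reverb_plan(True)["chain"])
```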


----------



## JohnG

Peter Emanuel Roos said:


> I have been in quite a few good studios (Abbey Road, Air Lyndhurst, Vienna Synchron, Teldex Berlin, Smecky in Prague) and I can tell you there are always Lexicons, TC Electronics and Bricastis on hand to add a subtle longer reverb tail, but hardly ever for more "room".



Exactly. So if you are using mid-to-longer-distance recordings, I would never add early reflections.


----------



## Peter Emanuel Roos

tack said:


> I feel like this whole subject has been disastrously over-complicated.



With all respect, this is one of the most difficult parts of mixing. Ask the masters like Alan Meyerson, Dennis Sands, etc.


----------



## Ashermusic

Peter Emanuel Roos said:


> With all respect, this is one of the most difficult parts of mixing. Ask the masters like Alan Meyerson, Dennis Sands, etc.




With all respect, mixing a real orchestra is a lot different than mixing samples. I think I may agree with Tack. I am experimenting with replacing all my verbs in an orchestral simulation piece with a single instance of Adaptiverb and I am liking what I am hearing.


----------



## Peter Emanuel Roos

Ashermusic said:


> With all respect, mixing a real orchestra is a lot different than mixing samples. I think I may agree with Tack. I am experimenting with replacing all my verbs in an orchestral simulation piece with a single instance of Adaptiverb and I am liking what I am hearing.


Hey Jay! You have no idea how many mockup samples are kept in the final mixes! Cheers man


----------



## jamwerks

Peter Emanuel Roos said:


> If you want to make this longer, you should add a longer reverb without adding many more early reflections!...


Not sure I follow. So are we adding early reflections (room), or late reflections (tail)?



Peter Emanuel Roos said:


> When using "wet" samples and a send to a reverb FX channel, try to lower the amount of ERs in that reverb, or experiment with a predelay of 50-100 msec. My main point here is: don't add too many early reflections, as they will really mess up the positioning of the instruments/samples. The early-reflections range runs from 0 to around 150 msec; take special care in this range, because this is where positioning and distance are defined...


So here again you're talking about the tail, right?

Just to be clear: using the main mics with stuff from Cinesamples or OT, he would just be adding one reverb (the tail)?


----------



## Ashermusic

Peter Emanuel Roos said:


> Hey Jay! You have no idea how many mockup samples are kept in the final mixes! Cheers man



Oh, I know, with certain composers like Zimmer. And it definitely is a different thing to mix than just straight orchestra.

And by the way, I did so way back in 1990 on "Zorro", but I did not do the mixing then.


----------



## Peter Emanuel Roos

Ashermusic said:


> Oh, I know, with certain composers like Zimmer. And it definitely is a different thing to mix than just straight orchestra.
> 
> And by the way, I did so way back in 1990 on "Zorro", but i did not mix at all then.


After three days with Dennis Sands, your opinion would be different. This is not about HZ; this is how stuff is done with HW music.


----------



## Ashermusic

Peter Emanuel Roos said:


> After three days with Dennis Sands, your opinion would be different. This is not about HZ; this is how stuff is done with HW music.




I know this is heresy, but there are several orchestral mixers here in LA that I greatly prefer to Dennis.

Just kidding,


----------



## re-peat

Peter Emanuel Roos said:


> With all respect, this is one of the most difficult parts of mixing.



No, it isn't. If it is, there's something wrong with the mix elsewhere. Adding reverb to a good mix is actually one of the easiest parts of mixing. And one of the most pleasant as well, because the easier it is, the better you know your mix is. The litmus test for most mixes (not all, but most): the ease with which reverb can be added.

I agree entirely with Tack: people tend to make this much too complicated. Been saying the same thing for the past 10 years, but to no avail. One almost gets the impression that people feel it *has* to be complex and complicated before it can be 'right'. But it really doesn't have to be. Especially not in a mock-up that is a stew of compromises, conflicts, ineptness, failures, almost-but-not-quites and all-pervading fakeness anyway.

_


----------



## Ashermusic

A big +1, Piet.


----------



## JohnG

re-peat said:


> a stew of compromises, conflicts, ineptness, failures, almost-but-not-quites and all-pervading fakeness



Have you been talking to my family again?


----------



## Guy Rowland

Sonorityscape said:


> @re-peat @Guy Rowland @Baron Greuner
> 
> What's unclear to me is how to create different layers of depth when there is only 1 aux send (a method that was earlier said in this thread how to create ERs) to an ER reverb. Is this created by changing the send level for different instruments? I don't understand how one ER setting can create different depths for a multitude of different instruments.



In my easy-peasy method, it's just, effectively, the wet/dry control for depth. Totally dry = close, Totally wet = far away. Not super-scientific, but it works surprisingly well. I use one of the LASS bundled IRs for ER.
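Guy's wet/dry-as-depth idea in miniature. He just rides a send level; the equal-power law below is my own gloss, kept so perceived loudness stays roughly constant as an instrument is pushed back:

```python
import math

def depth_mix(depth):
    """depth 0.0 = bone dry (right up close), 1.0 = fully wet (far away).
    Returns (dry_gain, wet_gain) on an equal-power crossfade curve."""
    d = min(1.0, max(0.0, depth))
    return math.cos(d * math.pi / 2), math.sin(d * math.pi / 2)

# Middle of the hall: both gains ~0.707, total power still ~1.0.
dry, wet = depth_mix(0.5)
print(round(dry, 3), round(wet, 3))
```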


----------



## Daniel Petras

Guy Rowland said:


> In my easy-peasy method, it's just, effectively, the wet/dry control for depth. Totally dry = close, Totally wet = far away. Not super-scientific, but it works surprisingly well. I use one of the LASS bundled IRs for ER.



Cheers! That's what I was looking for.


----------



## waveheavy

Jake Jackson mostly uses 2 reverbs, a short verb and a long verb, mixing them together based on the instrument.

Rock producer Fab Dupont uses three reverbs -- a very short one, a less short one, and a long one -- each EQ'd differently. The way he explained it, the first is only for moving a dry recorded signal back away from the mic a little without adding any audible room reflections; since most things are recorded really dry today, this is necessary. The second is a short verb that adds some room reflections, but with the highs and high mids cut to make the instrument sit further toward the rear. And the long verb gets a boost in the highs to make its reflections clear and upfront.
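Laid out as a config sketch, that three-reverb setup might look like the following. The decay times are my own guesses; the tutorial reportedly describes the roles and EQ moves, not exact numbers:

```python
# Hypothetical settings illustrating the described three-reverb chain.
three_reverbs = [
    {"name": "very short", "decay_s": 0.3, "eq": "flat",
     "role": "pull a bone-dry signal a little off the mic, no audible room"},
    {"name": "short room", "decay_s": 0.9, "eq": "cut highs/high-mids",
     "role": "add room reflections, sit the instrument toward the rear"},
    {"name": "long tail",  "decay_s": 2.5, "eq": "boost highs",
     "role": "clear, upfront-sounding tail"},
]

# The point of the different EQs: darker reads as further back, brighter as closer.
decays = [r["decay_s"] for r in three_reverbs]
assert decays == sorted(decays)  # ordered short to long, as described
```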


----------



## Peter Emanuel Roos

re-peat said:


> No, it isn't. If it is, there's something wrong with the mix elsewhere. Adding reverb to a good mix is actually one of the easiest parts of mixing. And one of the most pleasant as well, because the easier it is, the better you know your mix is. The litmus test for most mixes (not all, but most): the ease with which reverb can be added.
> 
> I agree entirely with Tack: people tend to make this much too complicated. Been saying the same thing for the past 10 years, but to no avail. One almost gets the impression that people feel it *has* to be complex and complicated before it can be 'right'. But it really doesn't have to be. Especially not in a mock-up that is a stew of compromises, conflicts, ineptness, failures, almost-but-not-quites and all-pervading fakeness anyway.
> 
> _


Thanks for sharing your wisdom. Still not finished the course "How not to be rude but helpful"?


----------



## Ashermusic

waveheavy said:


> Jake Jackson mostly uses 2 reverbs, a short verb and a long verb, mixing them together based on the instrument.
> 
> Rock producer Fab Dupont uses three reverbs -- a very short one, a less short one, and a long one -- each EQ'd differently. The way he explained it, the first is only for moving a dry recorded signal back away from the mic a little without adding any audible room reflections; since most things are recorded really dry today, this is necessary. The second is a short verb that adds some room reflections, but with the highs and high mids cut to make the instrument sit further toward the rear. And the long verb gets a boost in the highs to make its reflections clear and upfront.



Well, making hit records is a very different task than doing orchestral simulation with samples.


----------



## Wes Antczak

Not to mention that some of the libraries we use are not "recorded really dry" to begin with.


----------



## Peter Emanuel Roos

Wes Antczak said:


> Not to mention that some of the libraries we use are not "recorded really dry" to begin with.


Hence my explanation, I hope it may help a bit. Cheers


----------



## Silence-is-Golden

JohnG said:


> Have you been talking to my family again?


Hilarious!


----------



## jamwerks

Peter Emanuel Roos said:


> Hence my explanation, I hope it may help a bit. Cheers


Hi, you didn't answer my question (#57).



waveheavy said:


> Jake Jackson mostly uses 2 reverbs, a short verb and a long verb, mixing them together based on the instrument.


Aren't those just 2 different tail lengths, not room + tail?


----------



## waveheavy

jamwerks said:


> Aren't those just 2 different tail lengths, not room + tail?



Not necessarily. One verb might stress early reflections only. It's not only about how reverberation works scientifically; it's about how we hear it. Trying to simulate it from data plots doesn't have a lot to do with how we actually perceive it. That's why I mentioned Fab Dupont's tutorial: he deals more with our perception of the reverb effect. A look at the various opinions here reveals it's more of a perception thing.


----------



## Takabuntu

re-peat said:


> It has. Many times. Personally — strictly, totally and entirely personally, that is — I think it's a video that is best ignored. For reasons explained here and here.
> 
> _



I guess I was "saved by the bell", because it was on my short list to try...


----------



## Takabuntu

re-peat said:


> I agree entirely with Tack: people tend to make this much too complicated. Been saying the same thing for the past 10 years, but to no avail. One almost gets the impression that people feel it *has* to be complex and complicated before it can be 'right'. But it really doesn't have to be. Especially not in a mock-up that is a stew of compromises, conflicts, ineptness, failures, almost-but-not-quites and all-pervading fakeness anyway.
> _



I find this a very interesting and helpful thread, so thank you all! I had been trying to complicate my template when I should have been simplifying it, and I learned a lot reading this. I was already using two reverbs, one for the ERs and one for the tail, but I had set them up pre-fader and have now learned that I should not have done that. Since I'm using Logic's Space Designer, I thought I could make things sound "better" by complicating them.


----------



## Nick Batzdorf

Guy wrote something about:



> 1. The Pan Pot. Pretty straightforward unless you're working in surround - from hard left to hard right



I think a lot of people aren't aware that amplitude-based panning - the pan pot - has serious limits. You can work like crazy, panning everything to exact positions, and then you turn your head a fraction of an inch and it all goes to... pot.

Phase-based panning is much more stable, which is why you use convolution-based ERs. You can use delays as well, but that creates other issues if you're not careful.
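To make the point about delay-based panning concrete: it imitates the interaural time difference a real off-centre source produces. A common back-of-envelope model is Woodworth's formula, ITD = (r/c)(theta + sin theta); the head radius and speed of sound below are standard textbook values, not anything from this thread:

```python
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Woodworth's approximation of the interaural time difference for a
    source at azimuth_deg (0 = straight ahead, 90 = fully to one side)."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

# A source hard to one side arrives well under a millisecond earlier at the
# near ear; that tiny delay is what delay-based panning applies to the far
# channel, rather than just turning one channel down.
print(itd_seconds(90.0) * 1000.0, "ms")
```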


----------



## Chandler

I think something people should realize is that ERs aren't all the same. In fact they should be different for each instrument. I posted this in a previous thread, but basically depth has multiple components, and although ERs are one of them, they aren't enough on their own.

A big problem with ERs is that they have to match the position and room they were recorded in. Most algorithmic reverbs don't do this, and most IRs are recorded in multiple positions. AFAIK the only reverbs that will properly set ERs are SPAT, MReverb, MIR and maybe VSS.

Of course, just because ERs aren't used in an accurate way doesn't mean they can't sound good, but if people are wondering why it doesn't sound like a real recorded space, it's because the physics of a real space doesn't work like the ER-bus/LR-bus approach.

I hope more companies come out with realistic space simulators that are based on how real ERs work. Of course I doubt they can do it perfectly, but they could definitely do better than what they're doing now.


----------



## Ashermusic

OK, Chandler et al, no disrespect intended and I promise I am going to say it once more only and then shut up.

Taking an aural snapshot of a real instrument/instruments in a space, which is what samples are, and then placing them in an aural snapshot of a space, which is what an IR is, has _nothing_ to do with the reality of that instrument/instruments in a space. And going to exaggerated lengths to try and make it so is an exercise in futility.

Tweak them until the damn things make a sound you find aesthetically pleasing or at least tolerable and focus on more important considerations, like the actual composition and mixing.


----------



## jamwerks

Chandler said:


> ...I hope more companies come out with realistic space simulators that are based on how real ERs work


I get the impression that that may not happen, and that we won't need it: given the current tendency to record in situ, ERs will already be taken care of.

With these libraries we're only adding tails, and multiple instances of verbs are only necessary should we need to treat tails differently between libraries. That's how I'm reading all the info shared here.


----------



## Chandler

Ashermusic said:


> OK, Chandler et al, no disrespect intended and I promise I am going to say it once more only and then shut up.
> 
> Taking an aural snapshot of a real instrument/instruments in a space, which is what samples are, and then placing them in an aural snapshot of a space, which is what an IR is, has _nothing_ to do with the reality of that instrument/instruments in a space. And going to exaggerated lengths to try and make it so is an exercise in futility.
> 
> Tweak them until the damn things make a sound you find aesthetically pleasing or at least tolerable and focus on more important considerations, like the actual composition and mixing.



I don't take it as disrespect, and I'm happy to hear others' opinions, but I think you're misunderstanding what I'm saying. I don't want people to neglect other aspects of mixing or composing -- just to realize that if they don't try to replicate the auditory effects of a space, it won't sound like it was recorded in that space. Of course there is no law saying every instrument has to sound like it was recorded together with the rest. Most of the music I listen to has different reverbs on the various instruments, and most of those instruments weren't recorded in the same space; I love that sound. Most reverbs are made for that type of application, not for making multiple instruments sound like they were recorded in one space. I'd like to see more tools that let people easily place instruments using psycho-acoustic principles -- tools that don't require me to calculate delay times and so on, or just say "F*** it, I'm going to slap ValhallaRoom on it and call it a day". I think there is a middle ground missing in the plugin space.


----------



## Nick Batzdorf

Jay:



> Taking an aural snapshot of a real instrument/instruments in a space, which is what samples are, and then placing them in an aural snapshot of a space, which is what an IR is, has _nothing_ to do with the reality of that instrument/instruments in a space. And going to exaggerated lengths to try and make it so is an exercise in futility.



Actually I have to disagree with the Birthday Boy. You can create a very realistic soundstage, even using different libraries. VSL's MIR certainly does that!

jamwerks:



> I get the impression that that may not happen and we won't need if, given the current tendency to record "in-situo", ER'S will already be taken care of.



Often, but other times you want to change that to fit with other instruments.


----------



## JohnG

To each his own; I think this entire topic -- the ER topic -- has for years needlessly tangled a lot of people, especially beginners, in fruitless complexity and anxiety. I'm with Jay and maybe others in urging beginners, especially those using samples recorded in a hall, to spend time writing better notes and skip ERs. 

Some people are willing to take almost any aspect of music and recommend lots of complexity, sometimes accompanied by the insinuation that more complexity equals greater professionalism or is more impressive, or is the "right" way to do things. "That's how the _real_ guys do it." 

Yes, there are a lot of cool engineering tricks, but the idea that there is orthodoxy and everyone must or _actually does_ do things a certain way doesn't accurately reflect the crazy ideas that composers and engineers have been coming up with since people began recording. Check out the audacity and zany mixing ideas in "Why So Serious?" from "The Dark Knight" as an example. Bonkers! Awesome! Not orthodox!

Many orchestral libraries are now recorded _in situ_ -- with the players sitting where they would be if the full orchestra were playing live. With such libraries, there is no need to add ERs.


----------



## Nick Batzdorf

I agree with all that too, John, except the part about it being complicated if you do want to use extra ERs.

You just load the IR, cut the tail and leave the ER on. Okay, in Logic's Space Designer you have to use the volume envelope to get rid of the tail.

But for example Waves IR1 and Altiverb have a button. And Altiverb even has a dual-axis thing you move around to position things where you want them.

I don't think it's a big deal.

Now, what is a big deal is pop production. That's much more involved than just positioning orchestral samples.


----------



## desert

Would there be any sonic complications if I split the reverb output hard panned to the left and right?

Example:
AUX with reverb with no channel output 

> pre fader bus to AUX [Pan hard left]
> pre fader bus to AUX [Pan hard right]


----------



## NoamL

Can someone clarify why you need an ER verb if you have an IR? Are ERs not recorded in an IR?


----------



## Karma

JohnG said:


> To each his own; I think this entire topic -- the ER topic -- has for years needlessly tangled a lot of people, especially beginners, in fruitless complexity and anxiety. I'm with Jay and maybe others in urging beginners, especially those using samples recorded in a hall, to spend time writing better notes and skip ERs.


Thanks for this. Speaking as someone who is only a couple of years into the world of Virtual Instruments, I find myself getting twisted up and occasionally overwhelmed by all the focus on reverb to create realism. A good mix will always be beneficial, but undoubtedly the most important aspect will always be the composition. Time to focus on what matters...


----------



## Guy Rowland

Again (last time, promise) - not claiming it's the final definitive word, but for good, quick, workable results...

Pan for L-C-R (in stereo)
Post Aux Send 1 - ER to push back from being in-your-face (for dry or dryish samples)
Post Aux Send 2 - Tail for room (for anything not recorded in a relatively large-sized hall)

Reverbs set to 100% wet, sends set to taste.

If you're only using ambient samples (especially from a single developer) recorded and edited well with consistent staging, you can skip all three of these - lucky you. If any of your samples are dry or dryish, it's not so simple.

The Pan And Two Auxes method is not very difficult (hey, I use it, it can't be), and massively better than smashing very dry and very wet samples together without any form of compensation. Yes, you can make it massively more complex and with trial, error and practice you'll likely get better results, but imo simply saying "forget about everything and just compose" is poor advice.

Every composer who also records and produces music has to get their heads round a basic level of mixing knowledge - prospective clients often can't differentiate a terrible mix from a terrible composition. Reverbs and spatialisation can be an intimidating black hole of impenetrable black art knowledge, so starting with one or two decent quality reverbs and some decent, appropriate impulse responses if they are convolution, with the above workflow will do you fine to start. Endless theorising, unless you're at a considerably advanced level, is one trap to avoid. Throwing out baby, bathwater and the bath itself is another.
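A toy version of the pan-and-two-auxes setup, just to show its shape. The instrument names and gain numbers are invented; as above, both reverbs run 100% wet and the sends are post-fader:

```python
class Channel:
    """One instrument channel in the pan-and-two-auxes scheme: pan for
    left-right, one post-fader send to an ER reverb (front-to-back), and
    one post-fader send to a tail reverb (room)."""
    def __init__(self, name, pan=0.0, er_send=0.0, tail_send=0.0):
        self.name = name
        self.pan = pan              # -1.0 hard left .. +1.0 hard right
        self.er_send = er_send      # pushes dry/dryish samples back
        self.tail_send = tail_send  # room, for anything not recorded in a big hall

# A dry solo clarinet needs both ER and tail; an ambient string library
# already carries its room, so it gets only a touch of tail.
clarinet = Channel("clarinet (dry lib)", pan=0.2, er_send=0.4, tail_send=0.3)
strings = Channel("strings (ambient lib)", pan=-0.3, er_send=0.0, tail_send=0.1)
```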


----------



## JohnG

Guy Rowland said:


> for good, quick, workable results...
> 
> ...
> 
> If you're only using ambient samples (especially from a single developer) recorded and edited well with consistent staging, you can skip all three of these - lucky you. If any of your samples are dry or dryish, it's not so simple.



Very helpful, Guy.


----------



## Daniel Petras

Junkie XL on reverb:


----------



## rottoy

No ValhallaRoom instance in sight. Shame on you, Tom!


----------



## Fab

Sonorityscape said:


> Junkie XL on reverb:




gotta love those pajama bottoms.


----------



## Jimmy Hellfire

Fab said:


> gotta love those pajama bottoms.



I'm glad I'm not the only keen observer to notice the comfortable elegance of those.


----------



## tack

Jimmy Hellfire said:


> I'm glad I'm not the only keen observer to notice the comfortable elegance of those.


But still inferior in comfort to studios which have a pants optional policy. Probably only home studios have that.


----------



## jononotbono

tack said:


> But still inferior in comfort to studios which have a pants optional policy. Probably only home studios have that.



I'm not sure it's just home studios. I'm pretty sure I read that Elfman wears nothing but a chastity belt and makes his assistants dress up as Beetlejuice when the deadlines come in fast.


----------



## Fab

jononotbono said:


> I'm not sure it's just home studios. I'm pretty sure I read that Elfman wears nothing but a chastity belt and makes his assistants dress up as Beetlejuice when the deadlines come in fast.




lol, wtf dude.


----------



## jononotbono

Fab said:


> lol, wtf dude.



Sorry. Too much? 

I've had to watch the JXL Reverb video a few times. Very generous of him to share his time and studio with everyone!


----------



## Fab

jononotbono said:


> Sorry. Too much?
> 
> I've had to watch the JXL Reverb video a few times. Very generous of him to share his time and studio with everyone!



Just the imagery was a bit shocking.


----------

