# How many of you separate your ERs & Tails?



## Ethos (May 4, 2018)

How many of you are still separating your ERs and Tails? 10 years ago I was a huge proponent of this technique (I think SvK posted a great tutorial thread around that time on the topic, but I can't find it now). Then I switched to a combination of EW Spaces and Aether and just sort of liked the simplicity of some of the presets (it also made my mixes more simple).

But now I'm missing that dynamic lushness and super smooth decay on the tail that I used to get and I'm thinking about re-committing to separate ERs and Tails. (my god I hope my wife never reads this)

So what say ye? What's new with reverb technique? What are you all doing these days?


----------



## Saxer (May 4, 2018)

Everything changes over time here too. In the end I never found one solution for everything. There was a phase with Virtual Sound Stage, or Altiverb divided into early reflections and tail. At the moment I'm trying Breeze2, as it seems to be the easiest on the CPU of the great-sounding reverbs. I just put an instance on everything except the shorts of roomy libraries like Spitfire. No sends, just an insert on every single channel. I don't use templates with thousands of tracks; I'm normally way below 100.
In the end I mostly like roomy sampled libraries without release samples (release cut). Musical Sampling or EastWest Hollywood, for example. The main sound has the depth of the room, but the tail is cut. Sounds great with an additional reverb.


----------



## SBK (May 4, 2018)

I use many parallel busses with reverbs plus compression like crazy (maybe even distortion) to breathe life into things, along with all sorts of strange-sounding reverbs created by randomizing MTurboReverb. You would be amazed how it sounds.


----------



## muk (May 5, 2018)

It depends entirely on the library I am using. For anything that has its own room sound I just add a tail that seems appropriate. With Cinematic Studio Strings, for instance. Or even Light & Sound Chamber Strings, which are very dry but still have their own room sound. With VSL libraries, however, I use separate instances of reverb for ERs and tail. I have this crazy reverb setup with Dimension Strings where every individual player has its own set of reverbs: first an instance for ERs, then another one with ERs for stage placement if I want a very clearly defined position, then two more for the tail.

In a setup with 32 individual players that makes for some 120 instances of reverb. Way too much for my computer to handle in real time. So what I did is carefully check the volume difference for each player with and without reverb, and then compensate for it in the mixer. Now I play the strings without the reverbs. When bouncing I switch on the reverbs and disable the volume compensation, knowing that the reverberated strings will be at the same volume levels as I played them in. It’s a pretty neat solution for me. That way I don’t have to bounce until the very end, and I know that the added reverb will not totally screw my mix. It’s complicated as hell to set up, but once it’s done it’s easy to work with.
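The level-compensation trick above can be sketched in a few lines. This is a minimal illustration, not muk's actual procedure; the RMS values are hypothetical measurements of one player's channel with and without its reverb chain.

```python
import math

def db(rms):
    """Convert an RMS amplitude to decibels."""
    return 20 * math.log10(rms)

def compensation_db(rms_dry, rms_wet):
    """Gain (in dB) to apply to the dry channel so it plays back at the
    same level the reverberated channel will have at bounce time."""
    return db(rms_wet) - db(rms_dry)

# Hypothetical measured levels for one player:
# dry RMS 0.20, RMS 0.28 with the reverb chain engaged
gain = compensation_db(0.20, 0.28)
print(round(gain, 2))  # prints 2.92 (dB of compensation on the dry fader)
```

At bounce time you would disable this compensation gain and enable the reverbs, and the levels match what you played in.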

Not quite what you were asking, but I think it is interesting nonetheless: with Dimension Strings, reverb on each individual player vs. on the whole section does make an audible difference to my ears. Here are two examples:


Violins 1, reverb on the whole section:



Violins 1, reverb on each individual player:



Celli, reverb on the whole section:



Celli, reverb on each individual player:





The latter sounds more spacious to me, the former a little flat in comparison. It’s not a night and day difference, but enough to make it worthwhile for me.

But despite all that I am rethinking and readjusting reverb every few months or so. In fact, at the moment I have a mind to test a few other setups with different tools than the ones I’m currently using, and compare them to my current setup.
Interesting observation @Saxer. I've come to like libraries that have a room sound, but not too much tail. Think CSS, LASS, or Light & Sound. But roomy samples with the tail cut sounds like an interesting solution. Are any other developers doing that apart from Musical Sampling and EastWest?


----------



## sinkd (May 5, 2018)

Still do it. I have to, with the different libraries I am mixing, from VSL to LASS to EastWest to Spitfire. Each library really needs a different dose of each. Vienna Suite with Numerical Sound impulses (ER/tail applied in VEPro), and then a little Lexicon Hall in the DAW to glaze it all together.

I am finding that I want to mix with less and less tail/algo lately, though.

DS


----------



## Josh Richman (May 5, 2018)

muk said:


> It depends entirely on the library I am using. Anything that has its own room sound I just add a tail that seems appropriate. With Cinematic Studio Strings, for instance. Or even Light & Sound Chamber Strings, which are very dry, but still have their own room sound. With VSL libraries however I use separate instances of reverb for ERs and tail. I have this crazy reverb setup with Dimension Strings where every individual player has its own set of reverbs. First an instance for ER, then another one with ERs for stage placement if I want a very clearly defined stage placement, then two more for the tail.
> 
> In a setup with 32 individual players that makes for some 120 instances of reverb. Way too much for my computer to handle in real time. So what I did is carefully check the volume difference for each player with and without reverb, and then compensate for it in the mixer. Now I play the strings without the reverbs. When bouncing I switch on the reverbs and disable the volume compensation, knowing that the reverberated strings will be at the same volume levels as I played them in. It’s a pretty neat solution for me. That way I don’t have to bounce until the very end, and I know that the added reverb will not totally screw my mix. It’s complicated as hell to set up, but once it’s done it’s easy to work with.
> 
> ...





Very interesting test. With reverb on the entire section, the reverb interacts with all the sounds at once, which is more realistic to my ears. I think the appeal of the individual instances is that the result is a bit clearer, like mixing a bit more of the dry sample in with the wet, creating a layered depth.


----------



## Andrew Souter (May 6, 2018)

muk said:


> In a setup with 32 individual players that makes for some 120 instances of reverb. Way too much for my computer to handle in real time.



Just FYI, 120 instances of Breeze 2 are perfectly possible on recent CPUs and will not even take 20% of the total CPU resources on recent 8+ core machines...

I don't think we really need one for every instrument in the orchestra, though, as typically we don't create a 1st violin section by having 16 separate tracks with a solo violin loaded on each, right? (Although I suppose that would be the ultimate realism, if we could also introduce some kind of randomization to the actual performance, since that randomization combined with the different spatial position of each player is what creates the ensemble/section sound.) Probably one instance per instrument section (and perhaps a few more if there are divisi happening) is enough, plus dedicated instances for soloists. 25-50 instances max is likely enough to be almost perfect.


----------



## Andrew Souter (May 6, 2018)

btw, an interesting "thought experiment":

The stage of a hall such as Boston Symphony Hall is ~18 meters wide and ~12 meters deep (guessing at the moment based on a quick Google search). I assume players have no way to compensate for this while playing, and everyone follows the conductor visually and the sound of their nearest neighbor. Musical "now" is likely more or less the same for all players. So from the audience perspective, the sound from instruments at the back of the stage would arrive late compared to the sound from instruments at the front. There would be a delay gradient imposed on instruments naturally, based on their front/back position. At 12 meters deep or so, this means things at the back of the stage could be around 35 ms later than those at the very front!
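A quick sanity check of that arithmetic, assuming the speed of sound at roughly 343 m/s and the ~12 m stage depth guessed in the post:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C

def delay_ms(extra_distance_m):
    """Extra arrival time, in milliseconds, for a source that sits
    extra_distance_m farther from the listener."""
    return extra_distance_m / SPEED_OF_SOUND * 1000.0

# Back row vs. front row of a 12 m deep stage, heard from the audience
print(round(delay_ms(12.0), 1))  # prints 35.0
```

So a full-depth stage yields on the order of 35 ms of natural delay, before any hall reverberation is even considered.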

I suppose this is why strings often feel as if they are leading the other instruments slightly.

Does anyone introduce intentional delays to brass, timpani, etc. in their templates? I am NOT speaking of pre-delay here; I am speaking of a delay to the actual direct sound. Without such a delay, things at the back of the hall would be early compared to a real live performance, since the MIDI data is likely well aligned/quantized and the instrument libraries probably don't have built-in delays, I guess (I have not checked).

...interesting. Not sure I ever personally thought about that before.


----------



## Living Fossil (May 6, 2018)

Andrew Souter said:


> Does anyone introduce intentional delays to brass, timpani etc in their templates?



I don't think this approach makes much sense practically. Timing in orchestras is a really complex thing, as is how musicians react to the conductor, and usually the sound of a specific orchestra evolves over the years the musicians play together (with fluctuations, of course). If there were a significant delay caused by distances, a conductor would normally advise the musicians to avoid it. Even more so if, e.g., a choir is positioned behind the orchestra.
With samples, I see little sense in mechanically inducing an unwanted delay.
Also, perceiving a live performance is a completely different situation from listening to a recording.
The visual feedback that a listener has in a concert can change the balance and the dynamics (on a psychoacoustic basis), etc. Flaws that are not relevant in a concert can become really annoying when heard for the third time on a recording.
Etc.

In the context of distances and sample-based mockups, I think the loss of energy in the high frequencies over distance is a much more relevant aspect to the ear.


----------



## robgb (May 6, 2018)

I use delay and convolution reverb. I prefer to work with dry samples. I don't worry too much about whether or not it sounds perfect, as long as it sounds good to my ear. I really couldn't care less about "authenticity."


----------



## Jimmy Hellfire (May 6, 2018)

The only thing I always keep coming back to is the realization that I can't really settle on a particular method. I don't even think it really matters that much tbh. If something sounds good enough and the music can take over, it's fine. I firmly believe that if the music is good enough, the production methods only need to sound "OK" enough that they don't draw unnecessary attention to themselves.

I like keeping things simple. If it gets too elaborate on the technical side, it starts to bore me really quickly, and if I get bored, I get annoyed. If the samples are already recorded with room sound, I'll just add a bit of algo tail. With VSL, I use their MIRx profiles, which sound pretty real by themselves, and again just add a bit of algo on top. For other dry stuff, currently I like to route the signal to EW Spaces busses, dial back the direct signal a few dB, and again send a little bit of algo reverb to that. It's not perfect, and not the "realest" sound I've ever heard. But it works, and that's OK for me.


----------



## Andrew Souter (May 6, 2018)

Living Fossil said:


> I don't think this approach makes much sense practically. The timing in orchestras is a really complex thing, also how musicians react to the conductor, and usually the sound of a specific orchestra evolves with the years of musicians playing together (with fluctuations of course). In case there would be a significant delay, caused by distances, a conductor would normally advise the musicians to avoid this. Even more, if e.g. a choir is positioned behind the orchestra.



Interesting. Great insights! So effectively the conductor would advise the brass etc. (things at the back) to play ever so slightly early, so as to effectively realign things. I suppose it's more of a creative/intuitive process developed over years of playing together, as you say, than a command like "hey brass guys, play 25 ms early," yes.


----------



## Saxer (May 6, 2018)

In the real world, a simple "Hey, brass, don't drag!" from the conductor would do to compensate for a 25 ms delay.


----------



## Andrew Souter (May 6, 2018)

Ya, I just chatted with a friend who is a 1st violinist in the NSO at the Kennedy Center in Washington, DC, and she basically said the same thing: some combination of conductor input, player/section intuition, and orchestra hive-mind somehow magically combines to rebalance these timing differences, so they likely don't exist in large amounts in the final result. Makes sense.

Human beings are so sneaky.


----------



## Serg Halen (Jun 15, 2018)

After extensive experiments with ERs, tails, convolution, and algorithmic reverbs, I found that one simple reverb plugin (Lexicon Hall) is enough for me. Now I'm focused more on libraries and arranging.


----------






## mc_deli (Jun 15, 2018)

Serg Halen said:


> After extensive experiments with ERs, tails, convolution, and algorithmic reverbs, I found that one simple reverb plugin (Lexicon Hall) is enough for me. Now I'm focused more on libraries and arranging.


Tru dat. If I ever get to the point where the sounds, music and available time warrant looking again at complex reverb treatments I'll already have won


----------



## Jeast (Jun 15, 2018)

mc_deli said:


> Tru dat. If I ever get to the point where the sounds, music and available time warrant looking again at complex reverb treatments I'll already have won


+1


----------



## Consona (Jun 15, 2018)

I don't separate ERs and tails. Things like Cinematic Strings 2 or the Cine stuff are already recorded in a studio/hall, so the reflections are there already anyway. But I sometimes use two reverbs to make the sound feel more distant.


----------



## Ethos (Jun 24, 2018)

I just finished mixing my most recent score, and it was the least amount of time I've ever spent on reverbs. The music was incredibly complex (mostly crazy circus music), and I got by with just 3 reverb sends plus a feather reverb that I only used occasionally.

I think I'm over the insane reverb setups.


----------



## Beat Kaufmann (Jun 24, 2018)

Like everyone who has been producing mixes successfully for a long time, I have also found my own hall concept. *It uses ERs separately from the tail.* What I find so brilliant about this concept is that a "tail-over-all" on the output channel so wonderfully "glues together" the whole mix. This method also allows you to use both so-called "dry" and "wet" samples in the mix. The wet samples mostly contain an integrated ER component, so they can usually be routed directly to the output. The tail reverb then often helps these samples fit in well with all the others.

When using impulse responses for the ERs, there is one thing to keep in mind:
it takes a lot of time to find IRs that are really suitable for this. Good IRs are those that let instruments sound very far away (at 100% wet) while not coloring the sound very much. Recently I find that more and more algorithmic reverbs are coming onto the market that can generate relatively natural spatial depth as well. The latest example is Breeze2, but EAReckon and a few others do well too. That's great, of course, because these "algo depths" usually sound pretty neutral.
-----------------------------------------------------------------
*On the subject of delay from distant instruments:*
In my sound recordings, I compensate for the time differences between the main mics and the spot microphones. Otherwise the overall sound would be rather washed out. Incidentally, these delays between the microphones are one of the biggest problems in recording large ensembles. One can actually say: each additional microphone is initially another problem rather than a gain.
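The time compensation described above comes down to delaying each spot mic so its signal lines up with the same sound as captured by the main pair. A sketch of the arithmetic, where the distances and the 48 kHz session rate are assumptions for illustration, not values from the post:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C
SAMPLE_RATE = 48000     # Hz, assumed session sample rate

def spot_delay_samples(dist_spot_m, dist_main_m):
    """Samples of delay to add to a spot mic so its signal aligns with
    the same source as heard at the main pair. dist_spot_m is the
    source-to-spot distance, dist_main_m the source-to-main distance."""
    extra_time = (dist_main_m - dist_spot_m) / SPEED_OF_SOUND
    return round(extra_time * SAMPLE_RATE)

# Hypothetical: timpani 1 m from its spot mic, 9 m from the main pair
print(spot_delay_samples(1.0, 9.0))  # prints 1120 (about 23 ms)
```

In practice this delay is dialed in on the spot-mic channel (many DAWs expose a sample-accurate track delay), so the close mic reinforces the main pair instead of smearing it.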

Photos showing thousands of microphones above the instruments make us believe that this will be a particularly great recording. But each microphone also "hears a bit" of what the others hear, and quickly there are phase cancellations!
A rule of thumb is that the second (next) microphone should be at least 3x as far from the sound source as the first microphone...

Here is a live example with just 2 main mics, 3 mics for the percussion, 1 mic for the harp, 1 mic for the double bass, one for the tuba section, and one for the piano (all mics are time-compensated against the main mics).
This low-budget mix is more transparent than a lot of much more expensive mixes, even though it is a live take with just one chance and two hours of "assembly time". That's because no microphone signal disturbs the others.
Please also keep in mind that the drums are a bit too loud because of the special circumstances in this case, and that this is only YouTube quality...
*With samples you do not have to insert an artificial delay.
On the contrary:
be glad that you do not have to find the right value, as I have to each time.*
Best
Beat


----------

