
How many of you separate your ERs & Tails?

Ethos

How many of you are still separating your ERs and Tails? 10 years ago I was a huge proponent of this technique (I think SvK posted a great tutorial thread around that time on the topic, but I can't find it now). Then I switched to a combination of EW Spaces and Aether and just sort of liked the simplicity of some of the presets (it also made my mixes more simple).

But now I'm missing that dynamic lushness and super smooth decay on the tail that I used to get and I'm thinking about re-committing to separate ERs and Tails. (my god I hope my wife never reads this)

So what say ye? What's new with reverb technique? What are you all doing these days?
 
Everything changes over time here too. In the end I never found one solution for everything. There was a phase with Virtual Sound Stage, or with Altiverb split into early reflections and tail. At the moment I'm trying Breeze2, as it seems to be the most CPU-friendly of the great-sounding reverbs. I just put an instance on everything except the shorts of roomy libraries like Spitfire. No sends, just an insert on every single channel. I don't use templates with thousands of tracks; I'm normally way below 100.
In the end I mostly like roomy sampled libraries without release samples (releases cut), Musical Sampling or EastWest Hollywood for example. The main sound has the depth of the room, but the tail is cut. That sounds great with an additional reverb.
 
I use many parallel busses with reverbs plus compression like crazy (maybe even distortion) to breathe life into things, with all sorts of strange-sounding reverbs created by randomizing MTurboReverb. You would be amazed how it sounds.
 
It depends entirely on the library I am using. Anything that has its own room sound I just give a tail that seems appropriate: Cinematic Studio Strings, for instance, or even Light & Sound Chamber Strings, which are very dry but still have their own room sound. With VSL libraries, however, I use separate instances of reverb for ERs and tail. I have this crazy reverb setup with Dimension Strings where every individual player has its own set of reverbs: first an instance for ERs, then another one with ERs for stage placement if I want a very clearly defined position, then two more for the tail.

In a setup with 32 individual players that makes for some 120 instances of reverb. Way too much for my computer to handle in real time. So what I did is carefully check the volume difference for each player with and without reverb, and then compensate for it in the mixer. Now I play the strings without the reverbs. When bouncing I switch on the reverbs and disable the volume compensation, knowing that the reverberated strings will be at the same volume levels as I played them in. It’s a pretty neat solution for me. That way I don’t have to bounce until the very end, and I know that the added reverb will not totally screw my mix. It’s complicated as hell to set up, but once it’s done it’s easy to work with.
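
As a rough sketch of how that per-player level check could be done offline rather than by ear (the file names, track list, and helper below are hypothetical, not taken from the setup described above), you could render a short passage twice per player, once with the reverbs bypassed and once with them enabled, and compare RMS levels to get the compensation amount to dial into the mixer:

```python
# Minimal sketch: compare RMS of a "reverbs bypassed" bounce against a
# "reverbs enabled" bounce for each player, and print the gain offset to
# compensate with in the mixer. File and track names are hypothetical.
import numpy as np
import soundfile as sf

def rms_db(path: str) -> float:
    """Overall RMS level of an audio file in dBFS."""
    audio, _sr = sf.read(path, always_2d=True)
    rms = np.sqrt(np.mean(np.square(audio)))
    return 20.0 * np.log10(max(rms, 1e-12))

players = ["vln1_desk1", "vln1_desk2"]  # ...one entry per individual player
for p in players:
    dry = rms_db(f"{p}_no_reverb.wav")    # bounce with reverbs bypassed
    wet = rms_db(f"{p}_with_reverb.wav")  # bounce with reverbs enabled
    # Positive value = the reverberated version is louder, so pull the
    # dry playback fader down by this amount while composing.
    print(f"{p}: compensate by {wet - dry:+.1f} dB")
```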

Not quite what you were asking, but I think it is interesting nonetheless: with Dimension Strings, reverb on each individual player vs. on the whole section does make an audible difference to my ears. Here are two comparisons:


Violins 1, reverb on the whole section: [audio example]

Violins 1, reverb on each individual player: [audio example]

Celli, reverb on the whole section: [audio example]

Celli, reverb on each individual player: [audio example]

The latter sounds more spacious to me, the former a little flat in comparison. It's not a night and day difference, but enough to make it worth it for me.

But despite all that I am rethinking and readjusting reverb every few months or so. In fact, at the moment I have a mind to test a few other setups with different tools than the ones I’m currently using, and compare them to my current setup.
Interesting observation @Saxer. I've come to like libraries that have a room sound, but not too much tail: think CSS, LASS, or Light & Sound. But roomy samples with the tail cut, that sounds like an interesting solution. Are any other developers doing that apart from Musical Sampling and EastWest?
 
Still do it. Have to with the different libraries I am mixing from VSL to LASS to East West to Spitfire. Each library needs a different dose of each, really. Vienna Suite Numerical Sound impulses (ER/Tail applied in VEPro) and then a little Lexicon Hall in the DAW to glaze it all together.

I am finding that I want to mix with less and less tail/algo lately, though.

DS
 
Very interesting test. With reverb on the entire section, the reverb interacts with all the sounds at once, which is more realistic to my ears. I think the appeal of the individual-player version is that it is a bit clearer, like mixing in a bit more of the dry sample with the wet, creating a layered depth.
 
In a setup with 32 individual players that makes for some 120 instances of reverb. Way too much for my computer to handle in real time.

just fyi, 120 instances of Breeze 2 is perfectly possible on recent CPUs and will not even take 20% of the total CPU resources on recent 8+ core machines...

I don't think we really need one for every instrument in the orchestra though, as we typically don't create a 1st violin section by having 16 separate tracks with a solo violin loaded on each, right? (Although I suppose that would be the ultimate in realism, if we could also introduce some kind of randomization to the actual performance, since that randomization combined with the different spatial position of each player is what creates the ensemble/section sound; see the sketch below.) Probably one instance per instrument section (and perhaps a few more if there are divisis happening) is enough, plus dedicated instances for soloists. 25-50 instances max is likely enough to be almost perfect.
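
For what it's worth, here is a minimal sketch of that randomization idea, assuming each simulated player is driven by a copy of the same performed line; the note data and offset ranges are made-up placeholders, not measured values:

```python
# Sketch: derive per-player copies of one performed line with small,
# independent timing, velocity and tuning offsets. Each copy would then go
# to its own track, pan position and ER/tail chain.
import random

section_size = 16  # e.g. a 1st violin section
notes = [  # (start_seconds, midi_pitch, velocity) -- hypothetical line
    (0.00, 67, 92),
    (0.50, 69, 88),
    (1.00, 71, 95),
]

for player in range(section_size):
    rng = random.Random(player)          # stable per-player variation
    detune_cents = rng.uniform(-8, 8)    # would be applied as a tuning offset
    humanized = [
        (start + rng.uniform(-0.020, 0.030),            # -20..+30 ms timing spread
         pitch,
         max(1, min(127, vel + rng.randint(-6, 6))))    # small velocity spread
        for start, pitch, vel in notes
    ]
    print(player, round(detune_cents, 1), humanized[0])
```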
 
btw, an interesting "thought experiment":

The stage of a hall such as Boston Symphony Hall is roughly 18 meters wide and 12 meters deep (a guess based on a quick Google search). I assume the players have no way to compensate for this while playing; everyone follows the conductor visually and the sound of their nearest neighbors, so the musical "now" is more or less the same for all players. From the audience's perspective, the sound from instruments at the back of the stage would therefore arrive late compared to the sound from instruments at the front: a delay gradient imposed naturally by each instrument's front/back position. With a stage about 12 meters deep, things at the back could arrive 20-30 ms later than those right at the front!
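
A quick sanity check of that figure, assuming sound travels at roughly 343 m/s (about 3 ms per metre) and guessing some front-to-back offsets:

```python
# Back-of-the-envelope delay gradient across a ~12 m deep stage.
SPEED_OF_SOUND = 343.0  # m/s at room temperature

def extra_delay_ms(extra_distance_m: float) -> float:
    return extra_distance_m / SPEED_OF_SOUND * 1000.0

# Hypothetical front-to-back offsets relative to the front desks
for section, metres in [("violins (front)", 0.0),
                        ("horns", 8.0),
                        ("timpani (back)", 10.0)]:
    print(f"{section:17s} {extra_delay_ms(metres):5.1f} ms behind the front desks")
```

Which lands right in that 20-30 ms range for the players at the back.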

I suppose this is why strings often feel as if they are leading the other instruments slightly.

Does anyone introduce intentional delays to brass, timpani, etc. in their templates? I am NOT talking about pre-delay here; I mean a delay on the actual direct sound. Without such a delay, things at the back of the hall would arrive early compared to a real live performance, since the MIDI data is likely well aligned/quantized and the instrument libraries probably don't have built-in delays, I'd guess (I haven't checked).

...interesting. Not sure I ever personally thought about that before.
 
Does anyone introduce intentional delays to brass, timpani etc in their templates?

I don't think this approach makes much sense in practice. Timing in orchestras is a really complex thing, as is how the musicians react to the conductor, and the sound of a specific orchestra usually evolves over years of the musicians playing together (with fluctuations, of course). If there were a significant delay caused by distances, a conductor would normally advise the musicians so that they avoid it; even more so if, for example, a choir is positioned behind the orchestra.
With samples, I see little sense in mechanically introducing an unwanted delay.
Also, perceiving a live performance is a completely different situation from listening to a recording. The visual feedback a listener has in a concert can change the perceived balance and dynamics (on a psychoacoustic basis), and flaws that are not relevant in a concert can become really annoying when heard for the third time on a recording. And so on.

In the context of distances and sample-based mockups, I think the loss of energy in the high frequencies over distance is a much more relevant aspect to the ear.
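
As an illustration of that last point, here is a very crude sketch of distance-dependent high-frequency loss using a simple low-pass whose cutoff drops as the virtual instrument moves further back; the mapping from metres to cutoff is an arbitrary guess, not a physical air-absorption model:

```python
# Crude stand-in for high-frequency loss over distance: a first-order
# low-pass whose cutoff falls as the source gets further away.
import numpy as np
from scipy.signal import butter, lfilter

def distance_lowpass(audio: np.ndarray, sr: int, distance_m: float) -> np.ndarray:
    # Closer than ~2 m: leave untouched; further back: pull the cutoff down.
    cutoff_hz = np.clip(20000.0 / (1.0 + 0.15 * max(distance_m - 2.0, 0.0)),
                        4000.0, 20000.0)
    b, a = butter(1, cutoff_hz / (sr / 2.0), btype="low")
    return lfilter(b, a, audio)

sr = 48000
test = np.random.default_rng(0).standard_normal(sr)  # 1 s of noise as a test signal
for d in (2.0, 8.0, 14.0):
    out = distance_lowpass(test, sr, d)
    print(f"{d:4.1f} m -> output RMS {np.sqrt(np.mean(out**2)):.3f}")
```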
 
I use delay and convolution reverb. I prefer to work with dry samples. I don't worry too much about whether or not it sounds perfect, as long as it sounds good to my ear. I really couldn't care less about "authenticity."
 
The only thing I always keep coming back to is the realization that I can't really settle on a particular method. I don't even think it really matters that much tbh. If something sounds good enough and the music can take over, it's fine. I firmly believe that if the music is good enough, the production methods only need to sound "OK" enough that they don't draw unnecessary attention to themselves.

I like keeping things simple. If it gets too elaborate on the technical side, it starts to bore me real quick, and if I get bored, I get annoyed. If the samples are already recorded with room sound, I'll just add a bit of algo tail. With VSL, I use their MIRx profiles, which sound pretty real by themselves, and again just add a bit of algo on top. For other dry stuff, I currently like to route the signal to EW Spaces busses, dial back the direct signal a few dB, and send a little bit of algo reverb to that as well. It's not perfect, and not the "realest" sound I ever heard, but it works. That's OK for me.
 
I don't think this approach makes much sense in practice. Timing in orchestras is a really complex thing, as is how the musicians react to the conductor, and the sound of a specific orchestra usually evolves over years of the musicians playing together (with fluctuations, of course). If there were a significant delay caused by distances, a conductor would normally advise the musicians so that they avoid it; even more so if, for example, a choir is positioned behind the orchestra.

Interesting, great insights! So effectively the conductor would advise the brass etc. (the players at the back) to play ever so slightly early, so as to realign things. I suppose it is more of a creative/intuitive process developed over years of playing together, as you say, than a literal command like "hey brass guys, play 25 ms early".
 
In the real world, to compensate for a 25 ms delay, a simple 'Hey, brass, don't drag!' from the conductor would do it.
 
Ya, I just chatted with a friend who is a first violinist in the NSO at the Kennedy Center in Washington, DC, and she basically said the same thing: some combination of conductor input, player/section intuition, and orchestra hive-mind somehow magically combines to rebalance these timing differences, so they likely don't exist in large amounts in the final result. Makes sense.

Human beings are so sneaky. ;)
 
just fyi, 120 instances of Breeze 2 is perfectly possible on recent CPUs and will not even take 20% of the total CPU resources on recent 8+ core machines...

as we typically don't create a 1st violin section by having 16 separate tracks with a solo violin loaded on each, right?


LOL, I laughed as soon as I read this part; man, am I guilty lol. 12 horns (six on the right, six center/left), 16 first violins, 14-16 second violins (hey, those two extra violins can make a noticeable difference lol), 12 celli, 12 silky-sounding violas, 9 contrabasses, and I don't even want to start on the woodwinds and various percussion. Choir is a whole different story. But it's HUGE. This template is great once you get over how much time you wasted creating it in the first place, and it lets me compose anything from a small solo, duo, or quartet all the way up to a Witcher 3-type epic and above.

I spent countless hours adding every section, one instrument at a time, with Virtual Sound Stage, moving and tweaking to get rid of all the phasing issues. I remove as much reverb as I can from the samples themselves and turn off VSS's reverb as well, then add EW Spaces to each instrument section and put Altiverb over the final mix, and it gives a jaw-dropping sound you wouldn't think was possible from samples in 2018. :) But if there's a tight deadline, no, I would not go this route lol.
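
If it helps anyone, one rough way to spot those phasing issues numerically rather than purely by ear is to sum a track with its processed version and check how much level is lost compared to the two signals on their own; the file names below are hypothetical:

```python
# Sketch: a big drop in the level of (dry + placed) relative to the two
# signals individually usually points at comb filtering / phase cancellation
# worth tweaking in the placement tool.
import numpy as np
import soundfile as sf

def rms(x: np.ndarray) -> float:
    return float(np.sqrt(np.mean(np.square(x))))

dry, sr = sf.read("horns_raw.wav", always_2d=True)       # unprocessed bounce
placed, _ = sf.read("horns_placed.wav", always_2d=True)  # after stage placement
n = min(len(dry), len(placed))

summed = dry[:n] + placed[:n]
print(f"dry {rms(dry[:n]):.3f}  placed {rms(placed[:n]):.3f}  sum {rms(summed):.3f}")
```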
 
I don't separate ERs and tails. Things like Cinematic Strings 2 or the Cine stuff are already recorded in a studio/hall, so the reflections are there already anyway. But I sometimes use two reverbs to make the sound feel more distant.
 
I just finished mixing my most recent score, and it was the least amount of time I've ever spent on reverbs. The music was incredibly complex (mostly crazy circus music), and I got by with just three reverb sends plus a feather reverb that I only used occasionally.

I think I'm over the insane reverb setups.
 