# How do you use your reverb on orchestral samples?



## S R Krishnan (May 13, 2019)

Please share some of your workflow tips on using reverb on orchestral samples, and your go-to reverb plugins.


----------



## Bluemount Score (May 15, 2019)

Currently, I have only one reverb channel and I route all my instruments to it. Then I adjust the amount of reverb for each individually. It's a very fast workflow, and I like that everything plays in the same virtual room by using only one reverb instance. It's also very CPU efficient.
However, I recently thought about splitting it up into sections: one reverb each for strings, brass, percussion, etc. I hope to gain a bit more flexibility that way.
My current reverb plugin is RC48.
My current reverb plugin is RC48.


----------



## S R Krishnan (May 15, 2019)

Thanks! RC48 is from NI, right?


Meetyhtan said:


> Currently, I have only one reverb channel and I link all my instruments to it. Then, I adjust the amount of reverb for each individually. It's a very fast workflow and I like when everything is playing in the same virtual room by only using one reverb instance. Also it's very CPU efficient.
> However, I recently thought about splitting it up in sections. One reverb for strings, brass, percussion etc. seperatly. I hope to achieve a bit more flexibility by that.
> My current reverb plugin is RC48.


----------



## Bluemount Score (May 15, 2019)

S R Krishnan said:


> Thanks! RC48 is from NI, right?


Exactly! Got it in Komplete.
I've only heard very positive reviews about it and I'm not thinking about switching anytime soon.


----------



## BenG (May 15, 2019)

- I have every instrument routed through a single instance of VSS2 to get the rough stage/panning correct. 

- Then, I have sends set-up for each section of the orchestra with a convolution reverb (Spaces). I.e. One for Woods, Brass, Strings, Perc, Choir, etc.

- Lastly, everything passes through the same algorithmic reverb (B2) to add a bit of tail and glue everything together.
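The three-stage routing above can be sketched as a toy signal flow. This is a sketch of the topology only: the function names are mine, and the "reverbs" are simple gain stand-ins, not real DSP or the actual VSS2/Spaces/B2 processing.

```python
import numpy as np

def place_on_stage(track, gain=1.0):
    """Stage 1: rough stage placement (VSS2 stand-in)."""
    return track * gain

def section_reverb(track, send_level):
    """Stage 2: per-section convolution send (Spaces stand-in);
    returns the wet signal arriving on the section's FX bus."""
    return track * send_level

def glue_tail(mix, wet=0.18):
    """Stage 3: one shared algorithmic tail (B2 stand-in)."""
    return mix + mix * wet

strings = place_on_stage(np.ones(4))
brass = place_on_stage(np.ones(4), gain=0.8)

# Dry sum plus each section's reverb return, then the common tail.
mix = strings + brass + section_reverb(strings, 0.3) + section_reverb(brass, 0.25)
out = glue_tail(mix)
```

The key property is that every section reaches the same final tail, which is what provides the "glue".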


----------



## S R Krishnan (May 15, 2019)

BenG said:


> - I have every instrument routed through a single instance of VSS2 to get the rough stage/panning correct.
> 
> - Then, I have sends set-up for each section of the orchestra with a convolution reverb (Spaces). I.e. One for Woods, Brass, Strings, Perc, Choir, etc.
> 
> - Lastly, everything passes through the same algorithmic reverb (B2) to add a bit of tail and glue everything together.


Ah nice! VSS2 is an alternative to MIR Pro, right? How does it compare to MIR?


----------



## BenG (May 15, 2019)

S R Krishnan said:


> Ah nice! VSS2 is an alternate to MIR pro right? How do you compare that to MIR?


Yes, they are similar. I've always liked VSS2, but some are not a fan and say that it colours the sound slightly. Have never tried MIR, would love to hear from someone who has both!


----------



## Divico (May 15, 2019)

BenG said:


> - I have every instrument routed through a single instance of VSS2 to get the rough stage/panning correct.
> 
> - Then, I have sends set-up for each section of the orchestra with a convolution reverb (Spaces). I.e. One for Woods, Brass, Strings, Perc, Choir, etc.
> 
> - Lastly, everything passes through the same algorithmic reverb (B2) to add a bit of tail and glue everything together.


do you send your spaces instances through the glue verb?


----------



## BenG (May 15, 2019)

Divico said:


> do you send your spaces instances through the glue verb?



I'll send the 'Group Tracks' to the 'Master Verb'


----------



## Tice (May 15, 2019)

On the large orchestral template I'm currently using, I have 5 different FX tracks with reverbs, all the same Spaces reverb, but each track has a little less direct sound than the previous one. This creates an orchestra 5 rows deep. The brass is the exception: I'm using Spitfire Symphonic Brass in an otherwise VSL orchestra. But Spitfire recorded the space as well, not just the (dry) instrument, so I use that without adding additional reverb. Figuring out how much of the room mics to use was a bit tricky, but I'm happy with the result in the end.


----------



## oxo (May 15, 2019)

Depends on the libraries used (mic position, room, panning). My standard setup includes two different Spaces instances: one with a stage IR and one with a hall IR. All sends go to the hall IR for the tail. The very dry libraries go through the stage IR first (for positioning and ER) and then to the hall.
Sometimes, for pieces with short notes (spiccato patterns, etc.), I use an additional reverb for the shorts with different settings, so that it does not get washed out and the rhythm is not smeared by reverb and pre-delay.


----------



## AdamKmusic (May 15, 2019)

I've always wondered: does, for example, Hans Zimmer @Rctec use additional reverb on his orchestra after it's been recorded somewhere like AIR? I guess it's kind of similar to using samples, without the additional build-up of noise/reverb from the millions of mics you're using when playing in samples.


----------



## goalie composer (May 15, 2019)

BenG said:


> - I have every instrument routed through a single instance of VSS2 to get the rough stage/panning correct.
> 
> - Then, I have sends set-up for each section of the orchestra with a convolution reverb (Spaces). I.e. One for Woods, Brass, Strings, Perc, Choir, etc.
> 
> - Lastly, everything passes through the same algorithmic reverb (B2) to add a bit of tail and glue everything together.


Are you using vss2 on libs recorded in situ as well?


----------



## BenG (May 15, 2019)

goalie composer said:


> Are you using vss2 on libs recorded in situ as well?



Yes, and mostly using the library-specific presets it comes with. I should also mention that I use the initial, baked-in room ERs so all libraries are roughly in the same space to start. This is done by raising or lowering the close and far mics.

Libraries:

BWW
CB Core + Pro
CSS
Spitfire Perc
Storm Choir


----------



## S R Krishnan (May 15, 2019)

Thank you guys! So much of learning here!


----------



## JohnG (May 15, 2019)

AdamKmusic said:


> I've always wondered, does, for example, Hans Zimmer @Rctec use additional reverb on his orchestra after he's recorded at somewhere like air. I guess it's kind of similar to using samples without the additional build up of noise/reverb from the millions of mics you're using when playing in samples.



If your question is, "in general, do major film scores add additional reverb besides the room sound?"

The answer is "yes," they do, typically. 

About HZ in particular I don't know for sure, but I can hear some pretty unusual things going on with his mixes. Moreover, they vary considerably from project to project. He may be the one top composer who follows a varied and idiosyncratic path to post-production of orchestra. So a blanket "yes/no" answer for him in particular is going to pave over what really happens.


----------



## chimuelo (May 15, 2019)

Marcato Pizz Cellos sounds fantastic with a reverse or even gated reverb.
But it’s just what I prefer live when doing Electric Light Orchestra style stuff, Trans Siberian, etc.
Adjust the slope or decay and have fun.
I’m using a Strymon Big Sky but it’s basically Code on a Custom Chip. So Native verbs should have similar Algos.

My recent favorite is a Reverse Gated Spring.
It’s just not normal which is why I like it more.


----------



## Beat Kaufmann (May 16, 2019)

Hi S R Krishnan
Have a look >>> here... an old thread about reverb and orchestra.

All the best
Beat


----------



## AdamKmusic (May 16, 2019)

JohnG said:


> If your question is, "in general, do major film scores add additional reverb besides the room sound?"
> 
> The answer is "yes," they do, typically.
> 
> About HZ in particular I don't know for sure, but I can hear some pretty unusual things going on with his mixes. Moreover, they vary considerably from project to project. He may be the one top composer who follows a varied and idiosyncratic path to post-production of orchestra. So a blanket "yes/no" answer for him in particular is going to pave over what really happens.



I only use Hans as an example as he is synonymous with Air studio and obviously that amazing room sound


----------



## Bear Market (May 16, 2019)

Beat Kaufmann said:


> Hi S R Krishnan
> Have a look >>> here... an old thread about reverb and orchestra.



I know this wasn't directed at me but I thought I'd try to sneak in a question anyway  

What difference (if any) would you say there is between placing the ER portion of a reverb as an insert on a group channel, as opposed to on a send?

Thanks by the way for all your contributions on this forum.


----------



## camelot (May 16, 2019)

As a send, it sits on its own FX channel, which you can mix individually. Furthermore, you can set the ratio of dry and wet for each channel of the group individually, using the send level for the wet component and the channel volume for the dry. With a reverb inserted on the group, by contrast, the ratio is fixed for all of them.
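The difference can be made concrete with a toy example (the numbers and function names are mine, purely illustrative):

```python
def send_setup(dry_levels, send_levels):
    """Per-channel sends: each channel keeps its own dry level and
    contributes its own amount to the shared reverb input."""
    wet_input = sum(d * s for d, s in zip(dry_levels, send_levels))
    return dry_levels, wet_input

def insert_setup(dry_levels, wet_ratio):
    """Reverb inserted on the group: one wet/dry ratio for the
    summed group signal, the same for every channel in it."""
    group = sum(dry_levels)
    return group * (1 - wet_ratio), group * wet_ratio

# Violins drier, horns wetter: only possible with per-channel sends.
dry, wet = send_setup([1.0, 0.5], [0.2, 0.8])

# On the group insert, both channels get the same 30% wet.
group_dry, group_wet = insert_setup([1.0, 0.5], 0.3)
```

With the send setup, the wet input (0.6 here) is a per-channel choice; with the insert, the 30/70 split applies to everything in the group at once.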


----------



## Shredoverdrive (May 17, 2019)

BenG said:


> - I have every instrument routed through a single instance of VSS2 to get the rough stage/panning correct.


Oh my, it never occurred to me that I could do that! Since VSS2 has presets for individual instruments, I thought I could not route everything through a single instance of VSS2. I really am dumb sometimes. VSS2 is not a CPU hog at all, but I still must try this over the weekend. Thanks for the idea.
Apart from this slight difference, I have the same setup as you, @BenG . I got the inspiration for it from a post by @bennyoschmann a while ago.
VSS2 with instrument presets for VSL, East West (or adapting them when they do not exist, as for my beloved CH stuff or others), routed by section to a QL Spaces verb. I have been trying to add a very slight UVI Sparkverb algo glue on top of all that, but I'm not convinced so far.


----------



## Divico (May 17, 2019)

AdamKmusic said:


> I've always wondered, does, for example, Hans Zimmer @Rctec use additional reverb on his orchestra after he's recorded at somewhere like air. I guess it's kind of similar to using samples without the additional build up of noise/reverb from the millions of mics you're using when playing in samples.


Alan Meyerson has done a lot of his mixing. I found some tips about his reverb settings:
_“I also don’t like to EQ reverb returns. But I do roll off the low and high frequencies to the send of the reverb. This is important so that the reverb doesn’t become cluttered. There usually is a lot of low mid and low-frequency buildup that happens in a reverb. I don’t want to add to that with sounds that have it in the first place.”_

_“I also use multiple reverbs per stem and have the stem reverbs independent. So, my strings would have a set of reverbs, brass another set, percussion another etc. This gives me a lot of space to play with and build the movement. I don’t like having a static mix. Music has to move and the instruments have to play in the space and interact. So multiple reverbs help me with this.”_

_“I almost never send reverbs from the spot mics unless it is really needed”_

_“Now, when I send to the reverb aux, I send it from my room mics. I get a lot of body from them and use that to extend the room rather than trying it with the spot mics, because they don’t usually glue well in the mix that easily as they have more mid range content.”_

_“I sometimes add a Devil-Loc or a Decapitator in the return of the front or surround reverb just to give it a bit of grit and definition”_

_“I don’t use the pre-delays on the rooms and never use them to place the instrument in space. I use them only to get a definition for the reverbs. And for a long time, the New York pop world pre-delay value was 120 ms. That is what I use for most of the pre-delays, to get that delay a bit separated from the main sound if I want it as an effect in a dense mix.”_

Also, if I am not mistaken, he is a big Bricasti fan and uses a specific setup for each of left and right.


----------



## BenG (May 17, 2019)

Shredoverdrive said:


> Oh my, it never occurred to me that I could do that! Since VSS2 has presets for individual instruments, I thought I could not route every one on a single instance of VSS2. I really am dumb sometimes. VSS2 is not a CPU hog at all but I still must try this this week-end. Thanks for the idea.
> Apart form this slight difference, I have the same setup as you, @BenG . I got inspiration for it from a post by @bennyoschmann, a while ago.
> VSS2 with instruments presets for VSL, East West (or adapting them when they do not exist, as for my beloved CH stuff or others) routed by section to a QL Spaces verb. I have been trying to add a very slight UVI Sparkverb algo glue on top of all that but I'm not convinced so far.



Correct me if I'm wrong, but doesn't VSS2 automatically load everything into one instance where all the instruments appear on a single 'stage'?


----------



## Shredoverdrive (May 17, 2019)

Well, it shows them all for sure, but it never occurred to me that it applied to all the instruments in one instance. I thought it was just there to keep track of the whole picture.


----------



## Zoot_Rollo (May 17, 2019)

i really like this for placement and ER/LR

https://www.eareckon.com/en/products/eareverb2-reverb-plug-in.html


----------



## Beat Kaufmann (May 20, 2019)

Bear Market said:


> I know this wasn't directed at me but I thought I'd try to sneak in a question anyway
> 
> What (if any) difference would you say it is between placing the ER portion of a reverb as an insert on a group channel as opposed to on a send?
> 
> Thanks by the way for all your contributions on this forum.



Thanks for the nice words, Bear Market!

So, first we have to say that only the result counts. That's why there is no single "THE" procedure.
Nevertheless, my proposed system (depths in group channels) has 3 main "plus points":

1. Acoustically creating room depth usually requires more effects than just a reverb. An EQ, for example, helps to better simulate the distance of instruments. Since all instruments come as samples at the same volume (alto flute ... bass drum), you always have to amplify the instruments in depth 3 with a compressor & co., because they end up even weaker (the EQ cuts the high frequencies...).
All these treatments can be applied conveniently in each bus channel, so every room depth can be optimized perfectly. With more or less "send" into one single reverb, this is not possible in the same way.

2. If you have a 4th bus (without any effect), you can collect there all the sampled instruments that already have a room depth baked in.

3. The big advantage comes at the end: because all the different but still dry room depths (ERs only) are routed through one and the same reverb with tail only, everything is glued together nicely. Even if the instruments play at different depths, the feeling of a single concert hall is preserved. Also, the tail volume is the same for close and far instruments, which matches reality nicely.

This system is almost always successful, especially with larger mixes. Maybe you can count that as a further plus.
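The depth-bus idea can be sketched numerically. This is my own stand-in model, not Beat's actual plugin chain: the ER "reverb" is a simple gain, and the shared tail is a fixed wet amount added to everything.

```python
import numpy as np

def depth_bus(signal, er_wet):
    """One depth group: blend dry signal with an ER-only return.
    More ER wet = instrument sounds further away."""
    er = signal * 0.5                     # stand-in for an ER-only IR
    return signal * (1 - er_wet) + er * er_wet

def shared_tail(mix, wet=0.18):
    """One tail-only reverb for ALL depth groups: the 'glue'.
    The tail level is identical regardless of distance."""
    return mix + mix * wet                # stand-in for a tail-only reverb

flute = np.ones(4)
close = depth_bus(flute, er_wet=0.2)      # depth 1: mostly dry
far = depth_bus(flute, er_wet=0.8)        # depth 3: mostly ER

out = shared_tail(close + far)
```

The point the toy model captures: distance is set per group by the ER blend, while the tail is applied once, after the groups are summed, so near and far instruments share the same hall.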

--------------------------------------------------
Here is the scheme for the text above:





And that's how it can sound: Example
Observe that the tail is always the same. It is only the distance of the instrument that changes. Are you able to reach this result by using "send"?

All the best
Beat


----------



## Zoot_Rollo (May 20, 2019)

Bear Market said:


> What (if any) difference would you say it is between placing the ER portion of a reverb as an insert on a group channel as opposed to on a send?



that's the way i do it, especially if the instruments are not "in situ".

i'll use EAReverb 2 or Panagement/low CPU reverb on each track for placement and ER, 

then route to a bus with a send to an LR reverb.


----------



## neblix (May 20, 2019)

For reverbs, I use Seventh Heaven, a fantastic Fusion-IR emulation of the Bricasti M7. So it's convolution based, but lets you manage reflections, decay time, and other neat parameters. My template is set up like this:

*1*. All tracks by default are set to output to a Null bus (-inf dB, so silent).
*2*. Five send/busses as follows.

Dry - This is a 0 dB unity gain bus by default with nothing on it.
ER - This has a subtle low roll-off into a reverb configuration set only to early reflections.
LR - This has a less subtle low roll-off into the identical reverb configuration, but set instead to late reflections.

Amb - This is like the LR send, but it has some more creative effects like Valhalla Shimmer adding a spacey, lasting shine. I don't write strictly traditional acoustic orchestration so this is a personal thing.
Sub - This is like the Dry send, but it has a low pass at around 100 Hz. This send is to artificially increase and manage the low end of my instrument tracks. In this send I could do things like stereo field management, compression, automation, etc.
*3*. All tracks in my DAW have the send faders available so I can control the blend on every element in my mix, right in the DAW mixer without opening plugins. I can place a choir further back at say, 10% Dry - 30% ER - 60% LR. For soloists, give them Dry detail and some Amb to add space without pushing them back into the room. It's case by case; this template is about allowing me easy access to use my ears and adjust things, not necessarily about pre-mix ideologies.

*3 (sub)*. I have a few reasons to use a Dry send and not simply output the track to Master. One is that it allows me independent control of the track fader (which controls all of these sends together, because they're post-fader sends) vs. controlling the level of dry signal for blending. In other words, without doing this, I'd have to use the track fader to control Dry signal, making it useless for general mix adjustments and automation, and then changing the other sends to pre-fader so I can pump more signal into them should the Dry have to be really low (for spacier/further sounds).

Another reason is now I can process the detailed, clear parts of my mix without also processing the reverb. There are relatively few cases where I actually do this, but it's useful in some circumstances. One time, I took the Dry signal and used it as a key input to a sidechain, so that I was driving the master compressor only by detailed information and none of the buildup from room sound. I like to experiment with unconventional mixing techniques, sometimes it pays off.

*EDIT*: Worth mentioning, the Sub bus is completely dry, so if I shove something in the back of the room, like the string section, I can still steal their dry low end. It's one of those "larger than life" approaches to mixing.

*4*. All sample libraries have unloaded all mic positions except for the close positions. Exceptions made occasionally for drum overheads/rooms or certain libraries where I like the smoother sound of a slightly farther position (like Tree mics in Spitfire libraries). This isn't a hard fast rule but it's an important starting point for the most efficient RAM usage and easiest mixing process. Having all close mics and managing reverb through just a small number of plugins is not only incredibly efficient, it sounds way better than anything I used to do before, and blending libraries from different developers is a completely seamless thing.
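The Dry/ER/LR blending in point 3, and the point about post-fader sends in 3 (sub), can be sketched like this (the numbers and names are mine, purely illustrative):

```python
def post_fader_sends(signal, fader, sends):
    """Post-fader sends: the track fader scales every send
    together, while the per-send levels set the blend."""
    return {bus: signal * fader * level for bus, level in sends.items()}

# A choir placed further back: 10% Dry, 30% ER, 60% LR.
choir = post_fader_sends(1.0, fader=0.9,
                         sends={"dry": 0.1, "er": 0.3, "lr": 0.6})

# Pulling the fader down for a mix move changes the overall level
# but leaves the Dry/ER/LR balance (the "placement") untouched.
quiet = post_fader_sends(1.0, fader=0.45,
                         sends={"dry": 0.1, "er": 0.3, "lr": 0.6})
ratio_before = choir["dry"] / choir["lr"]
ratio_after = quiet["dry"] / quiet["lr"]
```

This is why the post-fader arrangement keeps the track fader free for general automation: level and placement are decoupled.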

Here is an example of a song I mixed utilizing this workflow. There are sample libraries from 4-5 different developers here, yet it's not even a thing to consider when mixing using this approach. There's barely any EQ happening because they're quality libs, just gentle filtering to control ranges.

Excuse the scratch composition; additionally, this demo was made before I got Seventh Heaven. It's the TSAR-1 from Softube. It would probably sound even better if I replaced the reverb config.



Here is an "alternate mix", where I arbitrarily changed the positioning of elements. I brought the choirs closer and moved the strings back for a more intimate sound, and this was done purely through managing the Dry, ER, and LR faders on those elements (celeste track, string bus, choir bus).



Here's a snippet of the Menuet from Ravel's Le Tombeau de Couperin. I just snagged the MIDI online from somewhere, so the sample sequencing probably isn't the greatest. But this is demonstrating combining Spitfire Strings and Berlin Woodwinds, and it's totally seamless.



*EDIT: *Make sure you know if your sends are pre or post-pan. Mine are pre-pan, which renders the pan control on mixer tracks unfortunately useless. I simply instead use the pans on the send faders or have a pan effect in the FX chain of the element. If you can make your sends post-pan, that's even better.


----------



## SBK (May 20, 2019)

Beat Kaufmann said:


> That's what about the text is above:



This looks clever!!!! Thanks for sharing


----------



## MartinH. (May 20, 2019)

Do you guys worry about or see a benefit in designing your reverb setups with easy stem export in mind? E.g. if you were building a template for a job that will require delivery of stems for different instrument sections, would you try to make it so you can just export the master plus all separate stems in one go with reverb in the stems already?
(apologies if I have messed up some of the terminology, I've never actually worked that way so far)


@Beat Kaufmann: Thanks a lot for the in depth explanations and examples that you always give! I've learned a lot already from your posts on this forum.


----------



## 98bpm (May 20, 2019)

Beat Kaufmann said:


> And that's how it can sound: Example
> Observe that the tail is always the same. It is only the distance of the instrument that changes. Are you able to reach this result by using "send"?
> 
> All the best
> Beat

I hope you don't mind my asking, but in your audio example, it sounded as if the instrument was moving farther away from the listener in real time. I think I understand the concept of creating space/depth by placing ERs on group channels at differing values to emulate depth. But how did you move the instrument in the example through those depths in real time?


----------



## Henu (May 20, 2019)

neblix said:


> Sub - This is like the Dry send, but it has a low pass at around 100 Hz. This send is to artificially increase and manage the low end of my instrument tracks.





fixed version said:


> This send is to artificially introduce some serious phase cancellations and troubles.


----------



## paulthomson (May 20, 2019)

This may be of interest - I don’t talk about bussing (may do a separate one on that) but I do go into some detail - might be useful for anyone just getting their heads around the whole reverb thing.


----------



## Beat Kaufmann (May 21, 2019)

98bpm said:


> And that's how it can sound: Example



*.... But how did you move the instrument in the example through those depths in real time?*

Hi
In order to simulate a large depth of space, you must first find an impulse response which lets the instruments sound far away at 100% "wet". Search for this in your IR library.
If you look at the scheme at the top, you can adjust the distance with the wet/dry slider in the depth groups. Depth 1 contains more of the dry signal ... depth 3 contains more of the wet signal. Tail: wet to taste.

The sound example shows how it sounds when you pull the ER knob from *dry to wet* in a depth bus. I did this with a controller curve which controlled the dry/wet parameter. It shows the large range of different depths you can achieve with a good IR (shortened to the ER area, so without tail).
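The controller-curve trick can be sketched in a few lines (my own stand-in model: the ER return is a simple gain, and the curve is a plain linear ramp):

```python
import numpy as np

# A controller curve sweeps the ER dry/wet parameter from 0 to 1
# over the clip, so the instrument appears to move from close to far.
n = 8
signal = np.ones(n)
er = signal * 0.5                      # stand-in ER-only reverb return
wet_curve = np.linspace(0.0, 1.0, n)   # the automation curve

out = signal * (1 - wet_curve) + er * wet_curve
# out starts fully dry and ends fully wet (here: the ER return level)
```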

All the best
Beat


----------



## hdsmile (May 21, 2019)



98bpm said:


> And that's how it can sound: Example



Beat, I never understood how to properly configure the reverbs. Maybe you can show an example of settings for both reverbs? A few example pics would be great. As a convolution reverb I use Spaces II, and for the algo reverb, 2CAudio Breeze2.

Here is my example of settings for Spaces II; correct me if I'm wrong.


----------



## Beat Kaufmann (May 21, 2019)

hdsmile said:


> Beat, I never understood how to properly configure the reverbs, *maybe you can show an example of settings for both reverbs, a few pics example would be grateful* as convolution reverb I use Spaces II and for Algo-reverb is 2C-Breeze2



I mentioned in a previous thread that natural impulse responses are usually more suitable for creating room depths. Meanwhile, however, there are also algorithmic reverbs that can be used to create beautiful room depths. An example of this is the EaReckon reverb shown above.

*Here's* what you want to have *- just with "your" BREEZE2 from 2CAudio.*

--------------------------------------

Sorry for advertising - but maybe this helps solve many of your problems as well:
We all spend a lot of money on audio plugins. Unfortunately, we often don't know how to use them. That's why I wrote the tutorial "Mixing an Orchestra"... for half the price of the plugins.

All the best
Beat


----------



## darcvision (May 21, 2019)

do you pan your reverb bus?


----------



## hdsmile (May 21, 2019)

Beat Kaufmann said:


> *Here's* what you want to have *- just with "your" BREEZE2 from 2CAudio.*


It's not exactly what I was asking for, because it only explains how to create depth with Breeze2 alone, but I need a setup example for a combination of two reverbs, convolution + algo, like in your picture example above.
I can create pretty great depth with Spaces II, but as soon as I turn on the algo reverb (on the master channel) after Spaces II, the sound deteriorates.


----------



## neblix (May 21, 2019)

Henu said:


> This send is to artificially introduce some serious phase cancellations and troubles.



If you double a signal, it doesn't phase cancel. The phase correlates. This is basic signal theory. Perhaps you should ask questions instead of conjecturing on something you haven't tried or seen for yourself. I'd be happy to explain how everything works in further detail. I am not happy to see people being smartasses.

Never mind the fact I provided a very long informative post about my reverb send workflow, which is on topic, and you respond with a sarcastic gripe about a tiny part of the workflow that I provided the least detail on because it only has a tangential relevance. I'm not recommending people follow and recreate my template. The OP asks how people use reverb on their orchestral instruments, so I provided an answer.


----------



## 98bpm (May 21, 2019)

Beat Kaufmann said:


> *.... But how did you move the instrument in the example through those depths in real time?*
> 
> Hi
> In order to be able to simulate a large depth of space, one must find an "Impulse Response" as condition, which lets the instruments sound far away at 100% "wet". Search for this in your IR-Library.
> ...


Thank you sir.


----------



## Beat Kaufmann (May 21, 2019)

darcvision said:


> do you pan your reverb bus?


No. You pan the individual audio channels and route them (already panned) through the corresponding depth bus. So you set the left-right position together with the chosen depth.

Beat


----------



## Beat Kaufmann (May 21, 2019)

hdsmile said:


> it's not exactly what I asking for, because there is only explanation about how to creating depth with BREEZE 2 only, but I need setup example for use a bunch of two reverbs: convolution + algo reverb, like on your picture example above.
> because I can create pretty great depth with Spaces II, but as soon as I turn on the algo-reverb (on Master channel) after Spaces II, the sound deteriorates



The important thing is that the reverb in the main channel really does not produce any ERs. It is supposed to deliver only the part of the reverb that we turned off in the buses. Unfortunately, you cannot hide the ER part with Breeze2 without some tinkering. There are reverbs that can do that better. Also, this added tail effect often does not need to be larger than 20%.
So try to find reverbs where you can hide or decrease the ER.
Set the predelay to about 60 - 100 ms (the tail cannot occur earlier than that).
Set about 18% wet and, for orchestral music, about 2 - 4 seconds of tail decay. That's it.
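Collected as a small config sketch, Beat's suggested tail settings look like this (the values are from the post; the key names and the sample-rate conversion are mine):

```python
SAMPLE_RATE = 48000  # assumed project sample rate

tail_settings = {
    "er_level": 0.0,     # hide/disable early reflections entirely
    "predelay_ms": 80,   # ~60-100 ms, so the tail starts after the ERs
    "wet": 0.18,         # about 18% wet
    "decay_s": 3.0,      # 2-4 s tail decay for orchestral music
}

# Predelay expressed in samples, as some plugins display it:
predelay_samples = int(tail_settings["predelay_ms"] / 1000 * SAMPLE_RATE)
```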

Unfortunately, I do not own Space II. The individual depths should themselves contain as little "tail" as possible. So what comes from the depth groups sounds like this:

*Just ER* https://www.beat-kaufmann.com/mixing-an-orchestra/downloads/timpani_close-depth_nur_er.mp3 (Timpani (Close, Depth1, Depth2, Depth3))
*With ER and Tail* https://www.beat-kaufmann.com/mixing-an-orchestra/downloads/timpani_close-depth_er_tail.mp3 (Timpani (Close, Depth1, Depth2, Depth3)) _...a bit too much tail to my taste _
If you can not produce such "dry" distances with Space II, you just leave the tail in the main channel and let your depths together with their tails. That works too - but without the "glue-effect" in the Main-Channel.
As already mentioned: It only counts the result. How to achieve it does not really matter.

Beat


----------



## hdsmile (May 21, 2019)

Thanks Beat, I should try some other algo reverbs. Could you recommend some that can hide the ER?


----------



## Zero&One (May 21, 2019)

Great posts, thanks all.

So, what about, say, EW Hollywood Orchestra? I only have Gold, so only the default mid mic position. Would you still send this to the same bus that my dry libs are using?
Doesn't this also make Spitfire C, T, A setups redundant? The more I learn, the more confused I am :S


----------



## Dave Connor (May 21, 2019)

Divico said:


> Alan Meyerson has done a lot of his mixing. I found some tips about his reverb settings:
> _“I also don’t like to EQ reverb returns. But I do roll off the low and high frequencies to the send of the reverb. This is important so that the reverb doesn’t become cluttered. There usually is a lot of low mid and low-frequency buildup that happens in a reverb. I don’t want to add to that with sounds that have it in the first place.”_


I’m guessing that he sends to an Aux; EQ’s it there and sends that to the reverb? (Or that’s how you would do it if you don’t have EQ on the sends.)


----------



## Divico (May 22, 2019)

Dave Connor said:


> I’m guessing that he sends to an Aux; EQ’s it there and sends that to the reverb? (Or that’s how you would do it if you don’t have EQ on the sends.)


to be honest putting an eq before the verb on the aux will do fine


----------



## hdsmile (May 22, 2019)

Divico said:


> to be honest putting an eq before the verb on the aux will do fine


exactly!


----------



## Divico (May 22, 2019)

James H said:


> Spitfire C, T, A


Having different mic positions is a different beast. Ambience from a room mic is different from just putting reverb on a close mic.


----------



## Zero&One (May 22, 2019)

Divico said:


> having different mic positions is a different beast. Ambience from a room mic is different when just reverb on a close mic.



But people seem to be just using the close mics?
Would I then bus the room mics and mix them separately if required?


----------



## Divico (May 22, 2019)

James H said:


> But people seem to be just using the close mics?
> Would I then bus the room mics and mix them separately if required?


I think people use lots of mics. Depending on how much horsepower you have, you can either mix your mic positions while composing or export all of them to different stems. As stated above, Alan Meyerson likes to put reverb only on the ambient mics. Think of your close mics as a detail and clarity tool. You have a solo? Give it some close mic. Want some chuga chuga spiccato celli? Give them some close mics for more precise and crispy chuga chuga action.


----------



## Zero&One (May 22, 2019)

Divico said:


> I think people use lots of mics. Depending on how much horsepower you have, you can either mix your mic positions while composing or export all of them to different stems. As stated above, Alan Myerson likes to put reverb only on the ambient mics. Think of your close mics as a detail and clarity tool. You have a solo? Give it some close mic. Want some chuga chuga spiccato celli? Give them some close mic for more precise and crispy chug-a-chug action.



Thanks! Makes sense
I certainly need some chuga chuga in my life


----------



## Beat Kaufmann (May 22, 2019)

hdsmile said:


> thanks Beat, I should try with other algo reverbs, could you advise some which can hide ER?



*Freeware:*

https://www.kvraudio.com/product/orilriver-by-denis-tihanov
Probably your DAW-Reverb

*Some newer Reverbs at KVR* (Paid, but not very expensive)

https://www.kvraudio.com/product/ircam-verb-session-v3-by-flux (this I know, nice tail-sound)
https://www.kvraudio.com/product/eareverb-2-by-eareckon (this I know, nice tail-sound)
https://www.kvraudio.com/product/eareverb-se-by-eareckon
https://www.kvraudio.com/product/phoenixverb-by-exponential-audio-llc (this I know, nice tail-sound)

https://www.liquidsonics.com/software/illusion
https://klevgrand.se/products/kleverb
... Search at KVR...

Beat


----------



## Dave Connor (May 22, 2019)

Divico said:


> To be honest, putting an EQ before the verb on the aux will do fine.


No need for an additional aux - just use the aux the verb is sitting in - got it.


----------



## Billy Palmer (May 22, 2019)

neblix said:


> For reverbs, I use Seventh Heaven, a fantastic Fusion-IR emulation of the Bricasti M7. So it's convolution based, but lets you manage reflections, decay time, and other neat parameters. My template is set up like this:
> 
> *1*. All tracks by default are set to output to a Null bus (-inf dB, so silent).
> *2*. Five send/busses as follows.
> ...




Awesome work


----------



## Divico (May 22, 2019)

Dave Connor said:


> No need for an additional aux - just use the aux the verb is sitting in - got it.


That's what I'm saying.


----------



## klavaus (May 22, 2019)

Beat Kaufmann said:


> Unfortunately, you can not hide the ER part with Breeze without some tinkering.



It would be very nice if you could explain how to disable the ER in Breeze 2.


----------



## Andrew Souter (May 22, 2019)

klavaus said:


> It would be very nice if you could explain how to disable the ER in Breeze 2.




You can use the Hall Alg-Modes with high Density, a large Pre-delay (25-100 ms or so), and perhaps negative Contour values... This will make Breeze behave as if it were tail-only...

We are almost ready to share with you guys the next step in the evolution of our approach to these topics also, FYI...


----------



## MartinH. (May 23, 2019)

hdsmile said:


> Beat, I just got Reverberate 2 as my main reverb (only for ERs) to use on my group bus channels, but first of all I would like to set it up properly. There are so many different control knobs that I don't understand whether I'm doing it right or wrong. Could you help me set it up correctly?
> ...




Did you RTFM yet? I'm 90% sure that would answer all your questions about those knobs and switches.


----------



## hdsmile (May 23, 2019)

MartinH. said:


> Did you RTFM yet? I'm 90% sure that would answer all your questions about those knobs and switches.



m8, that's the problem with that 90%: RTFM would answer some of the questions, but experience is always another story!!!


----------



## hdsmile (May 23, 2019)

Beat Kaufmann said:


> 3. The big advantage now comes to the end: Because now all the different and still dry room depths (only ERs) are looped through one and the same Reverb with only Tail, so everything is now glued together nicely. Even if the instruments play in different depths the feeling of one concert hall is nicely given. Also the tail volume is the same for close and far away playing instruments which simulates perfectly the reality.



Beat, I just got Reverberate 2 as my main reverb (only for ERs) to use on my group bus channels, but first of all I would like to set it up properly. There are so many different control knobs that I don't understand whether I'm doing it right or wrong. Could you help me set it up correctly?




As the algo reverb for the overall tail insert, I chose EAReverb 2, where I also need help with the settings, but that can be done later. Thanks in advance


----------



## Jerry Growl (May 23, 2019)

Me so wanna Bricasti too!
If you want to know what the knobs do:
https://splice.com/blog/effects-101-reverb-explained/
https://www.harmonycentral.com/articles/understanding-digital-reverb-parameters

or more advanced:
http://downloads.liquidsonics.com/software/reverberate-core/manual/Reverberate_Core_User_Guide.pdf

On True Stereo:

*True Stereo*
The left input channel is convolved with the left and right impulse response file channels from IR1-A and the right input channel is convolved with the left and right impulse response file channel from IR1-B. The two output convolutions’ respective left and right components are then summed into a single stereo output. This configuration is necessary to take full advantage of true stereo impulse responses. True stereo impulse responses are required to be provided as two separate stereo files and loaded into IR1-A and IR1-B (or IR2-A and IR2-B). This configuration is typically found in high-end algorithmic reverbs.
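As a rough illustration of the routing the manual describes (a generic numpy sketch, not LiquidSonics' actual code): the left input is convolved with both channels of stereo IR A, the right input with both channels of stereo IR B, and the two stereo results are summed.

```python
# Minimal true-stereo convolution: two stereo IRs, one per input channel.
import numpy as np

def true_stereo_convolve(inp, ir_a, ir_b):
    """inp: (n, 2) stereo input. ir_a / ir_b: (m, 2) stereo IRs for the
    left and right input channels respectively (IR1-A / IR1-B)."""
    left, right = inp[:, 0], inp[:, 1]
    # Each input channel feeds BOTH output channels through its own stereo IR.
    out_l = np.convolve(left, ir_a[:, 0]) + np.convolve(right, ir_b[:, 0])
    out_r = np.convolve(left, ir_a[:, 1]) + np.convolve(right, ir_b[:, 1])
    return np.stack([out_l, out_r], axis=1)
```

With single-sample "identity" IRs (`ir_a = [[1, 0]]`, `ir_b = [[0, 1]]`) the routing passes the input through unchanged, which is a handy sanity check; real true-stereo IR pairs put energy in all four paths, which is what ordinary stereo convolution cannot capture.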


----------



## Beat Kaufmann (May 23, 2019)

hdsmile said:


> Beat, I got right now Reverberate 2 as main reverb (only for ERs) for use it on (group BUS channels section), but first of all I would like to set it up properly, because there are so many different control knobs that I don’t understand whether I'm doing right or wrong, could you help to set up it correctly?
> 
> 
> 
> ...



Unfortunately, I do not know Reverberate 2, so you are asking a lot of me if I am supposed to explain all the functions of its buttons.

Nevertheless, here are a few hints ...

*Basically*

ER signals are typically 100 ms to about 400 ms long.
If possible, they should run out slowly.
Natural-sounding ER signals are usually those from real room impulse responses.

*Reverberate 2*
It seems to be a nice convolution reverb, which is probably very useful when it comes to inserting ERs.

*1. Search for "good" room impulses*

Go through the entire IR library, set the reverb to 100% wet (Mix knob), and listen for the IRs that sound the farthest away.
Look especially for natural room IRs. Although the real Bricasti M7 produces a nice reverb, it is an algo reverb; IRs from a Bricasti are therefore better suited for tails.
Also make sure you choose impulse responses that do not color the sound too much, but are as neutral as possible.

Maybe Reverberate 2 actually offers ERs. If not, just cut a normal IR down to about 100-400 ms and then fade the signal out. The manual probably (hopefully) explains how to do that in Reverberate.
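The cut-and-fade step can be sketched like this (a generic numpy sketch, not a Reverberate feature; the millisecond values follow the post):

```python
# Keep only the early-reflection part of a full impulse response:
# cut it to 100-400 ms and fade the cut end out so it "runs out slowly".
import numpy as np

def cut_er_from_ir(ir, sr, er_ms=300, fade_ms=100):
    """Keep the first er_ms of the IR, with a linear fade over the last fade_ms."""
    n = int(sr * er_ms / 1000)
    er = ir[:n].astype(float).copy()
    nf = int(sr * fade_ms / 1000)
    er[-nf:] *= np.linspace(1.0, 0.0, nf)  # avoid an abrupt cut at the end
    return er
```

The trimmed IR is then loaded (or used) as the ER-only impulse, and the tail is supplied separately by the algo reverb.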

*About the Settings of Reverberate2*

*Basically, you can leave all the knob settings as they are.*
Under "IR1 EDIT" > "Length" you may have to shorten the time to 100-400 ms with "END".
*In the main menu "Mixer" you will probably find a wet/dry control to increase the amount of wet. This should cause the instruments to move farther away.*
The result should sound similar to this: https://www.beat-kaufmann.com/mixing-an-orchestra/downloads/timpani_close-depth_nur_er.mp3

*About Settings of the Tail-Reverb*
Again, try to use few or no ERs.
Use predelay so that the tail comes in after 70-150 ms.
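If a tail reverb has no predelay control, the same effect can be faked by delaying the wet return (a generic sketch; the timing values follow the advice above):

```python
# Predelay the tail so it starts after the early reflections
# instead of stacking on top of them.
import numpy as np

def apply_predelay(wet, sr, predelay_ms=100):
    """Prepend predelay_ms of silence to the wet (tail-only) signal."""
    pad = np.zeros(int(sr * predelay_ms / 1000))
    return np.concatenate([pad, wet])
```

This belongs on the wet signal only; delaying the dry signal too would just shift the whole mix.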

Beat


----------



## hdsmile (May 23, 2019)

Thank you very much Beat for your detailed explanation, I'm sure others will appreciate it 2!


----------

