
How do you use Your reverb on orchestral samples?

camelot

Member
As a send, it will sit on its own FX channel, which you can mix individually. Furthermore, you can set the dry/wet ratio for each channel of the group individually, using the send for the wet component and the channel volume for the dry. This is in contrast to a reverb inserted on a group, where the ratio is fixed for all of them.
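The send-vs-group distinction can be sketched numerically. A toy numpy sketch, where `simple_reverb` is a crude feedback delay standing in for a real plugin:

```python
import numpy as np

def simple_reverb(x):
    # Crude feedback-delay "reverb" -- a stand-in for a real plugin.
    y = np.copy(x)
    delay, gain = 50, 0.5
    for i in range(delay, len(y)):
        y[i] += gain * y[i - delay]
    return y

rng = np.random.default_rng(0)
violins = rng.standard_normal(1000)
horns = rng.standard_normal(1000)

# Send setup: the channel fader sets each dry level, the send amount sets
# each wet level -- so every channel gets its own dry/wet ratio.
mix_send = (0.9 * violins + simple_reverb(0.2 * violins)   # drier violins
            + 0.6 * horns + simple_reverb(0.5 * horns))    # wetter horns

# Insert-on-group setup: both channels share one reverb and one dry/wet
# ratio -- the ratio is fixed for the whole group.
group = 0.9 * violins + 0.6 * horns
mix_group = 0.7 * group + 0.3 * simple_reverb(group)
```

The two mixes differ precisely because the send setup lets the violins and horns carry different wet amounts, which the single group insert cannot do.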
 

Shredoverdrive

Active Member
- I have every instrument routed through a single instance of VSS2 to get the rough stage/panning correct.
Oh my, it never occurred to me that I could do that! Since VSS2 has presets for individual instruments, I thought I could not route every one through a single instance of VSS2. I really am dumb sometimes. VSS2 is not a CPU hog at all, but I still must try this this weekend. Thanks for the idea.
Apart from this slight difference, I have the same setup as you, @BenG . I got inspiration for it from a post by @bennyoschmann a while ago.
VSS2 with instrument presets for VSL, East West (or adapting them when they don't exist, as for my beloved CH stuff or others), routed by section to a QL Spaces verb. I have been trying to add a very slight UVI Sparkverb algo glue on top of all that, but I'm not convinced so far.
 

Divico

Senior Member
I've always wondered: does, for example, Hans Zimmer @Rctec use additional reverb on his orchestra after he's recorded somewhere like AIR? I guess it's kind of similar to using samples, without the additional build-up of noise/reverb from the millions of mics you're using when playing in samples.
Alan Meyerson has done a lot of his mixing. I found some tips about his reverb settings:
“I also don’t like to EQ reverb returns. But I do roll off the low and high frequencies to the send of the reverb. This is important so that the reverb doesn’t become cluttered. There usually is a lot of low mid and low-frequency buildup that happens in a reverb. I don’t want to add to that with sounds that have it in the first place.”

“I also use multiple reverbs per stem and have the stem reverbs independent. So, my strings would have a set of reverbs, brass another set, percussion another etc. This gives me a lot of space to play with and build the movement. I don’t like having a static mix. Music has to move and the instruments have to play in the space and interact. So multiple reverbs help me with this.”

“I almost never send reverbs from the spot mics unless it is really needed”

“Now, when I send to the reverb aux, I send it from my room mics. I get a lot of body from them and use that to extend the room rather than trying it with the spot mics, because they don’t usually glue well in the mix that easily as they have more mid range content.”

“I sometimes add a devil loc or a decapitator in the return of the front or surround reverb just to give it a bit of grit and definition”

“I don’t use the pre-delays on the rooms and never use it to place the instrument in space. I use them only to get a definition for the reverbs. And for a long time, the New York pop world pre-delay value was 120 ms. That is what I use for most of the pre-delays to get that delay a bit separated from the main sound if I want it as an effect in a dense mix.”

Also, if I am not mistaken, he is a big Bricasti fan and uses a specific setup for both left and right.
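Two of Meyerson's habits quoted above — band-limiting the signal going into the reverb and using a 120 ms pre-delay — can be sketched with toy one-pole filters in numpy. The corner frequencies here are illustrative guesses, not his actual settings:

```python
import numpy as np

SR = 48000

def one_pole_lowpass(x, cutoff_hz):
    # First-order smoothing filter -- a crude stand-in for a proper EQ.
    a = np.exp(-2 * np.pi * cutoff_hz / SR)
    y = np.zeros_like(x)
    prev = 0.0
    for i, s in enumerate(x):
        prev = (1.0 - a) * s + a * prev
        y[i] = prev
    return y

def band_limit_send(x, lo_hz=250.0, hi_hz=8000.0):
    # Roll off lows and highs going INTO the reverb so low-mid buildup
    # never reaches the tail. Corner frequencies are illustrative only.
    highpassed = x - one_pole_lowpass(x, lo_hz)   # remove lows
    return one_pole_lowpass(highpassed, hi_hz)    # remove highs

def pre_delay(x, ms=120.0):
    # 120 ms pre-delay: push the wet signal later so the reverb separates
    # from the direct sound in a dense mix.
    n = int(SR * ms / 1000.0)
    return np.concatenate([np.zeros(n), x])[: len(x)]

rng = np.random.default_rng(1)
room_mics = rng.standard_normal(SR)                # 1 s of stand-in audio
to_reverb = pre_delay(band_limit_send(room_mics))  # feed this to the verb
```

In a DAW this corresponds to an EQ (or the reverb's own input filters) plus the pre-delay knob on the reverb itself; the code only makes the signal flow explicit.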
 

BenG

Senior Member
Oh my, it never occurred to me that I could do that! Since VSS2 has presets for individual instruments, I thought I could not route every one through a single instance of VSS2. I really am dumb sometimes. VSS2 is not a CPU hog at all, but I still must try this this weekend. Thanks for the idea.
Apart from this slight difference, I have the same setup as you, @BenG . I got inspiration for it from a post by @bennyoschmann a while ago.
VSS2 with instrument presets for VSL, East West (or adapting them when they don't exist, as for my beloved CH stuff or others), routed by section to a QL Spaces verb. I have been trying to add a very slight UVI Sparkverb algo glue on top of all that, but I'm not convinced so far.
Correct me if I'm wrong, but doesn't VSS2 automatically load everything into one instance where all the instruments appear on a single 'stage'?
 

Shredoverdrive

Active Member
Well, it shows them all for sure, but it never occurred to me that it applied to all the instruments in one instance. I thought it was just to keep track of the whole picture.
 

Beat Kaufmann

Active Member
I know this wasn't directed at me but I thought I'd try to sneak in a question anyway :)

What (if any) difference would you say it is between placing the ER portion of a reverb as an insert on a group channel as opposed to on a send?

Thanks by the way for all your contributions on this forum. :)
Thanks for the nice words, Bear Market!

So, first we have to say that the result is what always counts. That's why there is no single "right" procedure.
Nevertheless, my proposed system (depths in group channels) has three main plus points:

1. Acoustically creating room depth usually requires more effects than just a reverb. An EQ, for example, helps to better simulate the distance of instruments. Since all instruments come with samples at the same volume (alto flute ... bass drum), you always have to amplify the instruments in depth 3 with a compressor and the like, because they end up even weaker (the EQ cuts the high frequencies...).
All these procedures can be handled conveniently in each bus channel, optimizing every room depth perfectly. With more or less "send" into one single reverb, this is not possible in the same way.

2. If you have a 4th bus (without any effect), you can collect there all the sampled instruments that already have a room depth baked in.

3. The big advantage comes at the end: because all the different, still-dry room depths (ERs only) are looped through one and the same reverb with tail only, everything is glued together nicely. Even if the instruments play at different depths, the feeling of one concert hall is nicely preserved. Also, the tail volume is the same for close and far-away instruments, which matches reality perfectly.

This system is almost always successful, especially with larger mixes. Maybe you can count that as a further plus.
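The three plus points describe a specific signal flow: per-depth busses carrying EQ and early reflections, summed into one shared tail reverb. Here is a toy numpy sketch of that routing only; all the DSP is a crude stand-in for real EQ/ER/tail plugins, and the numbers are arbitrary, not Beat's settings:

```python
import numpy as np

SR = 48000

def high_cut(x, amount):
    # Crude distance EQ: a moving average smooths highs; more depth = duller.
    k = 1 + 2 * amount
    kernel = np.ones(k) / k
    return np.convolve(x, kernel, mode="same")

def early_reflections(x):
    # Toy ER: a few discrete taps; a real IR trimmed to the ER region goes here.
    y = np.copy(x)
    for delay, gain in [(400, 0.5), (900, 0.35), (1500, 0.25)]:
        y[delay:] += gain * x[:-delay]
    return y

def tail(x):
    # Toy shared tail reverb; ONE instance glues every depth together.
    y = np.copy(x)
    for i in range(2000, len(y)):
        y[i] += 0.4 * y[i - 2000]
    return y

def depth_bus(x, depth):
    # depth 0..1: more depth = duller EQ, quieter dry, wetter ER.
    eq = high_cut(x, int(depth * 8))
    return (1 - 0.7 * depth) * ((1 - depth) * eq + depth * early_reflections(eq))

rng = np.random.default_rng(2)
flute, brass, timpani = (rng.standard_normal(SR // 4) for _ in range(3))

# Three depth groups, then one common tail over the sum.
mix = tail(depth_bus(flute, 0.2) + depth_bus(brass, 0.5) + depth_bus(timpani, 0.9))
```

The key structural point is the last line: the depth busses differ per section, but every one of them feeds the same single tail.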

--------------------------------------------------
That's what the text above describes:

And that's how it can sound: Example
Observe that the tail is always the same. It is only the distance of the instrument that changes. Are you able to reach this result by using "send"?

All the best
Beat
 

Zoot_Rollo

Throbbing Member
What (if any) difference would you say it is between placing the ER portion of a reverb as an insert on a group channel as opposed to on a send?
that's the way i do it, especially if the instruments are not "in situ".

i'll use EAReverb 2 or Panagement/low CPU reverb on each track for placement and ER,

then route to a bus with a send to an LR reverb.
 

neblix

Music, Math, Cats
For reverbs, I use Seventh Heaven, a fantastic Fusion-IR emulation of the Bricasti M7. So it's convolution based, but lets you manage reflections, decay time, and other neat parameters. My template is set up like this:

1. All tracks by default are set to output to a Null bus (-inf dB, so silent).
2. Five send/busses as follows.
  • Dry - This is a 0 dB unity gain bus by default with nothing on it.
  • ER - This has a subtle low roll-off into a reverb configuration set only to early reflections.
  • LR - This has a less subtle low roll-off into the identical reverb configuration, but set instead to late reflections.
  • Amb - This is like the LR send, but it has some more creative effects like Valhalla Shimmer adding a spacey, lasting shine. I don't write strictly traditional acoustic orchestration so this is a personal thing.
  • Sub - This is like the Dry send, but it has a low pass at around 100 Hz. This send is to artificially increase and manage the low end of my instrument tracks. In this send I could do things like stereo field management, compression, automation, etc.
3. All tracks in my DAW have the send faders available so I can control the blend on every element in my mix, right in the DAW mixer without opening plugins. I can place a choir further back at say, 10% Dry - 30% ER - 60% LR. For soloists, give them Dry detail and some Amb to add space without pushing them back into the room. It's case by case; this template is about allowing me easy access to use my ears and adjust things, not necessarily about pre-mix ideologies.

3 (sub). I have a few reasons to use a Dry send and not simply output the track to Master. One is that it allows me independent control of the track fader (which controls all of these sends together, because they're post-fader sends) vs. controlling the level of dry signal for blending. In other words, without doing this, I'd have to use the track fader to control Dry signal, making it useless for general mix adjustments and automation, and then changing the other sends to pre-fader so I can pump more signal into them should the Dry have to be really low (for spacier/further sounds).

Another reason is now I can process the detailed, clear parts of my mix without also processing the reverb. There are relatively few cases where I actually do this, but it's useful in some circumstances. One time, I took the Dry signal and used it as a key input to a sidechain, so that I was driving the master compressor only by detailed information and none of the buildup from room sound. I like to experiment with unconventional mixing techniques, sometimes it pays off.

EDIT: Worth mentioning, the Sub bus is completely dry, so if I shove something in the back of the room, like the string section, I can still steal their dry low end. It's one of those "larger than life" approaches to mixing.

4. All sample libraries have all mic positions unloaded except the close positions. Exceptions are made occasionally for drum overheads/rooms, or for certain libraries where I like the smoother sound of a slightly farther position (like Tree mics in Spitfire libraries). This isn't a hard-and-fast rule, but it's an important starting point for the most efficient RAM usage and the easiest mixing process. Having all close mics and managing reverb through just a small number of plugins is not only incredibly efficient, it sounds way better than anything I used to do before, and blending libraries from different developers becomes completely seamless.
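The blend-by-sends idea in the steps above can be sketched as post-fader send math. A toy numpy sketch, with tap-delay "reverbs" standing in for the ER and LR configurations of a real plugin (Seventh Heaven in the post); the percentages mirror the 10/30/60 choir example:

```python
import numpy as np

SR = 48000

def reverb(x, early_only):
    # Toy stand-in for one reverb configuration; early_only switches
    # between the ER and LR flavours of the same config.
    y = np.zeros_like(x)
    taps = [(300, 0.6), (800, 0.4)] if early_only else [(3000, 0.5), (7000, 0.3)]
    for delay, gain in taps:
        y[delay:] += gain * x[:-delay]
    return y

def place(track, fader, dry, er, lr):
    # Post-fader sends: the track fader scales ALL sends together, while
    # the per-send amounts set the blend (e.g. 10% Dry / 30% ER / 60% LR).
    x = fader * track
    return dry * x + er * reverb(x, early_only=True) + lr * reverb(x, early_only=False)

rng = np.random.default_rng(3)
choir = rng.standard_normal(SR // 4)     # 0.25 s of noise standing in for audio
soloist = rng.standard_normal(SR // 4)

# The choir sits far back; the soloist keeps dry detail up front.
mix = (place(choir, 0.8, dry=0.10, er=0.30, lr=0.60)
       + place(soloist, 0.9, dry=0.80, er=0.15, lr=0.05))
```

Because `fader` multiplies the signal before the sends, pulling a track fader down lowers dry and wet together, exactly the post-fader behaviour the post relies on.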

Here is an example of a song I mixed utilizing this workflow. There are sample libraries from 4-5 different developers here, yet it's not even a thing to consider when mixing using this approach. There's barely any EQ happening because they're quality libs, just gentle filtering to control ranges.

Excuse the scratch composition; additionally, this demo was made before I got Seventh Heaven. It's the TSAR-1 from Softube. It would probably sound even better if I replaced the reverb config.


Here is an "alternate mix", where I arbitrarily changed the positioning of elements. I brought the choirs closer and moved the strings back for a more intimate sound, and this was done purely through managing the Dry, ER, and LR faders on those elements (celeste track, string bus, choir bus).


Here's a snippet of the Menuet from Ravel's Le Tombeau de Couperin. I just snagged the MIDI online from somewhere, so the sample sequencing probably isn't the greatest. But this is demonstrating combining Spitfire Strings and Berlin Woodwinds, and it's totally seamless.


EDIT: Make sure you know if your sends are pre or post-pan. Mine are pre-pan, which renders the pan control on mixer tracks unfortunately useless. I simply instead use the pans on the send faders or have a pan effect in the FX chain of the element. If you can make your sends post-pan, that's even better.
 

MartinH.

Senior Member
Do you guys worry about or see a benefit in designing your reverb setups with easy stem export in mind? E.g. if you were building a template for a job that will require delivery of stems for different instrument sections, would you try to make it so you can just export the master plus all separate stems in one go with reverb in the stems already?
(apologies if I have messed up some of the terminology, I've never actually worked that way so far)


@Beat Kaufmann: Thanks a lot for the in depth explanations and examples that you always give! I've learned a lot already from your posts on this forum.
 

98bpm

Member
And that's how it can sound: Example
Observe that the tail is always the same. It is only the distance of the instrument that changes. Are you able to reach this result by using "send"?

All the best
Beat

I hope you don't mind my asking, but in your audio example, it sounded as if the instrument was moving farther away from the listener in real time. I think I understand the concept of creating space/depth by placing ERs on group channels at differing values to emulate depth. But how did you move the instrument in the example through those depths in real time?
 

Beat Kaufmann

Active Member

And that's how it can sound: Example
.... But how did you move the instrument in the example through those depths in real time?

Hi
In order to simulate a large depth of space, you must first find an impulse response that lets the instruments sound far away at 100% wet. Search for this in your IR library.
If you look at the scheme at the top, you can adjust the distance with the wet/dry slider in the depth groups. Depth 1 contains more of the dry signal ... depth 3 contains more of the wet signal. Tail: wet to taste.

The sound example shows how it sounds when you pull the ER knob from dry to wet in a depth bus. I did this with a controller curve which controlled the dry/wet parameter. It shows the large range of different depths you can achieve with a good IR (shortened to the ER area, so without tail).
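The controller-curve trick described here — riding the ER dry/wet to move an instrument back in real time — can be sketched as a simple crossfade. The ER "reverb" below is a toy tap delay standing in for a tail-less IR:

```python
import numpy as np

SR = 48000

def er_reverb(x):
    # Toy ER-only "IR": a few taps standing in for a convolution reverb
    # whose impulse response is trimmed to the early reflections, no tail.
    y = np.zeros_like(x)
    for delay, gain in [(500, 0.6), (1200, 0.45), (2200, 0.3)]:
        y[delay:] += gain * x[:-delay]
    return y

rng = np.random.default_rng(4)
dry = rng.standard_normal(SR)          # one second of stand-in source audio
wet = er_reverb(dry)

# Controller curve: wet/dry ramps from fully dry to fully wet over the
# clip, which is what makes the instrument seem to walk away.
curve = np.linspace(0.0, 1.0, len(dry))
moving = (1.0 - curve) * dry + curve * wet
```

At the start of the clip `moving` is the dry signal; by the end it is fully wet, so the apparent distance changes continuously over time.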

All the best
Beat
 

hdsmile

Member

And that's how it can sound: Example
Beat, I never understood how to properly configure the reverbs. Maybe you can show an example of settings for both reverbs? A few example pics would be great :) As convolution reverb I use Spaces II, and for the algo reverb, 2C-Audio Breeze2.

here is my example of settings for Spaces II, correct me if I'm wrong
Pic_spaces_2.gif
 

Beat Kaufmann

Active Member
Beat, I never understood how to properly configure the reverbs. Maybe you can show an example of settings for both reverbs? A few example pics would be great :) As convolution reverb I use Spaces II, and for the algo reverb, 2C-Audio Breeze2.
Although I mentioned in a previous thread that natural impulse responses are usually more suitable for creating room depths, there are meanwhile also algorithmic reverbs that can create beautiful room depths. An example of this is the EAReverb shown above.

Here's what you want to have - just with "your" BREEZE2 from 2CAudio.

--------------------------------------

Sorry for advertising - but maybe that helps solve many of your problems as well:
We all spend a lot of money on audio plugins. Unfortunately, we often don't know how to use them. That's why I wrote the tutorial "Mixing an Orchestra"... for half the price of the plugins ;).

All the best
Beat
 

hdsmile

Member
Here's what you want to have - just with "your" BREEZE2 from 2CAudio.
It's not exactly what I was asking for: that only explains how to create depth with BREEZE2 alone, but I need a setup example for using a pair of reverbs, convolution + algo, like in your picture example above.
Because I can create pretty great depth with Spaces II, but as soon as I turn on the algo reverb (on the master channel) after Spaces II, the sound deteriorates.
 

neblix

Music, Math, Cats
This send is to artificially introduce some serious phase cancellations and troubles.
If you double a signal, it doesn't phase cancel. The phase correlates. This is basic signal theory. Perhaps you should ask questions instead of conjecturing on something you haven't tried or seen for yourself. I'd be happy to explain how everything works in further detail. I am not happy to see people being smartasses.
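The doubling claim is easy to check numerically. A minimal numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.standard_normal(48000)

doubled = x + x   # an exact duplicate: same polarity, zero delay

# Correlation of a signal with its own copy is exactly +1...
corr = np.corrcoef(x, doubled)[0, 1]

# ...and the sum is twice the amplitude (+6 dB), not a cancellation.
gain_db = 20 * np.log10(np.abs(doubled).max() / np.abs(x).max())
```

Cancellation only enters the picture once the copy is delayed or polarity-flipped relative to the original, which a simple parallel send does not do.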

Never mind the fact I provided a very long informative post about my reverb send workflow, which is on topic, and you respond with a sarcastic gripe about a tiny part of the workflow that I provided the least detail on because it only has a tangential relevance. I'm not recommending people follow and recreate my template. The OP asks how people use reverb on their orchestral instruments, so I provided an answer.
 