
One or more different reverbs in the mix?

pulpfiction

As I make music as a hobby, I naturally have to mix something. However, I am very unsure about reverb.

A professional said that you should definitely use the same reverb for all instruments (the same reverb copied to different buses) and only adjust the pre-delay to the instrument group in order to simulate the position of the individual instruments in the room well. But in his opinion, the room should always be the same (e.g. in Spaces II).
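To make the pre-delay idea concrete: pre-delay roughly corresponds to the time gap between the direct sound and the first reflections, so at roughly 343 m/s every extra metre of reflection path adds about 3 ms. A minimal sketch; the path lengths below are invented for illustration, not taken from any real hall:

```python
# Sketch: converting an extra reflection path length into a reverb pre-delay.
# Rule of thumb: pre-delay ~ the time gap between the direct sound and the
# first reflections, at a speed of sound of ~343 m/s.

SPEED_OF_SOUND_M_S = 343.0

def predelay_ms(extra_path_m: float) -> float:
    """Pre-delay in milliseconds for a reflection path that is
    `extra_path_m` metres longer than the direct path."""
    return extra_path_m / SPEED_OF_SOUND_M_S * 1000.0

# A source near the back wall: reflections arrive almost together with the
# direct sound, so use a short pre-delay. A source close to the listener
# leaves a bigger gap before the room answers.
for name, extra_m in [("front-row violins", 10.0), ("back-row percussion", 2.0)]:
    print(f"{name}: ~{predelay_ms(extra_m):.1f} ms pre-delay")
```

This is why the first professional's advice amounts to adjusting only pre-delay per group: the room stays the same, and only the apparent distance changes.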

Another professional always loaded a different reverb onto the different instrument groups according to his mood (different halls). And finally, to round everything off, he sent everything through a second reverb (which is a no-go, according to the other professional).

I'm a bit confused now. What's the best way to do this?
 
What sort of material were each of these people making? What are you hoping to make?

The first suggests an emphasis on simulating a distinct real space. The second is more common in music production in general, though it sounds as though both are doing orchestral-dominated work.

There is no right answer to this other than what sounds good. Even if you're trying to emulate a symphony recorded in a concert hall, the reverb police aren't going to come round your house if you use more than one in a production. If anything, the first pro is more likely to be wrong: there may be conditions that can't be simulated well using a single space in a single reverb, given the limited number of mic and speaker placements in even something like Spaces (less so with MIR). What the single-reverb approach does offer is relative simplicity: you're not second-guessing yourself all the time while you gain experience. But it shouldn't hold you back from experimenting with techniques from pop production, where it's common to have three reverb buses, each with its own distinct settings, plus things like slapback delays (also used for spatialisation), and where naturalness is the least of anyone's concerns.
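As a toy illustration of the multi-bus idea (not any particular DAW's routing), here's a sketch of a send/return topology where every track is heard dry and also feeds shared reverb buses at its own send level. All track names, bus names and levels are made up:

```python
# Toy sketch of send/return routing: each track is heard dry and also
# feeds one or more shared reverb buses at its own send level.
# Names and numbers are invented for illustration.

def mix_with_sends(tracks, buses):
    """tracks: {name: [samples]}, buses: {bus_name: {track_name: send_level}}.
    Returns (dry_sum, {bus_name: bus_input}) before any reverb is applied."""
    n = max(len(t) for t in tracks.values())
    dry = [0.0] * n
    for samples in tracks.values():
        for i, s in enumerate(samples):
            dry[i] += s
    bus_inputs = {}
    for bus, sends in buses.items():
        buf = [0.0] * n
        for name, level in sends.items():
            for i, s in enumerate(tracks[name]):
                buf[i] += level * s
        bus_inputs[bus] = buf
    return dry, bus_inputs

tracks = {"vox": [1.0, 0.0], "gtr": [0.5, 0.5]}
buses = {"plate": {"vox": 0.8}, "room": {"vox": 0.2, "gtr": 0.3}}
dry, sends = mix_with_sends(tracks, buses)
```

The point of the sketch is only the topology: one dry sum plus N independently fed reverb returns, which is exactly the "three reverb buses" setup in miniature.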
 
What sort of material were each of these people making? What are you hoping to make?
Yes, exactly. Orchestral music.
 
There is no one way to do it; you will get as many possibilities as there are composers. Just try for yourself and listen to what works best to your ears. I personally mainly use three reverbs: one long, one short and one extra, extra long. But that doesn't mean you have to do it as I do or use the same plugins (Spaces 2, Cubase Reverence and Valhalla Supermassive in my case).

I tend (so, not every time) to use short reverbs on instruments in the low register, and short plus long reverbs for the rest. Maybe you could try this approach and adjust it to your taste? Or even scrap it and try a different one.

Anyway, don't overthink this, reverb is important but it's just reverb. Too much reverb? Dial it down or switch to a shorter reverb. Not enough reverb? Do the opposite.
 
I use 5 busses:

3 reverbs: room (Relab LX480E), plate (UAD Pure Plate), and weird (Softube Wasted Space);
2 delays as slap L & R (Baby Audio Comeback Kid).

I went through dozens of reverbs and delays and ended up with the ones above, each for its functionality and sound: the Abbey Road trick on the LX480E, the lushness of the UAD, the crunch of the Wasted Space, and the extra controls (such as panning and filters) on an otherwise simple delay, which was all I needed from the Comeback Kid.

Sometimes I'll whip out a Soundtoys Effect Rack if I want something crazier, like EchoBoy or Crystallizer, or a forever plate with LittlePlate, but that's seldom.

Like others have said, there's no best way to do this, you'll have to experiment and reach a conclusion on what you like best yourself.
 
Neither approach is incorrect because, as @gamma-ut said, it helps to consider the type of music and the intended effect. Typically the first approach, bussing everything to the same reverb, is ideal when you want things to sound as realistic as possible and live in the same room: i.e., orchestral music, or any kind of "live" band music. If that is your intention, it helps to place everything within the same "room", so to speak, by using a single verb which glues them together in that space.

However, the second approach, using multiple different types of verb, is equally valid for other kinds of music (modern production, pop, electronic, hybrid scoring, etc.). With a more creative use of reverb, it's less about placing instruments in the same room and more about depth and character. You might commonly hear a pop song where the vocals have tons of reverb (and delay) and yet the guitars are super dry, the pads have a shimmer reverb, the drums have a gated reverb, etc. None of which would be realistic if you were listening to them as a live band from a seat in a concert hall.

There also exists an in-between, where you do want something to sound like a realistic live performance in the same room, but you still have to use different reverbs, or different levels of reverb, to cheat certain libraries into sounding consistent with others. This comes with playing around and seeing what works. But I hope that helps.
 
As long as every reverb has a job that it’s accomplishing in the context of the mix, you can use as few or as many as you want.

If you have super wet libraries, all recorded in the same or a similar space and with a long tail already, you may need no reverb at all.

Some common orchestral setups are: a single reverb to glue the mix together; a room plus a sweetening reverb; or separate room, hall and tail reverbs.

Here’s an old post with examples of a bunch of different reverbs used with a before and after comparison.

Reverb automation is very common in non-natural, non-orchestral scenarios (pop, rock, EDM, etc.) to emphasize a certain note/phrase/hit. It's less common in orchestral or even hybrid works.

That being said, if you're aiming to get a nice reverb swell in the breaks of sound or only allow the big drum hits to trigger the reverb without muddying up everything else, here are some tricks you can employ.

Reverb ducking:
  1. If a reverb has a built-in "ducker", turn it on. This will lower the volume of the reverb when the dry signal is above a certain threshold, and allow the reverb to "rush in" to fill the space when the dry signal takes a break.
  2. If the reverb does not have a built-in ducker, you can place a compressor behind the reverb and send your summed dry signal bus into the compressor as a side chain. This basically acts like the "ducker" in #1. I really like Sonible's SmartComp for this, due to it being a spectral compressor and only ducking the frequency range that overlaps with the dry signal in this application.
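To show what the ducker/sidechain trick in #1 and #2 actually does to the signal, here's a deliberately crude sketch: an envelope follower on the dry bus, and a gain drop on the wet (reverb) signal while that envelope is above a threshold. The smoothing, threshold and floor values are arbitrary, and a real compressor would ramp the gain smoothly rather than hard-switching it:

```python
# Toy sidechain "ducker": attenuate the wet (reverb) signal while the
# dry signal's envelope is above a threshold. All constants are
# illustrative placeholders, not any plugin's actual settings.

def envelope(signal, smoothing=0.9):
    """One-pole envelope follower over absolute sample values."""
    env, out = 0.0, []
    for s in signal:
        env = smoothing * env + (1.0 - smoothing) * abs(s)
        out.append(env)
    return out

def duck(wet, dry, threshold=0.2, floor=0.25):
    """Drop the wet signal to `floor` gain whenever the dry envelope
    exceeds the threshold, letting the reverb 'rush in' during gaps."""
    env = envelope(dry)
    out = []
    for w, e in zip(wet, env):
        gain = floor if e > threshold else 1.0
        out.append(w * gain)
    return out
```

Run on a dry signal that plays for a while and then stops, the wet output sits at the floor gain during the phrase and returns to full level once the dry envelope decays, which is exactly the "fills in the void once they chill out" behaviour described below.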
Reverb gating:
This is not to be confused with cutting off the reverb tail with a gate, a la the drums from "In the Air Tonight"; here you put a gate in front of the reverb, so only the big hits are allowed through. This is fun when you want a really big, dramatic reverb, but only on the big downbeats, so that your percussion reverb doesn't wash out everything else.
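The gate-in-front-of-the-reverb routing reduces to something very simple in code: samples below the threshold never reach the reverb at all. A toy hard gate (a real gate would add attack/release ramps and hysteresis; the threshold here is arbitrary):

```python
def gate(signal, threshold=0.5):
    """Hard gate placed BEFORE the reverb: pass samples only while the
    input is above the threshold, so only the big hits feed the verb."""
    return [s if abs(s) >= threshold else 0.0 for s in signal]

# Only the loud hits survive to excite the reverb; quiet material stays dry.
to_reverb = gate([0.1, 0.9, -0.7, 0.2])
```

Whatever comes out of `gate` is what you send to the reverb; everything else bypasses it entirely, which is why only the downbeats get the huge tail.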

For examples of the gating I'm referring to, listen closely to the first half of the below video. You'll hear a huge reverb tail on the big drum beats only by using this gating method.

For examples of the reverb ducking, listen starting around 1:39 and focus on the trumpets and horns. You'll hear a really big reverb sound, but it gets out of the way when they're actively playing their stabs and fills in the void once they chill out.



Reverbs used and the jobs they serve (all FX channels have section-level sends from WW, Brass, etc.):
Room Verb FX Channel (0 dB): Cinematic Rooms Pro Studio Hall to glue everything into a single "room"
Hall Verb FX Channel (-12 dB): Cinematic Rooms Pro Subtle Hall with SmartComp for reverb ducking to add a subtle amount of room size
Tail Verb FX Channel (-15 dB): HD Cart Super Smooth with SmartComp for reverb ducking to add a lush long tail
Plate Verb FX Channel (-21 dB): Seventh Heaven Rich Plate (epic perc sends only with gate) with SmartComp for reverb ducking
Strings Bus: SP2016 to increase perceived size of room to match WW, Brass and Perc
Damage 2 Metal Hits: Seventh Heaven Sandors Hall to push it further back and add more room to the metal clangs
Contrabass Trombone: Seventh Heaven Pro Mechanics Hall to make the BRAAM sound bigger and have more room

And since I was harping on the importance of hearing things relatively in the prior post, here is the same song with no reverb at all outside of what was included in the samples themselves.

View attachment FF7R - Let the Battles Begin Medley (No Verb).mp3
 
None of which would be realistic if you were listening to them as a live band from a seat in a concert hall.
Exactly. The thing is that 95% of the music we listen to is mixed in a way that it's not "natural". The first EQ tweak you make on a close-mic'd snare is already game over in that sense. Even a "live recording" is performed live, of course, but usually mixed from multitracks after the performance to sound better. Commercially released orchestral music is no different. And cinematic stuff? I don't think anyone assumes that Dennis Sands and the rest of the gurus are hanging around the studio room just to raise a volume slider when that trumpet solo kicks in.

I've come full circle with this in orchestral music: first doing "whatever sounds good" until the point where I started to realize "shit, I'm probably doing it wrong". Cue eternal impostor syndrome. Then I spent years desperately trying to make everything sound as if the orchestra was recorded live in one take, working with dry mics until madness... and now I'm happily back at square one, but with the knowledge I gained from the realistic approach to utilize when needed.

For me, when working with sample libraries, the depth comes from the mic positions, no matter the space. Every space has a "front" and a "back", and those can be utilized with mic positions far more than you would assume if you only think of the size of the recorded room. I tweak the mics to place the section/player where I want it on the Z axis, adjust gain and pan, and I feel that when that's done right, it's already half mixed and three-quarters positioned. Then it's more about the common space and shared reverb, which is basically a walk in the park when the prerequisites are already pretty much met.
 
Exactly. The thing is that 95% of the music we listen to is mixed in a way that it's not "natural".

For me, when working with sample libraries, the depth comes from the mic positions, no matter the space.
Yeah, for that reason I pretty much go with the hybrid reverb approach even with orchestral music: you often have to cheat things to sound their best in recordings (and yep, actual professional orchestral recordings are edited from many comped takes and mic positions, so that's how the sausage gets made, haha). Like you, I also try to work first from the mic positions, and of course from the orchestration itself. If I really need to hear that clarinet solo louder, I have no problem cheating the close mics up in the mix in a way that sounds a little drier, and therefore "unrealistic". After all, music production (and movies too, if that's your bag) is all about using a little post-production artifice to tell the best story, perfect realism be damned ;)
 
When I start a project, I always start without a template. I have set up shortcuts in Reaper so that I can get to the instruments I need without one.

To have a good sound while composing/creating music, I always leave the default reverb on in ComposerCloud and switch between Soft, Classic and Epic if necessary. These settings change the microphone positions and the reverb at the touch of a button to fit the respective description, e.g. Epic. These shortcuts make my work much easier, especially because fine mic settings can be adjusted very quickly and easily. Of course, I sometimes change a few controls if I want something specific, but I do this while composing (on the fly, not when mixing).

Do you have any recommendations on how to achieve a good mix for orchestras based on this (each instrument with its standard reverb)?

Should I dry out all instruments before mixing and then add the reverb for the individual groups again (not so many different ones)? Or, from your point of view, would it also be a good option to leave all the standard reverbs in the final mix as I used and adjusted them while composing?

Of course I know the top rule: if it sounds good, it's right...
Still, I would love to hear your opinions on this...
 
Should I dry all instruments before mixing and then add the reverb for the individual groups again (not so many different ones)?
The main problem with "dry" and "close" mics is that many of them are not intended to be used on their own. If you try wet libraries like SSO or OT, you'll notice that the close mics sound too in-your-face, too thin, and lacking richness in the lower mids and bass.

I once tried using just the close mics and Samplicity's Berlin Studio. While BS did a great job in general, it didn't sound as full compared to a wet library mix with a small amount of Berlin Studio.

My approach is usually to match everything to the wettest library and then create four different depths of one matching reverb. So I've personally built an AIR Lyndhurst preset for Valhalla Room and apply that to my bone-dry VSL and not-so-dry VSL Synchron libraries in varying amounts.
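The "one matching reverb at several depths" idea can be written down as a tiny table: the same preset reused per depth row, varying only the send level and pre-delay. All numbers here are invented placeholders, not VSL's or anyone's actual settings:

```python
# Toy sketch of "one reverb, four depths": the same reverb preset reused
# per depth row, varying only send level and pre-delay. Every value below
# is an invented placeholder for illustration.

DEPTHS = {
    "solo":   {"send_db": -20.0, "predelay_ms": 28.0},
    "front":  {"send_db": -16.0, "predelay_ms": 20.0},
    "middle": {"send_db": -12.0, "predelay_ms": 12.0},
    "back":   {"send_db":  -8.0, "predelay_ms":  4.0},
}

def send_gain(row: str) -> float:
    """Linear send gain for a depth row (dB to linear)."""
    return 10 ** (DEPTHS[row]["send_db"] / 20.0)
```

Instruments further back send more into the shared verb with less pre-delay, and drier libraries get more of the same reverb than wet ones, so everything converges on the wettest library's room.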

One single Seventh Heaven reverb goes on everything: just a little hint, and without any early reflections.

I think the approach by @Henu is super awesome and I feel really stupid, not even thinking of that.
 
Here are two more cents from me to help you understand why professionals use different reverb effects in a mix.

Reverb tail and spatiality are different things:
A good reverb effect can basically do two things, which is particularly useful when mixing orchestras:

1. It can make instruments sound like they are at the front or the back of the room, with virtually no reverb tail.
Example (without reverb tail)

2. It can give instruments more or less of a reverb tail without pushing them forwards or backwards in the room at the same time. This is good for giving all the instruments that got their depth from No. 1 some additional tail.
Example (even though the solo cello got a reverb tail, it still stays in front)
-----------------------------------------------------

Once you recognize these two possibilities, it makes perfect sense to use different reverb effects: some can do No. 1 better, others No. 2.
Unfortunately, most reverb effects can only do 1) and 2) at the same time, not separately.
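The two jobs above can be sketched separately in code: No. 1 is a handful of discrete early-reflection taps (a positional cue with no audible tail), No. 2 a dense, exponentially decaying tail. The tap times, gains, decay and sample rate below are made up for illustration, not taken from any real reverb:

```python
import random

def early_reflections(x, taps=((0.011, 0.6), (0.023, 0.4), (0.031, 0.3)), sr=1000):
    """No. 1: a few discrete early-reflection taps push a sound 'back'
    in the room without adding an audible tail. Tap times in seconds."""
    out = [0.0] * (len(x) + 40)  # headroom for the latest tap
    for i, s in enumerate(x):
        out[i] += s  # direct sound
        for t, g in taps:
            out[i + int(t * sr)] += g * s
    return out

def diffuse_tail(x, length=200, decay=0.97, sr=1000):
    """No. 2: a dense, decaying noise 'tail' adds wash without the
    positional cue that early reflections provide."""
    random.seed(0)  # deterministic toy impulse response
    ir = [random.uniform(-1, 1) * decay ** n for n in range(length)]
    out = [0.0] * (len(x) + length)
    for i, s in enumerate(x):
        for n, h in enumerate(ir):
            out[i + n] += 0.05 * h * s
    return out
```

A reverb that exposes early reflections and tail separately lets you dial in depth (No. 1) and wash (No. 2) independently; one that bakes them together always moves both at once, which is the limitation described above.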

Beat
 
Yay, I'm awesome!!!! :dancer: Out of interest though, what in that approach did you find good but hadn't thought of before?
You are! :whistling:

Well, using the mic positions to place an instrument in the depth field of the room. I usually just pre-set the mic positions so that I achieve a "good" sound, but I never used them for depth placement, as I always thought: "Ah well, the depth is baked into the samples, no way to change the depth placement."

I guess I know what I've got to try out in the evening.
 
Ah, I see! I think "striving for the good sound" still works nicely on solo and exposed things, but definitely try the positioning on the stuff you want to be more cohesive with the room!
 