# Reverb on each track vs. Reverb groups



## Voider (Feb 20, 2021)

Fellow composers, 
how do you approach reverb, especially in productions that have orchestral parts but aren't classical, e.g. hybrid or cinematic music?

Do you find it better to group instruments into the same reverb (for instance, based on their distance to the listener) for a more authentic feel that they're all playing in the same room, or do you believe it doesn't really matter and you could get along just fine by adding an individual reverb to each mixer track and tweaking it as desired?

I'm looking forward to your opinions!


----------



## MusiquedeReve (Feb 20, 2021)

Voider said:


> Fellow composers,
> how do you approach reverb, especially in productions that have orchestral parts but aren't classical, e.g. hybrid or cinematic music?
> 
> Do you find it better to group instruments into the same reverb (for instance, based on their distance to the listener) for a more authentic feel that they're all playing in the same room, or do you believe it doesn't really matter and you could get along just fine by adding an individual reverb to each mixer track and tweaking it as desired?
> ...


This topic is relevant to me 
Subbed


----------



## chocobitz825 (Feb 20, 2021)

Wouldn't this come down entirely to creative choice rather than best practice? If you're not going for realism, but rather something creative, anything goes. I just did a song with piano, viola, Nagoya harp, and some percussion. While the percussion and piano share the same reverb bus, I wanted a broad, spacier, more romantic verb for the viola parts and something a little more subtle for the Nagoya harp, so those had reverbs set on the individual tracks. Since it wasn't about simulating what's real, I felt fine using different verbs and blending them. In any case where I'm trying to achieve something organic and "acoustic", I would probably stick to far fewer reverbs and instead try to simulate them all being in the same room.


----------



## iaink (Feb 20, 2021)

For my orchestral template, I use buses to group things in VEP, route the buses to each output, then send each group to the same reverb in Cubase.

The main advantage I find is that it's very easy to adjust or switch the global reverb. The groups might have slightly different send levels depending on the library, etc.

But using inserts on each group (or track!) would make it too time-consuming to adjust.

For synths I'll usually use inserts instead. And if I need special effects on something, I might keep it off the send and use the needed inserts either in VEP or Cubase.
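The group-to-shared-reverb routing described above can be sketched in a few lines. This is a toy illustration, not a real DAW signal chain: the group signals, send levels, and one-bar impulse response are all invented for the example, and the "reverb" is reduced to a plain convolution (assuming NumPy).

```python
import numpy as np

def apply_reverb(signal, impulse):
    """Toy reverb: convolve the dry signal with an impulse response."""
    return np.convolve(signal, impulse)

# Hypothetical dry group buses with per-group send levels: (audio, send).
groups = {
    "strings": (np.array([1.0, 0.5, 0.25]), 0.40),
    "brass":   (np.array([0.8, 0.6, 0.10]), 0.25),
}

impulse = np.array([1.0, 0.3, 0.1])  # made-up three-tap impulse response

# One shared reverb: sum the weighted sends, run the reverb once.
send_sum = sum(audio * send for audio, send in groups.values())
wet = apply_reverb(send_sum, impulse)

# Final mix: dry groups plus the single shared wet return.
dry = sum(audio for audio, _ in groups.values())
out = np.concatenate([dry, np.zeros(len(wet) - len(dry))]) + wet
```

The point of the architecture shows up in the last few lines: swapping the global reverb means changing `impulse` in one place, while per-group balance lives entirely in the send values.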


----------



## Akarin (Feb 20, 2021)

For SFX, reverb is part of the sound. Like a big hit in a cavern. I use an insert for that. For all the rest, I will use reverb busses with sends. Usually one for the ERs with a convolution and one for the tail using an algorithmic one. 

Here's my process:


----------



## BassClef (Feb 20, 2021)

Hobbyist here... It seems somewhat common to group instrument sections (strings, brass, winds, etc.) and then send those groups (not each instrument track) to your reverb aux buses. This way you can easily and quickly change the reverb send for all of your strings, etc.

I do not do this, because I want to maintain individual control over the reverb of EACH instrument. Not all of my string samples need the same reverb amount. I do set up groups in Logic for my orchestral sections (one for hybrid/synth instruments as well), but only for organizational purposes. I usually set up only two reverb aux tracks: one convolution with a "stage" preset (maybe a 1.8-second tail) and a second, algorithmic one with a longer-tail "hall" preset. (I do tweak those presets.)

Every instrument track is sent to those two reverb aux buses so that I can individually control the amount of each reverb it gets. I will usually use lower send values for very wet instruments (like Spitfire samples from AIR Studios) and often for shorts. Hybrids and synths often get much less reverb.

If I did want to change the reverb send for an entire section, I can quickly select all of my strings with a simple "click/drag" mouse action and then adjust the send of all those tracks at once.

I have experimented with using different reverbs for each section. I was using EW Spaces II, which has different presets for each orchestral section... pretty cool... I reduced the tail on each and had different delays and other settings, then added a long "tail only" algorithmic reverb on the master bus. BUT, in the end, that did not seem to make any audible difference over using a single reverb and varying the amounts for your different sections to create depth. (My 70-year-old ears are not what they used to be.)

Of course, all of this is adjusted per project, and I have no idea if this is good or bad practice, but it's working for me.
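The per-track approach described above (every track sends, at its own level, into two shared reverb returns) can be sketched the same way. All track names, signals, and send values here are invented for illustration; the "stage" and "hall" reverbs are reduced to made-up toy impulse responses, and the processing assumes NumPy.

```python
import numpy as np

def convolve_reverb(sig, ir):
    """Toy reverb: convolve a signal with an impulse response."""
    return np.convolve(sig, ir)

stage_ir = np.array([1.0, 0.4])       # short "stage" reverb (toy IR)
hall_ir  = np.array([1.0, 0.6, 0.3])  # longer "hall" reverb (toy IR)

# Hypothetical tracks with individual send levels: (audio, stage, hall).
# Wet libraries get lower sends; synths get less of everything.
tracks = {
    "wet_strings": (np.array([1.0, 0.2]), 0.15, 0.25),
    "dry_winds":   (np.array([0.7, 0.5]), 0.40, 0.35),
    "synth_pad":   (np.array([0.3, 0.3]), 0.10, 0.10),
}

# Each shared reverb still runs only once, on the summed sends.
stage_in = sum(a * s for a, s, _ in tracks.values())
hall_in  = sum(a * h for a, _, h in tracks.values())

stage_wet = convolve_reverb(stage_in, stage_ir)
hall_wet  = convolve_reverb(hall_in, hall_ir)
```

This keeps per-instrument control (three send values per track) while still only paying for two reverb instances, which is the trade-off the post describes.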


----------



## MusiquedeReve (Feb 20, 2021)

BassClef said:


> Hobbyist here... It seems somewhat common to group instrument sections (strings, brass, winds, etc.) and then send those groups (not each instrument track) to your reverb aux buses. This way you can easily and quickly change the reverb send for all of your strings, etc.
> 
> I do not do this, because I want to maintain individual control over the reverb of EACH instrument. Not all of my string samples need the same reverb amount. I do set up groups in Logic for my orchestral sections (one for hybrid/synth instruments as well), but only for organizational purposes. I usually set up only two reverb aux tracks: one convolution with a "stage" preset (maybe a 1.8-second tail) and a second, algorithmic one with a longer-tail "hall" preset. (I do tweak those presets.)
> 
> ...


How do you "glue" it all together in the overall mix? A master bus reverb?


----------



## BassClef (Feb 20, 2021)

I don't need glue. All instruments are in the same room (convolution "stage" reverb but with varying levels) and all getting the same algorithmic "hall" reverb, but in varying levels. Of course I disable ANY reverb that is in the sample library's GUI.


----------



## easyrider (Feb 20, 2021)

I send so therefore I am....


----------



## Nate Johnson (Feb 20, 2021)

If you've got the CPU juice to experiment with verbs per instrument, try it! I think it's totally unnecessary, though. I just use a single verb bus and send whatever instruments I want to it, in varying amounts. Real verb, fake verb, whatever version of reality I want. My work is very much non-traditional and usually fairly detached from reality.


----------



## MusiquedeReve (Feb 20, 2021)

This blog post hit the spot:



Orchestral Positioning: Reverb in practice


----------



## BassClef (Feb 20, 2021)

MusiquedeReve said:


> This blog post hit the spot:
> 
> 
> 
> Orchestral Positioning: Reverb in practice


Good article; I read it sometime last year, and that's sort of what I was doing. But now I'm back to my old way, as stated above.


----------



## pondinthestream (Feb 20, 2021)

I just do whatever seems appropriate for the spatial effect I want: is it meant to simulate a real space with real sound sources, or something else? Once you decide that, it's just a matter of thinking through the instrumentation, how the instruments are positioned, and the baked-in sound of the libraries. Maybe you want different ERs here and there plus a master bus reverb. Maybe you want a mix-and-match approach tailored to the characteristics of lots of different libraries. Maybe not.


----------



## MauroPantin (Feb 21, 2021)

I have two verb aux tracks that my entire template feeds into. When it's time for stems, a script makes sure each stem gets rendered with its own verb. The send values are different, but the reverbs are the same for the entire project. If you want some sort of cohesiveness, that works.

Otherwise, you can get super creative; it's an FX, after all, and it's all fake anyway. But for mockup "realism" purposes, a couple of verbs on a few busses is more than enough, and it saves a ton of CPU.


----------



## mixtur (Feb 21, 2021)

The main reason for a bus approach, for me, would be CPU, but it will also make sources glue better and generally result in less mix density. I would use an insert for special effects rather than for general ambience.


----------



## wst3 (Feb 21, 2021)

Processing in general is a tricky topic; there is an element of personal taste, which makes advice difficult. And yet we all try, including me.

One of the best tidbits of advice I received was that one needs to decide whether one is trying to capture (or in our case create) reality or fantasy. This simple (?) choice can guide all remaining decisions. This advice was intended for recording, but it carries through to working with virtual instruments as well.

Multiple microphone positions are a mixed blessing. On the one hand, they can reduce any contribution from the room; on the other, an instrument recorded with a close microphone sounds different from the same instrument recorded with a microphone some distance away, and it is not just the room: there will be attenuation, both broadband and specific to different registers.

Once you have selected your instruments, and placed them in the audio image you are trying to create (we are skipping over placement entirely!) it is time to add some reverberation.

If your instruments are all from the same library, or recorded dry, or the close microphone positions are reasonably isolated, you can treat the entire ensemble as a single entity. You'll want to be able to play with levels, so I suggest sending each family of instruments to an aux bus feeding the same reverb plugin, and varying the level fed to the reverb.

If you want to take it one step further you can use two reverbs, one for the early reflections and one for the "tails". This is a very popular approach, but I've not been happy with my attempts, so I prefer to use a reverb that allows me to adjust pre-delay, early reflections, and the tail separately.
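The idea of one reverb with separately adjustable pre-delay, early reflections, and tail can be caricatured as building a single impulse response from those three parts. Everything below (tap positions, gains, decay rate) is invented for illustration; it is a toy model of the concept, not any plugin's actual algorithm (assuming NumPy).

```python
import numpy as np

def toy_reverb(sig, predelay, er_taps, tail_decay, tail_len):
    """Build an impulse response from three adjustable parts:
    a pre-delay (silence), a few early-reflection taps, and an
    exponentially decaying diffuse tail, then convolve."""
    ir = np.zeros(predelay + tail_len)
    for offset, gain in er_taps:             # early reflections
        ir[predelay + offset] += gain
    t = np.arange(tail_len, dtype=float)     # diffuse tail
    ir[predelay:] += 0.3 * tail_decay ** t
    return np.convolve(sig, ir)

dry = np.array([1.0, 0.0, 0.0])              # an impulse, to expose the IR
wet = toy_reverb(dry, predelay=2,
                 er_taps=[(0, 0.5), (3, 0.25)],
                 tail_decay=0.8, tail_len=6)
```

Feeding an impulse through makes the structure visible in the output: two samples of silence (the pre-delay), then the reflections riding on top of the decaying tail. Each knob maps to exactly one argument, which is the appeal of a reverb that exposes all three stages.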

The other thing I try to do is apply processing in the order in which it would be applied in the real world. In my limited experience the usual order is the room followed by artificial reverb (e.g. plate). 

I like the UA Ocean Way room plugin. I don't have the horsepower to use too many of them, and it doesn't work as well on sample libraries as it does on live sources, but if I do use it then it is first in line.

Next comes a "general purpose" and realistic or natural reverb - I happen to like Exponential Audio Nimbus and 2C-Audio Breeze (and I've been experimenting with the Breeze & Precedence combo as well). 

If I want more character, or effect, I will add a reverb that is not ashamed to be artificial. The Lexicon 480, or even the 224, and Valhalla VintageVerb are excellent choices. Exponential Audio R4 and 2CAudio B2 are capable of sounding artificial as well.

Used to be I would always use a plate reverb for that last stage. Lately I've been experimenting with a chamber as the last stage. I was fortunate to work in a studio that had a small chamber, and the sound is amazing. 

Sometimes I feed the chamber with the plate, which is fed from the individual "family" plugins. And then I tear my hair out, but that's my problem, not yours!

So that's one approach. The other approach - creating a soundscape that wouldn't, or couldn't exist in the real world - has no rules. Or at least I'm not aware of any. Just do what sounds cool. I don't do this with virtual orchestras, but I might if I were writing trailer tracks or epic fight scene music. I do use it for pop tunes.

Hope you find something useful in all that.


----------

