# Reverb techniques + Library mic mixing vs reverb plugins



## Dan J. B. (Dec 30, 2015)

Hey all,

I couldn't find much here on the topic of reverb setup choices so thought I'd start this thread.

I'm wondering how composers here personally use reverb in their projects and templates, i.e. having one main verb on the master channel, different verbs on groups, both, or something totally different. Also which reverbs you choose to use, likes/dislikes etc., and how you go about combining libraries' microphone mixes with 3rd-party reverbs.

In one of Spitfire Audio's YouTube videos with Jake Jackson, he said he likes to use a short reverb followed by a longer one for each section. Currently I have a similar setup in my template, with different QL Spaces instances on each group (Strings, Perc, etc.) but then just one main longer one on a channel before the stereo out.

This works well for me currently with Symphobia 2, Cinematic Strings 2 etc., however I recently got Albion One and am downloading Mural 1 as I type. As these libs have brilliant microphone mixing options from Air Lyndhurst, I'm wondering whether I'd be better off having them bypass my reverbs with separate verb-free groups, maybe with just a subtle Spaces instance to bring everything into a similar space.

Look forward to reading how you use reverbs and your thoughts on mic mixes vs reverb.

Cheers,

Dan


----------



## Gerhard Westphalen (Dec 31, 2015)

I like the sound of Air and the ambient mic positions, so I generally use the reverb to match other libraries to it. Having said that, I still apply some to the Spitfire libraries to help put them all in the same room. I'm by no means a reverb expert, so I could be wrong in my approach and someone could offer a better alternative.

Re: where to put the reverbs - it all depends on your stemming. You can't have a single verb, since then you can't stem it out unless you were to export each stem individually so that it goes through the verb alone. If you're not working with stems then you could use a single reverb. In my template I'm currently running 44 reverbs: 22 are a shorter reverb and 22 are a longer one. That's 11 stems, but separated into front/rear. They are identical for all busses (although perhaps I could achieve better results by varying some settings); I just need to do it that way in order to have flexibility for stems. Running the 44 instances of Reverence takes up less than a quarter of my ASIO load in Cubase, so it's not all that much of a performance hit.

If you're in Cubase you'll probably run into issues if you run multiple Spaces instances (even just 2 can be enough). Spaces is what I used to use before I had to switch to Reverence, because it makes the ASIO performance in Cubase very unstable. I believe it has something to do with the graphics in Cubase. I've tried ASIO-Guard (which does improve it somewhat), multiple versions of Cubase (it started happening after 6, when they changed the graphics system), multiple versions of Windows, multiple graphics cards, and multiple computers, all with the same issue, so I ended up having to ditch Spaces even though I really liked it. I even tried running it in VEP, which still gave the same issues, which is strange.


----------



## Lassi Tani (Dec 31, 2015)

If Spitfire libraries made up most of my template, I would try to match the other libraries to the Air reverb. Most of my template is EastWest, so it's a bit different for me. This is how my template looks:

- Tracks don't have any reverbs.
- Tracks are grouped into sections according to the library. So e.g. strings would be HS Strings, Albion Strings etc.
- The groups send to reverb channels (strings reverb, woodwinds reverb, etc.). They also send to EaReverb2 channels, which I use only for early reflections and positioning of the sections.
- The reverb channels are routed to Wet channel.
- The sections are routed to the Orchestra channel. The Orchestra channel sends to the Orchestra Reverb, which is Spaces, acting as a kind of glue to make all sections sound like they are in the same room.
- The Orchestra channel is routed to Dry channel.
- The Dry channel and the Wet channel are routed to Mix channel.

A good thing about the dry and wet channels is that you can easily change the level of the wet sound. But of course you then need to use mostly close mics, or mid-tree mics as in Hollywood Strings Gold. Perhaps you could route different mics separately; I'm not sure if that works?
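A minimal way to picture why the Dry/Wet split is handy: the Mix channel is just the sum of the two buses, so a single fader on the Wet side scales all the reverb at once. A hypothetical Python sketch (NumPy arrays standing in for audio buses; `mix_dry_wet` and the bus names are my own, not any DAW's API):

```python
import numpy as np

def db_to_gain(db: float) -> float:
    """Convert a fader level in dB to a linear gain factor."""
    return 10.0 ** (db / 20.0)

def mix_dry_wet(dry: np.ndarray, wet: np.ndarray, wet_db: float = -6.0) -> np.ndarray:
    """Sum the Dry bus and the Wet (reverb-return) bus, trimming only the Wet fader."""
    return dry + db_to_gain(wet_db) * wet

# stand-ins for the Orchestra (dry) bus and the summed reverb returns
dry = np.ones(4)
wet = 0.5 * np.ones(4)
out = mix_dry_wet(dry, wet, wet_db=-6.0)
```

Pulling the Wet fader down a few dB dries out the whole mix without touching any of the individual section sends, which is the flexibility described above.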

A good point that Gerhard mentioned: try to avoid stability issues. Too many Spaces instances => your DAW probably won't like it.


----------



## mirrodin (Dec 31, 2015)

As I blend libraries (my collection is quite small), I tend to vary a bit from project to project. However, as a common note in all my mixes, when using sample libraries and adding "gel" reverb I always use the reverb module(s) on auxiliary channels/buses and send groups to them individually. Rarely have I had to route individual track sends, unless it's a soloist and I intend to keep the solo performance out of the instrument group bus, to stop it building up the same reverb tail (i.e. simulating close/spot mics on the instrument to "feature" it subtly).

All things in context, and it depends on what is in the samples. If you have close, stage, and room (5.1-surround-ready) mics, I personally like reading through the documentation to know what was involved in the sampling sessions. Everything from the venue where the recording took place to the recording chain and mic-position notes can only help when conforming things in the mix.

For example, when blending multiple libraries that cross different venues and have different RT60 profiles, I'll research the stage/room documentation a bit, and try to blend the shorter-tailed library with the longer-tailed one by seeing if I can find IRs of the venue they were recorded in.
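If the documentation doesn't state an RT60, it can be estimated directly from an impulse response using Schroeder backward integration (reverse-cumulate the squared IR, fit the decay slope in dB, extrapolate to -60 dB). A sketch under the assumption of a mono IR as a NumPy array; the function name and fit range are my own choices:

```python
import numpy as np

def rt60_from_ir(ir: np.ndarray, fs: float, fit_range=(-5.0, -25.0)) -> float:
    """Estimate RT60 from an impulse response via Schroeder backward integration."""
    edc = np.cumsum((ir ** 2)[::-1])[::-1]        # energy decay curve
    edc_db = 10.0 * np.log10(edc / edc[0])        # normalised to 0 dB at t = 0
    t = np.arange(len(ir)) / fs
    # fit a straight line over a clean part of the decay, extrapolate to -60 dB
    mask = (edc_db <= fit_range[0]) & (edc_db >= fit_range[1])
    slope, intercept = np.polyfit(t[mask], edc_db[mask], 1)
    return (-60.0 - intercept) / slope

# synthetic exponential IR with a known ~2.0 s RT60, for sanity-checking
fs = 8000
t = np.arange(4 * fs) / fs
rt60 = rt60_from_ir(np.exp(-6.9078 / 2.0 * t), fs)
```

Real IRs have a noise floor, which is why the fit is taken over an early slice of the decay (here -5 to -25 dB) rather than all the way down.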

I might listen carefully to figure out whether I need multiple stages of reverb: one to emulate the early reflections only and give the samples the "smear" they need to blend with each other, or at least mask the incohesiveness between them. Or I'll layer the two and use one to mask the other (like a 90/10 blend), so long as it's not immediately audible in the final mix, but muting the quiet one makes the element stick out. That sort of "glue" layer tricks the ear into hearing the interaction of the longer tail with the shorter samples.

I'm not so sure how well this will work across EVERY library; there are so many big-name companies out there with quite impressive collections, and even smaller developers now that are putting out some cool solo instruments.

Whether you prefer to add reverb on individual section auxes for each group and/or a global "gel" reverb aux, always remember to keep it subtle. A moderate 4-8% increase might be immediately audible and pleasant at first (sounds more exciting), but sleep on it, come back with an objective ear and mind, and ask yourself: does this serve the music? Is it too much, or is it not even noticeable until I take it away and the illusion of a cohesive orchestral performance is just plain broken without it?

If you can't live without it, you've got a good balance of reverb.


----------



## Dan J. B. (Dec 31, 2015)

Thanks for your responses guys.

My template isn't currently set up with stemming in mind as I'm only doing stereo mixes at the moment. So having just groups for each section, each with an instance of Spaces, followed by a final verb works fine for me. If/when stems and/or surround sound are required I'll either do them separately or alter my template. Spaces is quite CPU intensive as you say though, Gerhard. I'm in Cubase 7.5 with 8 Spaces instances (7 on groups and the 1 final 'gel') and it's running fine.

I've regretted purchasing Spaces a few times in the past due to various problems. One big one was it crashing Pro Tools 10 upon loading sessions in which I'd used it for audio post (I had to do the reverb/location-sound work in Cubase and export back to PT). I've just upgraded to 12.4 though, and it works fine now. I've been, and still am, slightly tempted to create a template and start working in Pro Tools instead, but that's another topic.

I like the idea of having a short and a longer reverb on each stem/section, but I think that may cause problems with so many instances of Spaces. Another reason to use something else.

I think with the Spitfire additions to my collection, having library-specific sections like you say, Sekkosiki, is a great idea and I may well incorporate that myself.

BTW, whilst wanting to gain ideas, I kind of started this thread as somewhere to post and discuss how we personally choose to use reverb and microphone mixes, rather than purely looking for advice (although it's much appreciated!), as I couldn't find much here on the topic. Sorry, I guess I could have made the title and my post a little clearer.

Anyway Mural 1 has just finished downloading so I'm off to play! HAPPY NEW YEAR EVERYONE!


----------



## Gerhard Westphalen (Dec 31, 2015)

Dan J. B. said:


> Thanks for your responses guys.
> 
> My template isn't currently set up with stemming in mind as I'm only doing stereo mixes at the moment. So having just groups for each section, each with an instance of Spaces, followed by a final verb works fine for me. If/when stems and/or surround sound are required I'll either do them separately or alter my template. Spaces is quite CPU intensive as you say though, Gerhard. I'm in Cubase 7.5 with 8 Spaces instances (7 on groups and the 1 final 'gel') and it's running fine.
> 
> I've regretted purchasing Spaces a few times in the past due to various problems. One big one was it crashing Pro Tools 10 upon loading sessions in which I'd used it for audio post (I had to do the reverb/location-sound work in Cubase and export back to PT). I've just upgraded to 12.4 though, and it works fine now. I've been, and still am, slightly tempted to create a template and start working in Pro Tools instead, but that's another topic.



The issue with Spaces isn't the processing power it uses; on my system it's pretty much the same as running Reverence. The issue is that it has some sort of bug which causes spikes when certain things are done in the Cubase GUI while working on a project, and the more instances there are, the more severe it is.


----------



## Andrajas (Jan 1, 2016)

I really have a hard time understanding how to handle reverbs. I own Spaces and Valhalla Room, and I own different kinds of libraries (CineBrass, Albion, Adagietto etc). I get confused reading all this about how to put everything in the same space. I feel I'm doing this the wrong way, and I would really appreciate an easy explanation or some tips on how to achieve a great space that glues everything together, and how to use convolution/algorithmic reverb in my case.


----------



## wst3 (Jan 1, 2016)

First things first: it is not always necessary to put everything in the same space, even if your goal is realism, and especially if it isn't. Listen to pretty much any pop title engineered by Bruce Swedien or produced by Quincy Jones; they are masters of combining different spaces (sometimes very different spaces). Sometimes this is done to create a new space that never existed. Sometimes, well, who knows what goes through their minds!

If I had all the instruments from a single developer (e.g. Spitfire, Cinesamples), where the recordings were made with multiple microphones but using the same microphones in the same space, then I would probably start with the balance between the microphones to establish a sense of space. I'm pretty sure I could get a very nice sound that way.

I don't, however, have that luxury, so this is what I do...

I have four 'standard' reverbs that live on aux busses:
1) UAD Ocean Way Studios - not actually a reverb, and it does not work for every sample library, but all my live tracks go through this in re-mic mode. It's amazing!
2) UAD Plate 140
3) Reverberate 2 (this is new to me, so I'm still deep in the learning curve - previously I used Pristine Space)
4) PSP 2445 - this is a new addition to the toolkit, so I'm still sorting it out. It is a very animated reverb, similar (in that characteristic) to the Lexicon 224, the Valhalla VintageVerb or the Exponential Audio R2. I think I may end up adding one of those as a fifth send.

I sub-mix strings, winds, brass, percussion, guitars, keys, etc. and send each submix to each of the reverb auxes. This is one of the trickier parts: sometimes I run them in series, pretty much in the order above, sometimes in parallel, and sometimes I do some weird series/parallel thing. I think (if I could afford it<G>) it would be really cool to run everything through the first two in series, sum them to feed Reverberate, and then maybe take the series outputs and feed them to different algo reverbs. I think!

Sometimes I still need a reverb to glue it all together, at which point I will insert a reverb on the 2-mix. I try to avoid this, as it feels like I didn't do something properly, though I don't know if that is true. It could be that some things just NEED that final reverb.

I've also been experimenting with Virtual Sound Stage, and I think that it will become part of my workflow. I have version 1, which is a bit cumbersome. I've tried the demo of version 2, and I'd like to upgrade when finances allow. So far it doesn't feel like a pressing need.

I also have a smattering of VSL libraries. These present a challenge, as they are nearly devoid of any spatial information. On the other hand, their MIR reverb is beautiful, except it doesn't (for me) seem to play well with my usual workflow. On the third hand, the VSL libraries play very nicely with Ocean Way. (An observation about Ocean Way Studios: it is the first plugin I've owned where the presets are so good that I seldom end up doing a lot of tweaking. I still tweak, but as often as not I end up undoing the tweaks<G>!)

Finally, there are 'room' microphones in several of my libraries. I use them because they sound really good - why shouldn't they? They are recordings of the space! But they don't always fit together nicely. So I start with the close microphones and play around with the reverbs on the auxes. When I've got something I like I will go back and add the room microphones, sometimes muting the reverb auxes, sometimes not. When it works it works really well, and I suspect that when it doesn't I've either run out of time, or perhaps that particular piece doesn't want that sound.


If that sounds messy, well, it is.


----------



## Dan J. B. (Jan 4, 2016)

Those Spaces spikes in Cubase are an odd thing, Gerhard, but out of frustration at it crashing Cubase numerous times, I can concur.

I've never really become friends with Reverence; it can sound good, I'm sure, it's just the GUI etc. I'd love to get Altiverb, but that's a chunk of cash to blow in the future, once I'm content with my instrument libraries (if that ever happens, haha).

I haven't looked at many other reverbs, but I like the sound of Breeze from 2CAudio. It seems like it might be better for pop stuff, but there are classical demos of it on their site that sound good too. And for $74.95/£50.75...


----------



## Gerhard Westphalen (Jan 4, 2016)

Dan J. B. said:


> Those Spaces spikes in Cubase are an odd thing, Gerhard, but out of frustration at it crashing Cubase numerous times, I can concur.
> 
> I've never really become friends with Reverence; it can sound good, I'm sure, it's just the GUI etc. I'd love to get Altiverb, but that's a chunk of cash to blow in the future, once I'm content with my instrument libraries (if that ever happens, haha).
> 
> I haven't looked at many other reverbs, but I like the sound of Breeze from 2CAudio. It seems like it might be better for pop stuff, but there are classical demos of it on their site that sound good too. And for $74.95/£50.75...



I thought it was an odd thing too, but considering that it's happened on both my computers, which had different versions of Windows and different graphics cards...

I've been using the Bricasti impulse responses. It's my understanding that running them in Reverence vs Altiverb should sound essentially the same. The thing with Altiverb is that it offers tons of IRs, but if you find separately available ones that you like and throw them into Reverence, I think that makes getting Altiverb unnecessary (even if it does have a better GUI).
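The reasoning behind "same IR, same sound" is that a convolution reverb's core engine is just convolution of the input with the IR; plugins differ in what they layer on top (pre-delay, damping, EQ, wet/dry handling), not in the raw operation. A minimal Python sketch under the assumption of mono signals as NumPy arrays (`convolve_ir` is my own illustrative name, not any plugin's API):

```python
import numpy as np

# Sketch: the core of any convolution reverb is convolution with the IR.
def convolve_ir(dry: np.ndarray, ir: np.ndarray, wet: float = 0.3) -> np.ndarray:
    """Convolve a dry signal with an impulse response and blend the tail back in."""
    tail = np.convolve(dry, ir)     # full convolution: len(dry) + len(ir) - 1 samples
    out = wet * tail                # scaled wet signal
    out[:len(dry)] += dry           # add the untouched dry signal on top
    return out
```

Whatever engine evaluates it, this operation is mathematically identical for the same IR, which is why the audible differences between convolution plugins come mostly from their extra processing and their IR libraries.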


----------



## MarcelM (Jan 6, 2016)

I don't think it only depends on the impulse response; there should be quite a difference if you use different plugins. There wouldn't be any need for many of them if they all sounded the same, huh?


----------



## Dan J. B. (Jan 6, 2016)

Perhaps there isn't a need for many of them, with impulse responses usable in different ones. But each company wants a piece of the action and isn't going to say the other guy's is best.

Also, I wasn't aware (or forgot) that you can import IRs into Reverence. I might try some sometime, but I've got a nice template incorporating Spaces now.


----------



## scoringdreams (Jan 7, 2016)

wst3 said:


> I've also been experimenting with Virtual Sound Stage, and I think that it will become part of my workflow. I have version 1, which is a bit cumbersome. I've tried the demo of version 2, and I'd like to upgrade when finances allow. So far it doesn't feel like a pressing need.



I am very interested in how you use VSS! Currently, I am trying to mix Spitfire Murals Stereo Mics with VSS and 2CAudio B2... Any tips on that? =)


----------



## re-peat (Jan 7, 2016)

scoringdreams said:


> I am trying to mix Spitfire Murals Stereo Mics with VSS and 2CAudio B2... Any tips on that?



Yes. Don’t do it.

It is beyond me why anyone would want to send Spitfire libraries — which contain everything to allow you to place them just about anywhere in your mix (provided it is a place consistent with Spitfire’s recording concept) — through a hazardous plug-in like VSS.

I’ll accept, although very hesitantly, that VSS may have some use when working with dry sources like, say, the VSL instruments, but other than that, I’ve heard it do nothing but serious damage (imaging- and phase-problems) to anything sent through it.

If you can solve a spatial problem without the use of VSS, that is always how I would do it.



----------



## maxime77 (May 27, 2016)

I was wondering (I did not want to open a new thread just for this):

When we mix with samples, we usually group the long strings together, separately from the shorts, and mix them differently (applying a different amount of reverb, maybe different microphone levels, etc.).

But how do the pros do it when they have to mix real orchestras: do they necessarily choose the same mic levels, amount of reverb and EQ for violin shorts and longs, etc.? (I guess it is one .wav file per instrument, and you can't separate shorts from longs.)


----------



## Daryl (May 28, 2016)

maxime77 said:


> I was wondering (I did not want to open a new thread just for this):
> 
> When we mix with samples, we usually group the long strings together, separately from the shorts, and mix them differently (applying a different amount of reverb, maybe different microphone levels, etc.).
> 
> But how do the pros do it when they have to mix real orchestras: do they necessarily choose the same mic levels, amount of reverb and EQ for violin shorts and longs, etc.? (I guess it is one .wav file per instrument, and you can't separate shorts from longs.)


Certainly, working with a real orchestra you don't do anything like this. However, that's not to say that different pieces won't have different levels and lengths of reverb, or more or less close mic from time to time, but the idea of trying to split articulations up is not going to work.

That's not to say that people who are after a fake mix, such as film composers, won't split their orchestrations into more than one pass so that they can have different mixes for staccato and sustains. But if you are writing melodic music, which by its nature has many different articulations within one phrase, this idea of fake mixing between articulations doesn't really work.

However, the other thing to think about is that if you are using a live orchestra, you don't have to worry about jumped-up twats on a forum telling you your mix sounds synthy, so you can make it as fake as you like.


----------



## Ashermusic (May 28, 2016)

Daryl said:


> However, the other thing to think about is that if you are using a live orchestra, you don't have to worry about jumped-up twats on a forum telling you your mix sounds synthy, so you can make it as fake as you like.




Hysterical.


----------



## maxime77 (May 29, 2016)

Daryl said:


> Certainly, working with a real orchestra you don't do anything like this. However, that's not to say that different pieces won't have different levels and lengths of reverb, or more or less close mic from time to time, but the idea of trying to split articulations up is not going to work.
> 
> That's not to say that people who are after a fake mix, such as film composers, won't split their orchestrations into more than one pass so that they can have different mixes for staccato and sustains. But if you are writing melodic music, which by its nature has many different articulations within one phrase, this idea of fake mixing between articulations doesn't really work.
> 
> However, the other thing to think about is that if you are using a live orchestra, you don't have to worry about jumped-up twats on a forum telling you your mix sounds synthy, so you can make it as fake as you like.


Thank you, Daryl, for your very detailed answer.


----------

