# Reverb and other FX for matching StaffPad stems with additional non-StaffPad instruments inside the DAW



## Montisquirrel (Dec 2, 2020)

I just watched a StaffPad tutorial and they said that the StaffPad reverb is "modelled on the famous Bricasti M7". This made me think about buying LiquidSonics 7th Heaven, which is on sale right now.

What tips and tricks do you have for matching non-StaffPad instruments with StaffPad stems? Does anyone have good experience with 7th Heaven?


----------



## prodigalson (Dec 2, 2020)

Montisquirrel said:


> I just watched a StaffPad tutorial and they said that the StaffPad reverb is "modelled on the famous Bricasti M7". This made me think about buying LiquidSonics 7th Heaven, which is on sale right now.
> 
> What tips and tricks do you have for matching non-StaffPad instruments with StaffPad stems? Does anyone have good experience with 7th Heaven?



I would disable the reverb in StaffPad on everything and then mix in the DAW, if you ask me.


----------



## Montisquirrel (Dec 3, 2020)

prodigalson said:


> I would disable the reverb in StaffPad on everything and then mix in the DAW, if you ask me.



Umm... yeah... you are totally right. Haha, that was too much wine yesterday while watching StaffPad videos, plus the wish to have a reason to buy a new reverb.


----------



## muratkayi (Dec 11, 2020)

I think the question still remains valid, because most of the libraries offer the Decca tree as the single mic position available, and that has the recording room baked in.


----------



## MauroPantin (Dec 11, 2020)

That is correct, the StaffPad samples are Decca tree. But in that case (with SP reverb off) the reverb to use is library dependent (i.e., which libraries you are using in StaffPad vs. what you have in your DAW). The settings for matching reverbs in those situations are no different than matching the actual libraries the samples come from with other libraries you may have in the DAW.

A specific reverb plugin is not really needed, but one that allows control of ERs and tail separately is. If it also lets you fiddle with the EQ, all the better; otherwise you EQ before the verb, on the verb bus. I use R4/Nimbus, but any algorithmic verb works.

Early reflections should be turned off because the instruments are already spatially placed by the Decca tree. Once you match the RT60s you just have to EQ a bit to get the tone close, adjust the pre-delay, and you are more or less set. You CAN do a lot more, but I think this gets you a good result with minimal effort and preserves CPU, too.

For reference, here are some ballpark RT60s that I always have handy:

0.85 s: East West Studio I
0.95 s: Sony
1.4 s: Teldex
2.0 s: Abbey Road
3.0-4.0 s: AIR (exact number depending on the roof, I believe)

So if you are matching CineSamples with Abbey Road One you would subtract their RT60s: the reverb to add to the CS instruments should have a tail of about 1.05 seconds. Of course, this is not an exact science, and those beautiful round numbers we humans love are approximations. But at least with this method you have a number to start with and then fine-tune, instead of wasting time on guesswork.
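If it helps, the subtraction is literally just that. A tiny Python sketch (the function name and the clamp-at-zero behavior are my own, not anything official):

```python
def supplemental_rt60(dry_room_rt60, target_room_rt60):
    """Ballpark tail length (in seconds) for the reverb added to the
    drier library so its decay roughly matches the wetter room.
    Clamped at zero: if the 'dry' room is already longer, you can't
    subtract reverb this way."""
    return max(target_room_rt60 - dry_room_rt60, 0.0)

# CineSamples (Sony stage, ~0.95 s) matched to Abbey Road One (~2.0 s):
tail = supplemental_rt60(0.95, 2.0)
print(round(tail, 2))  # 1.05, the number from the example above
```

Again, just a starting point to fine-tune by ear, not a precise target.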


----------



## muratkayi (Dec 11, 2020)

Wow, this is an interesting post, because it went right over my head from a certain point onwards, hahaha.

So, question: what's RT60? RT would be Reverb Tail, I guess? But RT60?

Based on my guess that you are talking about reverb tails, subtracting them makes a lot of sense. Follow-up question: where did you get the numbers for the specific rooms?

Also (sorry, but your post really sparked my curiosity): provided you have an IR-based reverb plugin that lets you tweak the reverb tail regardless of the actual IR, would that work in your opinion, or is there something else that makes this tech the wrong choice for matching rooms?

I once had to match the VST version of Spitfire Studio Strings to the StaffPad audio stems of Spitfire Chamber Strings, and that worked out pretty well in the end. I also did some stereo panning to match the placement in the orchestration.


----------



## MauroPantin (Dec 11, 2020)

muratkayi said:


> Wow, this is an interesting post, because it went right over my head from a certain point onwards, hahaha.
> 
> So, question: what's RT60? RT would be Reverb Tail, I guess? But RT60?



RT60 is just a standardized way to measure reverb time: the "60" refers to the time it takes the sound in a room to decay by 60 dB. There is a specific measurement procedure, but for simplicity just think of it as clapping in the middle of the room and measuring the time it takes for the room to go back to silence. Now imagine you can do that in a lot of rooms, with the exact same clap at the exact same volume. Since you are doing the same thing, the only variable is the room, and hence you can compare apples to apples. The RT60 measurement is the reverb time for each room, in seconds.
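If you want to measure it yourself from a recorded impulse response, the usual recipe is Schroeder backward integration. A rough NumPy sketch (function name is mine; the -5 to -25 dB "T20" fitting window is one common choice, not the only one):

```python
import numpy as np

def estimate_rt60(ir, sample_rate):
    """Estimate RT60 from an impulse response via Schroeder backward
    integration: integrate the squared IR from the end, fit a line to
    the decay between -5 dB and -25 dB (the T20 range), and extrapolate
    that slope out to a 60 dB drop."""
    energy = np.cumsum(ir[::-1] ** 2)[::-1]        # energy decay curve
    edc_db = 10.0 * np.log10(energy / energy[0])   # normalized to 0 dB
    t = np.arange(len(ir)) / sample_rate
    fit = (edc_db <= -5.0) & (edc_db >= -25.0)     # T20 fitting window
    slope, _ = np.polyfit(t[fit], edc_db[fit], 1)  # slope in dB per second
    return -60.0 / slope

# Sanity check on a synthetic exponential decay with a known 1.4 s RT60:
sr = 48000
t = np.arange(2 * sr) / sr
ir = np.exp(-6.91 * t / 1.4)  # amplitude chosen to decay 60 dB in 1.4 s
print(round(estimate_rt60(ir, sr), 2))  # ~1.4
```

Real measurements use a calibrated sweep or balloon pop rather than a clap, but the math is the same.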



muratkayi said:


> Based on my guess that you are talking about reverb tails, subtracting them makes a lot of sense. Follow-up question: where did you get the numbers for the specific rooms?



The numbers for the specific rooms have been floating around the forums over the years, and the Virtual Orchestration courses by the great Peter Alexander have a few of them. This is best-guess stuff, though. Unless a room specifically lists its RT60 (and some do), you are kind of eyeballing it. There is also a way to estimate it with math from just the room dimensions, but it is my understanding that it is not as simple as that because of things like damping.
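The room-dimensions calculation mentioned above is typically Sabine's formula, RT60 ≈ 0.161 · V / A, where V is the room volume in m³ and A is the total absorption (each surface's area times its absorption coefficient, summed). A quick sketch with entirely made-up numbers:

```python
def sabine_rt60(volume_m3, surfaces):
    """Sabine's formula: RT60 = 0.161 * V / A, where A is the sum of
    each surface's area (m^2) times its absorption coefficient."""
    total_absorption = sum(area * coeff for area, coeff in surfaces)
    return 0.161 * volume_m3 / total_absorption

# Hypothetical 20 x 15 x 8 m hall with fairly reflective surfaces:
volume = 20 * 15 * 8  # 2400 m^3
surfaces = [
    (20 * 15, 0.02),                   # floor, e.g. wood
    (20 * 15, 0.15),                   # absorptive ceiling
    (2 * 20 * 8 + 2 * 15 * 8, 0.05),   # walls, e.g. plaster
]
print(round(sabine_rt60(volume, surfaces), 2))  # roughly 4.9 s, quite live
```

This is exactly why it's "not as simple as that": the absorption coefficients vary with material and frequency, so unless you know them the formula is just another ballpark.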



muratkayi said:


> Also (sorry, but your post really sparked my curiosity): provided you have an IR-based reverb plugin that lets you tweak the reverb tail regardless of the actual IR, would that work in your opinion, or is there something else that makes this tech the wrong choice for matching rooms?



It depends. Some people do, but generally I prefer not to. IRs have several components to them when they are recorded; it's not just the tail you're bringing on board.

For instance, IRs have the recording position and the sound-source position of the moment they were captured encoded within them. When you use an IR you are reproducing what it is like to have a sound play at the designated source position while standing at the recording position within the room. You can turn off early reflections in some convolution plugins, or use an IR that doesn't have any and is thus not "spatially committed", let's call it, to solve this. But it is not a complete solution. The IR will still color the sound depending on the distance between those two points within the room. Because of the way acoustics work, when you hear something from a distance you lose some of the highs; there is natural EQ happening, and even if you turn off the ERs this effect will still be present.

There are other factors to adjust: pre-delay, damping (which is essentially like a tilt EQ over the entire reverb sound), etc. Those come embedded in the IR as well... I mean, it CAN be done, sure. But I'd much rather use a good, flexible algorithmic reverb that allows me to tweak the specific parts of the sound I need in order to match something else. If you know what to listen for, it's just easier. Trying out IRs until one works seems super frustrating to me.

The only exception is if you use super-dry instruments and match them to something that is already quite wet, like matching SampleModeling to Spitfire Symphonic Strings. Then IRs can work beautifully: you find a similar room, pan, adjust the RT60 and pre-delay, and it's close enough. But when you have to match two rooms I prefer to use an algo to supplement the shorter verb.


----------



## muratkayi (Dec 12, 2020)

Thank you very much, that was super informative! I also understand the characteristics of convolution reverbs a bit better now!


----------

