# Talk to me about convolution reverbs



## blaggins (Oct 27, 2021)

(The motivation for this question: EW Spaces 2 is 60% off.)

I've been trying to read every reverb thread on here I can find, but I'm still left wondering: do I need a fancy library of convolution reverbs? I have what I think is pretty good coverage with algorithmic reverbs: Valhalla Room, FabFilter Pro-R, and Eventide Blackhole. As far as sample libraries go, I mainly have the Spitfire SSO Strings/Brass/Woodwinds libraries, and I have been told (and have had mild success with) just using the included mics to create a realistic sense of placement and space. I assume a bit of algorithmic reverb to blend is good enough in any case, and Pro-R seems lovely for this. I also have the included IR reverb in Cubase, but I haven't used it much and it gets panned quite a bit in any case.

However, when it comes to the Strezov choirs plus their Balkan Ethnic Orchestra, I have to admit I'm not in love with the included reverb on any of it. The recording quality is superb but the sense of space feels very static and one-dimensional if I use the included reverb, and if I just use the mics, it's a very dry studio sound. (Maybe Sofia Session Studio is a small space? or a particularly dead one?) For percussion I'm mainly using CinePerc. I'm not currently messing with adding reverb to CinePerc but I'm sure that I probably should be in order to blend it better with the Spitfire orchestra.

In terms of reverb character I've come to the conclusion that I favor cleaner reverbs over colorful ones. For example: I haven't gotten along well with Valhalla Room at all. I bought it based on the overwhelming number of glowing reviews, but I hardly ever use it because whenever I add it to a track (pretty much any preset I've tried except for the ambient set), it colors the sound in ways I don't ever seem to actually like. I'm not sure I understand exactly what it is doing and why I don't care for it, but I always prefer Pro-R's sound over Valhalla Room's.

So... I'm wondering if a good convolution reverb would be useful to me, given the above. Seems it would deliver on the "clean" character while offering the flexibility to blend things with the Spitfire orchestra or even help me "move" the AIR Studios recordings into smaller spaces or more ambient ones very easily. At least that is what I'm assuming, am I correct?

As far as which reverb, the seemingly near-unanimous king of convolution reverbs is Altiverb, but I don't want to spend that much on a reverb. Which leaves me with the included IR reverb in Cubase, which gets panned quite a bit, or spending like $140 on Spaces 2.

Do folks who own Spaces 2 find it offers something significant over say Pro-R? Is the included IR verb in Cubase really that bad? (I don't have anything to compare it to.) And finally, do folks with Spaces 2 and Spitfire SSO find that it's useful to combine the two?


----------



## Duncan Krummel (Oct 27, 2021)

I have a pretty good collection of verbs, among those Pro-R and Spaces I & II. For the comparison between these two, I'd say you definitely wouldn't need to worry about crossover. Obviously they're in different camps, with Pro-R being algorithmic and Spaces II being convolution, but sometimes that's brought up without considering what's really being asked, which is: how will they improve my sound?

Pro-R's biggest benefit is its easy-to-understand tweakability. In theory, you could probably get decently close to emulating some of the IRs in Spaces II, but - speaking from experience - it's almost too laborious to be worthwhile. You have Pro-R, so I won't talk much about it, aside from noting that many demonstrations of reverb for orchestral music focus on tail lengths that are too long and muddy the sound. Pro-R, with the size set up a bit (maybe around 2-3.5 seconds), then cut down to 50% or so of the tail length, is something really special.

Spaces II has some absolute gems. The stage IRs are not-so-secret secret weapons of mine for spatializing drier libraries/recordings. Many times I've even followed them with Pro-R specifically to add a rich tail. The concert halls are great, even those without the special in-situ IRs. The Northwester Hall in particular is absolutely gorgeous on just about anything you send through it. The churches are also divine (pun intended). Angel's Cathedral has a stereo widening effect that I can't replicate with any other reverb I own. The opera house is perfect for subtly placing brass in a moderately wet but controlled space. Gives them a bit of extra bloom. I don't find a ton of use for the IR captures of analog gear, or the more unusual spaces (for music, that is), but I've probably used just about all of them at _some_ point or another. Oh, and the studio spaces! The Decca IR is another fantastic way of widening a source very naturally. These sound great for re-amping digital synths.

As for SSO, I don't have it, so I can't speak to that, but I'd be happy to help answer any questions or provide demos for Spaces II if you'd like.


----------



## Trash Panda (Oct 27, 2021)

Personally, I'm not a big fan of Spaces for anything that isn't bone dry (like VSL Silent Stage or Infinite Brass/Woodwinds with the built in IRs turned off) because you get a ton of this metallic-sounding build up and mud if the samples have any room information baked in at all.

I'm also not a fan of EWQL's 1 iLok activation per license BS either. I'm not buying 2 copies just to use it on my laptop when I'm not at my PC.

Maybe if you did close mics only and then used Spaces to place them?

Personally, I'm finding more favorable results with my semi-dry libraries (Audio Imperia, Cinematic Studio Series, Strezov, etc.) by combining a spatialization plugin like Precedence, Panagement or DearVR Pro with a really clean/natural algorithmic reverb like Cinematic Rooms, Chameleon or Nimbus.

Eventide SP2016 or Stereo Room 2016 have also proven to be really good for increasing the perceived size of the room a semi-dry sample is in. See Joel Dollie's thread here: https://vi-control.net/community/threads/early-reflections-and-room.115898/


----------



## Dewdman42 (Oct 27, 2021)

The main value of Spaces II is the collection of rooms, and on sale it's a great price for that collection. It's hard to emulate that exactly with algorithmic reverbs, and most any other convolution reverb with good rooms will be much more expensive.

The cost for Spaces II new right now on sale is lower than the cost to upgrade from Spaces I. FWIW.


----------



## blaggins (Oct 27, 2021)

Thanks for the responses everyone! I had not heard that advice for Pro-R yet @Duncan Krummel but will be putting it to good use.


Trash Panda said:


> Personally, I'm not a big fan of Spaces for anything that isn't bone dry (like VSL Silent Stage or Infinite Brass/Woodwinds with the built in IRs turned off) because you get a ton of this metallic-sounding build up and mud if the samples have any room information baked in at all.


Is this true even if using smaller spaces to create a bit of extra depth where it is missing (say like in the Freya choirs from Strezov if you turn their included reverb off?)



> I'm also not a fan of EWQL's 1 iLok activation per license BS either. I'm not buying 2 copies just to use it on my laptop when I'm not at my PC.


Neither am I, to be honest, but I figure at this point I've got enough one-computer-at-a-time license garbage on my USB dongle that there's no getting away from swapping it back and forth between my desktop and laptop if I want to take any music production stuff on the road with me (or to the couch). Hell, I can't even load up Cubase without the stupid dongle.



> Personally, I'm finding more favorable results with my semi-dry libraries (Audio Imperia, Cinematic Studio Series, Strezov, etc.) by combining a spatialization plugin like Precedence, Panagement or DearVR Pro with a really clean/natural algorithmic reverb like Cinematic Rooms, Chameleon or Nimbus.
> 
> Eventide SP2016 or Stereo Room 2016 have also proven to be really good for increasing the perceived size of the room a semi-dry sample is in. See Joel Dollie's thread here: https://vi-control.net/community/threads/early-reflections-and-room.115898/


This is really interesting. I watched @Joël Dollié 's video (his mixing book is super useful as well, have read it twice, will probably need to read it a few more times to really internalize it all). I'm wondering though, instead of using the Eventide SP2016 to create early reflections, could I just use one of the presets from a convolution reverb? This might be a very stupid question, but don't people often use an IR to get a sense of a room and then use an algorithmic reverb to add the desired amount of tail? What is the main advantage of SP2016 over something else for this purpose?



> The cost for Spaces II new right now on sale is lower then the cost to upgrade from Spaces I. FWIW.


That's quite silly @Dewdman42 , I'd be grumpy if I was a Spaces 1 owner.


----------



## Dewdman42 (Oct 27, 2021)

tpoots said:


> That's quite silly @Dewdman42 , I'd be grumpy if I was a Spaces 1 owner.


I refuse to upgrade for noted grumpiness.... But the current sale price as a new user is still fabulous.


----------



## CGR (Oct 27, 2021)

tpoots said:


> . . .
> That's quite silly @Dewdman42 , I'd be grumpy if I was a Spaces 1 owner.


I'm an owner of Spaces 1 and I'm grumpy!


----------



## Trash Panda (Oct 27, 2021)

tpoots said:


> This is really interesting. I watched @Joël Dollié 's video (his mixing book is super useful as well, have read it twice, will probably need to read it a few more times to really internalize it all). I'm wondering though, instead of using the Eventide SP2016 to create early reflections, could I just use one of the presets from a convolution reverb? This might be a very stupid question, but don't people often use an IR to get a sense of a room and then use an algorithmic reverb to add the desired amount of tail? What is the main advantage of SP2016 over something else for this purpose?


SP2016 seems to let more of the transient through when the Mix % is set to 100% and produces what sounds like a wider stereo image to my ears. It's really effective and simple to use.

By the same token, as I play with Cinematic Rooms and Pro-R more, I'm finding that I can achieve a similar effect with about a 40%-60% Mix % as an insert and tweaking the parameters. I can always add a stereo widener before or after to get the same stereo widening effect.

But if you can't wait for a sale on SP2016 ($249 retail), it is really, really easy to get that extra ambience with little effort.


----------



## CGR (Oct 27, 2021)

Think I'll start a Spaces 1 Owners Support Group so we can all wallow in our grumpiness together.


----------



## CGR (Oct 27, 2021)

Trash Panda said:


> SP2016 seems to let more of the transient through when the Mix % is set to 100% and produces what sounds like a wider stereo image to my ears. It's really effective and simple to use.


+ 1. A simple and effective tool.


----------



## Piotrek K. (Oct 28, 2021)

Trash Panda said:


> you get a ton of this metallic-sounding build up and mud if the samples have any room information baked in at all.


Oh, so it's not only me? I've been reading only good things about Spaces, but when I tested it (via Composer Cloud) there was almost always something ugly and metallic in the sound. Never bothered to understand why; too many options out there to rethink (or overthink) stuff like that, imo.

And as for convolution reverbs, my go-to is now Inspirata Personal (previously it was Waves IR + Eareverb 2). Fantastic reverb: deep, customizable, and a huge amount of awesome rooms (with more coming soon). Maybe it will go on sale on BF.


----------



## CGR (Oct 28, 2021)

Piotrek K. said:


> . . . And as for convolution reverbs my go to is now Inspirata Personal (previously it was Waves IR + Eareverb 2). Fantastic reverb, deep, customizable and huge amount of awesome rooms (with more coming soon) . . .


After a few initial hiccups when Inspirata was first released (CPU spikes and consistency issues) it's benefited greatly from incremental updates and improvements and is now very stable on my Logic Pro/OS Catalina setup. The quality and naturalness of this reverb is unsurpassed for me. Instruments very much sound as though they are in the spaces, as opposed to sounding like a reverb effect stuck on top of the source sound.

No need for me to update my Spaces 1 license to Spaces 2 (at a higher cost than purchasing a Spaces 2 license from scratch I might add!!). In my opinion, Inspirata is a far more sophisticated and flexible convolution reverb.


----------



## tf-drone (Oct 28, 2021)

Hi,

you could try a free convolution reverb before buying, like:
Acon Verberate Basic
Aurora FenrIR
Convology XT
LiquidSonics Reverberate LE
Melda MConvolution EZ
Nuspace Riviera
Plektron IRcab
Sir Audio Tools SIR2
Sourceforge Freeverb3


----------



## darkogav (Oct 28, 2021)

Hi OP. Based on what you already have, I think you are covered. FWIW, one of my most used reverbs in my collection is the Waves freebie, IR convolution. Not the prettiest UI, but it really works well for me.


----------



## blaggins (Oct 28, 2021)

darkogav said:


> Hi OP. Based on what you already have, I think you are covered. FWIW, one of my most used reverbs in my collection is the Waves freebie, IR convolution. Not the prettiest UI, but it really works well for me.


I appreciate that perspective, and I'm sure you are probably right, but I find that I'm still chasing that elusive "perfect cathedral" space to put a choir into. Sigh.



> No need for me to update my Spaces 1 license to Spaces 2 (at a higher cost than purchasing a Spaces 2 license from scratch I might add!!). In my opinion, Inspirata is a far more sophisticated and flexible convolution reverb.



Huh, I had so far not heard of Inspirata. This is throwing a wrench into things. There isn't nearly as much written about it, but I did a little bit of background reading and a bit of listening. What appeals to me is that these guys are *highly* technically minded acoustics engineers. The parent company is https://acoustics.entel.hu; they consult on acoustic treatments, design of acoustic spaces, etc. I would have to guess they know a thing or two about modeling IRs. The examples I've heard (although there aren't many) seem to me to be very transparent... hella expensive, but a quick google suggests there are occasional deep discounts. I'm definitely considering it.


----------



## Serge Pavkin (Oct 31, 2021)

Maybe someone can tell me how it is possible to get a 10-day trial version of Spaces 2? I read somewhere that it is available but cannot find it on the site. Thanks.


----------



## ed buller (Oct 31, 2021)

tpoots said:


> I appreciate that perspective, and I'm sure you are probably right, but I find that I'm still chasing that elusive "perfect cathedral" space to put a choir into. Sigh.


try Cinematic Rooms. I'm not sold on "convolution"... it's voodoo! Fun to have and sometimes very useful, but it's NOT putting your sound in all the pretty pictures. The whole convolution idea is smoke and mirrors. Passing a sine wave through a bookshelf speaker in the nave of a cathedral does NOT put a trumpet there later...

best

ed


----------



## Living Fossil (Oct 31, 2021)

tpoots said:


> I'm wondering though, instead of using the Eventide SP2016 to create early reflections, could I just use one of the presets from a convolution reverb? This might be a very stupid question, but don't people often use an IR to get a sense of a room and then use an algorithmic reverb to add the desired amount of tail? What is the main advantage of SP2016 over something else for this purpose?


The best thing is to do a lot of experiments and to listen to different tracks that were produced with different tools.

Some days ago there was another thread where this whole Convo ER legend came up again.

As i've stated there, in my experience algorithmic solutions work much better. I prefer the results by a large margin. 

With convos there are different aspects which are responsible for the discrepancy between expectation and reality. 

First, the dry part of the convoluted signal usually destroys the illusion of depth to a big extent.
The easiest way to experience what i mean consists in listening to examples that were produced that way.

Second, if more instruments are in a room, the acoustic reality that is happening there is much more complex than an addition of multiple convoluted information because of the behaviour of sound waves in space.

And then, convolution to my ears always sounds kind of muddy and grainy. This may also be a part of what is going on in the realm of competing phase issues.

However, i also have to add that at one scoring stage i had a score mixer who was a real Altiverb wizard. It was really amazing working with him and watching how he tweaked that tool.
Still, i suppose that surround mixes are more forgiving since as they are played back in a "real" 3D setup they "rebuild" part of the information that is lost in convolution.


----------



## re-peat (Oct 31, 2021)

I use convolution reverbs only for smaller spaces. That’s when the character and colour of the room is important and that’s something that can be captured very well in a good IR. It’s doable with an algorithmic reverb as well, but only the very-very best algorithmic reverbs do rooms and smaller spaces really well and it requires a lot more precise parameter-tweaking than simply loading an IR.

For big spaces however — anything that has to house a medium-sized orchestra or larger — I don’t even look at convolution reverbs. Then it is algorithmic all the way.
And the often heard argument that convolution does ER’s more realistically than algorithmic reverberation, is of hardly any importance in larger spaces anyway because there, owing to the size of the space, the ER’s play a much smaller part, if any at all, in the spatialization.

Until I bought the UVI Plate — a truly amazing piece of software —, I also always used convolution (AA Ebony, or SpaceDesigner with Cupwise IR’s) for my plate reverbs.
Algorithmic plates may sound nice and convincing when you demo them in isolation, but put them in a mix and then you often find they either don’t nest at all or nest too much (and dissolve). The UVI Plate has everything to make a plate reverb nest just perfectly. The UAD is not bad either, certainly not, but I do believe that in the UVI it has met its superior.
(This entire paragraph only because I use A LOT of plates in my mixes.)



----------



## blaggins (Oct 31, 2021)

ed buller said:


> try Cinematic Rooms. I'm not sold on "convolution"... it's voodoo! Fun to have and sometimes very useful, but it's NOT putting your sound in all the pretty pictures. The whole convolution idea is smoke and mirrors. Passing a sine wave through a bookshelf speaker in the nave of a cathedral does NOT put a trumpet there later...
> 
> best
> 
> ed


What do you mean by "Passing a sine wave through a bookshelf speaker in the nave of a cathedral does NOT put a trumpet there later"? I thought the whole premise of convolution reverb (which is to say, convolving a full-spectrum impulse response with your own music) was that each frequency your trumpet emits, across the full spectrum of human-audible sound, is treated to the exact delay, phasing, cancellation, and whatever else is happening that a real trumpet would be treated to in the same space. (Yeah, this assumes your trumpet sample is dead dry, which it probably isn't, so maybe that is the difference you are referring to?) Anyway, I appreciate your perspective on this and just want to understand more...
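For what it's worth, the premise I'm describing is just the math: in the time domain the reverb convolves your (assumed dead-dry) signal with the IR, which is the same as multiplying their spectra in the frequency domain. A minimal numpy sketch with a synthetic toy IR (not a real room capture):

```python
import numpy as np

def convolve_reverb(dry: np.ndarray, ir: np.ndarray) -> np.ndarray:
    """Fully wet convolution reverb: every sample of `dry` excites the
    room, so the output is a sum of delayed, scaled copies of the IR."""
    n = len(dry) + len(ir) - 1
    # FFT-based linear convolution (zero-padded to avoid wrap-around);
    # real plugins use partitioned convolution, but the result is the same.
    return np.fft.irfft(np.fft.rfft(dry, n) * np.fft.rfft(ir, n), n)

# Toy IR: direct sound, two early reflections, and a decaying noise tail.
sr = 44100
t = np.arange(sr // 2)
ir = 0.01 * np.random.randn(sr // 2) * np.exp(-t / 8000.0)  # diffuse tail
ir[0] = 1.0        # direct spike
ir[2000] = 0.5     # reflection at ~45 ms
ir[5000] = 0.3     # reflection at ~113 ms

dry = np.zeros(sr)
dry[0] = 1.0       # a single click standing in for an instrument
wet = convolve_reverb(dry, ir)
```

Every frequency in the dry signal gets the room's delay, phase, and level treatment at once, because convolution applies the entire IR to every sample.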


----------



## blaggins (Oct 31, 2021)

Living Fossil said:


> First, the dry part of the convoluted signal usually destroys the illusion of depth to a big extent.
> The easiest way to experience what i mean consists in listening to examples that were produced that way.


I'm probably misunderstanding here, but which dry part? The bit that is the dry part of the wet/dry mix of the reverb? Wouldn't you run a convolution reverb fully wet and just tune the ER to delay the really dry bit enough to be realistic?



Living Fossil said:


> Second, if more instruments are in a room, the acoustic reality that is happening there is much more complex than an addition of multiple convoluted information because of the behaviour of sound waves in space.



I'm probably just going to be putting my own lack of understanding on display here... but aside from phasing effects at the listener, which should resolve themselves naturally even with multiple instruments getting processed independently in the DAW, since the signals are summed in the end, which sound wave behaviors are you referring to?


----------



## blaggins (Oct 31, 2021)

re-peat said:


> I use convolution reverbs only for smaller spaces. That’s when the character and colour of the room is important and that’s something that can be captured very well in a good IR. It’s doable with an algorithmic reverb as well, but only the very-very best algorithmic reverbs do rooms and smaller spaces really well and it requires a lot more precise parameter-tweaking than simply loading an IR.
> 
> For big spaces however — anything that has to house a medium-sized orchestra or larger — I don’t even look at convolution reverbs. Then it is algorithmic all the way.
> And the often heard argument that convolution does ER’s more realistically than algorithmic reverberation, is of hardly any importance in larger spaces anyway because there, owing to the size of the space, the ER’s play a much smaller part, if any at all, in the spatialization.
> ...


Very interesting @re-peat. Where do you draw the line between big and small? A bedroom is obviously small, a concert hall is obviously big, but what about a small church or a scoring stage?


----------



## Living Fossil (Oct 31, 2021)

tpoots said:


> I'm probably misunderstanding here, but which dry part? The bit that is the dry part of the wet/dry mix of the reverb? Wouldn't you run a convolution reverb fully wet and just tune the ER to delay the really dry bit enough to be realistic?


Of course the reverb itself is fully wet. 
But your original signal (which you send to the reverb) still remains as it is.
And if it has dry information, this will overrule what comes back from the reverb,
since a signal that is further away has a different frequency spectrum than a signal that is right in front of you.



tpoots said:


> I'm probably just going to be putting my own lack of understanding on display here... but aside from phasing effects at the listener, which should resolve themselves naturally even with multiple instruments getting processed independently in the DAW, since the signals are summed in the end, which sound wave behaviors are you referring to?


Yes, you misunderstand what i'm writing.
I'm speaking of different instruments in a room.
The sound of each instrument fills the room and this results in interactions of those many soundwaves.
These interactions aren't replicated by running different instruments through (one or more) convoluted responses of that room. 
But if it's too complicated, just forget this part.
The more problematic part is above (the mentioned original signal that feeds the reverbs).


----------



## blaggins (Oct 31, 2021)

Living Fossil said:


> Of course the reverb itself is fully wet.
> But your original signal (which you send to the reverb) still remains as it is.
> And if it has dry information, this will overrule what comes back from the reverb,
> since a signal that is further away has a different frequency spectrum than a signal that is right in front of you.


I think you are saying this happens because the reverb is being used as a "send effect" (Cubase terminology), aka an FX bus or whatever it is that folks call it in other DAWs. So the end result still consists of some of the original "dry" signal being summed with the processed reverb signal. But what if the IR verb is used as an insert instead? No dry signal at all. Although perhaps this would be a nutty idea, I'm not sure, I haven't really played with convolution reverbs. Are they meant to be used as send effects or as inserts?



Living Fossil said:


> Yes, you misunderstand what i'm writing.
> I'm speaking of different instruments in a room.
> The sound of each instrument fills the room and this results in interactions of those many soundwaves.
> These interactions aren't replicated by running different instruments through (one or more) convoluted responses of that room.
> ...


I'm still following (I think). 

My understanding has been that the interactions of sound don't matter except at the listener's point in space. Sound waves are non-destructive in the sense that they don't lose energy just by passing through each other, but of course they are waves and so do interfere destructively at points in space. And of course they do lose energy when they are reflected, due to absorption. Hence if you are sitting in just the wrong spot, two pure tones can completely cancel each other out, but as soon as you move an inch or two, you start to hear one or both of them again. Given that, the result is that a real room causes *many* reflections to reach your ear; some of them will cause phase cancellation, some will have altered frequency content, some will be more or less delayed, softer/louder, etc. Is this right so far?

Given all that, let's pretend that a trumpet can occupy the exact same physical space as a clarinet on a stage (the exact spot where the proverbial bookshelf speaker sat when they captured the IR of the room, in fact). If you process both your trumpet and your clarinet with the very same IR, wouldn't you pretty precisely capture the sound of the room this way?
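One thing that is mathematically certain about that scenario: convolution is linear, so processing sources independently with the same IR and summing gives exactly the same result as summing the sources first and convolving once. A quick numpy check with random stand-in signals (not real recordings):

```python
import numpy as np

# Linearity check: convolving two sources with the same IR and summing
# equals summing the sources first and convolving once.
rng = np.random.default_rng(0)
ir = rng.standard_normal(256) * np.exp(-np.arange(256) / 64.0)  # decaying toy "room"
trumpet = rng.standard_normal(1024)   # random stand-in for instrument 1
clarinet = rng.standard_normal(1024)  # random stand-in for instrument 2

separate = np.convolve(trumpet, ir) + np.convolve(clarinet, ir)
together = np.convolve(trumpet + clarinet, ir)

print(np.allclose(separate, together))  # True
```

What linearity can't give you is a different IR per player position: in a real room the trumpet and clarinet sit in different spots, so strictly each would need its own IR.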

BTW - not trying to be overly pedantic here, I'm just trying to get some deeper understanding. I didn't realize looking into reverbs would send me into such a spiral of questioning.....


----------



## Living Fossil (Oct 31, 2021)

tpoots said:


> I think you are saying this happens because the reverb is being used as a "send effect" (Cubase terminology), aka an FX bus or whatever it is that folks call it in other DAWs. So the end result still consists of some of the original "dry" signal being summed with the processed reverb signal. But what if the IR verb is used as an insert instead? No dry signal at all. Although perhaps this would be a nutty idea, I'm not sure, I haven't really played with convolution reverbs. Are they meant to be used as send effects or as inserts?


I was referring to the use as a send, so you got me right.


tpoots said:


> I'm still following (I think).
> 
> My understanding has been that the interactions of sound don't matter except at the listener's point in space.


Phase cancellations do indeed destroy acoustic information.
And the signal that reaches the listener is always the result of different interactions.
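That cancellation is easy to verify numerically. A toy sketch of the bare mechanism (one frequency, ideal alignment; nothing like a real room):

```python
import numpy as np

# Two equal-amplitude 440 Hz tones: perfectly out of phase they sum to
# silence at the listening point; slightly off 180 degrees they only
# partially cancel.
sr = 44100
t = np.arange(sr) / sr
a = np.sin(2 * np.pi * 440 * t)                 # tone at the listener
b = np.sin(2 * np.pi * 440 * t + np.pi)         # same tone, 180 deg shifted
c = np.sin(2 * np.pi * 440 * t + 0.9 * np.pi)   # slightly off 180 deg

full_cancel = np.max(np.abs(a + b))     # ~0: the information is gone
partial_cancel = np.max(np.abs(a + c))  # nonzero: attenuated, not erased
```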


----------



## jcrosby (Oct 31, 2021)

tpoots said:


> Given all that, let's pretend that a trumpet can occupy the exact same physical space as a clarinet on a stage (the exact spot where the proverbial bookshelf speaker sat when they captured the IR of the room, in fact). If you process both your trumpet and your clarinet with the very same IR, wouldn't you pretty precisely capture the sound of the room this way?


Not necessarily. People are absorbers. (And can have a significant effect on sound depending on the number of people)... So for a 1:1 copy you would also need the same people in the room in the same locations.

Next time you find yourself at a venue and you happen to get there early and the venue is fairly empty, notice how the timbre and perceived level of what you hear from the speaker system changes as the venue fills up...

I also wouldn't overthink it. Sure, IRs can create weird resonances that might clash with a library, but some can also sound nice. As an example, if you watch some of the SF videos where Jake walks through one of his mixes, you'll see there are a few real hall IRs in Altiverb he uses frequently.


----------



## blaggins (Nov 1, 2021)

jcrosby said:


> Not necessarily. People are absorbers. (And can have a significant effect on sound depending on the number of people)... So for a 1:1 copy you would also need the same people in the room in the same locations...


Good point, though I imagine they all accounted for this through EQ and processing internal to the plugin itself, otherwise all the various halls would sound very bright. Maybe whatever processing magic (in addition to the convolution itself) that is happening in the plugins is the "voodoo" that @ed buller was referring to?

Anyway, for what it's worth I have downloaded demos of both LiquidSonics reverbs that everyone keeps talking about: Cinematic Rooms and Seventh Heaven (lite versions only). I've been playing around with Freya and Wotan (sans included reverbs, of course) and the Ravenscroft 275 (close mic only) this morning. Now, I have only tried the included reverb presets, so this is hardly a conclusive test of the plugins' capabilities... and I'm not sure I can really trust my inexperienced ears anyway, not to mention this wasn't double-blind testing.

Now, all that being said, I'm perceiving a big difference between Seventh Heaven and Cinematic Rooms. Seventh Heaven feels like Pro-R to me. Pretty tails, cool effects, but it doesn't sound any more realistic than Pro-R. It's lush but without actually providing a sense of space. I've run everything 100% wet as an insert, and it still feels very in-your-face. I'm not inspired to buy it, to be honest. However, Cinematic Rooms feels to me like a different game entirely. Two things in particular stood out for me this morning.

(1) I put the piano in the CR "Piano Chamber" space, and I have to say I don't think I've heard anything like that outside of actually sitting in a rehearsal hall and listening to someone play the piano. I tried to reproduce the effect with all the different SH presets (I even tried to mess with the tail length a bit to match), but I couldn't get close to it.

(2) I put various choral combos into cathedral-type spaces. I felt once again like SH provided this lush reverberant sound but without actually moving the choir back into the space. In isolation it sounded really good, but as soon as I A/B'd it against CR, the difference was very apparent. The CR cathedral just sounded better. It felt massive next to SH, in a way that was less messy and more grand.

I do realize the incredible irony of testing out two algorithmic reverbs after I started a thread on convolution reverbs, but there you have it.


----------



## ed buller (Nov 1, 2021)

tpoots said:


> I do realize the incredible irony of testing out two algorithmic reverbs after I started a thread on convolution reverbs, but there you have it.


Ears don't lie. I think Cinematic Rooms is the best reverb i've heard for years. 

best

e


----------



## Dietz (Nov 1, 2021)

jcrosby said:


> Next time you find yourself at a venue and you happen to get there early and the venue is fairly empty, notice how the timbre and perceived level of what you hear from the speaker system changes as the venue fills up...


Good (classical) concert halls take care of this, usually by giving special treatment to seats that remain unoccupied.


----------



## jcrosby (Nov 1, 2021)

Dietz said:


> Good (classical) concert halls take care of this, usually by giving special treatment to seats that remain unoccupied.


True.. I was thinking in the context of IRs in general though... I'd imagine a studio for example would sound slightly different if you had 20-50 musicians seated vs the room being empty, and in that case the IR wouldn't be a _theoretical_ 1:1 copy.

Even then that's not really the right mindset for working with IRs, which was really the main point I was making... People shouldn't assume all IRs will yield a _metallic_ or unnatural result...


----------



## jcrosby (Nov 1, 2021)

tpoots said:


> Seventh Heaven feels like Pro-R to me. Pretty tails, cool effects, but it doesn't sound any more realistic than Pro-R. It's lush but without actually providing a sense of space. I've run everything 100% wet as an insert, and it still feels very in-your-face. I'm not inspired to buy it, to be honest. However, Cinematic Rooms feels to me like a different game entirely. Two things in particular stood out for me this morning.
> 
> (1) I put the piano in the CR "Piano Chamber" space, and I have to say I don't think I've heard anything like that outside of actually sitting in a rehearsal hall and listening to someone play the piano. I tried to reproduce the effect with all the different SH presets (I did even try to mess with the tail length a bit to match) I couldn't get close to it.
> 
> ...


Seventh Heaven is built from IRs of an algorithmic hardware reverb, so that's why.
Ironically, CR is also algorithmic. I haven't demoed it myself, but it's interesting that you find it sounds more realistic. (Not that it shouldn't, just interesting to hear... May demo it at some point.)


----------



## Pier (Nov 1, 2021)

tpoots said:


> I think you are saying this happens because the reverb is being used as a "Send effect" (Cubase terminology), aka an FX bus or whatever it is that folks call it in other DAWs. So the end result still consists of some of the original "dry" signal being summed with the processed reverb signal. But what if the IR verb is used as an insert instead? No dry signal at all. Although perhaps this would be a nutty idea, I'm not sure, I haven't really played with convolution reverbs. Are they meant to be used as send effects or as inserts?


It doesn't matter.

You can use send channels pre fader and then lower the output gain of the channel, so everything will go to the send channel and it will be fully wet (obviously as long as the effect on the send channel is 100% wet).

You can also use insert effects with varying levels of dry/wet.

FYI, the terminology of send effects comes from analog mixing consoles. Engineers needed to be efficient with their effects units (no plugins!), so they had send channels where they put the reverb, using a single unit to add reverb to multiple channels at once.
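As a toy illustration of the two routings (a hypothetical sketch, not any particular DAW's actual signal flow; `mix_with_send` and the signal-reversing "reverb" placeholder are invented for the example):

```python
import numpy as np

def mix_with_send(dry, reverb, channel_fader=1.0, send_level=1.0, pre_fader=True):
    """Toy channel-plus-send-bus mixer. `reverb` stands in for any 100%-wet
    effect sitting on the send channel."""
    # A pre-fader send taps the signal before the channel fader,
    # a post-fader send taps it after.
    tap = dry if pre_fader else dry * channel_fader
    return dry * channel_fader + reverb(tap * send_level)

# Pre-fader send with the channel fader pulled all the way down:
# only the processed (wet) signal reaches the output.
dry = np.random.randn(1000)
fake_reverb = lambda x: x[::-1]   # placeholder for a real reverb
out = mix_with_send(dry, fake_reverb, channel_fader=0.0, pre_fader=True)
```

With a post-fader send the same fader move would silence the wet signal too, which is why the pre-fader trick Pier describes is needed for a fully wet result.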


----------



## blaggins (Nov 1, 2021)

jcrosby said:


> Ironically CR is also algorithmic. I haven't demoed it myself, but interesting that you find it to sound more realistic.


Well I didn't compare it against any IR reverbs so who knows, maybe one of those would have sounded even more realistic to my ears. I would find it pretty hard to judge the sound of *any* reverb in isolation I think. But putting two side by side, it becomes a lot easier to spot the differences.


----------



## Dietz (Nov 1, 2021)

jcrosby said:


> People shouldn't assume all IRs will yield a _metallic_ or unnatural result...


Of course not! :-D Quite the contrary, if done properly.


----------



## Trash Panda (Nov 1, 2021)

How does one remove the metallic sound then?


----------



## Pier (Nov 1, 2021)

Dietz said:


> Of course not! :-D Quite contrary, if done properly.


So what does "done properly" mean? Could you share some tricks? 

Do pros (eg: Altiverb) really use exploding balloons?


----------



## Dietz (Nov 1, 2021)

Pier said:


> So what does "done properly" mean? Could you share some tricks?


Well, in a way I've been sharing all my tricks for over a decade now: -> https://www.vsl.co.at/en/Vienna_Software_Package/Vienna_MIR_PRO 



Pier said:


> Do pros (eg: Altiverb) really use exploding balloons?


When I accompanied Arjen van der Schoot (founder of Audio Ease) on IR recordings he made himself, he used sweeps, and so do we when we record multi-IRs for MIR. - But I can't say that with authority about other entries in the vast and quite diverse AltiVerb library. Exploding balloons as a source of IRs are not considered state of the art, that's for sure.

The main tricks are: Know your equipment very well, to work around certain unavoidable physical limitations (... especially the loudspeaker used for sweeping); listen closely to the room before recording; do very careful post-processing; don't be shy about using EQs during mixing. 

And finally: Don't expect _one_ IR to give you a good sonic impression of a hall. That's about like sampling the middle C of a Grand Piano in one velocity and trying to play Chopin with the result.
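For the curious, the sweep technique Dietz mentions can be sketched in a few lines using the exponential-sweep (Farina) method. Everything here is a self-contained illustration: the function name, the sweep parameters, and the fake one-echo "room" are all invented for the example; a real capture would play the sweep through a loudspeaker and record the hall.

```python
import numpy as np
from scipy.signal import fftconvolve

def exp_sweep(f1, f2, duration, fs):
    """Exponential sine sweep and its matching inverse filter."""
    t = np.arange(int(duration * fs)) / fs
    rate = np.log(f2 / f1)
    sweep = np.sin(2 * np.pi * f1 * duration / rate
                   * (np.exp(t * rate / duration) - 1.0))
    # The inverse filter is the time-reversed sweep with a decaying amplitude
    # envelope that compensates for the sweep spending more time (hence more
    # energy) at low frequencies.
    inverse = sweep[::-1] * np.exp(-t * rate / duration)
    return sweep, inverse

fs = 48000
sweep, inverse = exp_sweep(20.0, 20000.0, 2.0, fs)

# Fake "room": direct sound plus a single 50 ms echo at 0.4x amplitude.
room = np.zeros(fs)
room[0], room[int(0.05 * fs)] = 1.0, 0.4
recorded = fftconvolve(sweep, room)

# Deconvolution: convolving with the inverse filter collapses the sweep to a
# single click, leaving the room's impulse response near sample len(sweep)-1.
ir = fftconvolve(recorded, inverse)
```

This also hints at why the loudspeaker matters so much: its own response is baked into `recorded`, and therefore into the recovered IR, unless it is corrected for in post-processing.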


----------



## Pier (Nov 1, 2021)

Dietz said:


> Well, in a way I'm sharing all my tricks since over a decade now: -> https://www.vsl.co.at/en/Vienna_Software_Package/Vienna_MIR_PRO
> 
> 
> When I accompanied Arjen van der Schoot (founder of Audio Ease) on IR recordings he made himself, he used sweeps, and so do we when we record multi-IRs for MIR. - But I can't say that with authority about other entries in the vast and quite diverse AltiVerb library. Exploding balloons as a source of IRs are not considered state of the art, that's for sure.
> ...


I imagine there must be some kind of correction to take into account the speaker that produced the sweep, since no speaker is perfectly flat from 20 Hz to 20 kHz.

So the idea is to record multiple mic locations and then mix those to produce the final IR?

I imagine having multiple IRs could allow for moving the source sound around. Eg: deciding in realtime if the source is closer to the front or to the back.


----------



## Tralen (Nov 1, 2021)

tf-drone said:


> Hi,
> 
> you could try a free convolution reverb before buying, like:
> Acon Verberate Basic
> ...


I love Verberate Basic, but would like to point out that it is fully algorithmic.

Adding to the list, my favourite convolver is Fog, which costs $65.


----------



## blaggins (Nov 1, 2021)

I did a bit more testing today. I played (pretty roughly mind you) the first bit of Chopin's Nocturne in C# Minor using the VI Labs Modern U Piano. An unconventional choice of piano
perhaps, but I like the clear bright tone of it. Here's the original where I am using the PM40 Close Mic plus the Room mic. You'll have to excuse the mistakes, I'm pretty rusty.



I put this through 4 different reverbs, where I tried for a kind of bright smallish performance hall. I took off the room mic from the Modern U, leaving just the close mic on for the reverb tests, and I also had each reverb as an insert at 100% wet. Then I did my best to level match and slapped a limiter on all four tracks but only to bring the levels up a bit (no compression is applied).

In no particular order the reverbs are:

- Cubase REVerence: Music Academy A (this is an IR verb included with Cubase Pro)
- Pro-R: Bright Large Room
- Cinematic Rooms: Piano Chamber
- Seventh Heaven: Large & Bright (this is the only preset I tweaked by reducing the tail a bit to match the rest better).

I'm not going to tell you which audio file corresponds to which reverb, but I am very curious to see if anyone has a strong opinion about which one they like best.


----------



## jcrosby (Nov 1, 2021)

I like 1 & 3, 2 & 4 sound kind of cold. The faster trills also sound blurry on 2 & 4, & my attention gets drawn to the hammer noise. Overall my 1st impression is that I like 1 the most. It has a warmer low end and the stereo image is really nice. It also sounds the most intimate which suits the piece well... It (1) is noticeably scooped and colors the original quite a bit, but at least in the context of the source audio I find it flattering.. 3 also has a nice intimacy as well, and the image is more pleasant (at least to me... It's all subjective though, someone who prefers the character of a hall might hate 1 & 3)...

I'm going to guess 1 is the IR, but it's honestly a guess. The image and intimate vibe strike me as more realistic. If that's CR then color me impressed, as I personally feel it sounds noticeably warmer and the trills don't get blurry..

(I only own 1 of the 4 reverbs you mention, Pro-R btw...)


----------



## Dietz (Nov 2, 2021)

Pier said:


> So the idea is to record multiple mic locations and then mix those to produce the final IR?


Yes, this is the way many AltiVerb IR-sets were created (if I'm not mistaken). 

However, VSL's MIR goes one step further and offers free*) access to all these different locations: You can choose the position of the source signal in a selected Venue, you can choose its directivity (i.e. the way the signal interacts with the room), you can choose one or two of up to four microphone positions, and you can decode up to eight virtual microphone capsules with variable characteristics from these microphone positions, based on the Ambisonics format in which MIR's impulse responses are recorded.

_*) ... "free" in the sense that MIR interpolates between the discrete source positions on a Venue's stage._

I don't want to take this thread off-topic too far. If you're interested in how MIR works, there's a little primer on VSL's website called "Think MIR!".


----------



## Living Fossil (Nov 2, 2021)

tpoots said:


> I put this through 4 different reverbs, where I tried for a kind of bright smallish performance hall. I took off the room mic from the Modern U, leaving just the close mic on for the reverb tests, and I also had each reverb as an insert at 100% wet.


Honestly, apart from the original track they all sound terrible.
And at this point it's not about the reverbs per se (although Nr. 1 adds so much boominess to the signal that it's massively disturbing), but the usage of them.
(in fact, all of these reverbs can sound really good, but not at 100% wet...)

One thing that is obvious on my studio monitors is that in all of your cases the perceived depth isn't constant.
The boomy 100 Hz range sounds close and the higher notes are far away.
(the effect is less prominent in example 2)
Also, the distance suggested by the reverbs doesn't correlate with the width of the instrument.

If you want a piano sound from far away (_if _you want that) you have to massively reduce the stereo width. The direct sound of an instrument becomes more and more point-like the farther away it is.

If you really want to make the instrument sound far away, you could get better results with, e.g., the SP2016, or a combo of Waves S1 or Precedence (to narrow the signal) and the IRCAM verb.
However, the usual reason to make a piano sound from far away is when it's a part (not a soloist!) of the orchestra.

A solo piano sounds much better if the instrument is quite close to the listener (plus a nice, tasteful tail). There's a reason people try to get seats close to the virtuoso and not in the back of the hall.... Also, keep in mind that soloistic literature for piano was often performed in salons (i.e. big living rooms of rich people) rather than in huge halls.

Edit:
if you want to experiment with pushing the instrument back, you could also try the free Proximity plugin (TDR, iirc) as an insert and then repeat the experiment with your reverbs on a send (with less reverb...)
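The width reduction described above is, at its simplest, a mid/side trick. Here is a minimal sketch (the function name is invented; real imager plugins do this per frequency band and with more care):

```python
import numpy as np

def narrow_width(left, right, width):
    """Mid/side stereo width control: width=1.0 leaves the signal untouched,
    width=0.0 collapses it to mono (the point-like distant source)."""
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right) * width   # scale only the side component
    return mid + side, mid - side
```

At `width=0.0` both outputs are the mid signal, which is exactly the narrow, point-like direct sound a distant instrument presents; the room reverb added afterwards can then spread back across the full stereo field.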


----------



## blaggins (Nov 2, 2021)

Living Fossil said:


> Honestly, apart from the original track they all sound terrible.


Just gave them all a fresh listen. I think I agree with you about 1 & 2, but to me 3 & 4 don't sound too bad. 


Living Fossil said:


> And at this point it's not about the reverbs per se (although Nr. 1 adds so much boominess to the signal that it's massively disturbing), but the usage of them.
> (in fact, all of these reverbs can sound really good, but not at 100% wet...)


Right, I wouldn't use them 100% wet in context but I was really trying to bring out the differences of the reverbs. I found that as I blended them with the dry signal I started to have a harder time telling them apart. I do recognize this probably has a lot to do with my lack of mixing experience though...



Living Fossil said:


> One thing that is obvious on my studio monitors is that in all of your cases the perceived depth isn't constant.
> The boomy 100 Hz range sounds close and the higher notes are far away.
> (the effect is less prominent in example 2)
> Also, the distance as suggested by the reverbs doesn't correlates with the width of the instrument.


Having a hard time hearing this effect, to be honest. I don't know if it's a headphones-vs-monitors thing or a lack of auditory finesse on my part.



Living Fossil said:


> If you want a piano sound from far away (_if _you want that) you have to massively reduce the stereo width. The direct sound of an instrument gets more and more punctual if it's more far away.


I wasn't going for far away per se, I was trying for a mid-sized performance hall suited to piano recitals like you might find at any university with a music school, with the listener sitting at a good spot. BUT, I do have lots of questions... When you say "massively reduce the stereo width" what exactly do you mean? The reverb experiments only used a single close mic from the Modern U VST, would that not already have a narrow stereo width by the very nature of the recording? I think I'm not understanding something very fundamental here.



Living Fossil said:


> If you really want to make the instrument sound as if it would be far away, you could get better results e.g. with the SP2016 or a combo of e.g. Waves S1 or Precedence (to narrow the signal) and the IRCAM verb.
> However, the usual reason to make a piano sound from far away is when it's a part (not a soloist!) of the orchestra.
> 
> A solo piano sounds much better if the instrument is quite close to the listener (plus has a nice, tasteful, tail). There's a reason people try to get seats close to the virtuoso and not in the back of the hall.... Also, keep in mind that soloistic literature for pianos was often performed in salons (i.e. big living rooms of rich people) rather than in huge halls.
> ...


I appreciate the input @Living Fossil. I'm not trying to make it seem far away in this case though, just trying to see if I can put a reverb on a close mic and get a really nice sense of space, in a tasteful and realistic way. What would you advise for this?


----------



## Living Fossil (Nov 2, 2021)

tpoots said:


> When you say "massively reduce the stereo width" what exactly do you mean? The reverb experiments only used a single close mic from the Modern U VST, would that not already have a narrow stereo width by the very nature of the recording? I think I'm not understanding something very fundamental here.


Your original sound contains a lot of stereo information. Its stereo field is quite wide (see picture 1).
I made two other screenshots, one with a narrower stereo field, coming from the left (picture 2),
and finally one where your piano comes from the left in mono (picture 3).

Keep in mind I'm talking about the width of the direct signal. Of course the room information extends again to the full stereo width.
Picture 4 is the mono source in the Ircam verb (at 40% wet) and picture 5 is the same at 100% wet.
Still, in both of these cases your ear would locate a piano that's to the left and quite some distance away. 

Finally I made a drawing that shows what I'm talking about.
(next post)


----------



## Living Fossil (Nov 2, 2021)

And here's the image I drew.
Both ears perceive both low and high notes, yet the sounds travel different distances to each ear -> this creates the perceived location of the sound.
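That path-length difference can be put in rough numbers with a standard spherical-head approximation (Woodworth's formula). This is only a sketch: the 8.75 cm head radius is a conventional assumed value, and `itd_seconds` is a made-up name.

```python
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Interaural time difference for a distant source at the given azimuth,
    using Woodworth's spherical-head model: ITD = r/c * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return head_radius_m / c * (theta + math.sin(theta))

# A source straight ahead (0 degrees) arrives at both ears simultaneously;
# a source hard left or right differs by roughly two thirds of a millisecond.
```

Those fractions of a millisecond, together with level and timbre differences between the ears, are what the brain decodes as the location of the source.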


----------



## Pier (Nov 2, 2021)

Living Fossil said:


> Both ears perceive both, low and high notes, yet they travel distant ways -> this creates the location of the sound.


Isn't this called "cross feed"?


----------



## Consona (Nov 2, 2021)

CGR said:


> I'm an owner of Spaces 1 and I'm grumpy!


Discounted upgrade price is 185 dollars and I can't even resell the original version...

Just grumpily waiting for the SP2016 sale.


----------



## blaggins (Nov 2, 2021)

Living Fossil said:


> Your original sound contains a lot of stereo information. Its stereo field is quite wide (see picture 1).
> I made two other screenshots, one with a narrower stereo field, coming from the left (picture 2),
> and finally one where your piano comes from the left in mono (picture 3).


Thanks for the detailed response @Living Fossil. I see what you mean, although I'm a little surprised (I probably shouldn't be, since I guess this is what an omni-directional microphone is for...). 

In terms of narrowing the stereo field... I played around a bit with the SuperVision plugin in Cubase, which has a vectorscope so I can see what I'm doing. It seems I can use the Imager plugin to narrow the stereo image by frequency band, so I can narrow lower frequencies more than higher ones, which seems like a good choice since the ear isn't as good at localizing low frequencies as high ones anyway, right? Then I can use the stereo combined panner to move the whole sound a bit to the left. If I turn off the room mic and just leave the close mic (no reverb) I get something like this:

(that's just a snapshot, not an average; I can't seem to figure out how to get the plugin to show me an average across the whole audio).
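That band-by-band narrowing can be sketched by splitting the signal at a crossover and applying a different mid/side width to each band. This is a rough illustration with invented names and parameters, not how Cubase's Imager is actually implemented:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def band_narrow(left, right, fs, crossover_hz=300.0,
                low_width=0.0, high_width=0.7):
    """Narrow the low band more than the high band, via per-band mid/side."""
    sos_lo = butter(4, crossover_hz, btype="low", fs=fs, output="sos")
    sos_hi = butter(4, crossover_hz, btype="high", fs=fs, output="sos")
    out_l = np.zeros_like(left)
    out_r = np.zeros_like(right)
    for sos, width in ((sos_lo, low_width), (sos_hi, high_width)):
        l, r = sosfilt(sos, left), sosfilt(sos, right)
        mid = 0.5 * (l + r)
        side = 0.5 * (l - r) * width   # scale side component per band
        out_l += mid + side
        out_r += mid - side
    return out_l, out_r
```

With `low_width=0.0` everything below the crossover is collapsed to mono while the top retains some width, which is the usual justification for mono-ing the lows before pushing a source back with reverb.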

Here's the resulting close-mic-only track with a narrower stereo image:


Would this be a good way to proceed with preparing the audio for the application of reverb?

And if it is, where does one go from here? If I use one of those reverbs as a send, I'll still be getting some amount of the original signal so the piano will still feel very up front. That's probably not a bad thing for this case since you'd want to be sitting close to the stage for a soloist, as you have said.

But if I have a bunch of orchestral instruments as well... then it would get a little hairy, since some of them do need to be farther away than others. I guess that's probably a moot point if I use Spitfire stuff anyway, as I have the room mic and outriggers and such to create a sense of space if I want it; I just can't really change the space that I'm in. 

What role would SP2016's ER's play in creating a more realistic sense of distance from the audience to the piano?


----------



## Living Fossil (Nov 2, 2021)

Pier said:


> Isn't this called "cross feed"?


Iirc the term is used in the context of headphones, where the left and right signals are normally strictly separated unless some crossfeed is added at some stage (iirc I came across such a device for headphones 25 years ago; it aimed to simulate listening in a space...)


----------



## Living Fossil (Nov 2, 2021)

tpoots said:


> What role would SP2016's ER's play in creating a more realistic sense of distance from the audience to the piano?


This new version feels *much* better imho...

Regarding SP2016, you could demo it.
On some sources it's fantastic, on others I don't like it that much.
As mentioned, I have different tools for this task, such as Precedence and Schoeps Mono Upmix (it also works with stereo signals). And if nothing else works, sometimes DearVR Pro does.
But in my experience the results do indeed depend on the sources, so I try out the usual suspects whenever there is a new scenario.

In any case, before you spend money, you should also try the free Proximity, and the free version of Panagement.


Proximity | Tokyo Dawn Records
A distance pan pot offering intuitive access to psychoacoustic models. Finalist of the KVR Developer Contest '12.
www.tokyodawn.net


----------



## Pier (Nov 2, 2021)

Living Fossil said:


> This new version feels *much* better imho...
> 
> Regarding SP2016 you could demo it.
> On some sources it's fantastic, on other i don't like it that much.
> ...


Are there other plugins like Proximity and Panagement?

I can't download Proximity but the Panagement demos sound fantastic!

Edit:

I was able to download Proximity from this URL which includes everything Vladg Sound did with TDR:



https://www.tokyodawn.net/labs/vladgsound/vladgsound_stuff.zip


----------



## blaggins (Nov 2, 2021)

Living Fossil said:


> This new version feels *much* better imho...
> 
> Regarding SP2016 you could demo it.


Ugh, fine.  I guess I'm deep enough into this reverb testing madness that I may as well.

Here's a new round of experiments. I did another set of A/B tracks using the narrower and slightly-left-panned track as the base (from above).

All of these are still using the four reverbs from above:
- Cubase REVerence: Music Academy A (this is an IR verb included with Cubase Pro)
- Pro-R: Bright Large Room
- Cinematic Rooms: Piano Chamber
- Seventh Heaven: Large & Bright (this is the only preset I tweaked by reducing the tail a bit to match the rest better).

but once again the order doesn't match the list, although it does match the previously posted tracks.

Also I am using all reverbs as SENDS this time, not as inserts. All the sends are at -3dB from the level of the instrument track. (Well they are at -6dB pre-fader but the track itself is at -3dB so that should be an equivalent statement.)

I have each of them with and without the SP2016, which, where present, sits on the track as an insert with the following settings, in an attempt to push the piano back a bit and add some realistic ERs:
- 100% wet
- 19ms predelay
- 740ms decay
- 40% position fader
- 100% diffusion fader

To my ears it sounds like the non-SP2016 tracks are like if I was sitting in the first couple of rows and the SP2016 tracks are more like sitting in the back of the auditorium/hall. I did my best to level match everything as well.

Still trying to decide which one I like best myself, and how much I think the SP2016 is doing a good job with positioning in a front-to-back direction. Very curious to get some thoughts on this if anyone has the patience to go through all this. 

Reverb 1, no SP2016


Reverb 1, with SP2016 as insert


Reverb 2, no SP2016


Reverb 2, with SP2016 as insert


Reverb 3, no SP2016


Reverb 3, with SP2016 as insert


Reverb 4, no SP2016
https://soundcloud.com/tpoots/chopi...no-sp2016?si=92edaae35f034387a3d8505a7817c2ec

Reverb 4, with SP2016 as insert
https://soundcloud.com/tpoots/chopi...-4-sp2016?si=06a0dad96006424399785b9443f2a847


----------



## Consona (Nov 2, 2021)

Living Fossil said:


> Regarding SP2016 you could demo it.
> On some sources it's fantastic, on other i don't like it that much.


Such as? And why?


----------



## Living Fossil (Nov 2, 2021)

Pier said:


> Are there other plugins like Proximity and Panagement?


There is also Precedence, which by concept can be paired with Breeze2.
I've used that combo quite a bit, but I also use Precedence with other reverbs.

And then there are some others that I haven't tried out yet... (like Ircam's SPAT, which is probably the best of them all...)


----------



## Living Fossil (Nov 2, 2021)

Consona said:


> Such as? And why?


It's hard to give a precise answer...
As mentioned above, I often try out different combinations (often when matching different libraries).
Some days ago I found SP2016 wonderful on CSS in combination with the IRCAM verb, but preferred Precedence for MSB (plus IRCAM verb). 
And on djembes and saxes (which I record myself) I usually prefer Schoeps Mono Upmix (also on the stereo-mic'd djembe) to SP2016 and to Precedence; these then have a send to IK's Sunset Studio Reverb with one of the Live rooms (which, btw, is a convolution reverb).
Somehow spatialisation is always a work in progress...


----------



## Zanshin (Nov 2, 2021)

Pier said:


> Are there other plugins like Proximity and Panagement?
> 
> I can't download Proximity but the Panagement demos sound fantastic!


Panagement 2 (VST/AU) by Auburn Sounds - Audio Plugin Deals
Panagement gives you raw power over your stereo tracks. Now only $11.99 instead of $52 for a limited time only, don't miss out!
audioplugin.deals





On sale for $12 here.

MIR Pro does this too as part of its process. My go-to setup now for sound sources that need placing in an environment is MIR Pro on an insert and then CRP as a send "to taste". I lower the IR length in MIR a bit, and CRP is set up as a 100% wet, tail-only reverb (no ER).


----------



## blaggins (Nov 2, 2021)

Zanshin said:


> Panagement 2 (VST/AU) by Auburn Sounds - Audio Plugin Deals
> 
> 
> Panagement gives you raw power over your stereo tracks.Now only $11.99 instead of $52 for a limited time only, don't miss out!
> ...


Nice find on Panagement. Still, I'd be worried that it's $12 wasted on my part if I just end up getting a more comprehensive solution down the road.

You use MIR as an insert in your DAW? For some reason I have this impression that folks don't usually use MIR outside of VEP, and that it's more or less the best thing for VSL instruments but doesn't have a lot of advocates among the non-VSL users. I think I've read things to that effect on this forum anyway, am I way off base?


----------



## Trash Panda (Nov 2, 2021)

If you're going to buy Panagement, it's best to get it from the developer directly for the full $52 if the goal is to support them.

Functionally, all you get from the full versus the free version is the delay module and the reverb chip, so there's not much point in buying the full version even on deep discount, since all the magic is in the panning and distance part of the plugin.






Auburn Sounds - Panagement, free reverb audio plug-in
www.auburnsounds.com


----------



## Zanshin (Nov 2, 2021)

Also I want to say the official video for Panagement is one of the most bizarre things I have watched lately!!


----------



## Zanshin (Nov 2, 2021)

tpoots said:


> You use MIR as an insert in your DAW? For some reason I have this impression that folks don't usually use MIR outside of VEP, and that it's more or less the best thing for VSL instruments but doesn't have a lot of advocates among the non-VSL users. I think I've read things to that effect on this forum anyway, am I way off base?


It's not just for VSL stuff, for example:


TLDR: Alan Meyerson used it while mixing Mank. Everyone recorded remotely and sent their recordings to him. Anyway, it's just a VST; I don't use VEP at all.

EDIT: Also I think you can demo without a dongle these days, so if you want to try all the options, it's another one (a good one IMO).


----------



## cedricm (Nov 2, 2021)

Have any of you tested Nugen Audio's Paragon ST?


----------



## Dietz (Nov 2, 2021)

tpoots said:


> You use MIR as an insert in your DAW? For some reason I have this impression that folks don't usually use MIR outside of VEP, and that it's more or less the best thing for VSL instruments but doesn't have a lot of advocates among the non-VSL users.


I was also asked a similar question a few weeks ago in an interview I gave to Giovanni Rotondo for his blog "Film Scoring Tips". So I put together a very strange and quite arbitrary playlist on Spotify, consisting of an extremely diverse collection of music from the last 10 years or so, with only one common denominator: No VSL instruments were played _(ok, admittedly quite a few here and there in some of the scores  ...)_, but the mixes use MIR Pro (non-VEP-based plug-in) as their main source for panning and depth. ... of course, other effects got employed, too - after all, we're talking about full-fledged commercial releases here, not a software showcase.

-> 


_... don't tell me that you don't like the music, because I just mixed it. 8-)_


----------



## blaggins (Nov 2, 2021)

Thanks for the input on MIR @Zanshin and @Dietz, the Alan Meyerson interview was very interesting. I watched a bunch more videos to figure out what all it does, the feature list is very impressive. Given I'm demoing all this expensive stuff already maybe I should put MIR Pro on the list too, though it might take the cake expense-wise after adding room packs and such, but I do admit I'm curious what would happen if I just slapped the piano track onto a MIR stage.


----------



## Consona (Nov 2, 2021)

This is how Alan uses SP2016 Stereo Room.


----------



## RicardoSilva (Nov 2, 2021)

tpoots said:


> (The motivation for this question: EW Spaces 2 is 60% off.)
> 
> I've been trying to read every reverb thread on here I can find but I'm still left wondering, do I need a fancy library of convolution reverbs? I have what I think is pretty good coverage with algorithmic reverbs: Valhalla Room, Fabfilter Pro-R, and Eventide Blackhole. As far as sample libs I mainly have Spitfire SSO String/Brass/Woods libraries and I have been told (and have had mild success with) just using the included mics to create a realistic sense of placement and space, I assume a bit of algorithmic reverb to blend is good enough in any case, and Pro-R seems lovely for this. I also have the included IR reverb in Cubase but I haven't used it much and it gets panned quite a bit in any case.
> 
> ...


After trying many for the past 3 years, including Spaces 2, I found MIR Pro to be incomparable; the amount of control you have is unparalleled, you can even rotate the instruments. Below I put a piece I made, you only need to listen to the first 20 seconds, the spatialization is superb. Kind regards.


----------



## justthere (Nov 2, 2021)

MIR sounds terrific to be sure - very flexible, great for placing things in a space, very usable spaces, and to me the right philosophy for reverb when writing for virtual orchestra - and if I had a much, much faster computer I would likely be using it, but for my purposes (scoring and mixing as I go) which requires that I play things in realtime, I found the extra layers of buffering and latency to be too much to be convenient. Since it's not the sort of software I would use as a send effect, and since I mix as I go, it wound up just not working for me. Mixing only would be a different story.

My go-to has been Altiverb. The stages are very useful, especially as they can play a great part in fleshing out virtual instruments that are full-range but utterly dry, as is the case for modeled instruments or the older VSL library. I will often use the positioner feature to orient sections, and this has worked very well for me. I also will use a tail reverb of some kind as the stage settings I use are fairly short, but this actually emulates what it is to record an orchestra on a stage and add reverb to that. Pretty much always algorithmic, or maybe a plate even, depending on the desired sound.

QL Spaces sounds very good also. One thing I think they do very well is not just capture a space but use a great chain and great speakers to do so, which imparts extra character, and I like their choices. It's not as flexible as most others but is a good tool to have on hand.

I just demo'd Cinematic Rooms Pro, which has settings and spaces that are not in the basic version. I very much liked their Orchestration Hall. Lots of good stuff, and useful control over the sound. It was a little strange to me getting balances right with it - easy to overshoot in either direction - but very pleasant. It's on the list.

Inspirata Pro is kind of in a class by itself. It sounds very good, detailed and clear - these folks definitely know a lot about acoustics. For placement beyond left-right for multiple instruments, it definitely requires a multichannel send, but that's kind of great too. 

Valhalla Room is unbelievable for fifty bucks. It's not intended to be a realistic imaging reverb, but rather a lovely tail, and it does that very well. Never sorry I used it on a vocal.

For some things that can be placed in a section, Virtual Soundstage has been very useful. I don't know that it's still being supported, but it's a great idea, similar to MIR - it's like MIR but with only early reflections, and no tail. And it is all about placement - left-right and front-to-back in a room with various mic position types and movable mics. One thing I like using it for is essentially emulating spot mics. I have heard some say it introduces phase problems - and that's generally because they aren't feeding it correctly, I think.

Speaking of spot mics - here's an opinion that may or may not resonate: I am utterly over the million-mic libraries. Pretty much every last one of them is predicated on using a mix of all the mics, or at least it's clear which the main ones are, because the close mics rarely sound like something you would use on their own. And the reason is: that's how close mics on an orchestra are. Generally not full-range, and meant to add focus to a main position. So though it's great that, say, Spitfire gave us three or five mic perspectives on their libraries up to BBC etc., it's not like I ever use the close ones on their own. And the main ("intended") sound still sounds like Air Lyndhurst. Which is great, with one caveat: the agility of any library is compromised by lots of mic positions and room reverb time. Crossfading dynamics, legato transitions - all of that suffers, because the time differences between the mics will always add a smear. So to my taste, pretty much all of the newer libraries that follow this trend are a bit sluggish.

I'd been hoping that, rather than pursuing that, library makers would instead embrace and develop impulse responses of the spaces they record in - infinitely more flexible, more variation, increased usability. It would be nice to take a string section I liked and place it anywhere, in a way that didn't sound like I was in one hall playing a recording made in a different hall. This is one reason why I love working with modeled instruments, besides the expressiveness and continuously variable, seamless dynamics changes: those instruments aren't happening anywhere, so they can be placed anywhere.
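Since this thread is "talk to me about convolution reverbs" - the mechanism behind all of the above (Altiverb, Spaces, the IRs I wish library makers would ship) is just convolving the dry signal with an impulse response of the room. A minimal sketch in Python, using a synthetic decaying-noise "room" rather than a real hall IR, with a simple wet/dry blend (the normalization and mix scheme here are my own choices, not any plugin's):

```python
import numpy as np
from scipy.signal import fftconvolve

def convolve_ir(dry, ir, wet=0.3):
    """Convolve a dry signal with an impulse response and blend wet/dry."""
    tail = fftconvolve(dry, ir)            # full convolution: len(dry)+len(ir)-1
    peak = np.max(np.abs(tail))
    if peak > 0:
        tail = tail / peak                 # normalize so the wet tail can't clip
    out = np.zeros_like(tail)
    out[: len(dry)] = (1.0 - wet) * dry    # dry signal, attenuated by the mix
    out += wet * tail                      # add the convolved room response
    return out

# Synthetic example: a click through a 0.5 s exponentially decaying noise "room"
sr = 48_000
rng = np.random.default_rng(0)
n_ir = sr // 2
ir = rng.standard_normal(n_ir) * np.exp(-6.0 * np.linspace(0.0, 1.0, n_ir))
dry = np.zeros(sr)
dry[0] = 1.0                               # an impulse: the driest possible input
wet_signal = convolve_ir(dry, ir)
```

Real-time plugins do the same math with partitioned convolution so the latency stays low, but the sound is entirely determined by the IR - which is why capturing IRs of the actual recording space would be so flexible.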


----------



## RicardoSilva (Nov 2, 2021)

justthere said:


> MIR sounds terrific to be sure - very flexible, great for placing things in a space, very usable spaces, and to me the right philosophy for reverb when writing for virtual orchestra - and if I had a much, much faster computer I would likely be using it, but for my purposes (scoring and mixing as I go) which requires that I play things in realtime, I found the extra layers of buffering and latency to be too much to be convenient. Since it's not the sort of software I would use as a send effect, and since I mix as I go, it wound up just not working for me. Mixing only would be a different story.
> 
> My go-to has been Altiverb. The stages are very useful, especially as they can play a great part in fleshing out virtual instruments that are full-range but utterly dry, as is the case for modeled instruments or the older VSL library. I will often use the positioner feature to orient sections, and this has worked very well for me. I also will use a tail reverb of some kind as the stage settings I use are fairly short, but this actually emulates what it is to record an orchestra on a stage and add reverb to that. Pretty much always algorithmic, or maybe a plate even, depending on the desired sound.
> 
> ...


Hello, strange that you experienced buffering and latency with MIR. I too mix and compose as I go and never encountered that problem; my piece above has 68 instrument instances and, even with my PC struggling, I still had no buffering or latency problems. That's really odd. Kind regards.


----------



## justthere (Nov 2, 2021)

RicardoSilva said:


> Hello, strange that you experienced buffering and latency with MIR. I too mix and compose as I go and never encountered that problem; my piece above has 68 instrument instances and, even with my PC struggling, I still had no buffering or latency problems. That's really odd. Kind regards.


Hi Ricardo - as I said, I would need a faster computer. Mine is a 12-core 2013 Mac Pro. When my template was fully opened the buffer size was often 1024 at 48k, so any buffer multiplier was fairly dramatic, and even at smaller buffers it was pretty noticeable. I will definitely be in the market for a new machine in a little while, but it’s an odd time on the Mac side as chip technology shifts.


----------



## RicardoSilva (Nov 3, 2021)

justthere said:


> Hi Ricardo - as I said, I would need a faster computer. Mine is a 12-core 2013 Mac Pro. When my template was fully opened the buffer size was often 1024 at 48k, so any buffer multiplier was fairly dramatic, and even at smaller buffers it was pretty noticeable. I will definitely be in the market for a new machine in a little while, but it’s an odd time on the Mac side as chip technology shifts.


Hello, I see. Templates are a killer. I tried them and made my own, but I want the power of my machine to go to live instruments, not be wasted on instruments I'm not using for the sake of having them there; that's why I don't use one. I like to rummage through my collection. I'm using 8 cores (Windows) and looking to upgrade to 16, but yes, we live in a very bad time to upgrade anything. I think in a year's time my new processor will be available, if I'm lucky. Kind regards.


----------



## cedricm (Nov 3, 2021)

Hey Ricardo, I really like "Rainforest"!
How about making a video on how you did it?


----------



## RicardoSilva (Nov 3, 2021)

cedricm said:


> Hey Ricardo, I really like "Rainforest"!
> How about making a video on how you did it?


Hello Cedric, thank you so much for taking the time to listen and for bringing me a smile with your comment. Making a video about it is possible in the future, I guess. That piece took me almost 3 months to make; I was so exhausted that I can't listen to it even now. I am really busy right now, but you've ignited a flame that will keep burning inside my head. Knowing me, one day I will just jump to the PC and do it right there and then. Thank you once again, Cedric, for brightening my day. Kind regards.


----------



## cedricm (Nov 3, 2021)

RicardoSilva said:


> Hello Cedric, thank you so much for taking the time to listen and for bringing me a smile with your comment. Making a video about it is possible in the future, I guess. That piece took me almost 3 months to make; I was so exhausted that I can't listen to it even now. I am really busy right now, but you've ignited a flame that will keep burning inside my head. Knowing me, one day I will just jump to the PC and do it right there and then. Thank you once again, Cedric, for brightening my day. Kind regards.


Looking forward to it! 
I absolutely can understand why Rainforest took you 3 months, it's a masterpiece. 
I will be very proud indeed, the day I can present a piece of such quality.


----------



## RicardoSilva (Nov 3, 2021)

cedricm said:


> Looking forward to it!
> I absolutely can understand why Rainforest took you 3 months, it's a masterpiece.
> I will be very proud indeed, the day I can present a piece of such quality.


What a wonderful thing to say, thank you very much. I am very grateful for your kindness. Kind regards, Cedric.


----------



## justthere (Nov 20, 2021)

RicardoSilva said:


> Hello, I see. Templates are a killer. I tried them and made my own, but I want the power of my machine to go to live instruments, not be wasted on instruments I'm not using for the sake of having them there; that's why I don't use one. I like to rummage through my collection. I'm using 8 cores (Windows) and looking to upgrade to 16, but yes, we live in a very bad time to upgrade anything. I think in a year's time my new processor will be available, if I'm lucky. Kind regards.


16 cores would be a big leap, but clock speed and the efficiency of the system are also crucial. 

Regarding templates - the only way to do them, I think, is by making an instrument per track and deactivating everything at the start. So you have the toy piano and kazoo and viola da gamba in there but inactive, so they aren’t using any CPU cycles until you use them. For the last show I did, I started with only a piano active with a small buffer size, so I could sketch with it at low latency, and if there were, say, a particular clarinet line I wanted to play in I could do it at the lowest possible latency. Then I would turn the sketch into string parts and so on, which being orchestration could tolerate having a slightly higher latency (larger buffer), and by the end of the cue it would be full orchestra and rock band and so on, but I would be mixing, not performing. Still not ideal, but it gets one through a project. 

Thanks for all of your work on music that makes these libraries actually speak.


----------



## cedricm (Nov 28, 2022)

Zanshin said:


> It's not just for VSL stuff, for example:
> 
> 
> TLDR: Alan Meyerson used it while mixing Mank. Everyone recorded remote and sent their recordings to him. Anyway, it's just a VST, I don't use VEP at all.
> ...




I just watched the Mix with the Masters video on Meyerson, Trent Reznor, and Atticus Ross for the film Mank by David Fincher.
I'd say 20% of it relates to MIR PRO Stereo.

"a very elegant solution to a very difficult problem."

The very difficult problem: during the pandemic, each musician was recording himself/herself at home.
Meyerson had to put them in a space that gave the illusion of them all being in the same room, as if the orchestra had been recorded in a hall.
Solution: mostly MIR Pro, plus a tiny bit of Seventh Heaven and Cinematic Rooms Pro.

Recommended viewing for people with the same issue.


----------



## X-Bassist (Nov 28, 2022)

Dewdman42 said:


> The cost for Spaces II new right now on sale is lower than the cost to upgrade from Spaces I. FWIW.


This is what ended my EW purchases. So ridiculous. Every year I’m surprised they have no upgrade sales for Spaces. I haven’t used Spaces 1 since. Crazy that they would alienate existing paying customers.

I use Cinematic Rooms and the Lexicon Reverb bundle. The two best-sounding to me.


----------



## justthere (Dec 17, 2022)

X-Bassist said:


> This is what ended my EW purchases. So ridiculous. Every year I’m surprised they have no upgrade sales for Spaces. I haven’t used Spaces 1 since. Crazy that they would alienate existing paying customers.
> 
> I use Cinematic Rooms and the Lexicon Reverb bundle. The two best-sounding to me.


You don’t use it because it no longer works, or for philosophical reasons?


----------



## X-Bassist (Dec 18, 2022)

justthere said:


> You don’t use it because it no longer works, or for philosophical reasons?


I like the sounds OK, but considering the extent of the controls is just input and output levels, it's missing a lot compared to other reverbs. I was kind of hoping Spaces 2 would be an improvement, but I'm not willing to spend a lot to find out. Both CR and Lexicon have more extensive controls that let you tweak the sound (ER times, size, high rolloff I use all the time), and needing to tweak comes up all the time depending on the song and the part. Sometimes an EQ before or after helps, but it's more to set up.


----------

