# Orchestral Placement Using PURELY Algorithmic Reverbs?



## PaulWood (Mar 26, 2009)

Hi everyone,

More and more convolution reverbs are being used for orchestral "room" placement (especially with all the scoring stage and concert hall IRs available). Has anyone had good results from doing room placement solely with various instances of "high end" algorithmic reverbs (such as CSR)?

What would be the pros/cons of this method do you think?

P


----------



## Waywyn (Mar 26, 2009)

For me personally the pros for IRs are:

1. if you have an IR from a certain position, it already carries all or most of the information you need to make your instrument sound as if it were in that position - more or less

2. to me it always sounds more natural than an algo reverb. This doesn't necessarily mean better, but to compare it to samples ... an IR to me feels like a prerecorded phrase from, e.g., a string run, while the algorithmic reverb is more like single samples ...

... which leads me to the advantages of the algo reverb 

1. Speaking of the single-sample comparison: with an algo reverb you have many more options for changing the room settings and characteristics ... but if you want to go physically correct, it's not really that easy.

2. In certain situations an algo reverb feels much better ... but this totally depends on what you are aiming for.


----------



## synthetic (Mar 26, 2009)

I started a thread about Alan Meyerson a week ago where he talked about using Haas panning for placement. This is essentially the same as algorithmic reverb as far as L/R placement. 
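For readers who haven't tried it: Haas panning just delays one channel by a few milliseconds so the image pulls toward the earlier side, without touching levels. A minimal sketch (the `haas_pan` helper is hypothetical, not anything from Meyerson's actual chain):

```python
def haas_pan(mono, sample_rate, delay_ms, delayed_side="right"):
    """Place a mono source with the Haas (precedence) effect: the ear
    localizes toward the earlier channel when the second copy arrives
    within roughly 1-30 ms."""
    delay_samples = int(sample_rate * delay_ms / 1000.0)
    delayed = [0.0] * delay_samples + list(mono)   # late copy
    direct = list(mono) + [0.0] * delay_samples    # pad to equal length
    if delayed_side == "right":
        return list(zip(direct, delayed))  # (L, R) sample pairs
    return list(zip(delayed, direct))

# A source perceived left of center: the right channel lags by 12 ms.
stereo = haas_pan([1.0, 0.5, 0.25], 48000, 12.0)
```

Beyond about 30-40 ms the delayed copy starts to read as a discrete echo rather than placement, which is why the short delays matter.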

I know that TC has room placement technology in their VSS4 reverb algorithm, but that's not in Powercore and it would get expensive to do a whole mix with that technique – it would require a rack of Reverb 4000s or M6000s. I haven't used either box but it seems interesting. 

IR for room placement seems like a great solution, but no one has done it right IMO. What we really need are samples taken from multiple locations in the room (not simulated like Altiverb) with the decay portion of the IR chopped off so you only hear the early reflections. Then you use one reverb for the overall decay.
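The "chop off the decay" idea above is easy to sketch offline: truncate the IR after the early-reflection window and fade the cut so it doesn't click. The window lengths here (80 ms ER, 10 ms fade) are illustrative assumptions, not a standard:

```python
def early_reflections_only(ir, sample_rate, er_ms=80.0, fade_ms=10.0):
    """Keep only the early-reflection portion of an impulse response,
    with a short linear fade-out so the truncation doesn't click."""
    keep = int(sample_rate * er_ms / 1000.0)
    fade = int(sample_rate * fade_ms / 1000.0)
    out = list(ir[:keep])
    for i in range(fade):
        idx = keep - fade + i
        if 0 <= idx < len(out):
            out[idx] *= 1.0 - (i + 1) / fade   # ramp down to zero
    return out

# Example: keep the first 80 ms of a 1-second IR at 48 kHz.
er = early_reflections_only([1.0] * 48000, 48000)
```

Each position in the room would get its own truncated IR, and a single shared reverb then supplies the decay for everyone.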


----------



## PaulWood (Mar 26, 2009)

waywyn said:


> 2. to me it always sounds more natural than an algo reverb. This doesn't necessarily mean better, but to compare it to samples ... an IR to me feels like a prerecorded phrase from, e.g., a string run, while the algorithmic reverb is more like single samples ...



That's a good analogy actually.

I appreciate that there is more "immediacy" with the room - you have the response of the room as soon as you load the IR - but in reality I seem to spend so much time EQing and damping that the fact that I have a room right there seems to be negated...



waywyn said:


> 2. On certain situations an algo reverb feels much better ... but this totally depends on what you are aiming for.



I always "wash" the whole mix with an algo reverb, but for room placement specifically, it seems that convo is by far the preferred method.




synthetic @ Thu Mar 26 said:


> IR for room placement seems like a great solution, but no one has done it right IMO. What we really need are samples taken from multiple locations in the room (not simulated like Altiverb) with the decay portion of the IR chopped off so you only hear the early reflections. Then you use one reverb for the overall decay.



With a few of the Altiverb IRs (Todd AO in mind here) there are several mic/speaker combinations. If you're talking about instrument positions, I agree, but the drain on the CPU is enormous with what we currently use. I can't imagine having to have an ER instance over EVERY separate instrument channel...!

P


----------



## PaulWood (Mar 26, 2009)

synthetic @ Thu Mar 26 said:


> Then you use one reverb for the overall decay.



This is something I'm afraid I STILL can't get my head around... What is the difference between one tail and several? Why is the recommended method to have a separate ER and tail instance for each group - if you are separating instances for placement, why not just have the ER and the tail for each group (close, mid, far, or strings, WW, brass, percussion etc.) in the same instance?

I just don't get why you would separate the ER from the Tail... Is it a pre-delay thing? Can't you adjust the predelay separately for the ER and Tail in a single instance? I thought you could...

P


----------



## Waywyn (Mar 26, 2009)

PaulWood @ Thu Mar 26 said:


> synthetic @ Thu Mar 26 said:
> 
> 
> > Then you use one reverb for the overall decay.
> ...



1. Separating the tail from the ER:
maybe you like the ER of a room but not its tail - it's that easy. Or you like the tail of a hall but not its ERs.

2. Why you would use one general tail: to nitpick, you wouldn't record 4 ensembles in 4 different halls, right? You would just record the whole orchestra in the same hall.


----------



## synergy543 (Mar 26, 2009)

synthetic @ Thu Mar 26 said:


> I started a thread about Alan Meyerson a week ago where he talked about using Haas panning for placement. This is essentially the same as algorithmic reverb as far as L/R placement.


Haas panning, while interesting, is rudimentary compared to the multiple ERs that shift as you move position.

As I've said, the math for simulating this is very simple, so it's just a matter of time before someone builds the software if there is a market need.

Very strange that no one has done it yet. When I started programming for Sony, they had just developed the http://bandblog.net/music/images/DPS-D7a_small.jpg (DPS-D7), a 76-tap panning delay, so this would give a pretty good ER simulation with the right data. Although it would be nice to add filters to simulate various acoustic materials (wood, curtains, chairs, people, etc.), so software is a better solution.
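The "very simple math" is essentially an image-source model: mirror the source across each wall and turn every mirror image into a delay tap. A first-order 2-D sketch (the `first_order_taps` helper, room geometry, and the single broadband absorption coefficient are all illustrative assumptions, not the DPS-D7's actual algorithm):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at ~20 C

def first_order_taps(room_w, room_d, src, lst, sample_rate, absorption=0.3):
    """First-order image-source model of a rectangular stage (top view).
    Returns one (delay_samples, gain) tap per path: the direct sound plus
    one bounce off each of the four walls. Gains use 1/r spreading; the
    flat absorption coefficient stands in for the material filters
    (wood, curtains, people) mentioned above."""
    sx, sy = src
    images = [(sx, sy),               # direct path
              (-sx, sy),              # left wall mirror
              (2 * room_w - sx, sy),  # right wall mirror
              (sx, -sy),              # front wall mirror
              (sx, 2 * room_d - sy)]  # back wall mirror
    taps = []
    for i, (ix, iy) in enumerate(images):
        r = math.hypot(ix - lst[0], iy - lst[1])
        gain = 1.0 / max(r, 1.0)          # inverse-distance spreading
        if i > 0:
            gain *= (1.0 - absorption)    # one wall bounce
        taps.append((int(sample_rate * r / SPEED_OF_SOUND), gain))
    return taps

# Source 4 m upstage in a 20 m x 15 m room, listener at front center:
taps = first_order_taps(20.0, 15.0, src=(10.0, 4.0), lst=(10.0, 0.0),
                        sample_rate=48000)
```

Move the source and the tap times and gains shift with it, which is exactly the position cue a fixed IR can't give you.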

Maybe a guy like U-he could make something like this?

For now, Altiverb is the cheap, cost-effective solution, although you're locked into the positions where they actually recorded, and I don't think they did positions for an entire orchestra.


----------



## bryla (Mar 26, 2009)

I once again must chime in and recommend that people get The Composer's Approach by Mike Novy, which covers room acoustics and software simulation.


----------



## PaulWood (Mar 26, 2009)

synergy543 @ Thu Mar 26 said:


> Very strange no one had done it yet.



Going back to a question I asked last week - is the Waves S360 Panner/Imager for ERs in conjunction with the R360 for decorrelated tails not this piece of software?



synergy543 @ Thu Mar 26 said:


> For now, Altiverb is the cheap cost-effective solution although you're locked into the position where they actually recorded and I don't think they did positions for an entire orchestra.



Their placement function is based on a large set of impulses Maarten S did for them, which they then converted into algorithms. To be honest, I tend to pre-pan/image stereo signals before I send them to Altiverb, and don't bother with that function of the software - it's CPU-intensive enough!



> 1. separating the tail from the IR:
> maybe you like the ER of a room but not the tail - that easy. Then you like the tail of a hall but not the ERs of it.
> 
> 2. Why would you use one general tail: In terms of nitpicking, you would't record 4 ensembles in 4 halls right? You would just record the whole orchester in the same hall



Ah ok. I get the point that you might want to use separate (different) tails, but as regards your second point - does the response of the tail not differ with the source's position relative to the "observation (listening) point"? For example, if you play a note on an oboe only 2 m away from you, ignoring the ERs, does the tail not sound different from the same note played on the same instrument 15 m away, or is the difference so subtle as to be irrelevant?
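The textbook answer to this question comes from the diffuse-field approximation: in a reverberant room the tail level is roughly the same everywhere, while the direct sound falls 6 dB per doubling of distance, so moving the oboe mostly changes the direct-to-reverb *ratio* rather than the tail itself. A sketch using the standard critical-distance formula (the hall volume and RT60 below are made-up illustration values):

```python
import math

def direct_to_reverb_db(distance_m, room_volume_m3, rt60_s):
    """Direct-to-reverberant ratio in dB for an omni source, using the
    classic diffuse-field approximation: the reverberant level is roughly
    constant across the room, while the direct level falls 6 dB per
    doubling of distance. They are equal at the critical distance
    r_c ~= 0.057 * sqrt(V / RT60)."""
    r_c = 0.057 * math.sqrt(room_volume_m3 / rt60_s)
    return 20.0 * math.log10(r_c / distance_m)

# Oboe in a 10,000 m^3 hall with a 2 s RT60:
near = direct_to_reverb_db(2.0, 10000.0, 2.0)   # direct dominates
far = direct_to_reverb_db(15.0, 10000.0, 2.0)   # reverb dominates
```

So to first order the tail can indeed be shared; the distance cue lives in how loud the dry signal and ERs are relative to it.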

Cheers!

P


----------



## synthetic (Mar 26, 2009)

The ER plug-in sounds like a great idea. 

I think that the single reverb for decay simulates the way many soundtracks are recorded. The musicians sit in a large but treated room, so that is where the ERs come from. Then the sections are sent through a (Lexicon, TC, Bricasti, etc.) hall program in the mix.


----------



## Frederick Russ (Mar 26, 2009)

synthetic @ Thu Mar 26 said:


> The ER plug-in sounds like a great idea.
> 
> I think that the one reverb for decay simulates the way many soundtracks are recorded. The musicians sit in a large but treated room, so that is where the ER reflections come from. Then the sections are sent through a (Lexicon, TC, Bricasti, etc.) hall program in the mix.



Bricasti also does a great job with early reflections. The long tails in their true-stereo hall patches are great, but their true-stereo early and loose reflection patches also do very well. They sound very natural, with the perception that instead of adding a patch you're adding more mics - never grainy, phasey, boxy or muddy, and without the frequency build-up you would expect from a similar application of an IR (never thought I would ever say that). The problem with this idea is the cost, especially when considering more than one Bricasti - considering that a lot of composers use a chain of at least 4 IR instances (some use more) along with a final hall.

Probably the best-case scenario for Altiverb in early-reflection land would be to use all true-stereo (4 mono) patches and watch to make sure there isn't an inherent build-up of frequencies that could easily overtake a mix, which would be detrimental to a project's final result. It might be fair to say that each IR has a Q, which can be measured by comparing it to pink noise (this information comes from a fellow composer who has been experimenting with IR EQ). Even Todd AO has a Q, but IMO it's not advisable to remove it, because there are inherent sweet frequencies which give the room its character.


----------



## Waywyn (Mar 26, 2009)

Frederick Russ @ Thu Mar 26 said:


> synthetic @ Thu Mar 26 said:
> 
> 
> > The ER plug-in sounds like a great idea.
> ...



Now I see a mission for Peter Roos here :D
I know there is lots of stuff going on inside those machines which can't really be covered with a few sampled IRs ... but nevertheless it would be cool to have the actual sound 

Yeah, you are right about Todd AO's Qs, but some of them, especially on contrabass and cello, can be very annoying ... but then again, since I just use its ERs and have Peter's Samplicity Large Hall as a tail, everything starts to sound really good ... uhm ... at least for me.


----------



## Peter Emanuel Roos (Mar 26, 2009)

PaulWood @ Thu Mar 26 said:


> bryla @ Thu Mar 26 said:
> 
> 
> > I once again must chime in and recommend people getting The Composer's Approach from Mike Novy, that covers room acoustics and software simulation.
> ...



Hi Fred,

This is my approach also: I use a number of TrueVerbs on input channels (some dry, some slightly wet, like VSL) and 2-3 busses with IRs with the ER section attenuated. This way I can move samples from different libraries to the "same" position.


----------



## lee (Mar 26, 2009)

re-peat @ Thu Mar 26 said:


> ...Give me two or three quality algorithmic reverbs *and one or two instances of decent (short) delays*, and I'm perfectly happy and ready to tackle whatever space- or placement-requirements any of my music might need.



Would you like to share your knowledge of creating your own ERs using two short delays? I understand if it's a secret.  Creating virtual stages and positioning instrument samples using delays must require quite some skill and knowledge of room acoustics, the mathematical differences between different ERs, etc.?
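For what it's worth, the simplest version of the two-short-delay trick can be sketched like this (a guess at the general idea, not necessarily re-peat's recipe; the `two_delay_er` helper and its 17/23 ms times are illustrative assumptions):

```python
def two_delay_er(mono, sample_rate, left_ms=17.0, right_ms=23.0, gain=0.3):
    """Crude virtual-stage trick: mix in two short, quiet, unequal
    delays - one per channel - so each ear hears a slightly different
    'wall bounce'. Unequal times keep the fake reflection from sitting
    dead center; the dry signal stays up front."""
    dl = int(sample_rate * left_ms / 1000.0)
    dr = int(sample_rate * right_ms / 1000.0)
    n = len(mono) + max(dl, dr)
    left, right = [0.0] * n, [0.0] * n
    for i, s in enumerate(mono):
        left[i] += s            # dry signal, both channels
        right[i] += s
        left[i + dl] += s * gain    # left 'reflection'
        right[i + dr] += s * gain   # right 'reflection'
    return left, right
```

Varying the two delay times and gains per section is what moves the apparent position; the algo reverb then supplies the shared tail.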

/Johnny


----------



## synergy543 (Mar 26, 2009)

PaulWood @ Thu Mar 26 said:


> Going back to a question I asked last week - is the Waves S360 Panner/Imager for ERs in conjunction with the R360 for decorrelated tails not this piece of software?


Actually yes, the Waves S360 Panner/Imager looks like it generates ERs. I don't have this so I can't tell you how well it works but it sure looks interesting.

Anyone using it?




> Ah ok. I get the point that you would like to use separate (different) tails, but as regards your second point - does the response of the tail not differ with different positions from the "observation (listening) point".


I believe the ERs mostly define the localization; the dense build-up of reverb afterwards comes from so many different directions that it doesn't really provide any "significant" localization cues. Yes, technically, reverbs from different source positions would be different, but I don't think they would be different enough to provide important localization cues. So I think we can sort of ignore that aspect for now. Kind of like going from 24-bit to 25-bit recording: the big leap was from 16 to 24, and beyond that point the gains become exponentially less significant. Same idea with reverb localization - the meat and potatoes of localization is in the ERs.

And I think this is often why many people don't like Altiverb: the ER cues and reverb coloration from the room are too strong. I find that more often than not, when I want a reverb tail in Altiverb, I pull up a synthetic reverb tail instead (for its smoothness).


----------



## tonecarver (Mar 26, 2009)

Hi folks. I am pretty much a newbie here (and have much to learn about composing) but do happen to have written a VST plugin (windows only) that specializes in producing early reflections and attenuated delays to simulate room placement, so I figured I'd take this chance to decloak and contribute rather than just lurk and learn. 

The plugin is called Ambia. Its audio engine is highly optimised but the GUI is quite huge and unfriendly. :oops: At least until I can get some time to polish it up a bit. You can find the latest version, Version 02 Beta, on my humble web-page.


----------



## PaulWood (Mar 26, 2009)

synergy543 @ Thu Mar 26 said:


> Actually yes, the Waves S360 Panner/Imager looks like it generates ERs. I don't have this so I can't tell you how well it works but it sure looks interesting.
> 
> Anyone using it?



Well I have it as part of the Mercury bundle, but have never used it... The problem is that I only have 1 iLok, and that is generally plugged into my main DAW (I use a lot of the Waves stuff for mixdown) which is Sonar based (and Waves 360 + Sonar = not very happy).

I'll plug it into my Nuendo system tomorrow and install Mercury on that and see what the 360 stuff is all about. I've always been interested in it, but only recently started using surround sound on a regular basis.

From what I can tell, the Imager creates the ERs as well as panning, and the Panner just pans - you can swap the Imager and Panner for each other depending on whether you find ER generation necessary. The R360 apparently creates perfectly decorrelated 5.1 tails to go with the S360 ERs.



synergy543 @ Thu Mar 26 said:


> I believe the ERs mostly define the localization and the dense build up of reverb afterwards comes from so many different directions that it doesn't really provide any "significant" localization cues. Yes, technically reverbs from different source positions would be different but I don't think they would different enough to provide important localization cues or to be signficantly different. So I think we can sort of ignore that aspect for now. Kind of like going from 24-bits to 25-bit recording. The big leap was from 16 to 24, beyond that point becomes exponentially less significant. Same idea with reverb localization. The big meat and potatoes with localization is in the ERs.
> 
> And I think this is often why many people don't like Altiverb because the ER cues and reverb coloration from the room are too strong. I find that more often than not, when I want a reverb tail in Altiverb, I pull up a synthetic reverb tail (for its smoothness).



Thanks for the explanation. Maybe a "3rd way" (how very New Labour of me :D ) will work - a combination of IR and algorithmic reverbs together...

Cheers!

P


----------



## PaulWood (Mar 26, 2009)

Cheers Bill. Will check that out.


----------



## PaulWood (Mar 26, 2009)

Waywyn @ Thu Mar 26 said:


> re-peat @ Thu Mar 26 said:
> 
> 
> > Tnd on a sidenote: I really don't share in the enthusiasm for ToddAO IR's (and many other of Altiverb's large venue IR's). To my ears, they all tend to sound annoyingly heavy and muddy and quite a few of them also seem to be suffering from being somewhat out of balance (stereowise).
> ...



TBH, this is my biggest bugbear with convo reverbs - they take so much tweaking to get acceptable results that it's not really any different from heavily programming an algorithmic unit.


----------



## bryla (Mar 27, 2009)

lee @ Thu Mar 26 said:


> re-peat @ Thu Mar 26 said:
> 
> 
> > ...Give me two or three quality algorithmic reverbs *and one or two instances of decent (short) delays*, and I'm perfectly happy and ready to tackle whatever space- or placement-requirements any of my music might need.
> ...


This is all covered in the Mike Novy book I'm talking about.


----------



## Peter Emanuel Roos (Mar 27, 2009)

tonecarver @ Thu Mar 26 said:


> Hi folks. I am pretty much a newbie here (and have much to learn about composing) but do happen to have written a VST plugin (windows only) that specializes in producing early reflections and attenuated delays to simulate room placement, so I figured I'd take this chance to decloak and contribute rather than just lurk and learn.
> 
> The plugin is called Ambia. Its audio engine is highly optimised but the GUI is quite huge and unfriendly. :oops: At least until I can get some time to polish it up a bit. You can find the latest version, Version 02 Beta, on my humble web-page:
> 
> ...



Welcome!

I will surely check this out!

In my opinion ERs are soooo important for getting correct placement and getting samples into a common space, even when they have different amounts of ERs embedded (for example Dan Dean versus VSL).

A good tail is easier to get. Most studios just add some Lexicon 960L or TC 6000 tails to recordings made in rooms that have already contributed those beautiful, coherent ERs, to create that single ambience we are all looking for.


----------

