
What are your techniques for early reflections on dry orchestral libraries?

shawnsingh

A major weak point of my dry-orchestral-library mixes has been the lack of 3D depth; they're missing a lot of early reflection information. This is why I've preferred wet or studio-dry libraries in recent years, but lately I've been missing the flexibility and expressive nuance possible with dry libraries. So I was thinking of trying again and revisiting how I could mix them better.

Here's a bunch of questions:

For generating/emulating/mixing early reflections - what have you tried, and what did you feel works well?

Any tips about how you combine early reflection simulations with EQ and with long-tail reverbs?

I know in various threads people have given positive mentions to Virtual Sound Stage 2 and EAReverb 2. Do you all use them for your dry libraries? What do you feel are their strengths and limitations?

And lastly, the Altiverb TODD AO room seemed to be an open trade secret for dry libraries 10 years ago. Is that still a thing? Any other IR sets that can achieve different early reflections for individual instruments across the stage?
 
Hi Shawnsingh
If you have a convolution reverb loaded with impulse responses of natural spaces, and the reverb lets you fade out the impulse early with an envelope curve (VCA), you can cut the impulse off after the first 100-300 ms.
What remains are the early reflections - without the tail.

...Shown here as an example with SIR: the Attack envelope function is used to shorten the impulse response.
First, you can use the Length function to shorten the overall IR length.

[Screenshot: SIR's Length and Attack envelope controls shortening the impulse response]
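For anyone who wants to experiment outside a plugin, here is a minimal sketch of the same truncation idea in Python (the file names are placeholders, and it assumes mono WAV files at the same sample rate):

import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

# Hypothetical file names - any mono IR and dry recording will do.
sr, ir = wavfile.read("hall_ir.wav")
_, dry = wavfile.read("dry_violin.wav")
ir = ir.astype(np.float64)
dry = dry.astype(np.float64)

# Keep roughly the first 120 ms of the IR (the early reflections) and
# fade out the cut over 30 ms so it doesn't click - the same idea as
# shortening the IR with SIR's envelope.
keep = int(sr * 0.120)
fade = int(sr * 0.030)
er_ir = ir[:keep + fade].copy()
er_ir[keep:] *= 0.5 * (1.0 + np.cos(np.linspace(0.0, np.pi, fade)))

wet = fftconvolve(dry, er_ir)  # early reflections only, no tail

The 120 ms cut point is only a starting value; 100-300 ms is the range to experiment with, as described above.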


------------------------------
When you move an instrument back into the depth of the space, it may sound too "thick" or too bright for its apparent distance, because it was recorded close up. Adjust the EQ until the sound appears natural at that distance. There are no fixed rules; mainly, use your ear.
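As a very rough sketch of that kind of distance EQ (this is not a rule, just a starting point to tune by ear; the file name and the 6 kHz cutoff are placeholder assumptions):

from scipy.io import wavfile
from scipy.signal import butter, lfilter

sr, x = wavfile.read("close_mic_take.wav")  # hypothetical close-miked recording
# Gentle first-order low-pass to dull the highs, mimicking the way air
# absorbs treble over distance; sweep the cutoff until it sounds natural.
b, a = butter(1, 6000.0 / (sr / 2.0), btype="low")
y = lfilter(b, a, x.astype("float64"), axis=0)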

I recommend you read and listen to this short overview, which briefly highlights all the reverb issues.
For the EQ issue, read the section on "Filter".



Have fun and all the best
Beat
 
For generating/emulating/mixing early reflections - what have you tried, and what did you feel works well?

Any tips about how you combine early reflection simulations with EQ and with long-tail reverbs?
Hi,
I use VSL's MIR Pro. It places your instrument in a room (panning and IR). You can shorten the length of the IR to get only the early reflections, and then use an additional algorithmic reverb for the tail (MIR also comes with an algorithmic reverb for late reflections). MIR has a Room EQ that only changes the wet signal (for example, activate a high-pass filter if it sounds too dense). There are also instrument profiles containing information on how each instrument's sound radiates in three-dimensional space. If you want, MIR can simulate air absorption as well as distance scaling of the volume. You can also define the microphones used, or choose one of the many presets. It outputs your sound in stereo or various surround formats.
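To give a feel for what distance scaling of the volume amounts to, here is the free-field inverse-distance law (a textbook approximation, not MIR's actual implementation):

import math

def distance_gain_db(distance_m, reference_m=1.0):
    # Free-field level drop: about -6 dB per doubling of distance.
    return -20.0 * math.log10(distance_m / reference_m)

print(distance_gain_db(2.0))  # ~ -6 dB
print(distance_gain_db(8.0))  # ~ -18 dB

In a real (or virtual) room, reflections add energy back, so the audible level drop is smaller than these free-field numbers.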

You will need some time to get started with MIR and understand which parameter changes what, but I get a good spatial sound that I can shape however I want.

Best, Ben
 
Beat and Ben, thanks for the replies. Though I'm actually already comfortable with reverb fundamentals, and I own MIR Pro.

I was leaning against the idea of shortening a single IR for early reflections, because I wanted to try to emulate different reflections for each instrument, hoping that could help enhance the positioning of each instrument in the stereo image.

But, I hadn't thought about shortening the MIR reverb in the same way - sounds so obvious now that you pointed it out. I'm trying it, and it seems to work decently enough!

Ben - any personal preference for which venue and settings you like to use in MIR if you try to use it for ER-only? I tried some of the studio venues so far and felt that Studio 2 worked OK.

Anyone using VSS2 or EAReverb for this purpose instead?

Beat - one thing from your tutorial that was new to me is your explanation of predelay. It leads to the opposite conclusion from what I think most people would typically say about predelay: that longer predelays can create the feeling of a longer, richer reverb tail while also affecting some notion of "clarity" - an example of this perspective is here: https://www.waves.com/8-reverb-mixing-tips. I think both perspectives are valid depending on the specific mixing scenario. What are your thoughts?

Thanks again and cheers!
 
I think the torture, some of which is described above, of trying to fix close / dry / anechoic samples presents the best argument for using samples recorded further away. Orchestral samples that already reflect (get it??) room or even hall sound work with less fuss and sound more musical to me.

Electric guitars and that kind of thing are a different discussion.

None of this stuff sounds real, of course.

That said, if you really love the dry samples I've not heard anything better than Vienna's MIR.
 
I have MIR Pro and I think it is brilliant. It might be the best thing VSL ever made. It's very good at simulating real acoustic spaces and very easy to use once you learn your way around it a bit. It's also pricey if you get all the room packs, and even pricier if you need it on multiple machines.

I truly love working with MIR Pro though.

It handles all panning, width, distance, ERs, late reflections, and many EQ tasks as well. For orchestra it's rare to need any other FX on each instrument, because MIR Pro has already done it. It has some ability to tweak the room sound: there is a wet/dry slider for each instrument and for the room, and there is a total reflection time. There are also some ways to position the instruments and play around with them, but really it's not much different than if you were actually in the room with mics: you set up your mics in the room and the room sound is there. You can move mics around, use different mic arrays, or move players around to get some variance, but the room sound is what it is. You have different rooms available depending on which packs you get. One thing you can do that would not be so easy in a real space is turn the room down a bit with the above-mentioned sliders and room EQ.

I feel it really models not only the sound of real rooms but the entire recording experience of a real room, which lends itself to very realistic acoustic simulation. I especially want that realism as much as possible.

That being said, there is a lot to be said for doing it manually sometimes. You might like a bigger-than-life sound that doesn't really come from a particular acoustic space but sounds like one. I think it's a lot more work that way, and I think it's hard to replicate real spaces, but you can certainly simulate nice-sounding rooms and define a character of your own.

I got EAReverb recently for this but haven't gotten into it yet, because I'm enjoying MIR Pro a lot. But anyway, as others have said already, to simulate the space you need some panning, distance-related EQ, and ERs to simulate spaciousness - but frankly I haven't got a clue yet how to use all those pieces to get something that sounds convincing. At some point I will play around with it and try to learn more about EAReverb's ERs; but MIR Pro just makes it so easy and intuitive, without having to understand so much acoustics theory.
 
any personal preference for which venue and settings you like to use in MIR if you try to use it for ER-only?
I'm not answering for Ben, but personally I would definitely start with one of the large scoring stages. They will give you the maximum freedom to put your signal sources more or less anywhere around the virtual Main Mic. Synchron Stage Vienna was captured with more individual IR positions than any other MIR Venue, thus it will give you the most discernible ER patterns.
 
I've not heard anything better than Vienna's MIR.

Ircam SPAT is. In my opinion. Not only does it sound better, in my opinion, but it is, in my opinion, a LOT more powerful and versatile as well, completely unrestricted as it is by IR's and paradigms.
I certainly don't want to take anything away from the brilliant achievement that is MIR (Pro), but SPAT tops it in just about every aspect of what a spatializer is expected to contribute to our type of work. In my opinion.

 
Ben - any personal preference for which venue and settings you like to use in MIR if you try to use it for ER-only?
As @Dietz suggested, I would start with the Synchron Stage. If the orchestra is small (chamber size or smaller), I would try the Großer Saal or the Mozart Saal.
Also, depending on your instruments and the orchestra size, you may want to change the dry/wet ratio and the length of the late reflections. I use more reverb on choir/singers than on orchestra when they are mixed together.

I use the routing described in the MIR manual on page 59:
[Screenshot: routing diagram from the MIR manual, page 59]

Here are the settings I used in my recent projects:

[Screenshot: MIR settings from recent projects]

(Edit: I only own Roompack 1 and 6)

But keep in mind I'm no expert ;)

Best, Ben
 
@Ben one question: why do you split the dry and wet signals and send them to two different MIRacle busses? What difference in sound do you hear, and/or what do you do differently with MIRacle on each bus?
 
I know almost everyone will say this is wrong, but I find using pre-fader send reverb best for positioning and for the sound of the reverb IR.
The reverb stays static and you move the instruments back and forth, which for me is more realistic.
Cons:
It requires automation, which I don't find myself using a lot with orchestral music.

Ben
 
Does it work well on samples recorded on a live stage or hall?

It's at its best when you give it dry/dryish material. As is MIR. If you send a Spitfire Lyndhurst sample through MIR or SPAT, you're doing neither the sample nor the spatializer a service, in my opinion.

---

I do agree that for many instruments or instrument sections — particularly those that have to be recognized as belonging to a full orchestra in a fitting environment — the interaction with a sympathetic room is an essential ingredient of their sound, and that's something that can't be mimicked entirely with virtual spatializers. (Which is only one reason, but the principal one, why the Synchron Percussion is a far superior library compared to the original VSL Percussion. And if VSL doesn't make the same mistakes as they made with the strings, I expect the Synchron Brass to be far superior to any of their current brass offerings too, for the very same reason: making all that important vibrating and resonating air around the instrument part of the sampled sound.)

---

On the subject of ToddAO: it always seemed to me that the myth and the legend of the venue had a lot to do with the popularity of the Audioease IR's. Much more so than the actual quality of those IR's themselves anyway. But people see the name ToddAO and immediately are convinced that IR's recorded there, surely, must be the ticket to great sound, and they never bother to check if it's actually so. In the case of those ToddAO IR's, I don't believe it is. To my ears, they are in fact among the weaker ones in the Altiverb IR-collection: their balance is off and they're a bit dull and muddy sounding. As were several others among the earlier generations of Altiverb IR's as it happens, giving the software for quite a while the reputation of producing a rather heavy and boomy sound. Nothing could be further from the truth though, as the later batches of new and much better IR's would prove.

 
It's at its best when you give it dry/dryish material. As is MIR. If you send a Spitfire Lyndhurst sample through MIR or SPAT, you're doing neither the sample nor the spatializer a service, in my opinion.
I do agree that for many instruments or instrument sections — particularly those that have to be recognized as belonging to a full orchestra in a fitting environment — the interaction with a sympathetic room is an essential ingredient of their sound, and that's something that can't be mimicked entirely with virtual spatializers.

Definitely! Close mic positions are the only ones that should be sent from Spitfire or other wet libraries into MIR Pro... and don't forget to make the signal mono before hitting MIR!

(Which is only one reason, but the principal one, why the Synchron Percussion is a far superior library when compared to the original VSL Percussion. And if VSL doesn't make the same mistakes as they made with the strings, I expect the Synchron Brass to be far superior to any of their current brass offerings too, for the very same reason: making all that important vibrating and resonating air around the instrument part of the sampled sound.)

That is kind of debatable. Most discussions I have seen on this topic by users who have used both the VI and Synchron versions of the library (together with MIR Pro) prefer the VI version, mainly because VI Pro has more flexibility and power for nuanced performances. I don't think at all that the consensus is that the Synchron Player rooms sound better than a properly configured MIR Pro + VI Pro setup. The MIR Pro setup provides far more flexibility. I think the Synchron version is more about providing a self-contained and easier-to-use product, which is totally fine, but I would definitely not go so far as to say that the Synchron version sounds better than a properly configured VI Pro + MIR Pro. And that is not the word I have heard from folks using both so far.
 
I know almost everyone will say this is wrong, but I find using pre-fader send reverb best for positioning and for the sound of the reverb IR.
The reverb stays static and you move the instruments back and forth, which for me is more realistic.
Cons:
It requires automation, which I don't find myself using a lot with orchestral music.

Ben
You're wrong! ;-D

But seriously: I understand the approach. OTOH, like you wrote yourself: as soon as you start to adjust - let alone automate! - the instrument channel's volume, you actually move the instrument back and forth on the stage too, which is most likely not what you want when you're already working on the finer balances of your mix. :)
 
You're wrong! ;-D

But seriously: I understand the approach. OTOH, like you wrote yourself: as soon as you start to adjust - let alone automate! - the instrument channel's volume, you actually move the instrument back and forth on the stage too, which is most likely not what you want when you're already working on the finer balances of your mix. :)
You are correct, but if you also adjust the pre-fader send, everything is fine. I first set the stage and all sends to 0, then do the positioning, and then if I want to automate I use a VCA fader, usually per section. It's a different approach.

Ben
 
At some point over the next few days I'll try to render a few examples of my efforts to match VSL, OT Berlin Brass, and EWQL Hollywood Brass. VSL's sound with ER emulation + Teldex convolution is acceptable and already much better than what I had 5 years ago, but it's still a night-and-day difference compared to OT's natural Decca tree sound. EWQL gets closer to OT's natural sound because of the strong studio-sized early reflections from the Gold mic position.
 
...Beat - one thing from your tutorial that was new to me is your explanation of predelay. It leads to the opposite conclusion from what I think most people would typically say about predelay: that longer predelays can create the feeling of a longer, richer reverb tail while also affecting some notion of "clarity" - an example of this perspective is here: https://www.waves.com/8-reverb-mixing-tips. I think both perspectives are valid depending on the specific mixing scenario. What are your thoughts?

Thanks again and cheers!

Hi shawnsingh
Physically, my theory is correct:
A) Big predelay
When you stand in the middle of a church and clap your hands, the sound takes some milliseconds to travel to the walls and come back as the first reflections. The bigger the room, the later the reflections arrive. But you hear your clap as very close, because at first you only hear the direct clapping sound. Nevertheless, the long "predelay" tells your brain that you are in a big room.

B) Small predelay
At the far end of the church, far away from you, another person claps their hands. At your position, a mixture of different first reflections and even a small amount of direct sound arrives at almost the same time. There is only a very small time difference between the direct sound and the first reflections - which corresponds to a small predelay.
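The arithmetic behind both cases is easy to check (speed of sound ~343 m/s; the distances here are only illustrative):

SPEED_OF_SOUND = 343.0  # m/s at room temperature

def predelay_ms(direct_path_m, reflected_path_m):
    # Gap between the direct sound and the first reflection.
    return (reflected_path_m - direct_path_m) / SPEED_OF_SOUND * 1000.0

# A) You clap in the middle of a 30 m wide church: the direct sound is
#    immediate, the nearest wall is 15 m away, so the reflection travels 30 m.
print(predelay_ms(0.0, 30.0))   # ~87 ms - a long, clearly audible predelay

# B) Someone claps 25 m away and a side-wall reflection travels ~27 m.
print(predelay_ms(25.0, 27.0))  # ~6 ms - direct sound and ERs nearly merge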

-------------------------

These physical facts are correct, and in fact logical for everyone. Of course, a plugin can give the impression that when you increase the predelay by a few ms, the instrument seems to move away a bit as well. Maybe that's just because the space seems to our brain to be getting bigger with more predelay... It is an impression in our brain, which ultimately perceives rooms, distances, etc. through those time differences between the ERs and the direct sound. In the end, of course, the result counts.
It could also be that some people mean a bigger room when they talk about predelay, while others mean a greater distance to the instruments.

-------------------------
I tried to produce a sound example which shows the theory above...
Sound example with predelay (170 ms) & without predelay.
It shows that the instruments without predelay sound farther away. In practice, one would attenuate the early reflections a bit more on the signal with the long predelay. Then the timpani would appear even closer, although it plays in a big room.
Beat
 
@Ben one question: why do you split the dry and wet signals and send them to two different MIRacle busses? What difference in sound do you hear, and/or what do you do differently with MIRacle on each bus?
This allows me to create a dense and realistic reverb with modulation, plus a light algorithmic reverb for the last 200-400 ms (the MIRacle on top of MIR is 200-400 ms shorter), to sweeten up the sound.
 