
Yet another MIR question

Dear Villain

There's been some very informative discussion on a few MIR Pro threads this past week, and it's always nice to have Dietz offer his expertise. I have another question regarding the placement of instruments on the virtual stage: it seems that some people criticize the wide panning of the instruments, suggesting that in an acoustic orchestral recording the sound is more centered, and you wouldn't hear hard-panned Violins 1 coming only out of the left speaker, for instance. A recent post on this forum showed a very talented composer's snapshot of the MIR venue, with all the instruments clustered quite close together on the stage. The sonic results were very pleasing, but I couldn't help wondering why, if I select a MIRx preset for the Grosser Saal, for example, the instruments are spread far and wide across the stage. The end result is clarity, but perhaps it doesn't reflect the sonic reality of many conventional orchestral recordings.

Any thoughts on whether having the vast majority of the orchestra panned more centrally is preferable to wide panning?

Thanks,
Dave
 
Here's my guess...

We should keep in mind two different strategies for panning:
- level difference between L and R
- delay difference between L and R

When you say "panned wide", I suspect you're hearing the effect of level panning.
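For illustration, here's a minimal sketch of the two strategies (my own toy example, nothing from MIR itself):

```python
import numpy as np

def level_pan(signal, angle_deg):
    """Constant-power level panning: the same waveform goes to L and R,
    only the amplitudes differ. angle_deg in [-45, +45], negative = left."""
    theta = np.deg2rad(angle_deg + 45.0)      # map to [0, 90] degrees
    return signal * np.cos(theta), signal * np.sin(theta)

def delay_pan(signal, delay_ms, sample_rate=48000):
    """Delay ('Haas') panning: identical levels, but the right channel is
    delayed by a fraction of a millisecond up to a few ms, pulling the
    perceived image toward the left."""
    d = int(round(delay_ms * 1e-3 * sample_rate))
    delayed = np.concatenate([np.zeros(d), signal])[:len(signal)]
    return signal, delayed
```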

There's a similar way to think of mic techniques.

Coincident mic techniques (like Blumlein or X/Y) have the advantage of being essentially phase-aligned, but that means the stereo image can't come from delay differences. Also, the polar patterns of these mic techniques are usually aimed so that they overlap less. So when an instrument is picked up strongly by one mic, the other mic is probably not picking up as much of that signal - i.e. level differences.
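A quick worked example of that level-difference mechanism, assuming ideal cardioids aimed +/-45 degrees (my numbers, just to illustrate):

```python
import numpy as np

def cardioid(off_axis_deg):
    """Ideal cardioid sensitivity at a given off-axis angle."""
    return 0.5 + 0.5 * np.cos(np.deg2rad(off_axis_deg))

# X/Y pair aimed +/-45 degrees, source 30 degrees to the left of center
g_left  = cardioid(-30.0 - (-45.0))     # 15 degrees off-axis for the left mic
g_right = cardioid(-30.0 - (+45.0))     # 75 degrees off-axis for the right mic
print(20 * np.log10(g_left / g_right))  # ~3.9 dB level difference, zero delay
```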

Spaced techniques (moderately spaced pairs, a Decca tree; maybe not far-spaced outriggers, though) may pick up most instruments across the stage at roughly the same level. So in this case, it's the delay information between L and R that helps to position the instrument.
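And the spaced-pair counterpart, with a toy geometry showing how little the level changes while the arrival time clearly does (again, just my illustration):

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

# omni pair spaced 60 cm apart, source 3 m in front, 2 m left of center
mic_l, mic_r = np.array([-0.3, 0.0]), np.array([0.3, 0.0])
src = np.array([-2.0, 3.0])

d_l, d_r = np.linalg.norm(src - mic_l), np.linalg.norm(src - mic_r)
print((d_r - d_l) / SPEED_OF_SOUND * 1e3)  # ~0.97 ms inter-channel delay
print(20 * np.log10(d_r / d_l))            # ~0.8 dB level difference (1/r law)
```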

So... from the "Think MIR" document, as far as I can tell, one Ambisonics array (4 capsules) was used to represent a mic position. But the awesome part is that Ambisonics allows the virtual mic to have an arbitrary rotation and an arbitrary polar pattern. So to get a stereo signal, a single Ambisonics array can be decoded twice, each time with different parameters, to represent the L + R mics - but please note, we'd really need Dietz to confirm this. I could be very wrong.
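If that's how it works, the decoding step might look roughly like this - a minimal first-order sketch of the principle (assuming the FuMa B-format convention), definitely not MIR's actual decoder:

```python
import numpy as np

def virtual_mic(W, X, Y, azimuth_deg, pattern):
    """Decode one virtual capsule from horizontal B-format signals.
    pattern: 0.0 = omni, 0.5 = cardioid, 1.0 = figure-8.
    Assumes the FuMa convention, where W carries a -3 dB gain."""
    az = np.deg2rad(azimuth_deg)
    return (np.sqrt(2.0) * (1.0 - pattern) * W
            + pattern * (np.cos(az) * X + np.sin(az) * Y))

# the same B-format decoded twice gives a coincident "stereo pair", e.g.
# two virtual cardioids aimed +/-45 degrees:
#   left  = virtual_mic(W, X, Y, +45.0, 0.5)
#   right = virtual_mic(W, X, Y, -45.0, 0.5)
```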

But this is where the big caveat is - because Ambisonics is used to derive the stereo mic techniques, only coincident techniques can be achieved. So how can we get a spaced-mic effect with MIR? (a) MIR does have an advanced option to offset the distance of a capsule, so that it's possible to emulate a spaced pair. But personally, I've never felt that this distance offset had the desired effect. To do it really correctly, individual reflections within the same IR would have to be warped with different delays in order to accurately simulate a different mic position (see the toy calculation below), and that would probably require a costly full acoustic simulation. (b) A more practical way would be to capture multiple mic positions representing where spaced L and R mics could stand. But none of the venues I looked at in MIR captured specific L + R mic positions like that.
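To see why a single constant offset can't really do it: moving the capsule changes each reflection's arrival time by a different amount, depending on where that reflection comes from. A toy far-field calculation (my illustration, not MIR's internals):

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def extra_delay_ms(doa_deg, offset_m):
    """Far-field approximation: moving a mic by offset_m along +x shifts a
    reflection's arrival time by the projection of the offset onto its
    direction of arrival (doa_deg: 0 = from the right, 90 = from the front)."""
    return -offset_m * np.cos(np.deg2rad(doa_deg)) / SPEED_OF_SOUND * 1e3

for doa in (0.0, 90.0, 180.0):                 # three reflections
    print(doa, round(extra_delay_ms(doa, 0.3), 3), "ms")
# 0.0   -0.875 ms  (a reflection from the right arrives earlier)
# 90.0   0.0   ms  (from the front: unchanged)
# 180.0  0.875 ms  (a reflection from the left arrives later)
```

A 30 cm offset thus needs a different delay per reflection, which a single global delay can't provide.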

Which leads to (c): I imagine that the advanced offset option probably does accurately change the delays in the dry signal (EDIT: Dietz has pointed out this is incorrect - please see his reply). If it did, it might make sense that "clustering all the instruments together on the stage" - which you saw in a composer's template - could be making decent use of dry-signal delays to provide more of a sense of space?
 

Wow, now that's some info to digest! Thanks so much, Shawn, for such a detailed explanation. Your response is a quick reminder of how little I understand about recording and production. It almost makes me want to go back to using a quill and parchment for my music and forgo the MIDI mockup completely :)

Looking forward to seeing if anyone else can chime in, but your explanation sounds quite plausible.

Cheers!
Dave
 
I'm still discovering MIR Pro and dealing with exactly the same questions. I don't use the halls; I prefer the scoring stages (Teldex, Synchron Stage). I always set the instruments closer to the main microphone, because in other libraries (recorded on scoring stages) they sound closer. The MIRx presets are sometimes too distant, especially for the strings.


A recent post on this forum showed a very talented composer's snapshot of the MIR venue, with all the instruments clustered quite close together on the stage.

I'm very interested... Could you share a link to this post?
 
I'm very interested... Could you share a link to this post?

Sure:
 
The MIRx presets are sometimes too distant, especially for the strings.
I would suggest changing the Dry/Wet ratio as a first remedy. Part of the main idea behind MIR is to make use of the typical changes in the hall's "voice" when the position of the source changes. It defeats that approach when you cram everything within arm's length of the main mic.

... mind you: This is not your typical "random early reflections plus generic algorithmic reverb" setup. ;)
 
it seems that some people criticize the wide panning of the instruments, suggesting that in an acoustic orchestral recording the sound is more centered, and you wouldn't hear hard-panned Violins 1 coming only out of the left speaker, for instance.
You're right that it's maybe a good idea to avoid panning your 1st violins mono / hard left. :) But as a music mixer I always try to get an image as wide as possible, and this is what I was trying to achieve with MIR's Venue Presets as well as with its Main and Secondary Microphone setups. Getting a narrow sound stage out of MIR is quite easy, by comparison: just use one of the Main Mic settings with a small opening angle between the left and right channels and little out-of-phase signal content, like a conventional X/Y pair, or an M/S array with a dominant M (= mid) channel.
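In generic mix-level terms (not MIR's Main Mic implementation, just the general M/S idea), narrowing an image boils down to scaling the side channel:

```python
import numpy as np

def ms_width(left, right, width):
    """Mid/side width control: width = 1 leaves the image untouched,
    width < 1 narrows it, width = 0 collapses to mono (all-M)."""
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right)
    return mid + width * side, mid - width * side
```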

Kind regards,
 
the awesome part is that Ambisonics allows the virtual mic to have an arbitrary rotation and an arbitrary polar pattern. So to get a stereo signal, a single Ambisonics array can be decoded twice, each time with different parameters, to represent the L + R mics - but please note, we'd really need Dietz to confirm this.

This is correct. :) As a matter of fact, you could decode as many virtual capsules from an Ambisonics recording as you like - theoretically. The fact that MIR still uses 1st Order Ambisonics limits this to about eight capsules that can be used with somewhat meaningful results ... which, incidentally, is the maximum MIR Pro allows for in its current form. :)
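As a usage sketch, reusing the hypothetical virtual_mic() function from the earlier post (with a made-up B-format sample, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
W, X, Y = rng.standard_normal(3)   # stand-in for one B-format sample frame

# eight virtual cardioids, one every 45 degrees - roughly the practical
# limit for 1st Order Ambisonics that Dietz mentions above
capsules = [virtual_mic(W, X, Y, az, 0.5) for az in range(0, 360, 45)]
```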


(b) A more practical way would be to capture multiple mic positions representing where spaced L and R mics could stand. But none of the venues I looked at in MIR captured specific L + R mic positions like that.

The decision to use Ambisonics for MIR was made to allow for free positioning on its Venues' stages. Every format that relies on runtime differences ("A/B", "L/C/R") would have imposed serious restrictions on that idea. We even tried to introduce Haas panning by adding delays between individual capsules, but there's simply no proper way to avoid all the ugly phasing issues that come with it.
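Those phasing issues are easy to quantify: summing a signal with a delayed copy of itself (e.g. when Haas-panned channels are folded to mono) produces comb-filter notches at odd multiples of 1 / (2 × delay). A quick check for a hypothetical 1 ms inter-capsule delay:

```python
delay_s = 1e-3  # 1 ms Haas delay between capsules
notches_hz = [(2 * k + 1) / (2 * delay_s) for k in range(4)]
print(notches_hz)  # [500.0, 1500.0, 2500.0, 3500.0]
```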


(c): I imagine that the advanced offset option probably does accurately change the delays in the dry signal.

Actually not. MIR's "Distance" parameter, offered for individual capsules, uses a very different approach to achieve more acoustic "envelopment" from the IR-based reverb. It introduces a clever decorrelation algorithm for the late part of the reverb tail only. The dry/direct signal components are not(!) affected by this parameter at all, only by the "pure" Ambisonics decoder defined by the Main Microphone's settings (... the Secondary Mic is "wet only", too).
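MIR's actual algorithm isn't public, but a generic FFT-based decorrelator illustrates the idea: keep the tail's magnitude spectrum, randomize the phase, and each channel gets a tonally similar but uncorrelated late tail:

```python
import numpy as np

def decorrelate_tail(tail, seed):
    """Generic random-phase decorrelation (illustration only, not MIR's
    algorithm): preserves the magnitude spectrum, scrambles the phase."""
    spectrum = np.fft.rfft(tail)
    rng = np.random.default_rng(seed)
    phases = np.exp(1j * rng.uniform(-np.pi, np.pi, spectrum.shape))
    return np.fft.irfft(spectrum * phases, n=len(tail))

# e.g. split an IR at a chosen point and decorrelate only the late part,
# with a different seed per channel:
#   late_l = decorrelate_tail(ir[split:], seed=1)
#   late_r = decorrelate_tail(ir[split:], seed=2)
```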
 
The decision to use Ambisonics for MIR was made to allow for free positioning on its Venues' stages. Every format that relies on runtime differences ("A/B", "L/C/R") would have imposed serious restrictions on that idea.

Quick question, though - is "free positioning" here referring to the positioning of instruments, or the positioning of mics? If you mean the positioning of instruments, then I don't understand how additional sampled mic locations would affect the ability to freely position instruments.
 
We're talking about the ability to change the position of sources freely in MIR (hence "Multi Impulse Response"). There's no proper way to interpolate the positions of sources recorded with non-coincident mic setups, e.g. A/B or Decca, without deteriorating the sound and/or phase. Apart from that, you wouldn't be able to change the mic patterns, let alone their angle in relation to the source. :)
 