# How well can you emulate mic positions with reverb?



## romplin (Dec 21, 2019)

I'm a beginner with orchestral music, but have good mix experience in other music styles.

I'm starting again with orchestral music after a long pause, and I'm wondering how well you can emulate mic positions these days.

I'm looking at Nucleus at the moment, but it doesn't have mic positions, so you would need to use reverb to get them. What's your experience with this? How important is it to have different positions for mixing? Are there any good impulse responses for that?

I assume it's not easy to get the sound of a room mic from a close mic plus reverb, because the sound is different; it's not just the room. But I don't have much experience with orchestral instruments in this regard. Some guidelines would be much appreciated.

If they are really important I might have a look at a bigger library with different mic positions, whatever that turns out to be (feel free to make recommendations).


----------



## neblix (Dec 22, 2019)

To sufficiently put to rest the need for other mics, you need a quality reverb with impeccable, natural-sounding early reflections. I like to use Seventh Heaven by LiquidSonics. I have a send for ERs only and a send for LRs only, and each instrument can independently have an ER/LR blend (blend = distance from listener). Done this way, I can place all samples and synths in the same room, with only two reverb instances and no RAM eaten by mic positions.
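A minimal sketch of that two-send routing, done offline with NumPy; the gain curves and the impulse responses `er_ir` / `lr_ir` are placeholder assumptions, not anything Seventh Heaven actually does internally:

```python
import numpy as np

def place_instrument(dry, er_ir, lr_ir, distance):
    """Blend direct, early-reflection, and late-reverb components.

    distance runs from 0.0 (at the listener) to 1.0 (back of the
    room): farther instruments lose direct level and lean more on
    the ER send, which is what sells the placement.
    """
    er = np.convolve(dry, er_ir)[: len(dry)]  # ER-only send
    lr = np.convolve(dry, lr_ir)[: len(dry)]  # LR-only (tail) send
    direct = (1.0 - 0.7 * distance) * dry     # direct level falls with distance
    return direct + distance * er + (0.3 + 0.5 * distance) * lr
```

Every instrument shares the same two IRs (the same room); only `distance` changes per instrument, which mirrors the two-reverb-instances setup described above.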

My friend also raves about UAD's Ocean Way Studios plugin (www.uaudio.com) for inserting dry signals into a quality room; it aims to retain mic bleed, proximity, and other naturally occurring behaviors for realism.

Replacing real mic positions with artificial reverb requires spending a bit of money on the good stuff. Lower-quality or "creative/sound-design" reverbs will not do the job well. The early reflections are key to room tone, and this is something reverb plugins are generally less good at. However, if you develop this skill, it'll save you a lot of RAM in the future and let you easily blend different libraries without wasting much time trying to match them.


----------



## Consona (Dec 22, 2019)

Grab the free NI Raum reverb. And whatever EQ you use.

To push the sound back, increase the wet amount on the reverb and start rolling off frequencies around 400 Hz, with the curve starting gently somewhere around 1 kHz. You can also cut the high frequencies to push it further back.

You basically want to take out the frequencies that are present when the instrument is playing close to you, and increase the reverb so it sounds like it sits further back in the room.
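A rough sketch of that move, with a one-pole low-pass standing in for the gentle EQ cut (a real shelf or bell filter in the 400 Hz to 1 kHz region would be the proper tool; the cutoff values here are illustrative only):

```python
import math

def distance_rolloff(samples, cutoff_hz, sample_rate=48000):
    """One-pole low-pass filter: a crude stand-in for the EQ cut
    that makes a close-miked signal read as further away.
    Lower cutoff_hz = duller = perceived as more distant."""
    a = math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)  # pole coefficient
    out, y = [], 0.0
    for x in samples:
        y = (1.0 - a) * x + a * y  # smooth toward the input
        out.append(y)
    return out
```

Pair a lower cutoff with a higher reverb send so the tonal cue and the wet/dry cue move back together.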


----------



## vitocorleone123 (Dec 22, 2019)

The answer is both  

Manipulating early and late reflections + EQ. Some people prefer Seventh Heaven, but that sounded too heavy-handed to my ears compared with my preference: Exponential (now iZotope) Nimbus. Both are excellent, though, and there are a couple of others I liked a lot as well. Nimbus, for example, lets you change early and late reflections and has an EQ built in for both the signal and the reverb. The UI isn't pretty, but it's well designed in terms of usability, and it sounds great.

I'm not generally trying to create an orchestra. I just use orchestral sounds sometimes in my music.


----------



## d.healey (Dec 22, 2019)

> How important is it to have different positions for mixing?



90% of your audience won't care and will be listening on tiny headphones or crappy speakers. Just make it sound nice to you.

Mike Verta discussed some techniques for faking it in one of his classes, I think it was the Virtuosity class.


----------



## JohnG (Dec 22, 2019)

The mic position choice mostly arises when trying to emulate an orchestral sound, maybe a concert or church or something like that.

Personally, I have not heard convincing "fixes" for mic positions. If it's recorded close, it will sound close, no matter how much you torture it. "Far" mic positions raise different issues, of course. 

Engineers usually want everything as dry as a bone so they have more control. I don't really like giving engineers that much control, honestly. I prefer making it sound the way I want even if it handcuffs them somewhat. So, I use my ears, but of course also take into account whether or not there will be any parts of the music replaced with live recordings, or sweetened with an overdub of live playing.

My usual goal is a more natural sound than otherwise, recognising that nothing is exactly "natural" when one is using samples. 

The mic position question is quite different for a pop record, and also varies depending on what era -- 90s, 60s, 20s, recent. 

There aren't any rules, though. Some people will record a Stradivari and then put distortion on it...


----------



## robgb (Dec 22, 2019)

Yes, you can do it with varying degrees of success. Experiment. Use different reverbs. A combination of room and hall, algorithmic and IR. Put each mic position on a different send (obviously) and adjust accordingly. Experiment with pre- and post-fader configurations.

Of course, you'll have the most success with this if the instrument is recorded dry. It would be impossible to get, say, Albion One strings to sound any closer thanks to the baked-in room sound.

Sometimes you can layer drier libraries with the wetter ones and play around with the levels. I've combined the Spitfire Studio Strings Core (tree mic only) with the Soundiron Hyperion Strings Elements (very dry) and it sounds pretty fantastic.


----------



## re-peat (Dec 22, 2019)

robgb said:


> Yes, you can do it



No, you can’t, Rob. Not really. The sum of the sound of a source and the sound of the room in which it is recorded is a *much* more complex affair than simply stacking the dry signal and its reverberation. Room and source interact in countless ways, cancelling and/or accentuating frequencies, that no reverb, nor whatever complex combination of reverbs and various processors you might come up with, can quite simulate.

That said, with some instruments the results are more acceptable than with others, yes. I’ve never been able to turn dry-ish percussion, a dry grand piano or dry, close-miked brass into convincing ‘concert’ versions of these instruments. And that’s because that complex interaction between the room and the source is strongest with instruments that have a lot of energy and strong, explosive transients, as is the case with the examples mentioned.

The instrument category which, in my experience, lends itself best to artificial spatialization is woodwinds. Here’s *an example* of what I find rather decent virtual spatialization.

_


----------



## Nick Batzdorf (Dec 22, 2019)

I dunno. To me reverb works extremely well.

I can argue different sides. On one hand, multiple mic positions are a great feature in sample libraries.

On the other hand, VSL's MIR especially proves that you can do a whole lot with convolution processing. I don't own it, but you can still do a lot with garden-variety convolution processors + regular reverbs.

On the third hand, if it's that critical then what you really want is live musicians.

I say you can create very live-sounding ensembles with libraries that don't have multiple mic positions.


----------



## re-peat (Dec 22, 2019)

Nick Batzdorf said:


> To me reverb works extremely well.



To me too, Nick. But there’s a difference between adding reverb, and spatializing a sound. The latter implies that you end up with a result where the source has become a part of the total sound (and where the room adds significantly to the instruments' sound). As opposed to a dry signal with some reverberation added.

Brass makes for excellent testing material. There’s no software in existence which can turn the sound of dry, closely-recorded brass into the sound of symphonic brass. The leap between the two is far too big for even the best reverbs or spatializers.

_


----------



## robgb (Dec 22, 2019)

re-peat said:


> No, you can’t, Rob. Not really.


Maybe I should have clarified. You can FAKE it (like we used to before people got lazy). But the truth is, trying to emulate microphone positions is pretty pointless. As David says, just make it sound good using whatever combination of reverb and delay sounds good to you.

The whole microphone position deal didn't become a thing until developers decided they didn't want to do dry libraries anymore. Not sure why that became a thing, but it was clearly a good gimmick because people are always asking about microphone positions.

Honestly, learn to engineer properly. Your world will be a lot less complicated if you do.


----------



## JohnG (Dec 22, 2019)

Rob, that's a condescending, snotty reply. Suggesting that people who prefer different mic positions are "lazy" or need to "learn to engineer properly" is obnoxious.

I've been at this for quite some time and I make the choices I make for the sound, not out of sloth or ignorance.

I do think it depends on what kind of material you're writing, but your categorical rejection and dismissal of those who do things differently just isn't friendly or helpful.


----------



## Fredeke (Dec 22, 2019)

@romplin : Have you seen this other thread: Creating depth - one possible method tutorial ?


----------



## Robert_G (Dec 22, 2019)

You guys sound like you know way more than me. Here's your challenge.
Make the stupid tree mic (no other mic options) in my Spitfire Studio Woodwinds *Core* sound even a little bit good. I've tried every scenario I can think of: reverb, delay, and other effects. Nothing works.

It's one mic, and it's the wrong mic to be the only option; trying to make it work is so frustrating. 
If you can truly emulate mic positions with reverb, one of you should be able to do this.

Thanks, and sorry for hijacking the OP's thread.


----------



## David Kudell (Dec 22, 2019)

This is a fun experiment you can try. Take any library you own that has multiple mic positions. Take the close mic and add reverb to it, then compare that to a tree or room mic without reverb added. See if you can get them to sound the same.

I tried this and found they sound very different... the tree/room mic sounds much better in every case.


----------



## Living Fossil (Dec 22, 2019)

re-peat said:


> That said, with some instruments the results are more acceptable than with others, yes. I’ve never been able to turn dry-ish percussion, a dry grand piano or dry, close-miked brass into convincing ‘concert’ versions of these instruments. And that’s because that complex interaction between the room and the source is strongest with instruments that have a lot of energy and strong, explosive transients, as is the case with the examples mentioned.
> 
> The instrument category which, in my experience, lends itself best to artificial spatialization are woodwinds. Here’s *an example* of what I find rather decent virtual spatialization.



Piet, since that flute example is pretty amazing, I'd be extremely interested to hear an example that involves percussion (and maybe also brass), just to correlate what you perceive as "non-convincing" with my own perception.


----------



## Nick Batzdorf (Dec 22, 2019)

On a promo cue I wrote before Giga and streaming samples, I used all samples but added just a trombone, and then recorded him at half speed playing a trumpet part (and played it back at regular speed an octave up).

It worked really well.


----------



## re-peat (Dec 24, 2019)

*Fossil,*

When I mentioned brass and percussion earlier, I meant the big ejaculations (I use the word in the Doylean sense). It’s those sizzling fanfares, massive epic brass and thunderous percussion which, to my ears anyway, are quite impossible to spatialize convincingly — if you have to start from dry, close-up sampled instruments, I mean — because the room is such a big and essential part of the sound and that’s something that software just can’t generate yet.

For smaller brass and percussion parts however, SPAT (the software which I used in the Flute example above) is, in my view, _sensationally_ good.

Here’s *a little video*, showing how SPAT deals with SampleModelling’s The Trumpet. (The video is the result of a live capture of me messing about with some of SPAT’s parameters, so please allow for some rough and clumsy moments.)
The music uses the theme of *Stravinsky’s “L’Histoire Du Soldat” (Royal March)*, but as you’ll hear, it’s more une histoire du cousin espagnol du soldat.
You’ll also hear how a very dry snare drum part sounds when it's sent through SPAT and is shown all the corners of the room. Quite acceptable results, to my ears.

Also note that SPAT, unlike the competition, doesn’t generate the least bit of crackling when changing parameters. Even when changing them very fast and drastically. Totally crackle-free it is. (Except when you change the parameters that define the room, but usually that needn't be done _during_ a mix anyway.) It’s this feature which, in my opinion, adds tremendously to its already unsurpassed musical value, because it allows you to simulate player movements and/or performer’s mic technique. (As in: turning the horn away from the mic in loud passages and coming closer again in softer passages. Similar to what good vocalists do as well.)
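The crackle-free behaviour described here usually comes down to parameter smoothing: gliding coefficients toward their targets per sample rather than jumping per block, so there's no zipper noise. A toy illustration of that general technique (an assumption on my part, not SPAT's actual algorithm):

```python
def smoothed_gain(samples, gain_targets, smoothing=0.001):
    """Apply a time-varying gain, gliding toward each new target so
    abrupt jumps (e.g. a sudden distance change) don't click.

    gain_targets supplies one target gain per input sample."""
    out, g = [], gain_targets[0]
    for x, target in zip(samples, gain_targets):
        g += smoothing * (target - g)  # exponential glide toward the target
        out.append(g * x)
    return out
```

Smoothing like this is what lets a distance or azimuth parameter be ridden expressively during a take, like the horn player turning away from the mic.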

The brass instruments (plus tenor saxophone) in *this example* are all done with Sample Modelling instruments too, and SPAT of course. The other instruments (flute, clarinet, bassdrum, glockenspiel, snaredrum), except the strings, are all sent through SPAT as well. This is music from *Prokofiev’s “Lieutenant Kijé”*, a rarely performed episode in which the Lieutenant gets promoted to Colonel.

Sadly, the version of SPAT I work with is no longer available. In their strange French wisdom, Flux (the developer) replaced it with SPAT Revolution which, while even more powerful, is also a rather bloated and unwieldy affair.

_


----------



## Living Fossil (Dec 24, 2019)

@re-peat : Thanks a lot for sharing these! Indeed, the results you get with SPAT are really exciting. 
I would be more than happy if Precedence would get that feature of directing the sound in a specific direction in a future version, since that's something that could be of great use (not only in orchestral music, but also there)

Merry Christmas!


----------



## Nick Batzdorf (Dec 26, 2019)

Nick Batzdorf said:


> On a promo cue I wrote before Giga and streaming samples, I used all samples but added just a trombone, and then recorded him at half speed playing a trumpet part (and played it back at regular speed an octave up).
> 
> It worked really well.



Oh - I forgot to add the punch line: just one live instrument makes the whole thing sound far more live. That was the point of my post.


----------



## ProfoundSilence (Dec 26, 2019)

Mic positions are absolutely more important than reverb. 

1.) There is no punch when it comes to reverb. Nothing like washed-out brass doused in a gallon of reverb to sit it in the "back of the room".

2.) Your options are either a close mic plus tons of positioning processing and a weak attempt at making reverb sound like a real room, or an already-wet mic with no room for added reverb before it gets washed out, and no clarity unless you EQ the hell out of it. 

As someone who's experimented tons with reverbs, it's simply NOT the same. After the first of the year, if you still think it is, I'll post some stems and challenge anyone to create something with the same depth, 3D space, clarity, and punch using a single mic and reverbs. 

It's a bold statement, but I'll stand by it, unless you're going for a washed-out orchestra sound, as if it was recorded with a single Decca tree, versus a professionally captured performance on par with a modern production.


----------



## Nick Batzdorf (Dec 27, 2019)

ProfoundSilence said:


> it's a bold statement, but I'll stand by it - unless you're going for a washed out orchestra sound like it was recorded with a single decca tree vs a professionally capture performance on par with a modern production



First, are we only talking about orchestral music?

But I've said it before and I'll say it again: adding one live instrument does more than 500 sampled mic positions.

Also, using the word "professionally" here doesn't make the case.  Lots of great orchestral performances were recorded with a single Decca tree and no spot mics.

It's all context.


----------



## ltmusic (Dec 27, 2019)

Living Fossil said:


> @re-peat : Thanks a lot for sharing these! Indeed, the results you get with SPAT are really exciting.
> I would be more than happy if Precedence would get that feature of directing the sound in a specific direction in a future version, since that's something that could be of great use (not only in orchestral music, but also there)
> 
> Merry Christmas!




Hi,

What is your opinion of Precedence? Do you use it in combination with Breeze?

Thanks!


----------



## Living Fossil (Dec 28, 2019)

ltmusic said:


> what is your opinion on Precedence ? you use it in combination with breeze ?



After using the Precedence/Breeze combo for about a year, I have to say I really like it.
The recent update with the link function is something of a game changer inside this environment.
It's really great to have the option to change the parameters of the reverb (e.g. the room size) and have it interact directly with all instrument groups.
What I'm still missing (as stated above) is a feature to control the direction of the sound. (That's extremely impressive in re-peat's examples.)

Another thing I think is worth mentioning concerns combining different libraries:
In my experience (so far), Breeze and Precedence on their own are not enough if you want to combine e.g. VSL libraries with e.g. Spitfire libraries.
So far, I've had the most homogeneous results by adding some Nimbus plus a touch of coloring reverb (e.g. PSP 2554) to VSL. Nimbus is great at creating depth, since it gives you control over the duration and development of the ERs' energy and the attack phase of the reverb.
But that's only an impression of my current state of exploration; I'm sure there are other ways to get even better results (e.g. by modifying the Breeze settings for different libraries, etc.).


----------



## Andrew Souter (Dec 28, 2019)

It's worth noting that it is possible to use Breeze in conjunction with Precedence and minimize the Breeze tail if you wish to combine it with additional "global tails" from a verb or two on sends, such as B2 or a 3rd-party reverb. You can do this in two ways:

1) Make the decay time in Breeze very low, say 0.5 sec or less, so that it functions mostly to provide ERs.

2) Use the "Gain" slider to reduce the gain of the wet component. When using "Distance" mode and "Balance Mix Mode", Breeze will still apply additional spatialization to the DIRECT signal. The character of this spatialization will change depending on Distance. In Distance mode, "Gain" is wet gain, not total gain, so you can use it to control the level balance between the spatialized direct signal and the reverb.

This is very much similar to the exact topic being discussed.


----------



## Joël Dollié (Dec 28, 2019)

To answer the question of the thread title,

Not very. Plugins like Precedence, or even "normal reverb", are great for adding tail or nudging things toward the back a little more, and even plugins like the 2016 Stereo Room at 100% wet can give a pretty good illusion of a room space. But when it comes to orchestral recordings, something that's recorded too close will never sound truly far. At some point more reverb just makes things messier and never truly adds depth the way proper mic positions and a proper hall do.


----------



## Andrew Souter (Dec 28, 2019)

I think it should be specified whether we are discussing sections or solo instruments. That is: is whatever is being spatialized close to a point source, or is it quite a large composite object of many players spread out over several meters?

I would say you can do it very well in the solo case. It is more challenging in the case of sections because in the real world you are already mixing many different objects that each have their own spatial location, so the result is already a very complex composite.

If you recreated sections with multiple solo instruments that were each spatialized differently, and each had a "sufficiently humanized" performance (you can NOT simply take copies of the same solo take), I would bet you could arrive at a similar result and recreate sections quite well.


----------



## Nick Batzdorf (Dec 28, 2019)

You know, what's by far the most important - to me anyway - is a *sense* of depth and clarity. Exact positioning corresponding to a real orchestra is far less important, because it only takes an element or two to create that overall illusion. That's why I say adding just a single live element makes all the difference in the world - not that you have to do that.

So when ProfoundSilence says mic positions are the only thing... that just hasn't been my experience.

Also, depth and left-right positioning are not the same thing at all, to point out the obvious. If you listen to most pop records, what you hear is mostly left, right, or center. That's because amplitude-based panning can't really do much more than that; move your head a fraction of an inch and all your careful panning goes to.... pot. Hence delay-based panning, which has other issues.
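To make that distinction concrete, here's a bare-bones sketch of both cue types; the function names and the whole-sample delay are illustrative assumptions:

```python
import math

def amplitude_pan(samples, pan):
    """Equal-power amplitude pan. pan runs -1 (hard left) to +1
    (hard right); only the level ratio between channels changes."""
    theta = (pan + 1.0) * math.pi / 4.0
    left = [math.cos(theta) * x for x in samples]
    right = [math.sin(theta) * x for x in samples]
    return left, right

def delay_pan(samples, delay_samples):
    """Haas-style pan: both channels keep the same level, but the
    right channel arrives late, pulling the image toward the left."""
    right = [0.0] * delay_samples + list(samples[: len(samples) - delay_samples])
    return list(samples), right
```

Amplitude panning collapses toward left/center/right the moment the listener moves off-axis; delay panning survives that better, but combs badly when the channels are summed to mono, which is part of the "other issues" mentioned above.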

Now, VSL's MIR does a pretty amazing job of positioning in both directions using two speakers. Another subject.

***

Also, if you have really wide-dispersion speakers, you'll often hear vertical positioning too. That's a happy accident, but it's pretty dramatic.


----------



## ltmusic (Dec 28, 2019)

Living Fossil said:


> After using the Precedence/Breeze combo for about a year i have to say i really like it.
> The recent update with the link function is somehow a game changer inside this environment.
> It's really great to have the option to change the parameters of the reverb (e.g. the room size) and having a direct interaction with all instrument groups.
> What i'm still missing (as stated above) is the feature to control the direction of the sound. (That's extremely impressive in repeat's examples)
> ...



Thanks!!


----------



## Nick Batzdorf (Jan 1, 2020)

Andrew Souter said:


> This is very much similar to the exact topic being discussed.



I just listened to a couple of your demos through speakers built into a TV (because my studio isn't turned on), and it's pretty stunning even through them.

So maybe I disagree with myself saying that live instruments are the most important ingredient in a live-sounding recording.


----------



## José Herring (Nov 29, 2020)

Reviving this old thread. These days I'm trying to simulate mics placed midway down a concert hall using fairly ambient samples. Even with a plethora of mic positions, it seems like the hall mics are missing, because all the samples now are recorded in a studio.

I'm having varying success just sending the instruments to a hall verb set with a predelay of 50-60 ms (about 56-68 feet), chosen based on the time it takes sound to travel that far into the hall. I've removed the highs and the lows on the reverb channel, leaving just the mids. I removed the lows to clear up the muddiness, and I removed the highs because high frequencies carry less energy, so they'll be far less loud that deep into the hall. 

I'll have to test it further in different listening environments, but I'm starting to get a bit excited by the results.
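The arithmetic behind that pre-delay choice is easy to sanity-check: pre-delay is just extra travel distance divided by the speed of sound (roughly 1125 ft/s, or 343 m/s, at room temperature), so 50-60 ms corresponds to about 56-68 feet of depth into the hall:

```python
SPEED_OF_SOUND_FT_PER_S = 1125.0  # ~343 m/s at ~20 degrees C

def predelay_ms(distance_ft):
    """Pre-delay implied by a virtual listening position
    distance_ft into the hall (direct-sound travel time)."""
    return distance_ft / SPEED_OF_SOUND_FT_PER_S * 1000.0

def implied_distance_ft(predelay):
    """Inverse: how far into the hall a pre-delay (in ms) implies."""
    return predelay / 1000.0 * SPEED_OF_SOUND_FT_PER_S
```

A 500 ms pre-delay, by contrast, would imply a hall over 500 feet deep, so if the verb sounds detached rather than distant, this is a quick number to check.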


----------

