# Mic positions in sample libraries vs. early reflections (and some other questions)



## thevisi0nary (Jul 12, 2018)

I know there is a lot of information on the use of reverb, and I know that it ultimately comes down to what you prefer, but it is reassuring to hear what other people do so that you are more comfortable developing a new setup or technique. So here are a few questions I wanted to ask.

1. Mic positions in sample libraries vs. ERs and spatial positioners. I've been reading a lot about putting different libraries in the same space using ERs. What I am wondering is whether this is necessary if your sample library already comes with different mic positions (Close, Tree, Ambient, etc.). With these mic controls, is there any point to using something for early reflections? In what situation would you, and why?

2. When you try to "put things in the same room", does this matter more in terms of the tail? Or does it still apply to ERs?

3. How often, and why, do you send the parent of a group track to a reverb bus versus sending all individual tracks?

4. How much processing, such as EQ, if any, are you doing on the reverb bus? What about before and after the reverb?

Thank you for any help.


----------



## JohnG (Jul 12, 2018)

1. no point
2. this one is a little complicated. If, at the extreme, you have close-mic'd brass and far-away-mic'd strings, you'd have both an ER problem and a tail problem. But if you match mic distance and drape a little reverb over the result, you solve both problems
3. I use reverb sends on all individual tracks* so I can use more or less, and longer or shorter, reverb depending on the sound of the original samples. Not sure I 100% understand your question here. I also have at least one separate reverb for each section (strings, brass, perc, choir, etc.) so I can deliver individual stems, but that's dictated by the dub stage requirements, not a mixing preference.
4. Many people will roll off bottom and some top (using both high-pass and low-pass filters, in other words) before the source hits the reverb. You roll off the lows so as not to add extra rumble to the reverb, and roll off some highs so as not to add too much sizzle. Many (all?) reverbs have controls that let you do this on the reverb itself so you don't have to interpose EQ plugins. There are no exact rules here, though some people claim there are. I have heard people argue, for example, that you want a pretty tight band for snare reverb -- you cut both highs and lows fairly aggressively. After the reverb you certainly might put bus compression or overall mastering plugins, but it's more usual to process the signal with EQ and whatnot before it hits the reverb. Not so much after.

As an overall comment I personally think a lot of people spend way too much time on early reflections and all this jabber. It is absolutely useful to try to get a natural sound by matching mic positions if that's possible, and that takes care of it.

* By "individual tracks," I mean that I break the audio recordings of strings into about 8 stereo pairs, the drums into 10, etc. and I'm able to regulate the reverb send for each one of _those_ stereo pairs. So, "high short strings" may need more reverb than "contrabass." I don't have a separate send for V1, V2, Vla. Timpani have a separate reverb send control from snares or ethnic drums. Some tracks, like synth bass, may not need any reverb at all.
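The high and low rolloff described in point 4 can be sketched in a few lines. This is a minimal illustration, assuming simple one-pole filters and made-up cutoff frequencies (150 Hz and 8 kHz); a real send would use the filters built into the reverb or a proper EQ plugin.

```python
import math

def one_pole_coeff(cutoff_hz, sample_rate):
    # Smoothing coefficient for a one-pole filter at the given cutoff.
    return 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)

def low_pass(signal, cutoff_hz, sample_rate):
    a = one_pole_coeff(cutoff_hz, sample_rate)
    out, y = [], 0.0
    for x in signal:
        y += a * (x - y)
        out.append(y)
    return out

def high_pass(signal, cutoff_hz, sample_rate):
    # High-pass built as "input minus its low-passed copy".
    low = low_pass(signal, cutoff_hz, sample_rate)
    return [x - l for x, l in zip(signal, low)]

def pre_reverb_filter(send, fs=48000, low_cut=150.0, high_cut=8000.0):
    # Roll off lows (rumble) and then highs (sizzle) before the reverb input.
    return low_pass(high_pass(send, low_cut, fs), high_cut, fs)
```

The point is only the order of operations: the filtering sits between the send and the reverb, so the dry track itself is untouched.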


----------



## thevisi0nary (Jul 13, 2018)

JohnG said:


> 1. no point
> 2. this one is a little complicated. If, at the extreme, you have close-mic'd brass and far-away-mic'd strings, you'd have both an ER problem and a tail problem. But if you match mic distance and drape a little reverb over the result, you solve both problems
> 3. I use reverb sends on all individual tracks* so I can use more or less, and longer or shorter, reverb depending on the sound of the original samples. Not sure I 100% understand your question here. I also have at least one separate reverb for each section (strings, brass, perc, choir, etc.) so I can deliver individual stems, but that's dictated by the dub stage requirements, not a mixing preference.
> 4. Many people will roll off bottom and some top (using both high-pass and low-pass filters, in other words) before the source hits the reverb. You roll off the lows so as not to add extra rumble to the reverb, and roll off some highs so as not to add too much sizzle. Many (all?) reverbs have controls that let you do this on the reverb itself so you don't have to interpose EQ plugins. There are no exact rules here, though some people claim there are. I have heard people argue, for example, that you want a pretty tight band for snare reverb -- you cut both highs and lows fairly aggressively. After the reverb you certainly might put bus compression or overall mastering plugins, but it's more usual to process the signal with EQ and whatnot before it hits the reverb. Not so much after.
> ...



Thank you very much, this clears up a lot for me.

So in the instance of question 2, with the close-mic'd brass and the far-mic'd strings, you would try to put them at the same mic distance with ERs, spatial positioners, or the sample library's included mics, and then glue them with a reverb (if I am understanding correctly).

For question 3, I was asking whether you would send all individual tracks to the reverb bus or send the parent track of each group (i.e., sending each brass instrument vs. the single parent track of that group). Actually, I just read your edit, thank you.

Last question I have: are there any situations where you are automating the level of a reverb send? I don't mean the reverb bus itself, I mean the level of one instrument's send to the reverb. If so, what type of situation would typically benefit from this? I would imagine it can't be that often.


----------



## aaronventure (Jul 21, 2018)

1. Mic controls are only for the recorded space. No mic setting of a scoring stage sound will get you the sound of a hall recording. ERs, on the other hand...

2. Well, in bigger spaces, early reflections tend to appear later and are more spaced apart, because it takes the sound longer to reach the walls. It works the same way on any one side as you move nearer to that wall, though.

3. I work mostly with dry(ish) libraries these days (except for percussion), so very often.

4. It truly depends on the library. If I send a library with a sound that I like to a reverb and there's a resonance somewhere, I'll fix it on the reverb bus. If, somewhere along the way, one of the libraries sounds fine but there's a weird bump when sent to that same reverb, I'll add an EQ, route that EQ to channels 3 and 4 instead and then send channels 3 and 4 to that reverb, leaving me with no EQ on the dry signal, just on the signal being sent to the reverb.

So yes, it depends. I have a sound goal and work towards it. Clear, full-bodied and dark, whatever. Anything that's not playing along gets EQ'd.
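The timing point in 2 follows directly from the speed of sound. A back-of-the-envelope sketch, assuming c ≈ 343 m/s and source and listener sitting at roughly the same spot:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def first_reflection_delay_ms(wall_distance_m):
    # The reflection travels to the wall and back, so the first echo
    # arrives 2*d / c seconds after the direct sound.
    return 2.0 * wall_distance_m / SPEED_OF_SOUND * 1000.0
```

A wall 1.5 m away reflects after roughly 9 ms; one 5 m away after roughly 29 ms, which is why bigger rooms push the early reflections later and further apart.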


----------



## thevisi0nary (Jul 30, 2018)

aaronventure said:


> 1. Mic controls are only for the recorded space. No mic setting of a scoring stage sound will get you the sound of a hall recording. ERs, on the other hand...
> 
> 2. Well, in bigger spaces, early reflections tend to appear later and are more spaced apart because it takes the sound longer to reach the walls. It's (50%) the same as you near a wall on one side, though.
> 
> ...



Thank you very much for this info!

1. Am I correct in assuming that what you are referring to here is when the goal of the song is an orchestral room setting? Are there non-orchestral settings you have run into where this became important to you and you wanted to use ERs instead of the mic positions in the sample library? When you do use ERs, do you turn everything but the close mics off in the sample library?

2. Got it. Thanks!

3. So you send more individual instruments to your reverb send as opposed to the entire instrument group? In a situation where you were working with less dry but not incredibly reverberated instruments, what would you do?

4. When you encounter that resonance, do you prefer to have it EQ'd before it hits the reverb? Are there any situations where you would process the reverb send after the reverb insert? As for the second part about sending to channels 3 and 4, it's funny that you mention this because I just learned how to do it from the Reaper forum. It's really helpful! You don't have to make an entirely new track just to EQ something before it hits the send.


----------



## aaronventure (Jul 30, 2018)

thevisi0nary said:


> 1. Am I correct in assuming that what you are referring to here is when the goal of the song is to be in an orchestral room setting? Are there non orchestral settings you have run into where this became important to you and you wanted to use er's instead of the mic positions in the sample library? When you do use er's, do you turn everything but the close mics off in the sample library?



No, I wouldn't use just the close mics. I would use all the mic positions I have, and depending on how they sound and what I want to achieve, I'd add just an IR reverb on top of it. Think of it as an extension of the currently recorded ambience. Don't worry about it being "right" or "wrong"; the goal is for it to sound good.

Mic positions are invaluable in creating the stereo image because of all the natural delay between the signals (sound travel time). Depending on how the library was mic'd and what I want it to sound like, I'd then add additional delay taps and re-image it, adjust with EQs, etc.
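That natural inter-mic delay is just the extra travel distance converted to time. A hypothetical sketch, assuming a 48 kHz session and c ≈ 343 m/s:

```python
def mic_delay_samples(extra_distance_m, sample_rate=48000, c=343.0):
    # Extra travel distance from one mic position to a farther one,
    # expressed as a delay in samples at the session rate.
    return extra_distance_m / c * sample_rate
```

A tree sitting roughly 3 m farther from the players than the close mics would arrive about 420 samples (just under 9 ms) later; added delay taps are essentially fake extra mic positions placed this way.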



thevisi0nary said:


> 3. So you send more individual instruments to your reverb send as opposed to the entire instrument group? In a situation where you were working with less dry but not incredibly reverberated instruments, what would you do?


The only situation where I would send individual instruments as opposed to a library bus is if I need different amounts of reverb on them.

Even if the library is as drenched as Uppsala right now, as long as it has any crossfading and I detect the room disappearing on fast modwheel movements, I'll put reverb on it. Take Berlin Strings, for example. Not overly wet, but it'll still sound weird as all hell when you do a fast crossfade. So instead of putting in ambient mics, I send the whole thing to an IR reverb.



thevisi0nary said:


> 4. When you encounter that resonance, do you prefer to have it eq'd before it hits the reverb? Are there any situations where you would process the reverb send after the reverb insert? And for the second part about sending to channels 3 and 4, it's funny that you mention this because I just learned how to do this from the reaper forum. It's really helpful! You don't have to make an entirely new track just to eq something before it hits the send.


Mostly only on IR reverbs. Sometimes, they'll have a bump here or there that I dislike, or will be missing highs, lows, whatever. It comes from the room response, the speakers used to play back the sine sweep, and then the microphones and preamps used to record the response, so if I need to reshape the frequencies, I'll use an EQ. 

Another thing I like to do is put a multiband compressor on the reverb track, slam the _shit_ out of the mids (600 Hz - 6 kHz), slam the other two bands a bit less, and boost the gain to compensate for the reduction. I then dial the 'mix' down to 5-8%. It gives the reverb this very light "tapey" feel, because the midrange pumping will be insane but the whole effect is mixed down to be barely audible at 5-8%. To my ears it sounds more like a "record" this way than without it (because if you were to record in an ambient room and then put any compression on your tracks, you'd get similar results, albeit in a different way).

This is by no means a "right" or a "wrong" way to treat a reverb send, it's just my own personal thing.
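The 'mix' trick above is just parallel compression on the reverb return. A crude sketch, assuming a hypothetical instantaneous compressor (no attack/release and no band-splitting, so the multiband part is omitted) and a 6% blend:

```python
def compress(sample, threshold=0.2, ratio=8.0):
    # Crude instantaneous compressor: anything over the threshold
    # is reduced by the ratio. Enough to show the blend, nothing more.
    mag = abs(sample)
    if mag <= threshold:
        return sample
    out_mag = threshold + (mag - threshold) / ratio
    return out_mag if sample >= 0 else -out_mag

def parallel_blend(signal, mix=0.06):
    # The slammed copy is mixed in quietly under the untouched reverb,
    # mirroring the 5-8% 'mix' setting described above.
    return [(1.0 - mix) * x + mix * compress(x) for x in signal]
```

Because the compressed copy sits at only a few percent, the pumping is felt more than heard, which matches the "barely audible" intent.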

Yes, Reaper routing is godlike. Everything is so simple with it!


----------



## tim727 (Dec 25, 2022)

Resurrecting this old thread as I had the same question as the OP's point (1) and am still a little unclear on a couple of things based on the responses here. Let's say I'm working on a track with a full Orchestral Tools template. Please confirm that my understanding of the following is correct:

(1) Imagine I'm only using close mics on all the sections. In this case, I would need to use a reverb to add ERs to create a sense of depth correct?
(2) Now imagine I'm using both close mics *and* tree mics on all sections. In this scenario is there now no need to use a reverb to add ERs because the tree mics already contain room info and this will afford the mix a sense of depth already?
(3) If the answer to (2) is yes ... then I have a follow up question. When Orchestral Tools recorded let's say the string section and the woodwinds section (for BS and BWW respectively) do they place the tree mics at the same distance from the instruments? Or would the tree mics be further for BWW while closer for BS (since the string section should be closer to the front of the orchestra)? If it's the former then it would seem that even though the tree mics would add depth/room positioning they would end up placing all sections at the same depth, which would not be what is wanted.

Hopefully my questions made sense. Appreciate any help!

@JohnG @aaronventure


----------



## aaronventure (Dec 25, 2022)

tim727 said:


> (1) Imagine I'm only using close mics on all the sections. In this case, I would need to use a reverb to add ERs to create a sense of depth correct?


Yes, but it's gonna sound like a close-miked recording with reverb, because the reverb is not recreating a microphone setup along with all the characteristics of a microphone and the way the reflections interact with them. 



tim727 said:


> Now imagine I'm using both close mics *and* tree mics on all sections. In this scenario is there now no need to use a reverb to add ERs because the tree mics already contain room info and this will afford the mix a sense of depth already?


Correct, but depends on the usage of samples. If you crossfade a conventional sample library, you're crossfading in the recorded reverb as well, which just sounds wrong (it sounds like a recording being crossfaded, not an instrument playing in a space). So I would still use a bit of reverb. I would likely drop the ambient mics in that case. 



tim727 said:


> If the answer to (2) is yes ... then I have a follow up question. When Orchestral Tools recorded let's say the string section and the woodwinds section (for BS and BWW respectively) do they place the tree mics at the same distance from the instruments? Or would the tree mics be further for BWW while closer for BS (since the string section should be closer to the front of the orchestra)? If it's the former then it would seem that even though the tree mics would add depth/room positioning they would end up placing all sections at the same depth, which would not be what is wanted.


If it's recorded "in situ", the tree is above the conductor position. If they knew they're gonna be making a full orchestra eventually, then they did the initial positioning of the strings as if there were other instruments behind them. So you would not need to do anything. You just need to keep in mind that wet samples do not crossfade perfectly and use the libraries accordingly (meaning that if you want a fp phrase, get a fp sample and don't try to do it with your sustain patch). 

Now, this approach does not _fully_ recreate the image of an orchestra because when there's 100 people on stage, the sound reflects off of them as well. But again, if you did somehow manage to get 100 people in the room to stay perfectly still and quiet while one player or section plays a scale of long notes for 10 minutes, then theoretically using that library would only be "valid" for that same instrumentation. The fact that there have been numerous productions using virtual instruments that do not adhere to this spec to orchestrate pieces with different instrumentation is all the proof you need that this ultimately makes no difference. 

The only things that matter are imaging (done by correctly setting up and mixing microphones) and depth (occurring due to high frequency air loss, difference of direct/reflected sound due to instruments' angle relative to the microphones). 

The first one is really up to the library. The second one you can affect, but it only goes one way; you can add depth, but not really remove it. This is ultimately why when working with libraries that were recorded in spaces, it's easier to work with the ones that were done in drier rooms. You get the imaging and the depth information that's all correct relative to one another and your brain gives you the green light, but you can always add more depth if you want with reverbs that do not necessarily mess with the original image. 

Of course, there are some differences when you take a group and record them in a smaller studio vs. a concert hall, especially in how dense early reflections are. But your brain will _okay_ it without question as long as the image and the depth relatively make sense, i.e. it was recorded with the same mic setup and the distances between the mics and the instruments are similar. And that's why, when matching instruments from different libraries that were recorded in different spaces, you just need to make sure that the mic technique was similar if not the same, and then you just check the depth and push it back a bit if needed. If the miking is different, you can _attempt_ to make it sound better by adding delay taps and panning them appropriately, but this is very much a case-by-case thing and there are no rules. 

If there's anything you should take from this unexpectedly long reply, it's that only imaging and depth matter, and how they're originally achieved. This should then influence all your decision making when attempting to set up a virtual orchestra.
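The two depth cues above (arrival time and high-frequency air loss), plus a level drop, can be sketched as a single "push back" operation. This is a toy model with made-up numbers (the absorption curve in particular is invented, not measured); real depth placement would come from mic positions or a positioning reverb:

```python
import math

def push_back(signal, meters, fs=48000, c=343.0):
    # Three crude distance cues: later arrival, quieter level, and
    # high-frequency loss standing in for air absorption.
    delay = int(meters / c * fs)                     # extra travel time
    gain = 1.0 / max(meters, 1.0)                    # inverse-distance drop
    cutoff = max(20000.0 - 800.0 * meters, 2000.0)   # invented absorption curve
    a = 1.0 - math.exp(-2.0 * math.pi * cutoff / fs)
    out, y = [0.0] * delay, 0.0
    for x in signal:
        y += a * (gain * x - y)   # one-pole low-pass on the attenuated signal
        out.append(y)
    return out
```

Note the one-way nature described above: each of these cues only adds distance, and none of them can be inverted to pull a wet recording forward.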


----------



## Beat Kaufmann (Dec 25, 2022)

It all depends a bit on the libraries you use. Personally, I mainly use the VSL Synchron libraries. They come with all kinds of microphones. With those, the instruments either sound close (close mics), a little bit away (MID), or, if you just let the instruments sound with the room mics (MAIN/ROOM), far away. I realize the positions of the instruments with clever microphone ratio settings. Left-right placement I solve with the pan pots.
The advantage: the instruments sound as they really sounded in the room during the recording. Important - I don't use any presets from VSL, because they mostly sound anything but natural. So I adjust everything myself. Finally, I add some "tail over all". That's how I do it: Example (Video)

If I add to such Synchron stage mixes instruments from older libraries, which sound dry and cannot be pushed into the room depth by themselves, I have to do this with a reverb that brings the instruments to the right room depth - preferably without tail, of course. This all works wonderfully and sounds fantastically natural. As I said, you can only work this way if your libraries offer signals with which you can realize such stage positions.

However you solve it, it seems important to me that you position instruments not only from left to right, but also from front to back, and you are welcome to exaggerate a bit. This results in vivid mixes, even if only a little tail is involved.

Two more examples of pieces with "microphone-positioned instruments" (no presets, no positioner, no ERs, no MIR, no nothing - just microphone positioning + a little tail over all):
- *Sleigh Ride*
- *O Waly Waly*

Best of luck,
Beat


----------



## ALittleNightMusic (Dec 25, 2022)

Beat Kaufmann said:


> It all depends a bit on the libraries you use. Personally, I mainly use the VSL Synchron libraries. They come with all kinds of microphones. With those, the instruments either sound close (close mics), a little bit away, or, if you just let the instruments sound with the room mics, far away. I realize the positions of the instruments with clever microphone ratio settings. Left-right placement I solve with the pan pots.
> Advantage, the instruments sound as they really sounded in the room during the recording. Important - I don't use presets from VSL, because they sound anything but natural. I adjust everything myself.
> Finally, I install some "tail over all". Example (Video)
> 
> ...


Love the demo tracks! Would you please consider a video diving into your positioning / microphone mixing approach to Synchron libraries? It would be immensely helpful!


----------



## Beat Kaufmann (Dec 25, 2022)

ALittleNightMusic said:


> Love the demo tracks! Would you please consider a video diving into your positioning / microphone mixing approach to Synchron libraries? It would be immensely helpful!


The link was already in my first reply.
But here it is again:


----------



## Joël Dollié (Dec 26, 2022)

tim727 said:


> Resurrecting this old thread as I had the same question as the OP's point (1) and am still a little unclear on a couple things based on the responses here.Let's say I'm working on a track with a full Orchestral Tools template. Please confirm that my understanding on the following is correct:
> 
> (1) Imagine I'm only using close mics on all the sections. In this case, I would need to use a reverb to add ERs to create a sense of depth correct?
> (2) Now imagine I'm using both close mics *and* tree mics on all sections. In this scenario is there now no need to use a reverb to add ERs because the tree mics already contain room info and this will afford the mix a sense of depth already?
> ...


1: Yes, but this is a bad idea and a tone killer most of the time. Nothing replaces natural room depth and diffusion of sound, and you can't get it by feeding a close mic into a verb. By chaining enough verbs in the right way (certain libraries do that) you can get a decent result, but it will be inferior to having good natural depth from mic positions.

2: depends on the tree and room

3: not sure actually. Depends how they placed the players.

When it comes to this ER stuff I don't think you need to think about it too scientifically because it varies a lot depending on the source.


1: try to get a nice room sound with mic positions, no reverb. Try to avoid close mics unless you need definition. Often Decca is the closest mic position you will need.


2: if you have different libraries, try to blend them with hall reverb sends in various amounts.

3: if spaces between libraries are too different, try ER on the drier one to match.


----------



## tim727 (Dec 27, 2022)

> *aaronventure:*
> 
> Yes, but it's gonna sound like a close-miked recording with reverb, because the reverb is not recreating a microphone setup along with all the characteristics of a microphone and the way the reflections interact with them.
> 
> ...


That makes sense. In general I'm not inclined to go with this "close mics only" approach and indeed never have in the past. However, I just bought Berlin Studio Reverb (BSR) with the hope of placing my Cinematic Studio stuff in Teldex along with my largely Orchestral Tools template, so I've been considering a close-mic approach (on the CS stuff only), since my sense is that if I use the main mic from CS, that will defeat the purpose of trying to get the instruments into Teldex. (I think?)



> *aaronventure:*
> 
> Correct, but depends on the usage of samples. If you crossfade a conventional sample library, you're crossfading in the recorded reverb as well, which just sounds wrong (it sounds like a recording being crossfaded, not an instrument playing in a space). So I would still use a bit of reverb. I would likely drop the ambient mics in that case.


So just to make sure I understand: you're suggesting that if there is not a lot of crossfading, then an added reverb for ERs is not necessary (if I'm using both close and tree mics), but if there is a lot of crossfading, then it's better to use an added reverb to smooth things out?



Beat Kaufmann said:


> If I add to such Synchron stage mixes instruments from older libraries, which sound dry and cannot be pushed into the room depth by themselves, I have to do this with a reverb that brings the instruments to the right room depth - preferably without tail, of course. This all works wonderfully and sounds fantastically natural. As I said, you can only work this way if your libraries offer signals with which you can realize such stage positions.


@Beat Kaufmann So if you're using a dry library and you're trying to add depth, are you using ERs to create that sense of depth within the room? Or are you using pre-delay? Or both? I'm kind of a reverb novice and have been having trouble understanding the way to use these parameters to attain the desired results since it seems that in certain cases there is more than one way to affect the perception of both size and depth.



Joël Dollié said:


> 1: try to get a nice room sound with mic positions, no reverb. Try to avoid close mics unless you need definition. Often Decca is the closest mic position you will need.
> 
> 2: if you have different libraries, try to blend them with hall reverb sends in various amounts.
> 
> 3: if spaces between libraries are too different, try ER on the drier one to match.


I'm curious, if you're using multiple libraries from the same developer (i.e. strings, brass, winds from OT or CS etc) do you tend to use the same mic positions and levels across all sections? So for instance you mention focusing on the tree mic. Would you use tree exclusively across all sections of the orchestra?

Thank you @aaronventure @Beat Kaufmann and @Joël Dollié for your thoughtful and detailed responses!


----------



## Beat Kaufmann (Dec 28, 2022)

tim727 said:


> ...
> @Beat Kaufmann So if you're using a dry library and you're trying to add depth, are you using ERs to create that sense of depth within the room? Or are you using pre-delay? Or both? I'm kind of a reverb novice and have been having trouble understanding the way to use these parameters to attain the desired results since it seems that in certain cases there is more than one way to affect the perception of both size and depth.
> 
> 
> ...


To place dry library instruments in depth, I sometimes use algorithmic reverbs and sometimes convolution reverbs. If large room depths are needed, I have a couple of IRs that solve the problem with a more natural sound than algos manage. But for "normal" room depths it works just as well with modern algorithmic reverbs.

The crucial thing for me is that the reverb used for this room-shifting job colors the instrument sound as little as possible. An instrument whose sound is neutrally shifted into the depth also harmonizes with instruments that otherwise all play in the Teldex studio, the Synchron Stage, or the Vienna Konzerthaus.
---------------------------
I combine the levels of the different microphones (CLOSE, MID, ROOM, MAIN (TREE), etc.) so that the strings play in front, the winds in the back, and the percussion, organ, and choir behind the winds. In other words, I use the individual microphone levels so that the instruments appear acoustically in the desired location. The process of "mixing" for me is not setting dB values, but listening and adjusting.
Again, I use the microphones to set the desired spatial depth of the instruments. I don't care how they are labeled. Whether Tree or Close: I turn the knobs until the sound (the distance) is the way I want it. After that, it still needs left-to-right tuning (balance knob).
The key is to be EXACTLY CLEAR about where you want the instruments to sit on your virtual stage before you start mixing.
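The microphone-ratio idea is, at its core, nothing more than a weighted sum per instrument. A minimal sketch with hypothetical position names and gains (the real work is the listening, not the arithmetic):

```python
def mix_mics(mic_signals, gains):
    # Weighted sum of time-aligned mic-position signals; raising
    # ROOM/MAIN relative to CLOSE pushes the instrument back on stage.
    length = len(next(iter(mic_signals.values())))
    out = [0.0] * length
    for name, sig in mic_signals.items():
        g = gains.get(name, 0.0)
        for i, x in enumerate(sig):
            out[i] += g * x
    return out
```

A front-desk string part might use something like `{"CLOSE": 0.6, "MID": 0.3, "ROOM": 0.1}`, while a choir at the back of the stage would lean the other way.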
All the best
Beat


----------



## Joël Dollié (Dec 28, 2022)

tim727 said:


> That makes sense. In general I'm not inclined to go with this "close mics" only approach and indeed never have in the past. However I just bought Berlin Studio Reverb (BSR) with the hopes of placing my Cinematic Studio stuff in Teldex along with my largely Orchestral Tools template and so I've been considering a close mic approach (on the CS stuff only) to do so since my sense is that if I use the main mic from CS that will defeat the purpose of trying to get the instruments into Teldex. (I think?)
> 
> 
> So just to make sure I understand, you're suggesting that if there is not a lot of crossfading then an added reverb for ERs is not necessary (if I'm using both close and tree mics) but if there is a lot of cross fading then it's better to use an added reverb to smooth things out?
> ...


Hey! Sometimes yes, they tend to be quite consistent, like all the OT libraries that have a tree tend to sound similar. 

Sometimes things differ, for example with spitfire audio. I use different mic positions for the BBC orchestra and appassionata strings for example. They don't even have all the same choices.

For cinematic studio series I don't use the same mic positions for strings and brass. So it kinda depends, I wouldn't just apply a blanket setting per developer.


----------



## Trash Panda (Dec 28, 2022)

tim727 said:


> That makes sense. In general I'm not inclined to go with this "close mics" only approach and indeed never have in the past. However I just bought Berlin Studio Reverb (BSR) with the hopes of placing my Cinematic Studio stuff in Teldex along with my largely Orchestral Tools template and so I've been considering a close mic approach (on the CS stuff only) to do so since my sense is that if I use the main mic from CS that will defeat the purpose of trying to get the instruments into Teldex. (I think?)


Berlin Studio seems to work best with samples that have very little early reflection information. Even with the separation of spot mics in v1.7, the CSS spot/close mics are still very full of early and late reflections. Some libraries have pretty good isolation of direct signal on their spot mics, but CSS is not one of them.


----------



## stprodigy (Dec 28, 2022)

Trash Panda said:


> Berlin Studio seems to work best with samples that have very little early reflection information. Even with the separation of spot mics in v1.7, the CSS spot/close mics are still very full of early and late reflections. Some libraries have pretty good isolation of direct signal on their spot mics, but CSS is not one of them.


As an owner of TSS, could you possibly offer any insight on whether Berlin Studio would be a good fit for TSS (and by extension TSD)? As I understand, TSS was recorded very dry.


----------



## Trash Panda (Dec 28, 2022)

stprodigy said:


> As an owner of TSS, could you possibly offer any insight on whether Berlin Studio would be a good fit for TSS (and by extension TSD)? As I understand, TSS was recorded very dry.


Short answer:
For TSS - Not really. For TSD - maybe.

Longer answer:
Dry means a direct sound with very little room information (early _and_ late reflections). Most people misuse the term "dry" when talking about libraries that are recorded in smaller venues that have very short late reflections (tails).

TSS is _not_ dry. It was recorded in a studio with a very short reverb tail, but even the close microphones are chock full of early reflections.

As such, trying to use it with Berlin Studio with a heavy amount of the Tree/AB/Surround "mics" will give your ears the impression of a room within a room. If you just want the tails, then you can certainly put it into Berlin Studio and turn the ER knobs down to 0. You might have _some_ success by putting the ER knobs down to a very low level, but I have not tried that yet.

Tokyo Scoring Drums, on the other hand, uses a lot of close mics that sound very isolated from the room, so you could potentially place them into Teldex as long as you are not using the ambient mics. I'm guessing that they put up sound barriers during recording, similar to the video below, but someone at ISW would have to confirm.


----------



## fakemaxwell (Dec 28, 2022)

Beat Kaufmann said:


> Important - I don't use any presets from VSL, because they mostly sound anything but natural.


Do you have a Synchron Player preset you can upload to check out? Interested in seeing what you've done.


----------

