# Working on the most advanced reverb setup!



## Abdulrahman (Feb 27, 2019)

Hi fellow mixers,

I've recently begun my journey to discover how we can simulate the acoustics of a scoring studio. I've been working and experimenting endlessly with reverb for a very long time, and I've been doing tutorials on it as well. I may be happy with my sound at first, but then after I hear a live recording, I immediately hate my reverb!

In the world of live recording, there's this magnificent sound that results from blending the different frequencies and timbres of different instruments into the coherent, all-in-one sound that I'm so desperately trying to make. No matter what room or mic setup you choose for your orchestra, once they play together, you will love the outcome. Everything sounds balanced and homogeneous.

I could bring you a piece with only 3 sections playing (say violins, violas and flutes), and yet somehow it still sounds full, like they belong to each other. I don't know how else to describe this, but you get my point. I know there is mic bleed, which results in the instruments on the left being picked up on the right as well and helps sell the "same room" principle, but even when I remove it with simple EQ, I can still feel they're together, so there's no loss in the surround feel.

I'm focusing my efforts on the strings for now, trying to get them as close as possible to the real thing using different layers and maybe some slight compression to bring up the room/bow noise and add that human factor. Adding an extra room tone to the overall mix can also help sell the illusion of a recording session.

Another thing that disturbs me is the percussion section, mostly the Timpani and Bass Drum. Our samples sound dull, boomy and resonant. They fail to capture the Timpani's "roar" attack/rolls or the Bass Drum's "oomph" attack that is felt more than heard. My Timpani sounds washed out almost every time and fails to bring the level of epicness that I want from it. Even when you EQ it to bring out the frequency range that makes it sound good, you end up making it worse and possibly hurting your ears. Take these two examples that showcase the Timpani/Bass at their best!

• The Chronicles of Narnia
• Mulan 

Some have advised me to play with the sample release of the Timpani. Please feel free to share your own experiences with the world of reverb and acoustics, as I've failed to find a good article on the science behind how real acoustics blend different orchestral instruments together. What is the key behind this that could completely change the way we listen to virtual instruments?


----------



## bengoss (Feb 27, 2019)

Good luck with this, I’m looking forward to your results. 
The problem I think we have with all digital reverbs is their static reflections and the fixed relationship of amplitude to reverb timbre and decay. 
I have played in different halls, and the beauty of live performance is that the players always adapt to the hall's reverb without even noticing. 
I've been thinking about this a lot and have tried automating different reverb parameters to achieve a realistic performance, but still no luck.

B


----------



## Abdulrahman (Feb 28, 2019)

bengoss said:


> Good luck with this, I’m looking forward to your results.
> The problem I think we have with all digital reverbs is their static reflections and the fixed relationship of amplitude to reverb timbre and decay.
> I have played in different halls, and the beauty of live performance is that the players always adapt to the hall's reverb without even noticing.
> I've been thinking about this a lot and have tried automating different reverb parameters to achieve a realistic performance, but still no luck
> ...


I'm glad there are others who share my passion.


----------



## MartinH. (Feb 28, 2019)

You might get more specific answers if you post both a reference track and the result of how close you got to copying its sound.


----------



## Abdulrahman (Feb 28, 2019)

MartinH. said:


> You might get more specific answers if you post both a reference track and the result of how close you got to copying its sound.


Indeed you are right, but the reason I didn't is that I still haven't achieved what I want.


----------



## MartinH. (Feb 28, 2019)

Abdulrahman said:


> Indeed you are right, but the reason I didn't is that I still haven't achieved what I want.



Just post it as is - without knowing "where you are", how can someone tell you "which way to go" to reach your destination? 
"Pick a reference track and make yours sound the same" is about as far as one can help you without a specific example of what you're struggling with. 

I'm working on / struggling with the same thing, by the way. I'm in the process of transcribing and mocking up a track from a soundtrack while I build a new template. _If_ I ever get that finished, I'll post it here as an example to pick apart.


----------



## Abdulrahman (Feb 28, 2019)

MartinH. said:


> Just post it as is - without knowing "where you are", how can someone tell you "which way to go" to reach your destination?
> "Pick a reference track and make yours sound the same" is about as far as one can help you without having a specific example of what you're struggling with.
> 
> I'm working on / struggling with the same thing, by the way. I'm in the process of transcribing and mocking up a track from a soundtrack while I build a new template. _If_ I ever get that finished, I'll post it here as an example to pick apart.


Perfect! I like your enthusiasm. I need to finish what I have now first. We're in the final steps of releasing a short film.


----------



## JohnG (Feb 28, 2019)

Hi @Abdulrahman ,

Suggest you incorporate into your survey a few more reverbs that people mention frequently -- UAD (there are two or three I think people use regularly), t.c. electronic, and Waves too -- probably a few more. I realise that many companies have emulations of the same things, but still, there are preferences.

Leaving aside song-writing and guitar/drums/bass/vocal music in general (because that's an entirely different animal), I think you are right to zero in on reverb choices. Having an excellent, natural-sounding reverb is indispensable. It can't be too fiddly, either, for composers at least. Not enough time.

Given what you're seeking, based on your initial post, you might head over to Junkie XL (Tom Holkenborg's) website. He has a tutorial that describes in detail which reverbs he uses and shows their settings.

Have fun!

John


----------



## maxime77 (Feb 28, 2019)

I think Junkie XL uses the UAD 224 for orchestral stuff and the Valhalla reverbs for the other instruments.


----------



## shawnsingh (Mar 1, 2019)

Abdulrahman said:


> In the world of live recording, there's this magnificent sound that results from blending the different frequencies and timbres of different instruments into the coherent, all-in-one sound that I'm so desperately trying to make. No matter what room or mic setup you choose for your orchestra, once they play together, you will love the outcome. Everything sounds balanced and homogeneous



You probably already know: to emulate real acoustics, at least for early reflections, convolution reverb is an important starting point. Algorithmic reverbs can be great for an additional diffuse tail, but the initial early reflections are more realistic with convolution reverbs. The impulse responses are basically mathematical snapshots of (a) the sound source properties, including frequency response and directivity of sound propagation, (b) the acoustics of the environment, and (c) the frequency response, positioning, and polar pickup pattern of the microphone. All that information is hard-coded into a single impulse response.
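As a toy illustration of that snapshot idea: applying an IR is nothing more than linear convolution of the dry signal with it. A minimal numpy sketch (the four-sample "IR" here is made up purely for illustration):

```python
import numpy as np

def convolve_reverb(dry, ir):
    """Apply a convolution reverb: the wet signal is the linear
    convolution of the dry signal with the impulse response."""
    n = len(dry) + len(ir) - 1
    # FFT-based convolution (equivalent to np.convolve, faster for long IRs)
    size = 1 << (n - 1).bit_length()
    wet = np.fft.irfft(np.fft.rfft(dry, size) * np.fft.rfft(ir, size), size)[:n]
    return wet

ir = np.array([1.0, 0.5, 0.25, 0.125])   # toy "room": a few decaying echoes
dry = np.array([1.0, 0.0, 0.0])          # a single click
wet = convolve_reverb(dry, ir)           # the click reproduces the IR itself
```

Because everything about the source, room and mic is fused into `ir`, changing any one of them means capturing a new IR, which is exactly the limitation discussed next.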

But I'm sure you can imagine there are limits to how realistic a basic convolution reverb can get - if you use an instrument that has a different pattern of sound propagation in 3D (like a French horn versus a trumpet versus a tuba), the convolution reverb won't represent the different kinds of echoes and reverb you would get. Also, if you try to position instruments in different places on a virtual stage, the reflections and echoes in real life would be different, but those echoes are hard-coded in the impulse response no matter how you pre-pan the instruments. You would need different impulse responses specifically designed for each position on stage and for each directivity pattern of instrument.

Another interesting concept to keep in mind is the set of major cues for how we hear the position of a sound: level differences between L and R, and delay differences between L and R. For a coincident microphone setup, the mix ends up relying heavily on level differences to create the stereo image. On the other hand, for spaced microphones, there may not be much level difference, but then the delay difference becomes the dominant cue for hearing position. The early reflections of an acoustic space are picked up by any microphone setup, but things really come to life with a spaced microphone setup that picks up a balanced amount of early reflections.
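The two cue families can be sketched in a few lines (the sample rate, constant-power pan law and function names are mine, just to make the idea concrete):

```python
import numpy as np

SR = 48000  # assumed sample rate in Hz

def pan_by_level(mono, pan):
    """Coincident-style imaging: position is cued by an L/R level
    difference (constant-power pan law). pan = -1 (left) .. +1 (right)."""
    theta = (pan + 1) * np.pi / 4
    return np.cos(theta) * mono, np.sin(theta) * mono

def pan_by_delay(mono, delay_ms):
    """Spaced-pair-style imaging: position is cued by an L/R time
    difference. Delaying the left channel pulls the image right."""
    d = int(round(delay_ms * 1e-3 * SR))
    left = np.concatenate([np.zeros(d), mono])
    right = np.concatenate([mono, np.zeros(d)])
    return left, right

click = np.zeros(100); click[0] = 1.0
L, R = pan_by_level(click, 0.5)       # image right of center: R is louder
Ld, Rd = pan_by_delay(click, 0.3)     # right channel leads by ~0.3 ms
```

Real spaced-mic recordings of course mix both cues plus early reflections; this only isolates the two mechanisms.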

So with all that out of the way - I struggled a lot with reverb as a VSL user. When I listened to something like the EWQLSO sound, it had a certain clarity and precise spatial positioning, yet individual instruments always sounded full when exposed, and there was a nice natural ambience to the recordings. I was even more compelled by EWQL Hollywood Gold and the way those instruments are so clearly positioned, thanks to the delay difference between L and R as well as the early reflections. Convolution reverb on dry instrument samples just couldn't get that sound.

VSL MIR Pro, I think, might be unique in the industry because they actually captured impulse responses as Ambisonics. That way, they can approximate different 3D sound propagation, like the difference between a horn sound that points backwards and a trumpet sound propagating directly forward. And they captured these impulse responses in many places across the virtual stage. They even created a generalized microphone so you can emulate different pickup patterns.

Sadly, for me personally, MIR didn't capture high-enough-order Ambisonics, and I think they didn't capture enough different microphone positions. So even though MIR sounds awesome, I haven't been able to get much of a spaced-microphone sound from it. I'd be interested to hear if anyone else has tricks for using MIR to get those delay cues and rich early reflections of spaced microphones.

So after a lot of bad attempts to imitate that spatial sound with VSL, I learned that what I liked was the sound of spaced microphone setups, like the Decca tree or outriggers, that libraries like EWQLSO and Hollywood Orchestra had. That's why I'm a fan of more recent libraries these days, which have the room sound as part of the samples. It sounds better than any reverb added afterwards, for all the reasons I mentioned above.

Cheers!


----------



## Dietz (Mar 1, 2019)

shawnsingh said:


> Sadly, for me personally, MIR didn't capture high-enough-order Ambisonics, and I think they didn't capture enough different microphone positions. So even though MIR sounds awesome, I haven't been able to get much of a spaced-microphone sound from it. I'd be interested to hear if anyone else has tricks for using MIR to get those delay cues and rich early reflections of spaced microphones.



Ambisonics and spaced microphones exclude each other (the former being a coincident array by definition). MIR Pro's Ambisonics decoder (a.k.a. "Output Format / Main Microphone setup") offers artificial spacing of the capsules by means of clever decorrelation algorithms and capsule delays, to allow for that "squaring of the circle". 

But MIR's main tool for achieving good spatial envelopment is the so-called "Secondary Microphone": it is meant to bring in the completely independent IR patterns of a recording position set apart from the actual Main Mic (on the symmetry axis of the hall, most of the time, because otherwise the all-important panning and positioning cues would be disturbed).

Re: "Higher Order Ambisonics (HOA)": back in the day, MIR in its current form was already taxing CPUs to a previously unknown degree; using nine or 16 IR channels instead of "just" 4 for each single signal source would have rendered most DAWs unusable. There are some ideas for the (overdue!) next-generation version of MIR to overcome this restriction, though.
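For readers wondering what "just 4 IR channels" refers to: first-order Ambisonics represents the sound field with W/X/Y/Z components, and rotating the whole field (or a directional source) is a cheap matrix operation, which is part of why coincident formats are so convenient. A minimal horizontal-only sketch under simplified conventions - this is generic textbook FOA, not MIR's actual implementation:

```python
import numpy as np

def foa_encode(mono, azimuth_deg):
    """First-order Ambisonics (B-format) encode of a mono source in the
    horizontal plane: W is omni, X/Y are figure-8 components."""
    az = np.radians(azimuth_deg)
    w = mono / np.sqrt(2.0)   # traditional -3 dB W convention
    x = mono * np.cos(az)
    y = mono * np.sin(az)
    return w, x, y

def foa_rotate(w, x, y, angle_deg):
    """Free rotation of the whole field: a 2x2 rotation applied to X/Y."""
    a = np.radians(angle_deg)
    return w, x * np.cos(a) - y * np.sin(a), x * np.sin(a) + y * np.cos(a)

sig = np.ones(4)
w, x, y = foa_encode(sig, 0.0)          # source straight ahead
w2, x2, y2 = foa_rotate(w, x, y, 90.0)  # rotate the field 90 degrees left
# ...which is identical to having encoded the source at 90 degrees.
```

Full 3D first order adds the Z channel (hence 4), and going to 2nd/3rd order HOA needs 9/16 channels per source, which is the CPU cost mentioned above.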


----------



## Abdulrahman (Mar 1, 2019)

JohnG said:


> Hi @Abdulrahman ,
> 
> Suggest you incorporate into your survey a few more reverbs that people mention frequently -- UAD (there are two or three I think people use regularly), t.c. electronic, and Waves too -- probably a few more. I realise that many companies have emulations of the same things, but still, there are preferences.
> 
> ...


Anyone can feel free to add their own in the comments and explain why :D


----------



## Abdulrahman (Mar 1, 2019)

maxime77 said:


> I think Junkie XL uses the UAD 224 for orchestral stuff and the Valhalla reverbs for the other instruments.


With all due respect to Tom, he focuses more on the hybrid type of orchestra than on the classical one I'm aiming for, so "realism" is not a thing he's after.


----------



## Iskra (Mar 1, 2019)

Abdulrahman said:


> My Timpani sounds washed out almost every time and fails to bring the level of epicness that I want from it. Even when you EQ it to bring out the frequency range that makes it sound good, you end up making it worse and possibly hurting your ears. Take these two examples that showcase the Timpani/Bass at their best!


Not reverb-related, so maybe a bit off-topic...
I haven't been able to listen to your examples, but for percussion, instead of (or in parallel with) going crazy with EQ and reverb, maybe fiddling around with a transient plugin would be more effective?

I don't do 'epic' stuff, but I use Punctuate from Eventide on single tracks, and it helps a lot in 'cleaning up' the mix and the drums' sound, or making them punchier if that's what you need. Punctuate is part of a mastering bundle, but I use it on single tracks or busses and it works great.

I'm sure there are other great transient plugins, this is just the one I use and like.


----------



## Abdulrahman (Mar 1, 2019)

shawnsingh said:


> You probably already know: to emulate real acoustics, at least for early reflections, convolution reverb is an important starting point. Algorithmic reverbs can be great for an additional diffuse tail, but the initial early reflections are more realistic with convolution reverbs. The impulse responses are basically mathematical snapshots of (a) the sound source properties, including frequency response and directivity of sound propagation, (b) the acoustics of the environment, and (c) the frequency response, positioning, and polar pickup pattern of the microphone. All that information is hard-coded into a single impulse response.
> 
> But I'm sure you can imagine there are limits to how realistic a basic convolution reverb can get - if you use an instrument that has a different pattern of sound propagation in 3D (like a French horn versus a trumpet versus a tuba), the convolution reverb won't represent the different kinds of echoes and reverb you would get. Also, if you try to position instruments in different places on a virtual stage, the reflections and echoes in real life would be different, but those echoes are hard-coded in the impulse response no matter how you pre-pan the instruments. You would need different impulse responses specifically designed for each position on stage and for each directivity pattern of instrument.
> 
> ...


I really like your breakdown; perhaps you saved me some time searching for similar articles. Here are my mistakes, and I hope everyone can avoid them:

• Never pan! Seriously, hard panning will destroy your sound -- your stereo image. These samples were recorded in their orchestral seating, so no further panning is required. If you want a wider sound, use the built-in pan in the patch. For example, I use CineBrass, and I use its built-in pan to rotate the CLOSE mic only and keep all the other mics as they are. Panning the room mic, or the entire patch using the DAW pan, will destroy the room sound. It's not reasonable to pan a ROOM, now is it?! What made me even more confident in this is that if you bought Cinematic Studio Brass, you will notice how all the close mics are panned. I'm sure some of you here own the library and can agree with me on this.

• Always use the ROOM mic in your samples. Using a CLOSE mic only with a very dry sample and trying to place it on your own virtual stage is (sorry) just plain bullshit. You have to keep the ROOM mic ON, because that's the very thing that takes the sample from a synthetic to a natural sound. I would suggest maybe adding the AMBIENT mic to give the sample more room air, but that's down to preference, I guess.

• DON'T use stereo enhancers or what we famously call the "Haas" effect. The purpose of the Haas effect is to create some sort of L-R/R-L delay to simulate a more natural reflection sound, but it ends up doing the complete opposite. When you use your ROOM mic and apply an impulse response, do you think it's really wise to pile on further stereo-enhancing plugins or delays to help create the illusion of a space? What's the point of using the reverb and room sound if we are going to synthetically alter the natural sound with computerized effects? Isn't the point of spending $299 on EW Spaces II, or $500 on a professional sample library, to get a natural sound? I've tried this in the past, and the result was that no one could judge the room size, because the sound was all over the place and you don't get a realistic hall sound. Everything is messed up!
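For anyone who hasn't run into the trick being warned against: Haas-style widening is just copying the signal to both channels and delaying one side by a few milliseconds. A minimal sketch (the 15 ms figure and the sample rate are illustrative, not a recommendation):

```python
import numpy as np

SR = 48000  # assumed sample rate in Hz

def haas_widen(mono, delay_ms=15.0):
    """The "Haas" widening trick: duplicate the signal and delay one
    channel by roughly 5-35 ms. The ear fuses the two arrivals into one
    event localized toward the earlier (here: left) side."""
    d = int(round(delay_ms * 1e-3 * SR))
    left = np.concatenate([mono, np.zeros(d)])
    right = np.concatenate([np.zeros(d), mono])
    return np.stack([left, right])

click = np.zeros(1000); click[0] = 1.0
stereo = haas_widen(click, delay_ms=15.0)
# Summed to mono, the two offset copies comb-filter each other, and the
# fake delay fights the delay cues already baked into room mics and IRs,
# which is consistent with the "sound all over the place" complaint above.
```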


----------



## Abdulrahman (Mar 1, 2019)

Iskra said:


> Not reverb-related, so maybe a bit off-topic...
> I haven't been able to listen to your examples, but for percussion, instead of (or in parallel with) going crazy with EQ and reverb, maybe fiddling around with a transient plugin would be more effective?
> 
> I don't do 'epic' stuff, but I use Punctuate from Eventide in single tracks and it helps a lot on 'cleaning' the mix and the drums' sound, or make them punchier if that's what you need. Punctuate is part of a mastering bundle, but I use it on single tracks or busses and it works great.
> ...


Thanks! I will check that plugin out and try my luck with the transient effect. You listened to my examples, so I'm pretty sure you already have an idea of what I'm aiming for


----------



## gsilbers (Mar 1, 2019)

Maybe it's outdated, but try searching for the Todd-AO Altiverb IR orchestral setup. That's what many composers used to use to achieve a more realistic space, by having separate early-reflection vs. late-reflection IRs. Although I think MIR already does this.


----------



## Abdulrahman (Mar 1, 2019)

gsilbers said:


> Maybe it's outdated, but try searching for the Todd-AO Altiverb IR orchestral setup. That's what many composers used to use to achieve a more realistic space, by having separate early-reflection vs. late-reflection IRs. Although I think MIR already does this.


I can always use the specific reverb from my EW Spaces II.


----------



## JohnG (Mar 1, 2019)

Abdulrahman said:


> With all due respect to Tom, he focuses more on the hybrid type of orchestra than on the classical one I'm aiming for, so "realism" is not a thing he's after.



Well if that's your objective -- and a good one no doubt -- it might focus advice from others if you'd said that in the initial post.


----------



## gsilbers (Mar 1, 2019)

Abdulrahman said:


> I can always use the specific reverb from my EW Spaces II.



Hmm... maybe. But since this used to be a huge thing back in 2008-12, I'll link a few threads so you can understand more of what I'm trying to say. It's a not-too-complicated process, but it's kind of specific to one or two particular IRs from Altiverb. It can be replicated with other reverbs, I'm sure, but back in the day I remember several composers from Remote Control doing this, and others outside Remote Control as well. So from my experience it had wide use here in LA. 

https://vi-control.net/community/threads/tutorial-applying-early-reflections-to-get-that-sound.9139/
https://vi-control.net/community/threads/svk-tutorial-applying-early-reflections.25231/
https://www.gearslutz.com/board/mus...tral-sample-libraries-share-your-set-ups.html


----------



## SBK (Mar 1, 2019)

I did a test here with trailer strings and brass:

- Dry
- Normal (just send the 3 channels to one FX channel)
- Mixed technique (some crazy processing separately before sending to the FX channel)
- Mixed+Normal (both techniques together)

How does it sound? Any difference? :D

Dry
[AUDIOPLUS=https://vi-control.net/community/attachments/dry-mp3.18713/][/AUDIOPLUS]

Normal

[AUDIOPLUS=https://vi-control.net/community/attachments/normal-send-mp3.18714/][/AUDIOPLUS]

Mixed technique
[AUDIOPLUS=https://vi-control.net/community/attachments/mixed-technique-mp3.18715/][/AUDIOPLUS]

Mixed+Normal
[AUDIOPLUS=https://vi-control.net/community/attachments/mixed-normal-mp3.18716/][/AUDIOPLUS]


----------



## SBK (Mar 1, 2019)

For me, the best reverb ever is this one:

It has to be a real-world one, right?


----------



## shawnsingh (Mar 1, 2019)

Dietz said:


> Ambisonics and spaced microphones exclude each other (the former being a coincident array by definition). MIR Pro's Ambisonics decoder (a.k.a. "Output Format / Main Microphone setup") offers artificial spacing of the capsules by means of clever decorrelation algorithms and capsule delays, to allow for that "squaring of the circle".
> 
> But MIR's main tool for achieving good spatial envelopment is the so-called "Secondary Microphone": it is meant to bring in the completely independent IR patterns of a recording position set apart from the actual Main Mic (on the symmetry axis of the hall, most of the time, because otherwise the all-important panning and positioning cues would be disturbed).
> 
> Re: "Higher Order Ambisonics (HOA)": back in the day, MIR in its current form was already taxing CPUs to a previously unknown degree; using nine or 16 IR channels instead of "just" 4 for each single signal source would have rendered most DAWs unusable. There are some ideas for the (overdue!) next-generation version of MIR to overcome this restriction, though.



I was thinking that Ambisonics is used to represent each individual microphone. In that case, each microphone in a spaced setup could still be modeled separately with Ambisonics, right? Isn't it just the case that VSL simply didn't capture IRs of spaced microphones?

If there is a next-gen MIR some day, using HOA, more channels representing sound-source directivity and, if my understanding is correct, several more captured microphone positions like outriggers, Decca tree, spaced pairs - that might bring me back to the world of dry samples again =)


----------



## Dietz (Mar 1, 2019)

shawnsingh said:


> each microphone in a spaced microphone setup could still separately be modeled with ambisonics, right?



Yes, but that's not the actual reason why MIR is based on Ambisonics.



> Isn't it just the case that VSL simply didn't capture IRs of spaced microphones?



Ambisonics is used throughout MIR's whole signal path mostly to match the virtual position of source signals with the actual position of the IRs recorded there. As a side-effect, MIR isn't restricted to a specific output format. Variable polar patterns come as a free add-on. 



> If there is a next-gen MIR some day, using HOA, more channels representing sound source directivity,



... this I don't get, to be honest. 8-/ What's wrong with the free rotation of the source and directional IRs ...?

HOA is a different topic, though. Let's see what a future version of MIR brings. 



> and, if my understanding is correct, capturing several more microphone positions like outrigger, decca, spaced - that might bring me back to the world of dry samples again =)



As soon as someone invents a way to achieve artifact-free random source positioning on a virtual stage with non-coincident IRs, I'll happily have it implemented!


----------



## Abdulrahman (Mar 1, 2019)

gsilbers said:


> hmm... maybe. but since this used to be a huge thing back in 2008-12 ill link a few threads so you can understand more what im trying to say. its a somewhat not too compliocated process but kinda of specific to one or two specific IRs from altiverb. might be replicated with other reverbs im sure but back in the day i remember several composers from remote control doing this and also outside of remote control also. so from my experience it had a wide use here in LA.
> 
> https://vi-control.net/community/threads/tutorial-applying-early-reflections-to-get-that-sound.9139/
> https://vi-control.net/community/threads/svk-tutorial-applying-early-reflections.25231/
> https://www.gearslutz.com/board/mus...tral-sample-libraries-share-your-set-ups.html


Thanks, man! I'll certainly check them out. We are here to benefit each other


----------



## Abdulrahman (Mar 1, 2019)

SBK said:


> For me, the best reverb ever is this one:
> 
> It has to be a real-world one, right?



Well, that is what I am after: to get a natural sound, like a real recording session.


----------



## shawnsingh (Mar 1, 2019)

Dietz said:


> this I don't get, to be honest. 8-/ What's wrong with the free rotation of the source and directional IRs ...


 
By crude analogy: hearing VR demos of FOA vs. HOA with head-tracked binaural rendering, 3rd-order HOA was way more immersive than FOA for me. I feel it has to do with the way time-delay cues (and early reflections more generally) stay slightly more accurate at higher orders, whereas they are quite diffused and smeared in FOA. I was imagining that in MIR's scenario, higher orders would have a similar benefit, with early reflections sounding a bit more precise.

Anyway, I won't hijack this thread any further; there's some great discussion beyond MIR that should continue =) When I have time I'll create a new thread to ask about how Ambisonics is used in MIR. Cheers!


----------



## Abdulrahman (Mar 2, 2019)

Would anyone care to comment on *Seventh Heaven* and how good it is for orchestral work? Perhaps with a quick demo here. And is it true it's close to the hardware Bricasti M7?


----------



## MartinH. (Mar 2, 2019)

Abdulrahman said:


> Would anyone care to comment on *Seventh Heaven* and how good it is for orchestral work? Perhaps with a quick demo here. And is it true it's close to the hardware Bricasti M7?



You could play around with some of these:

http://www.samplicity.com/bricasti-m7-impulse-responses/


----------



## Patrick de Caumette (Mar 2, 2019)

If you are including Lexicon, you should add Exponential Audio to your list as well...


----------



## storyteller (Mar 2, 2019)

MartinH. said:


> You could play around with some of these:
> 
> http://www.samplicity.com/bricasti-m7-impulse-responses/


And if you have Waves IR1 or IR-L, you can use these configuration files to get them set up in your preset list: 

http://store.storyteller.im/product/waves-ir1-ir-l-preset-pack-samplicity-bricasti-m7/


----------



## Abdulrahman (Mar 3, 2019)

I would very much appreciate it if someone could voluntarily post his/her session stems, preferably of the classical genre. It would help me a lot to understand how each instrument behaves in the room and interacts with the other instruments. Please, the stems must be from a professional recording with good acoustics and mic setup, because that's the point of this post. Only then can one study it the right way.


----------



## Tanuj Tiku (Mar 3, 2019)

This is what my experience tells me:

1. Samples expose issues with musicality that trump reverb or anything else. The musicality and the performance of a piece of music outshine everything else; this is the number 1 problem with samples. You can only go so far with samples, and the brain will jump at a live performance more than anything else, first. There is a tendency for all of us to think about reverbs and EQs because, well, what else can you think about? You cannot record a live orchestra, so there 'must' be something to do with production. Keep in mind, I am talking only about the world of samples. 

2. Most samples have baked-in reverb, so we are hugely limited with spatialization. VSL is dry, but then it has its own problems.

3. Samples with baked-in reverb add a huge amount of room build-up problems, especially when you start layering. In the real world, there are only a few stems or passes. Alan Meyerson has said that Hans Zimmer uses multiple passes of orchestra, but if you listen closely, a lot of it sounds very specific, and it also sounds like a lot of the mud has been cleared out. Doing the same with samples works really well: I cut out any activity that I do not need or that is not part of the absolutely essential sound.

4. Samples are often too wide. My theory is that they are designed to instantly inspire. Real recordings often have great clarity because there is a good center balance and things are in place: individual groups are not very wide, but the sum of the individual parts is. Which is why, and this may sound counter-intuitive, you may find that you get good results by collapsing the width of many of those samples a little bit. It will add clarity and open up space for layers and other things. You can then sprinkle on some reverb for glue.

5. I have done many tests with many reverbs and as far as samples are concerned, I have come to the conclusion that most 'any' good reverb will do. There are more pressing matters with programming and ground work in production.

6. Automation of reverb, the mic positions of samples and how hard you hit a reverb are extremely important. The strength of your send signal and the reverb plug-in's damping of frequencies matter more than the choice of reverb when comparing many of the finest reverbs.

7. Having said all that, casting the right sample library for the right job becomes very important, which is why different things work in different contexts. One library cannot do everything. You obviously don't need a huge number, but a few different ones will help you get there.
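Point 4, collapsing width, is easy to try with a simple mid/side width control. A minimal sketch (the `width` parameter and function name are mine, just illustrating the standard mid/side technique):

```python
import numpy as np

def narrow(left, right, width=0.7):
    """Collapse stereo width via mid/side: keep the mid, scale the side.
    width = 1 leaves the image untouched; width = 0 is full mono."""
    mid = (left + right) / 2.0
    side = (left - right) / 2.0
    return mid + width * side, mid - width * side

# Toy two-sample channels: fully left, then fully right
L = np.array([1.0, 0.0])
R = np.array([0.0, 1.0])
nL, nR = narrow(L, R, width=0.0)   # width 0: both channels collapse to the mid
```

Pulling `width` down to something like 0.6-0.8 per section, then letting the sum of sections carry the overall width, matches the "individual groups are not very wide, but the sum is" observation above.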


----------



## MartinH. (Mar 3, 2019)

Abdulrahman said:


> I would very much appreciate it if someone could voluntarily post his/her session stems, preferably of the classical genre. It would help me a lot to understand how each instrument behaves in the room and interacts with the other instruments. Please, the stems must be from a professional recording with good acoustics and mic setup, because that's the point of this post. Only then can one study it the right way.



Isn't there a Mike Verta masterclass with the stems from "The Race"? You should ask him.


----------



## Abdulrahman (Mar 4, 2019)

MartinH. said:


> Isn't there a Mike Verta masterclass with the stems from "The Race"? You should ask him.


I am taking classes with him in Template Balance, Theory, Composition 1, Counterpoint and Orchestration 1. I haven't heard him mention anything related to what you say.


----------



## AlexanderSchiborr (Mar 4, 2019)

Abdulrahman said:


> I am taking classes with him in Template Balance, Theory, Composition 1, Counterpoint and Orchestration 1. I haven't heard him mention anything related to what you say.



It's called "Putting It All Together"! There you have the stems of The Race.


----------



## Abdulrahman (Mar 4, 2019)

AlexanderSchiborr said:


> It's called "Putting It All Together"! There you have the stems of The Race.


Thank you!


----------

