Working on the most advanced reverb setup!

What is your best choice for reverb plug-in?

  • EastWest Spaces 1/2: 24 votes (30.0%)
  • Altiverb: 6 votes (7.5%)
  • Lexicon: 8 votes (10.0%)
  • Seventh Heaven: 11 votes (13.8%)
  • Nimbus: 3 votes (3.8%)
  • ValhallaRoom: 9 votes (11.3%)
  • Vienna MIR PRO: 4 votes (5.0%)
  • Others: 15 votes (18.8%)

  • Total voters: 80

Abdulrahman

Member
Hi fellow mixers,

I've recently begun my journey to discover how we can simulate the acoustics of a scoring studio. I've been working and experimenting with reverb for a very long time, and I've been following tutorials on it as well. I may be happy with my sound at first, but as soon as I hear a live recording, I immediately hate my reverb!

In the world of live recording, there's this magnificent sound that results from blending different frequencies and timbres of different instruments together to create this coherent and all-in-one sound that I'm so desperately trying to make. No matter what room or mic setup you make for your orchestra, once they play together, you will love the outcome. Everything sounds balanced and homogeneous.

I could bring you a piece with only 3 sections playing (say violins, violas and flutes) and somehow it still sounds full, as if they belong to each other. I don't know how else I can describe this, but you get my point. I know there is mic bleed, which results in the instruments on the left playing on the right as well and helps sell the "same room" principle, but even when I reduce that with simple EQ, I can still feel they're together, so there's no loss in the surround feel.

I'm focusing my efforts on the strings for now, trying to get them as close as possible to the real thing using different layers and maybe some slight compression to bring out the room/bow noise and add that human factor. Adding an extra room tone to the overall mix can also help sell the illusion of a recording session.

Another thing that disturbs me is the percussion section, mostly the timpani and bass drum. Our samples sound dull, boomy and resonant. They fail to capture the timpani's "roar" on attacks and rolls, or the bass drum's "oomph" attack that is felt more than heard. My timpani almost always sounds washed out and fails to bring the level of epicness I want from it. Even when EQing to bring out the frequency range that makes it sound good, you end up making it worse and possibly hurting your ears. Take these two examples that showcase the timpani/bass drum at their best!

The Chronicles of Narnia
Mulan

Some did advise me to play with the sample release of the timpani. Please feel free to share your own experiences with the world of reverb and acoustics, as I've failed to find a good article on the science behind how the acoustics of a hall blend different orchestral instruments together. What is the key behind this that could completely change the way we listen to virtual instruments?
 

bengoss

Member
Good luck with this, I’m looking forward to your results.
The problem I think we have with all digital reverbs is their static reflections, and the fixed relationship between amplitude and reverb timbre and decay.
I have played in different halls, and the beauty of live performance is that the players always adapt to the hall's reverb without even noticing.
I've been thinking about this a lot and have tried automating different reverb parameters to achieve a realistic performance, but still no luck :)

B
 
OP
Abdulrahman

Abdulrahman

Member
Good luck with this, I’m looking forward to your results.
The problem I think we have with all digital reverbs is their static reflections, and the fixed relationship between amplitude and reverb timbre and decay.
I have played in different halls, and the beauty of live performance is that the players always adapt to the hall's reverb without even noticing.
I've been thinking about this a lot and have tried automating different reverb parameters to achieve a realistic performance, but still no luck :)

B
I'm glad there are others who share my passion.
 

MartinH.

Senior Member
You might get more specific answers if you post both a reference track and the result of how close you got to copying its sound.
 

MartinH.

Senior Member
Indeed you are right, but the reason I didn't is that I still haven't achieved what I want.
Just post it as is - without knowing "where you are", how can someone tell you "which way to go" to reach your destination?
"Pick a reference track and make yours sound the same" is about as far as one can help you without having a specific example of what you're struggling with.

I'm working on / struggling with the same thing by the way. I'm in the process of transcribing and mocking up a track from a soundtrack, while I build a new template. If I ever get that finished, I'll post it here as an example to pick apart.
 
OP
Abdulrahman

Abdulrahman

Member
Just post it as is - without knowing "where you are", how can someone tell you "which way to go" to reach your destination?
"Pick a reference track and make yours sound the same" is about as far as one can help you without having a specific example of what you're struggling with.

I'm working on / struggling with the same thing by the way. I'm in the process of transcribing and mocking up a track from a soundtrack, while I build a new template. If I ever get that finished, I'll post it here as an example to pick apart.
Perfect! I like your enthusiasm. I need to finish what I have now first; we're in the final stages of releasing a short film ;)
 

JohnG

Senior Member
Hi @Abdulrahman ,

Suggest you incorporate into your survey a few more reverbs that people mention frequently -- UAD (there are two or three I think people use regularly), t.c. electronic, and Waves too -- probably a few more. I realise that many companies have emulations of the same things, but still, there are preferences.

Leaving aside song-writing and guitar/drums/bass/vocal music in general (because that's an entirely different animal), I think you are right to zero in on reverb choices. Having an excellent, natural-sounding reverb is indispensable. It can't be too fiddly, either, for composers at least. Not enough time.

Given what you're seeking, based on your initial post, you might head over to Junkie XL (Tom Holkenborg's) website. He has a tutorial that describes in detail which reverbs he uses and shows their settings.

Have fun!

John
 

shawnsingh

Active Member
In the world of live recording, there's this magnificent sound that results from blending different frequencies and timbres of different instruments together to create this coherent and all-in-one sound that I'm so desperately trying to make. No matter what room or mic setup you make for your orchestra, once they play together, you will love the outcome. Everything sounds balanced and homogeneous
You probably already know: to emulate real acoustics, at least for the early reflections, convolution reverb is an important starting point. Algorithmic reverbs can be great for an additional diffuse tail, but the initial early reflections are more realistic with convolution reverbs. The impulse responses are basically mathematical snapshots of (a) the sound-source properties, including frequency response and directivity of sound propagation, (b) the acoustics of the environment, and (c) the frequency response, positioning, and polar pickup pattern of the microphone. All that information is hard-coded into a single impulse response.
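The "snapshot" idea is easy to demonstrate: a convolution reverb is literally just convolving the dry signal with the impulse response, so playing a single click through it reproduces the IR itself. A toy sketch in Python (the three-tap IR is invented for illustration, not a measured one):

```python
# Toy convolution reverb: output = dry signal convolved with an IR.
# A real measured IR bakes in source directivity, room acoustics and
# microphone properties; this made-up IR is just a few echo taps.

def convolve(dry, ir):
    """Direct-form convolution: out[n] = sum_k dry[k] * ir[n - k]."""
    out = [0.0] * (len(dry) + len(ir) - 1)
    for n, x in enumerate(dry):
        for k, h in enumerate(ir):
            out[n + k] += x * h
    return out

ir = [1.0, 0.0, 0.5, 0.25]      # direct sound plus two early reflections
click = [1.0]                   # a unit impulse ("click")
print(convolve(click, ir))      # -> [1.0, 0.0, 0.5, 0.25], the IR itself
```

It also makes the limitation concrete: the taps are fixed numbers, so repositioning or rotating the source cannot change them.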

But I'm sure you can imagine there are limits to how realistic a basic convolution reverb can get. If you want to use an instrument that has a different pattern of sound propagation in 3D (like a French horn versus a trumpet versus a tuba), the convolution reverb won't represent the different kinds of echoes and reverb you would get. Also, if you try to position instruments in different places on a virtual stage, the reflections and echoes in real life would be different, but those echoes are hard-coded into the impulse response no matter how you pre-pan the instruments. You would need different impulse responses specifically designed for each position on stage and for each instrument directivity pattern.

Another interesting concept to keep in mind is the major cues for how we hear the position of a sound: level differences between L and R, and delay differences between L and R. For a coincident microphone setup, the mix ends up relying heavily on level differences to create the stereo image. For spaced microphones, on the other hand, there may not be much level difference, but then the delay difference becomes the dominant cue for hearing position. The early reflections of an acoustic space will be picked up by any microphone setup, but things really come to life with a spaced setup that picks up a balanced amount of early reflections.
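Those two cues can be sketched in a few lines. The helper names (`level_pan`, `delay_pan`) and the 44.1 kHz sample rate are assumptions for illustration: the coincident case encodes position purely as a level difference, the spaced case purely as an arrival-time difference.

```python
import math

SR = 44100  # assumed sample rate for the delay example

def level_pan(sample, pos):
    """Coincident-style cue: constant-power level difference only.
    pos runs from -1.0 (hard left) to +1.0 (hard right)."""
    p = (pos + 1.0) * math.pi / 4.0          # map to 0 .. pi/2
    return sample * math.cos(p), sample * math.sin(p)

def delay_pan(mono, delay_ms):
    """Spaced-style cue: equal level in both channels, but one
    channel arrives later. Returns (left, right) sample lists."""
    d = int(round(delay_ms * SR / 1000.0))
    left = list(mono) + [0.0] * d
    right = [0.0] * d + list(mono)           # right arrives d samples late
    return left, right

centre = level_pan(1.0, 0.0)                 # equal gains both sides
left, right = delay_pan([1.0, 0.5], 0.5)     # ~0.5 ms inter-channel delay
```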

So with all that out of the way: I struggled a lot with reverb as a VSL user. When I listened to something like the EWQLSO sound, it had a certain clarity and precise spatial positioning, yet individual instruments always sounded full when exposed, and there was a nice natural ambience to the recordings. I was even more compelled by EWQL Hollywood Gold and the way its instruments are so clearly positioned, thanks to the delay difference between L and R as well as the early reflections. Convolution reverb on dry instrument samples just couldn't get that sound.

VSL MIR Pro, I think, might be unique in the industry, because they actually captured the impulse responses as Ambisonics. That way, they can approximate different 3D sound propagation, like the difference between a horn sound that points backwards and a trumpet sound propagating directly forward. And they captured these impulse responses in many places across the virtual stage. They even created a generalized microphone, so you can emulate different pickup patterns.

Sadly, for me personally, MIR didn't capture high-enough-order Ambisonics, and I think they didn't capture enough different microphone positions. So even though MIR sounds awesome, I haven't been able to get much of a spaced-microphone sound from it. I'd be interested to hear if anyone else has tricks for using MIR to get those delay cues and rich early reflections of spaced microphones.

So after a lot of bad attempts at imitating that spatial sound with VSL, I learned that what I liked was the sound of spaced microphone setups, like the Decca tree or outriggers, which libraries like EWQLSO and Hollywood Orchestra had. That's why I'm a fan of more recent libraries, which have the room sound as part of the samples. It felt better than any reverb added afterwards, for all the reasons I mentioned above.

Cheers!
 

Dietz

Space Explorer
Sadly, for me personally, MIR didn't capture high-enough-order Ambisonics, and I think they didn't capture enough different microphone positions. So even though MIR sounds awesome, I haven't been able to get much of a spaced-microphone sound from it. I'd be interested to hear if anyone else has tricks for using MIR to get those delay cues and rich early reflections of spaced microphones.
Ambisonics and spaced microphones exclude each other (the former being a coincident array by definition). MIR Pro's Ambisonics decoder (a.k.a. "Output Format / Main Microphone setup") offers artificial spacing of the capsules by means of clever decorrelation algorithms and capsule delays, to allow for that "squaring of the circle". ;)

But MIR's main tool for achieving good spatial envelopment is the so-called "Secondary Microphone": it is meant to bring in the completely independent IR patterns of a recording position set apart from the actual Main Mic (on the symmetry axis of the hall, most of the time, because otherwise the all-important panning and positioning cues would be disturbed).

Re: "Higher Order Ambisonics (HOA)": back in the day, MIR in its current form was already taxing CPUs to a previously unknown degree; using nine or 16 IR channels instead of "just" four for each single signal source would have rendered most DAWs unusable. There are some ideas for the (overdue!) next-generation version of MIR to overcome this restriction, though. :)
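(For reference, since those channel counts may look arbitrary: an order-N Ambisonics signal carries (N + 1)^2 channels, hence 4, 9 and 16. Below is a purely illustrative sketch of the classic first-order "FuMa" B-format encoding of a mono sample; it is not MIR's implementation.)

```python
import math

def channel_count(order):
    """An order-N Ambisonics signal carries (N + 1)**2 channels,
    which is why HOA multiplies the per-source convolution cost."""
    return (order + 1) ** 2

def encode_first_order(s, azimuth_deg, elevation_deg=0.0):
    """Classic first-order B-format (FuMa) encode of one mono sample."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    w = s / math.sqrt(2.0)                   # W: omnidirectional
    x = s * math.cos(az) * math.cos(el)      # X: front-back figure-8
    y = s * math.sin(az) * math.cos(el)      # Y: left-right figure-8
    z = s * math.sin(el)                     # Z: up-down figure-8
    return w, x, y, z

print(channel_count(1), channel_count(2), channel_count(3))  # 4 9 16
```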
 
OP
Abdulrahman

Abdulrahman

Member
Hi @Abdulrahman ,

Suggest you incorporate into your survey a few more reverbs that people mention frequently -- UAD (there are two or three I think people use regularly), t.c. electronic, and Waves too -- probably a few more. I realise that many companies have emulations of the same things, but still, there are preferences.

Leaving aside song-writing and guitar/drums/bass/vocal music in general (because that's an entirely different animal), I think you are right to zero in on reverb choices. Having an excellent, natural-sounding reverb is indispensable. It can't be too fiddly, either, for composers at least. Not enough time.

Given what you're seeking, based on your initial post, you might head over to Junkie XL (Tom Holkenborg's) website. He has a tutorial that describes in detail which reverbs he uses and shows their settings.

Have fun!

John
Anyone can feel free to add their own in the comments and explain why :D
 
OP
Abdulrahman

Abdulrahman

Member
I think Junkie XL uses the UAD 224 for orchestral stuff and the Valhalla reverbs for the other instruments.
With all due respect to Tom, he focuses more on the hybrid type of orchestra rather than the classical one I'm aiming for, so "realism" is not something he's after.
 

Iskra

Active Member
My timpani almost always sounds washed out and fails to bring the level of epicness I want from it. Even when EQing to bring out the frequency range that makes it sound good, you end up making it worse and possibly hurting your ears. Take these two examples that showcase the timpani/bass drum at their best!
Not reverb-related, so maybe a bit off-topic...
I haven't been able to listen to your examples, but for percussion, instead of (or in parallel with) going crazy with EQ and reverb, maybe fiddling around with a transient plugin would be more effective?

I don't do 'epic' stuff, but I use Punctuate from Eventide on single tracks, and it helps a lot in 'cleaning' the mix and the drums' sound, or making them punchier if that's what you need. Punctuate is part of a mastering bundle, but I use it on single tracks or busses and it works great.

I'm sure there are other great transient plugins, this is just the one I use and like.
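For the curious, the basic idea behind most transient shapers can be sketched with two envelope followers, a fast one and a slow one, raising the gain wherever the fast envelope leads (the attack). This is a generic sketch with made-up parameter names and values, not Punctuate's actual algorithm:

```python
def transient_boost(samples, fast=0.2, slow=0.95, amount=2.0):
    """Boost attacks: where the fast envelope exceeds the slow one,
    we are in a transient, so the gain is raised. 'fast'/'slow' are
    one-pole smoothing coefficients, 'amount' the attack emphasis."""
    env_f = env_s = 0.0
    out = []
    for x in samples:
        a = abs(x)
        env_f = fast * env_f + (1.0 - fast) * a   # reacts quickly
        env_s = slow * env_s + (1.0 - slow) * a   # reacts slowly
        gain = 1.0 + amount * max(0.0, env_f - env_s)
        out.append(x * gain)
    return out

hit = [0.0, 0.0] + [1.0] * 200          # a drum-like step input
shaped = transient_boost(hit)           # attack boosted, sustain ~unchanged
```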
 
OP
Abdulrahman

Abdulrahman

Member
You probably already know: to emulate real acoustics, at least for the early reflections, convolution reverb is an important starting point. Algorithmic reverbs can be great for an additional diffuse tail, but the initial early reflections are more realistic with convolution reverbs. The impulse responses are basically mathematical snapshots of (a) the sound-source properties, including frequency response and directivity of sound propagation, (b) the acoustics of the environment, and (c) the frequency response, positioning, and polar pickup pattern of the microphone. All that information is hard-coded into a single impulse response.

But I'm sure you can imagine there are limits to how realistic a basic convolution reverb can get. If you want to use an instrument that has a different pattern of sound propagation in 3D (like a French horn versus a trumpet versus a tuba), the convolution reverb won't represent the different kinds of echoes and reverb you would get. Also, if you try to position instruments in different places on a virtual stage, the reflections and echoes in real life would be different, but those echoes are hard-coded into the impulse response no matter how you pre-pan the instruments. You would need different impulse responses specifically designed for each position on stage and for each instrument directivity pattern.

Another interesting concept to keep in mind is the major cues for how we hear the position of a sound: level differences between L and R, and delay differences between L and R. For a coincident microphone setup, the mix ends up relying heavily on level differences to create the stereo image. For spaced microphones, on the other hand, there may not be much level difference, but then the delay difference becomes the dominant cue for hearing position. The early reflections of an acoustic space will be picked up by any microphone setup, but things really come to life with a spaced setup that picks up a balanced amount of early reflections.

So with all that out of the way: I struggled a lot with reverb as a VSL user. When I listened to something like the EWQLSO sound, it had a certain clarity and precise spatial positioning, yet individual instruments always sounded full when exposed, and there was a nice natural ambience to the recordings. I was even more compelled by EWQL Hollywood Gold and the way its instruments are so clearly positioned, thanks to the delay difference between L and R as well as the early reflections. Convolution reverb on dry instrument samples just couldn't get that sound.

VSL MIR Pro, I think, might be unique in the industry, because they actually captured the impulse responses as Ambisonics. That way, they can approximate different 3D sound propagation, like the difference between a horn sound that points backwards and a trumpet sound propagating directly forward. And they captured these impulse responses in many places across the virtual stage. They even created a generalized microphone, so you can emulate different pickup patterns.

Sadly, for me personally, MIR didn't capture high-enough-order Ambisonics, and I think they didn't capture enough different microphone positions. So even though MIR sounds awesome, I haven't been able to get much of a spaced-microphone sound from it. I'd be interested to hear if anyone else has tricks for using MIR to get those delay cues and rich early reflections of spaced microphones.

So after a lot of bad attempts at imitating that spatial sound with VSL, I learned that what I liked was the sound of spaced microphone setups, like the Decca tree or outriggers, which libraries like EWQLSO and Hollywood Orchestra had. That's why I'm a fan of more recent libraries, which have the room sound as part of the samples. It felt better than any reverb added afterwards, for all the reasons I mentioned above.

Cheers!
I really like your breakdown, and perhaps you've saved me some time searching for similar articles. Here are my mistakes, and I hope everyone can avoid them:

• Never pan! Like, seriously, hard panning will destroy your sound and your stereo image. These samples were recorded in their orchestral seating, so no further panning is required. If you want a wider sound, use the built-in pan in the patch. For example, I use CineBrass, and I use its built-in pan to rotate the CLOSE mic only, keeping all the other mics as they are. Panning the room mic, or the entire patch using the DAW pan, will destroy the room sound. It's not reasonable to pan a ROOM, now is it?! What made me even more confident in this is that if you bought Cinematic Studio Brass, you will notice how all the close mics are panned. I'm sure some of you here own the library and can agree with me on this.

• Always use the ROOM mic in your samples. Using the CLOSE mic only with a very dry sample and trying to place it on your own virtual stage is (sorry) just plain bullshit. You have to keep the ROOM mic ON, because that's the very thing that takes the sample from a synthetic to a natural sound. I would suggest maybe using the AMBIENT mic to give more room air to the sample, but that's down to preference, I guess.

• DON'T use stereo enhancers or what we famously call the "Haas" effect. The purpose of the Haas effect is to create some sort of L-R/R-L delay to produce a more natural-sounding reflection, but it ends up doing the complete opposite. When you use your ROOM mic and apply an impulse response, do you think it's really wise to use further stereo-enhancing plugins or delays to help create the illusion of a space? What's the point of using the reverb and the room sound if we're going to synthetically alter the natural sound with computerized effects? Isn't the point of spending $299 on EW Spaces II, or $500 on a professional sample library, to get a natural sound? I've tried this in the past, and the result was that no one could judge the room size, because the sound was all over the place and you don't get a realistic hall sound. Everything is messed up!
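One measurable reason behind that last point: a "Haas" widener delays one channel by roughly 10-30 ms, and when that stereo pair is summed back to mono (or partially recombined through room mics and reverb sends), it comb-filters. A quick illustrative calculation; the 48 kHz sample rate and 15 ms delay are assumptions:

```python
import cmath
import math

SR = 48000          # assumed sample rate for this sketch
DELAY_MS = 15.0     # a typical "Haas" widening delay
d = int(SR * DELAY_MS / 1000.0)      # delay in samples (720 here)

def mono_sum_gain(freq_hz):
    """Magnitude response of 1 + z^(-d): what the mono sum of a
    Haas-widened pair looks like at a given frequency."""
    w = 2.0 * math.pi * freq_hz / SR
    return abs(1.0 + cmath.exp(-1j * w * d))

first_null = SR / (2.0 * d)          # first comb-filter notch, ~33 Hz
print(round(mono_sum_gain(first_null), 6))   # ~0.0: fully cancelled
print(round(mono_sum_gain(0.0), 6))          # 2.0: doubled
```

Notches repeat all the way up the spectrum, which is exactly the "sound all over the place, room size unreadable" symptom described above.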
 
OP
Abdulrahman

Abdulrahman

Member
Not reverb-related, so maybe a bit off-topic...
I haven't been able to listen to your examples, but for percussion, instead of (or in parallel with) going crazy with EQ and reverb, maybe fiddling around with a transient plugin would be more effective?

I don't do 'epic' stuff, but I use Punctuate from Eventide on single tracks, and it helps a lot in 'cleaning' the mix and the drums' sound, or making them punchier if that's what you need. Punctuate is part of a mastering bundle, but I use it on single tracks or busses and it works great.

I'm sure there are other great transient plugins, this is just the one I use and like.
Thanks! I will check that plugin out and try my luck with the transient effect. You listened to my examples, so I'm pretty sure you already have an idea of what I'm aiming for :)
 

gsilbers

Part of Pulsesetter-Sounds.com
Maybe it's outdated, but try searching for the Todd-AO Altiverb IR orchestral setup. That's what many composers used to help achieve a more realistic space, by having separate early-reflection vs. late-reflection IRs. Although I think MIR already does this.
 
OP
Abdulrahman

Abdulrahman

Member
Maybe it's outdated, but try searching for the Todd-AO Altiverb IR orchestral setup. That's what many composers used to help achieve a more realistic space, by having separate early-reflection vs. late-reflection IRs. Although I think MIR already does this.
I can always use the specific reverb from my EW Spaces ii.
 

JohnG

Senior Member
With all due respect to Tom, he focuses more on the hybrid type of orchestra rather than the classical one I'm aiming for, so "realism" is not something he's after.
Well if that's your objective -- and a good one no doubt -- it might focus advice from others if you'd said that in the initial post.
 

gsilbers

Part of Pulsesetter-Sounds.com
I can always use the specific reverb from my EW Spaces ii.
Hmm... maybe. But since this used to be a huge thing back in 2008-12, I'll link a few threads so you can better understand what I'm trying to say. It's not too complicated a process, but it's kind of specific to one or two particular IRs from Altiverb. It could be replicated with other reverbs, I'm sure, but back in the day I remember several composers from Remote Control doing this, and others outside Remote Control too. So from my experience, it had wide use here in LA.

https://vi-control.net/community/threads/tutorial-applying-early-reflections-to-get-that-sound.9139/
https://vi-control.net/community/threads/svk-tutorial-applying-early-reflections.25231/
https://www.gearslutz.com/board/music-computers/565702-reverb-settings-orchestral-sample-libraries-share-your-set-ups.html
 