# Set the reverb for strings (predelay)



## darkneo57 (Jul 19, 2019)

Hi, I'm French, 34 years old, a piano teacher, so I'm sorry for my poor English. I've been writing epic orchestral music for six months, but I've run into several problems. I have already watched several videos on reverb and predelay, but I would like to hear answers from other people. I currently use Spaces II.

1) Is it very common to use different reverbs for different instruments, for example one for strings, one for piano, another for brass, or is it better to use a single reverb with different values for each instrument? I am currently using a convolution reverb.

2) I've seen that for fast tracks, or with staccato strings, it's better to have a predelay of 25 ms max. Is that right?

3) For 135 BPM, for sustained and legato strings, which predelay and which decay would you advise?

Thank you very much for your help.


----------



## marclawsonmusic (Jul 19, 2019)

When it comes to reverb, I think it's good to look at it more like cooking than baking, so I tend not to think in specific ms values (others will surely disagree). More 'a dash of this, a sprinkle of that'.

I previously had multiple reverbs per section/instrument, but I now just use a single 'hall' reverb and adjust the send level for each instrument. You are right that staccato strings can get blurry with a lot of reverb, so maybe use less send on those.

With Spaces II you can get great results, so set up a hall reverb bus and experiment with send levels from your strings and other instruments. You can also set your send to 'pre-fader' and lower the fader to push things back in the room, or just experiment with changing the wet/dry balance... more wet = a greater perception of distance.

Anyway, my 2 cents.


----------



## dzilizzi (Jul 19, 2019)

I think it also depends on how wet or dry your library is. Some libraries already have the room (and its predelay) baked in, others have some of it, and the rest have none. I want to say Paul or Christian from Spitfire talked about adding predelay in one of the videos about their Studio library, advice that is generally useful for any not-so-wet library. If I find it, I will link it.


----------



## marclawsonmusic (Jul 19, 2019)

Very true. With a wet library, you may not need much reverb... on the other hand, with a dry library (e.g. VSL, LASS, Sample Modeling), you might need to 'put it in a room' first before sending it to a 'hall'.

It's all about listening to the source material and understanding what it needs. Using your ears is key!

@darkneo57, which libraries are you using? For strings, piano and brass?


----------



## Beat Kaufmann (Jul 20, 2019)

darkneo57 said:


> ...I have already watched several video on the reverb and the predelay, but I would have liked to have the answer of other people...



Have a look at this site: https://www.beat-kaufmann.com/vitutorials/about-reverbs/index.php (Especially the point "Predelay", of course...)
That was the theoretical approach.
---------------
If you are using Spaces II, then you are using room impulse responses (IRs) with it, which probably already have predelay built in. When using such reverbs I can only recommend: set the values so that it sounds right.
So general advice like "strings always need 25 ms of predelay" is pretty much useless.

Beat


----------



## darkneo57 (Jul 20, 2019)

thank you very much for your answers,

As I subscribed to EastWest Composer Cloud X, I currently use Hollywood Strings Diamond, which I think is a dry library. I understand that you have to set the reverb by ear, but when you start out, you necessarily have to learn what the parameters are, what they do, how to use them, and how they sound. I've watched several tutorials, but there isn't much detail for strings.

Currently I use the Hamburg Cathedral patch; would it be better to use a hall reverb?

If I send my strings tracks to an FX bus and put the reverb as an insert on that FX bus, does "adjust the send" correspond to the dry/wet mix of my insert reverb? Right? (Sorry, I'm a beginner.)

Again, thank you very much.


----------



## darkneo57 (Jul 20, 2019)

Sorry, I forgot: for piano I use EW Pianos Platinum, and for brass, EW Hollywood Brass Diamond. For now I'm learning to set the reverb on strings and piano with EW Spaces II.


----------



## Manuel Stumpf (Jul 20, 2019)

darkneo57 said:


> if I sent my strings tracks to an FX bus and I put the reverb on insert on this FX bus, "adjust the send" corresponds to the dry-wet mix of my insert reverb ? right? ( sorry, i'm a beginner)


When using the reverb on the insert of an FX bus, you normally set the dry-wet mix in the reverb to 100% fully wet. No dry signal at all.
Then you have two parameters to adjust:
- The fader of your instrument track, which controls the "dry" signal going to the output.
- The send knob of your instrument track, which determines how much you send to the FX track and thus how "wet" it will be.
Edit: How to set up a "send" (and thus have a "send knob") differs from DAW to DAW, but essentially every DAW has this feature, which lets you send only part of a track's signal to another track (in this case your reverb FX track).
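A quick numerical sketch of those two controls may help. This is plain arithmetic, not any DAW's actual API; the variable names and dB values are invented for illustration:

```python
def db_to_gain(db: float) -> float:
    """Convert a level in decibels to a linear amplitude gain."""
    return 10 ** (db / 20)

# Hypothetical settings on one instrument track:
fader_db = -3.0    # track fader: level of the dry signal at the output
send_db = -12.0    # send knob: how much is fed to the 100%-wet reverb FX track

dry_level = db_to_gain(fader_db)
wet_feed = db_to_gain(send_db)   # this portion then passes through the reverb

# Raising the send makes the instrument wetter without touching the dry level.
print(f"dry: {dry_level:.3f}, feed into reverb: {wet_feed:.3f}")
```

So the fader and the send knob set the dry and wet amounts independently, which is exactly what a per-insert dry/wet knob cannot do for each instrument separately.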


----------



## Beat Kaufmann (Jul 20, 2019)

Hello darkneo
Actually, the point is to place orchestral instruments acoustically at different room depths. Hall plug-ins are suitable for this. Some produce beautiful reverb tails, while others are better at placing instruments at different "room depths". The topic is very complex, and unfortunately it can't be handled with just a little more or less predelay.

Therefore, try to find out with Spaces II which IRs are best suited for moving instruments backwards via the wet/dry balance, independent of the IR title.

After that you could basically create such different acoustic depths in 2-4 bus channels. All instruments sitting at the front of the stage are then routed through the bus that simulates the smallest room depth.

Some Videos about the depths
https://www.beat-kaufmann.com/mixing-an-orchestra/about-the-tutorial/mixing-videos/index.php

You can find some posts about the topic "Depths with Reverbs" in this forum.

All the best
Beat


----------



## darkneo57 (Jul 20, 2019)

Hi Beat Kaufmann, I took a look at your site. I think I understood a good part of it, but it's quite complex; I think I need to practice to really understand it. In any case, in the Spaces II convolution reverb the default predelay was 100 ms; for the moment, for the sustained strings, I have set it to 25 ms. The question I asked myself was:

doesn't such a small predelay, leaving aside depth and distance, create a kind of offset with the beat? I saw that you could set it "in time" with the beat. Is it really not good to set the predelay to 0?

thank you


----------



## darkneo57 (Jul 20, 2019)

Hi Manuel Stumpf,
in my case the strings output is sent directly to the FX bus, and the FX bus output is sent directly to the main output. I think that in my case I don't have two signals (dry and wet), and that I just have to adjust the mix in my reverb, right?
Thank you


----------



## Manuel Stumpf (Jul 20, 2019)

darkneo57 said:


> Hi Manual Stumpf,
> in my case, the strings output is sent directly to the Fx bus and the Fx bus output is sent directly to the main output. I think that in my case, I do not have two signals (dry and wet). I think in my case I just have to adjust the mix in my reverb, right?
> thank you


Indeed, if your complete strings output is routed through the FX bus, then you have to use the dry/wet mix control of the reverb.
When people talk about a "send" in mixing, they usually mean sending a partial volume of a signal somewhere.


----------



## darkneo57 (Jul 20, 2019)

If I understood correctly, the point of using the reverb on a send (e.g. for a string section) is to be able to control each instrument independently via its fader (the dry signal), right?
thank you


----------



## Beat Kaufmann (Jul 20, 2019)

darkneo57 said:


> Hi, Beat Kaufmann, i took a look on your site, i think I understood a good part but it's quite complex, I think I need to practice to really understand. In any case, in space II convolution the reverb, by default the predelay was 100ms, for the moment for the sustain strings I had it set to 25ms. The question I asked myself was:
> 
> the fact of putting a predelay so small is it, without to talk about depth and distance, it creates a kind of offset with the beat, right? I saw that we could put it "on time" with the beat. Is it not really good to put a predelay at 0?
> 
> thank you



As long as it sounds good, any value is OK. It would, however, be unusual if the strings sounded further away than the wind instruments.

By the way: If you control the reverb part with "send" (reverb in an aux bus), the reverb in the aux bus must be set to 100% wet. Then the value of Pre-Delay doesn't matter anymore...
-----------------------------------------------------------

*For orchestral music, a really working reverb concept is this:*

First of all, it must be said that the result is what counts, so there is no single correct procedure.
Nevertheless, my proposed system (depths in group channels) has 3 main plus points:

1. Acoustically creating room depth usually requires more effects than just a reverb. An EQ, for example, helps simulate the distance of instruments. Since all sampled instruments come at the same volume (alto flute ... bass drum), you always have to boost the instruments in depth 3 with a compressor and the like, because they end up even weaker (the EQ cuts their high frequencies...).
All these processes can be handled conveniently in each bus channel, optimizing every room depth perfectly. With more or less "send" into one single reverb, this is not possible in the same way.

2. If you have a 4th bus (without any effect), you can collect there all the sampled instruments that already have a room depth built in.

3. The big advantage comes at the end: because all the different, still dry room depths (ERs only) are run through one and the same reverb with only a tail, everything is glued together nicely. Even if the instruments play at different depths, the feeling of a single concert hall is preserved. The tail volume is also the same for close and distant instruments, which simulates reality perfectly.

This system is almost always successful, especially with larger mixes. Maybe you can count that as a further plus.

--------------------------------------------------
This is what the text above describes (diagram not reproduced here):

And that's how it can sound: Example
Observe that the tail is always the same; only the distance of the instrument changes. These different distances are made with the reverb that manages only the ERs.
Now the goal is to set up 3 (or more) different depths in bus channels, and to merge their output signals with a tail (without depth) in the main channel into a final mix.
Yes, this is a bit complicated, but the results can be great (you can bring soloists forward or push instruments to the back). You need 3 instances of Spaces II, matching IRs, and a reverb on the output that is responsible for the tail (usually an algorithmic reverb).

The concept above has another advantage:
if you have samples that already come with an integrated acoustic position and room depth, you can route them directly to the main output channel. Those signals are then combined with the others, and the "tail" merges them all into a whole (the final mix).
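As a structural sketch only, the routing described above (ER/EQ depth buses plus a pass-through bus, all merging into one shared tail) might be written out like this. Every name and number here is invented for illustration; Spaces II itself has no scripting interface:

```python
# Toy model of the depth-bus concept: each bus adds ERs and EQ for one room
# depth; ALL bus outputs then pass through one shared tail reverb.
DEPTH_BUSES = {
    "depth1_front": {"er_wet": 0.10, "high_shelf_db": 0.0},    # stage front
    "depth2_mid":   {"er_wet": 0.25, "high_shelf_db": -2.0},
    "depth3_back":  {"er_wet": 0.45, "high_shelf_db": -5.0},   # far back, duller
    "prepositioned": None,  # 4th bus: samples with depth already baked in
}

ROUTING = {
    "violins_1": "depth1_front",
    "horns":     "depth2_mid",
    "timpani":   "depth3_back",
    "wet_choir": "prepositioned",
}

def signal_path(instrument: str) -> str:
    bus = ROUTING[instrument]
    depth = DEPTH_BUSES[bus]
    stage = "no extra ERs" if depth is None else (
        f"ERs {depth['er_wet']:.0%}, high shelf {depth['high_shelf_db']} dB")
    # The shared tail is what glues all depths into one concert hall.
    return f"{instrument} -> {bus} ({stage}) -> shared tail reverb -> master"

for inst in ROUTING:
    print(signal_path(inst))
```

The point of the model: distance is created per bus (more ERs, duller EQ), while the single tail stays common, so close and far instruments still sit in one hall.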


All the best
Beat


----------



## Bluemount Score (Jul 20, 2019)

Beat Kaufmann said:


> And that's how it can sound: Example


Dang, I could literally SEE the vibraphone (?) move further away!


----------



## Nick Batzdorf (Jul 20, 2019)

Note that the EastWest stuff a) is recorded in a hall already and comes with different mic positions; and b) includes a built-in reverb in its player (Play).

My approach to reverb includes depth and reality (as in Beat's tutorial, which is very good), but I tend to consider clarity above all. For example, I usually like a longer predelay on strings, because it makes them speak more clearly.

The interesting thing is that you only need a couple of elements near and far to give the whole mix a sense of depth. Everything else can have individual reverb that sticks to it but may not sound realistic. Oddly, we accept lots of different spaces in recordings; I still haven't figured out why that is.


----------



## Beat Kaufmann (Jul 20, 2019)

Bluemount Score said:


> Dang, I could literally SEE the vibraphone (?) move further away!



Dang... Provided you have good IRs, a big difference is possible between "near" and "far away". You really have no problem "creating" 2 to 4 different (fixed) room depths.

Again, it only changes the distance of the instruments, not the reverb, because what we actually want is more distance, not more and more mud.

Beat


----------



## darkneo57 (Jul 21, 2019)

Hello, thank you again for all your answers.

For strings I disabled the built-in reverb and used the main microphone. I heard that with distant mics, the reverb of the room is already baked in. Could you explain that to me a little, please? Is it OK if I use the main mic?

What I take from everything you've said is to adjust the reverb by ear. But would you have some rough predelay and decay values for sustained strings and staccato strings, just to have a little idea?

Why is it not good to set the predelay to 0?

My piece has a tempo of 135 BPM in 3/4. The sustained strings play three-beat chords, the staccato strings play eighth notes, and the main theme uses several different note values.

thank you and have a good weekend.


----------



## erikradbo (Jul 21, 2019)

Beat Kaufmann said:


> By the way: If you control the reverb part with "send" (reverb in an aux bus), the reverb in the aux bus must be set to 100% wet. Then the value of Pre-Delay doesn't matter anymore...
> -----------------------------------------------------------



Nice overview. But the predelay would function the same way regardless of whether the reverb is used as an insert with dry/wet mixing or as a 100% wet send.


----------



## dzilizzi (Jul 21, 2019)

darkneo57 said:


> hello, thank you again for all your answers.
> 
> for strings I disabled the build-in reverb, and I used the main microphone. I heard that if the mics are far, there is already the reverb of the room that is integrated. Could you explain that to me a little bit, please? if i used the main mic, is it ok?
> 
> ...


The nice thing about using everything from one orchestra like EW is that if you use a mix that includes the room mics, everything should sound in its proper position. At least, that was my understanding of how they record them. Then just add some reverb to make the room bigger if you want.

All this fussing around matters more with dry samples, which EW's are not. Though someone may tell me I'm wrong about this.


----------



## Ashermusic (Jul 21, 2019)

dzilizzi said:


> All this fussing around is more important on dry samples, which EW is not. Though someone may tell me I'm wrong about this



You are wrong about this  DEPENDING on which EW library you are talking about. EWQLSO is not very dry but the Hollywood Orchestra is.


----------



## dzilizzi (Jul 21, 2019)

Ashermusic said:


> You are wrong about this  DEPENDING on which EW library you are talking about. EWQLSO is not very dry but the Hollywood Orchestra is.


Good to know. I thought it wasn't dry if you used room mics, just the close mics. Did they record the instrument in the usual positions? I bought SSO right after buying EWHO Diamond, so I haven't played much with EWHO. And frankly, all the articulation choices overwhelm me a bit with it.


----------



## Ashermusic (Jul 21, 2019)

dzilizzi said:


> Good to know. I thought it wasn't dry if you used room mics, just the close mics. Did they record the instrument in the usual positions? I bought SSO right after buying EWHO Diamond, so I haven't played much with EWHO. And frankly, all the articulation choices overwhelm me a bit with it.



No even with the default mics, HO is pretty dry, and yes, recorded in position.

If you want help with choosing articulations and setting it up in either Logic Pro or VE Pro, I am available for hire over Skype.


----------



## dzilizzi (Jul 21, 2019)

Ashermusic said:


> No even with the default mics, HO is pretty dry, and yes, recorded in position.
> 
> If you want help with choosing articulations and setting it up in either Logic Pro or VE Pro, I am available for hire over Skype.


Thanks! I'm in Cubase or ProTools


----------



## Ashermusic (Jul 21, 2019)

dzilizzi said:


> Thanks! I'm in Cubase or ProTools



Well, no matter which DAW you use, VE Pro is by FAR the best suited to host HO.


----------



## darkneo57 (Jul 22, 2019)

Hello, thank you again for all your answers.

For strings I disabled the built-in reverb and used the main microphone, so my HS is a dry library, right? Is it OK if I use the main mic?

Could you explain why some mic positions in some libraries already contain reverb?

What I take from everything you've said is to adjust the reverb by ear. But would you have some rough predelay and decay values for sustained strings and staccato strings, just to have a little idea?

Why is it not good to set the predelay to 0?

My piece has a tempo of 135 BPM in 3/4. The sustained strings play three-beat chords, the staccato strings play eighth notes, and the main theme uses several different note values.

thank you


----------



## WhiteNoiz (Jul 22, 2019)

darkneo57 said:


> could you explain me why in some mics of some library , there is already reverb ?



It's the natural reverb of the studio (picked up more by the distant mics). Also, make sure the built-in reverb is off. You can use multiple mics if you want the sound and (pre-shaped) character they add; there's still some 3D information in there that you'd struggle to create with just a reverb and FX. Then again, maybe you want a dry sound with an unnatural or custom tail. It's up to you and the purpose.



darkneo57 said:


> why is it not good to put a predelay to 0?



You can do whatever you want; it's just not physically realistic. You don't hear the source and the reflections at the same time, and the timing also depends on where you're standing, on the source, and on the conditions. That's why Spaces also has some instrument-specific impulses (for violins, horns and others) that localise the response even more, instead of just offering a general point of reference for one "master" tail. That reference still depends on the recording position, which is itself just one point in the room, probably where you'd put the full-section room mics in a typical recording session, I would assume. They even sampled it directionally (violins, for example, face more towards the ceiling, so the sound and the room's reaction change accordingly). But these are still just options; you don't _have to_ do it that way. And those impulses are still recorded with sweeping tones played through speakers, not the actual instruments and their dynamics, so the response is still a (very realistic and naturalistic) approximation. Every point in the room is different.

There's a distance, and therefore a time difference, between the sound leaving the instrument and hitting the surfaces, bouncing and travelling around to create the tail. If the impulse is of a real room, it should already have those characteristics, so predelay becomes more of an additional delay between the dry and wet signal (assuming that's the sound you're after and the character you want to add).

Did you read the EW manual? It actually explains a lot of this and has some other tips. Technically, you can calculate the time difference, but that's not the whole story (there are also materials, tone shaping, the combination of reflections, environmental conditions, the room's reaction to specific source characteristics, the dry-to-wet ratio, etc.). With an algorithmic reverb you have to create the room conditions yourself, which gives you more control over the character of the room and its variants. The wetter the library is (and the more mic perspectives it has, each in essence a sample of the naturally occurring sound at that point in the room, with the actual instrument as the source), the more of these spatial characteristics are built in. The drier it is, the more you have to recreate or emulate. You can still combine multiple approaches.
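The time-difference calculation mentioned above can be sketched with simple geometry and the speed of sound (about 343 m/s in air at room temperature); the path lengths below are made-up example values:

```python
SPEED_OF_SOUND_M_S = 343.0  # approximate, in air at ~20 degrees C

def predelay_ms(direct_path_m: float, first_reflection_path_m: float) -> float:
    """Gap between the direct sound and its first reflection, in milliseconds."""
    extra_distance = first_reflection_path_m - direct_path_m
    return extra_distance / SPEED_OF_SOUND_M_S * 1000.0

# Example: listener 10 m from the source; the first reflection travels 18 m.
print(round(predelay_ms(10.0, 18.0), 1))  # 23.3 (ms)
```

As the post says, this is only a starting point: materials, the room's response and the dry/wet ratio all change how the result is actually perceived.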

Related thread: "Moving brass to the back of the room without reverb" (vi-control.net)


----------



## Ashermusic (Jul 22, 2019)

We need to be careful not to make the terms "ambience" and "reverb" synonymous. They aren't.

When a mic picks up the sound of a room, that is ambience, not reverb.

Reverb is defined as: "an effect whereby the sound produced by an amplifier or an amplified musical instrument is made to reverberate slightly, or a device for producing reverb on an amplified musical instrument. "


----------



## JohnG (Jul 22, 2019)

Bonjour @darkneo57

Pardonnez-moi pour mon français terrible!

I think you are overthinking this subject a little.

I recommend listening carefully and choosing the mic position you like best: the position that gives the sound you prefer, before any reverb and EQ.

Sometimes that is not the "main" position, but in any case, the amount of reverb depends on the position you choose. If you choose a close mic position, you will probably need to use more reverb and also EQ, as @Beat Kaufmann recommends. If you choose a distant position, you naturally need less reverb (and probably no EQ).

Personally, I prefer to choose a mic position that requires no EQ (if possible).

To decide how much reverb to use, use your ears; perhaps you can compare with a recording you admire and make your choice that way.

For predelay, perhaps start with 18 ms and judge for yourself whether you like it.

Bonne chance!

John
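To see how a starting value like 18 ms relates to the musical grid darkneo57 mentioned (135 BPM, staccato eighth notes), here is a bit of plain arithmetic, with no DAW assumptions:

```python
def note_ms(bpm: float, beats: float) -> float:
    """Duration of a note lasting the given number of beats, in milliseconds."""
    return 60000.0 / bpm * beats

bpm = 135.0
quarter = note_ms(bpm, 1.0)  # one beat of the 3/4 measure
eighth = note_ms(bpm, 0.5)   # the staccato figures in the thread

print(f"quarter: {quarter:.1f} ms, eighth: {eighth:.1f} ms")
```

A predelay in the 18-25 ms range is small next to a roughly 222 ms eighth note, so it reads as room depth rather than as a rhythmic offset against the beat.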


----------



## Nick Batzdorf (Jul 22, 2019)

darkneo57 said:


> I heard that if the mics are far, there is already the reverb of the room that is integrated. Could you explain that to me a little bit, please? if i used the main mic, is it ok?



Reverb = the sound of the space the instrument is recorded in (an actual space, or one simulated with a processor).

If you put the mic really close to the instrument, the sound coming directly from it will overbalance the sound bouncing along the other six paths (left and right walls, floor, ceiling, back wall, front wall), so the sound is drier; move the mic farther back and you get more of the space in the balance, a wetter sound.

You can use any or all of the mics, with or without additional reverb. The reason they include multiple positions is so you have a choice of how wet you want it.


----------



## Nick Batzdorf (Jul 22, 2019)

JohnG said:


> Bonne chance!
> 
> John



Amazingly, I understood all of that post!


----------



## darkneo57 (Jul 23, 2019)

Once again, thank you all for your time and kindness.
Thank you for the "good luck" message; I think beginners especially need encouragement, haha. I did a good part of music conservatory, so the composition side is much simpler for me, but in computer music every new thing confronts me with problems, and with my own ignorance.

I read the EastWest manuals for the strings, piano and brass, but that was with my English of six months ago, and my current level is still poor.

I still had some questions:

For my epic orchestral music, I chose a cathedral-type reverb (Hamburg Cathedral, 2.8 s). The instruments I use are sustained strings for chord accompaniment (V1, V2, viola, cello, bass), a solo cello, a piano, legato violins, choirs and brass. For each type of instrument, I took a specific IR, e.g. Hamburg 2.8 s Strings for strings, Hamburg 2.2 s Piano for piano, Hamburg 2.8 s Choirs for choirs.

Is this a bad choice? 
Should I put a single reverb for all instruments with different settings for each instrument?
Would it have been better to use a hall-type reverb?

I know the basic beginner trap is putting on too much reverb and making the music muddy. I started putting reverb on each instrument section with a result that sounded good to my ears. However, when I listened to the overall render last night, I found that it was really muddy; it "sounded like a cathedral", even though the separate tracks sounded good.

Is it because the reverb tails are too long? (I set the decay.)
Is it rather because I put in too much wet signal?
Too much predelay? (25 ms)
Or is my Hamburg Cathedral 2.8 s reverb simply not suitable?

thank you so much


----------



## darkneo57 (Jul 25, 2019)

Please help... is there no Korben Dallas here?


----------



## Beat Kaufmann (Jul 25, 2019)

darkneo57 said:


> ...I chose a cathedral type reverb (hamburg cathedral 2.8s). The instruments I used are sustain srings for chord accompaniments (v1, v2, viola, cello, bass), a cello solo, a piano, legato violins, choirs and brass. For each type of instrument, I took a specific reverb e.g hamburg 2.8s string for strings, hamburg 2.2s piano for piano, hamburger 2.8s choirs for choirs.
> 
> Is this a bad choice?
> Should I put a single reverb for all instruments with different settings for each instrument?
> Would it have been better to use a hall-type reverb?...



You probably want someone to give you a reverb preset for all your instruments, along with the predelay values. Beginners often think you just have to enter the correct dB values everywhere, or the correct ms delay times, and that's it...

You have now received many suggestions from quite a few members. From me, among other things, you got the idea of trying out a different reverb concept. But you are still stuck on your predelay values. It seems that's a dead end.

So here are 4 new suggestions:
1. First think about what you actually want to achieve with your mix. Draw a stage plan showing where your instruments should sit, front to back and left to right. Then be clear about whether you want to hear the whole orchestra from the gallery or more like from the 2nd row. It is best to have an orchestral sound you want to achieve (a reference), so look for such a reference. The final aim is to get as close to its sound as possible.
So start applying reverb and panning section by section until the instruments are where you want them. Don't worry about predelays and other values; just turn the knobs until it sounds right acoustically (I recommend doing everything with the same reverb type).

2. If you don't get satisfactory results, have a professional mix this project. It costs something, but then you can see how they solved all the tasks.

3. Using presets and advice from the forum doesn't really get you anywhere. Forum members can only offer real help if they can hear a sound sample and how it should sound in the end. So in general I recommend taking the time to learn how to mix orchestras from scratch.

4. If you don't want to go route 3, then go for VSS2, MIR or another similar product.

Also, it would be great to hear an example of how your mix sounds at the moment.
Maybe it's not the delay time, as you think, that needs attention.

Beat


----------



## Nick Batzdorf (Jul 25, 2019)

Ashermusic said:


> We need to be careful not to make the terms "ambience" and "reverb" synonymous. They aren't.
> 
> When a mic picks up the sound of a room, that is ambience, not reverb.



My mics are really good. They pick up reverb.


----------



## darkneo57 (Jul 26, 2019)

Hi Beat, thank you very much for answering me and for your help.

I'm sorry, I'm really a beginner in computer music, although that's not the case in music writing. I am not a stubborn person, just a novice without experience trying to understand. I have noted your advice and that of the others, and I will look for information about orchestral mixing.

Regarding my music, although I am a beginner in computer music and mixing, I am counting on, and hoping, to sell it, which is why unfortunately I don't wish to share the files from my computer. Thank you for your understanding, and above all, a big thank you.


----------

