# Zero knowledge of using reverb in orchestration, where to start?



## TheGruppy (Oct 26, 2021)

hi guys, I'm trying to learn how to use strings with the piano, so I tried to replicate some pieces, including "A Game of Croquet" from the movie "The Theory of Everything". I used the Spitfire Chamber Strings library, and considering my little experience the result satisfies me, but the original sounds much wetter than my version, so I would like to use some reverb. I have searched a lot around, including on the forum, but I keep finding notions too advanced for my current knowledge. Do you have any info, advice, or basic tutorials with which I can begin to understand how reverb works in orchestration? (I know how to use reverbs, but only in pop music.) Anything is welcome!


----------



## mybadmemory (Oct 26, 2021)

The very short version: Use a nice hall preset, put it on an aux track, send all channels to it, and adjust to taste.


----------



## mybadmemory (Oct 26, 2021)

The slightly longer version: For dry libraries, some people like to use two reverbs: usually an IR one to create the sound of the stage or room, and then an algorithmic one for the tail. For a wet library like SCS, you'd probably only use the one for the tail, since the room/stage is already in there.

There are many great reverbs to choose from, but some popular ones are Valhalla, Seventh Heaven, and FabFilter. They all contain lots of hall presets, and you can obviously tweak to taste, but 2-3 seconds seems to be a common decay length for orchestral halls.

Many people route the different orchestral sections to different aux channels and busses so they can have individual reverbs (and other processing) for each section, and some even do it separately for long and short notes. For a start, though, you're probably good to go with just one reverb for everything, adjusting the send to taste for each section or track. As for how much to use, I guess that comes down to referencing if you want to stay realistic, or to taste if you want to be creative.
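
If it helps to think of the send/return routing as plain arithmetic, here is a minimal numpy sketch of the idea (all names and gain values here are made up for illustration, and the "reverb" is just a placeholder function): every track reaches the mix dry, and also feeds one shared reverb aux at its own send level.

```python
import numpy as np

def db_to_gain(db):
    """Convert a send level in dB to a linear amplitude multiplier."""
    return 10.0 ** (db / 20.0)

def mix_with_sends(tracks, sends_db, reverb):
    """Sum the dry tracks, feed each one into a shared reverb aux at
    its own send level, and return dry + wet. `reverb` is any function
    mapping a signal to its 100%-wet reverberated version."""
    dry = sum(tracks)
    aux_in = sum(db_to_gain(db) * t for t, db in zip(tracks, sends_db))
    wet = reverb(aux_in)
    out = np.zeros(max(len(dry), len(wet)))  # wet tail may run longer
    out[:len(dry)] += dry
    out[:len(wet)] += wet
    return out
```

Raising a track's send (say from -12 dB to -6 dB) pushes more of it into the shared hall without touching its dry level, which is exactly the "adjust the send to taste per section" workflow described above.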


----------



## ryans (Oct 26, 2021)

I suspect my setup is relatively minimal compared to most here. I usually use 4 reverb returns: PERCUSSION, BRASS, WOODWINDS, and STRINGS.

PERCUSSION is the wettest because it sits at the back of the room, followed by BRASS and then WOODWINDS, with STRINGS the driest.

I find this more or less correlates with the sound of most orchestral recordings.


----------



## mybadmemory (Oct 26, 2021)

Henrik B. Jensen said:


> You can experiment with the microphone positions, in this case of Spitfire Chamber Library.
> 
> @Sarah Mancuso uses the library a lot though, so she may be better at helping you out.



Yeah, forgot to mention the obvious! With SCS being as wet as it is, mic mixing should probably come first, before even adding much reverb!


----------



## Trash Panda (Oct 26, 2021)

I'll probably get yelled at by someone who knows better about these things, but here's a "quick" summation:

Reverb consists of two major pieces: Early Reflections (ER) and Late Reflections (LR), also known as the tail.

ERs tell your brain information about the room the instrument is in (size, density of objects, etc.) and where the instruments are within the room (depth). With dry libraries (those with little to no room information) it's common to add a "room" reverb to simulate these ERs to give the samples a proper sense of being in a real space. Typically this is done with a convolution reverb. Most Spitfire libraries will have the room information baked into the samples themselves from the actual recording venue, so you should not need to worry about this part of the reverb.
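
As an aside for the curious: "convolution reverb" means exactly what it says, the dry signal convolved with a room's recorded impulse response. Here is a toy numpy sketch of that operation; the decaying-noise "IR" is a made-up stand-in, not a real room.

```python
import numpy as np

def convolution_reverb(dry, ir, wet_db=-12.0):
    """Convolve the dry signal with a room impulse response and
    blend the wet result back underneath the dry signal."""
    wet = np.convolve(dry, ir) * 10.0 ** (wet_db / 20.0)
    out = np.zeros(len(wet))   # convolution extends past the dry signal
    out[:len(dry)] += dry
    return out + wet

# Toy "room": one second of exponentially decaying noise.
sr = 8000
rng = np.random.default_rng(0)
t = np.arange(sr) / sr
toy_ir = rng.standard_normal(sr) * np.exp(-6.0 * t)
```

Feeding a single click through this gives the click a one-second tail, which is all a convolution reverb does: every sample of the input triggers a scaled copy of the room's response.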

LRs, or the tail, give that blooming and fade out effect that is so common in large concert halls or score recordings. This is typically added through an algorithmic reverb. They can range from very transparent and natural (Nimbus, Cinematic Rooms) to very colored (Lexicon 480 or Bricasti M7 emulations are popular, as are cheaper options like R4 or Valhalla Room).

For your specific scenario, it's probably best to set up an algorithmic reverb on a separate track with a mix that heavily favors or only has late reflections, send your instrument group busses to it at varying levels (less for closer sections like strings and woodwinds, more for farther sections like brass and percussion). Once that is done, turn the fader on your reverb track all the way down and slowly increase it until you can audibly hear the reverb itself. Then back it off by about 1.5-3 dB, and try muting/unmuting it to compare.

If the performance sounds larger with the reverb on, but doesn't have the obvious reverb effect then you've achieved a pretty natural reverb tail. If you would prefer to hear more reverb, then just turn the reverb fader up more.

I typically have my reverb tracks hanging out somewhere between -21 dB and -18 dB. Cleaner, more transparent reverbs and shorter decay times can typically go higher in volume (to my ears) than a colored reverb or a longer decay time without sounding unnatural.
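
To put those dB figures in perspective, here is the arithmetic behind them; nothing DAW-specific, just the standard dB-to-amplitude conversion.

```python
def db_to_gain(db):
    """Convert a fader/send level in dB to a linear amplitude factor."""
    return 10.0 ** (db / 20.0)

# "Back it off by about 1.5-3 dB" is a gentle trim, not a big cut:
trim = db_to_gain(-3.0)   # ~0.708, i.e. ~71% of the amplitude remains

# A reverb return sitting at -18 dB passes only ~13% of the amplitude:
ret = db_to_gain(-18.0)   # ~0.126
```

So the recipe above (raise until audible, then trim a couple of dB) is really about landing the return just under the threshold where the tail reads as a separate effect.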


----------



## d.healey (Oct 26, 2021)

TheGruppy said:


> understand how reverbs work in orchestration


Are reverbs part of orchestration now? I need to re-read my Rimsky book


----------



## JJP (Oct 26, 2021)

d.healey said:


> Are reverbs part of orchestration now? I need to re-read my Rimsky book


Please, no... and shame on you for even asking this in an internet forum! Now we will have people misinterpreting this simple question and carrying on about how reverb is an integral part of learning orchestration.

Okay, I'm joking... but now that I think about it...


----------



## re-peat (Oct 26, 2021)

d.healey said:


> Are reverbs part of orchestration now? I need to re-read my Rimsky book



Actually, I believe they are. Depending on the libraries you use, reverbs can be as much a part of the orchestration as the writing. What is an obvious reality with real orchestras and needn’t be considered at all in traditional orchestration work (or books), is anything but a given with virtual orchestration: things like placement, distance, balance, depth, blending, space, … All of these — the effectiveness and esthetics (or lack thereof) of which can make or break a mock-orchestral production — require attention and very specific know-how that no traditional orchestrator need ever be concerned with.

Rimsky-Korsakov doesn’t have a chapter on EQ’ing, dynamic processing and imaging either, and yet these are also processes that can define, colour, enhance and animate your orchestral sound every bit as much as the instruments do.

I’m of the viewpoint that everything that contributes to the believability of the illusion is part of the virtual orchestration. And I’m also of the opinion that people would make much better sounding mock-ups if their approach to virtual orchestration included the above considerations.
The right balance between one instrument group and another, or a solo instrument against an accompaniment, is often as much a matter of producing it right as of writing it correctly.

Only to say: use reverb wrongly in a mock-up, and you can ruin what is, on paper, an expertly orchestrated piece of music.

_


----------



## TheGruppy (Oct 27, 2021)

thanks guys, especially @mybadmemory and @Trash Panda, this was the answer I was looking for to get started!

I just don't understand what to use, and with my bad English it's hard to explain... I'll try with a screenshot






I send all my string channels to the group channel "string"... I set the reverb to 100% wet... then do I use "orange fader n°2" on every channel or on the group channel? And after that, do I use "fader n°3"?


----------



## TheGruppy (Oct 27, 2021)

Henrik B. Jensen said:


> You can experiment with the microphone positions, in this case of Spitfire Chamber Library.
> 
> @Sarah Mancuso uses the library a lot though, so she may be better at helping you out.


@Sarah Mancuso if you're out there, give us a sign!!

I understand the use of the close and ambient microphones, but how do I use the tree one?


----------



## TheGruppy (Oct 27, 2021)

@Henrik B. Jensen thanks for your replies and your patience


----------



## Living Fossil (Oct 27, 2021)

Trash Panda said:


> ... Typically this [ERs] is done with a convolution reverb.


I have no clue why this notion is mentioned so often.
Is there a popular YouTuber who made this claim, and everybody thought it was a good concept to pass on to others without seriously challenging it?

Honestly, of course there are great Convo ERs, but there are also great algorithmic ERs.
And there are great algorithmic tails as well as nice Convo Tails.

Both have their place, their strengths and their weak points. I don't think it's possible to turn it into a rule.
(there are also situations where it's better to use Delays instead of ERs)


----------



## Bernard Duc (Oct 27, 2021)

Living Fossil said:


> I have no clue why this notion is mentioned so often.
> Is there a popular Youtuber who makes this claim and everybody thought it's a good concept to tell to others without seriously challenging it?
> 
> Honestly, of course there are great Convo ERs, but there are also great algorithmic ERs.
> ...


I think that, without talking about ER vs tail, convo verbs tend to be better at putting a sound into a space. If the IR is from a real room and recorded correctly, then you will get the sound of this room. On the other hand, an algorithmic reverb, like the Quantec, can also be great at that (and some specialize in it) but many others will add shine to the sound without really changing the feel of the room, which is sometimes a wanted feature (many people will call them "transparent" reverbs). Many of the favourite reverbs of film mixers, like the Lexicon or TC, are fantastic at that, but they won't work as well as other options if you want to place a dry instrument into a room.


----------



## Living Fossil (Oct 27, 2021)

Bernard Duc said:


> I think that, without talking about ER vs tail, convo verbs tend to be better at putting a sound into a space. If the IR is from a real room and recorded correctly, then you will get the sound of this room. On the other hand, an algorithmic reverb, like the Quantec, can also be great at that (and some specialize in it) but many others will add shine to the sound without really changing the feel of the room, which is sometimes a wanted feature (many people will call them "transparent" reverbs). Many of the favourite reverbs of film mixers, like the Lexicon or TC, are fantastic at that, but they won't work as well as other options if you want to place a dry instrument into a room.


I know these arguments, but I tend to disagree.

If you have instruments with some baked-in room, convolution reverbs convolve the room portion of the sample too, which usually results in a sound that is quite muddy.
I liked the convolution IRs that came with LASS a lot 10 years ago, but that was a really dry library.

In general, people tend to forget that the brain is not "hearing" a room. It's busy trying to make sense of the whole bunch of acoustic information. (And in fact, it's a wonder how good it can get at this task if somebody puts a lot of practice into it.)
Now, the information the brain needs to decode the position of a sound source consists of a few very important elements.
With algorithmic solutions for positioning sound sources you can focus on the information you want (or: need) and leave out the mud.
That's why I personally prefer tools like Precedence, SP2016 and (finally) IRCAM's verb in most cases.

P.S. I know there are many roads leading to Rome; I just think reverberation is such a complex topic that there should be much more personal experimentation and less reliance on standard recipes (which many people just pass on after hearing them from some YouTuber/influencer).


----------



## re-peat (Oct 27, 2021)

Living Fossil said:


> I have no clue why this notion is mentioned so often.
> Is there a popular Youtuber who makes this claim and everybody thought it's a good concept to tell to others without seriously challenging it?



I don't get it either. I suppose people do it that way because they believe, or were told, that convolution-generated ER's sound "more real" than algorithmic ones, which, especially in the context of a mock-up, is ludicrous nonsense of course.
It's an old delusion though and, so it seems, like Japanese knotweed, as good as impossible to eradicate. I vaguely recall the idea being introduced many, many years ago here on the forum, and for some reason it caught on (anything that manages to convince people that their mock-ups might sound more realistic by using a certain technique nearly always catches on, whether it is true or not). Since those days it's been passed on from generation to generation of VI-members, no questions asked. Me, I've never heard a mock-up, nor made anything of my own, that indicates, let alone proves, there is any validity to this way of working.

_


----------



## obey (Oct 27, 2021)

What are you guys' favorite algorithmic ER generators?


----------



## Bernard Duc (Oct 27, 2021)

Living Fossil said:


> I know these arguments, but i tend to disagree.
> 
> If you have instruments with some baked-in room, convolution reverbs convolve the room portion of the sample too, which usually results in a sound that is quite muddy.
> I liked the convo IRs that came with LASS a lot 10 years ago, but that was a really dry library.
> ...


But you don't actually disagree; you're saying the same thing as me. Real-room convolution reverb is often considered better at creating a sense of room because it has all this information. It's also why it will often sound muddy when the audio you're processing already has a room baked in: you're basically putting a room inside a room. If you add reverb to wet samples, you're not creating a room from nothing, but rather adding some extra features to the room that already exists, and that's where many algorithmic reverbs are so good.
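
The "room inside a room" effect is easy to demonstrate numerically. In this toy numpy experiment (decaying noise stands in for real IRs; all numbers are arbitrary), convolving two "rooms" with the same decay time produces a combined tail that decays far more slowly than either room alone, which is the mud being described.

```python
import numpy as np

sr = 8000  # a low sample rate keeps the toy fast
rng = np.random.default_rng(1)

def toy_ir(rt60, seconds=1.0):
    """Decaying noise as a stand-in for a room IR: the amplitude
    envelope falls by 60 dB over `rt60` seconds."""
    n = int(sr * seconds)
    t = np.arange(n) / sr
    return rng.standard_normal(n) * 10.0 ** (-3.0 * t / rt60)

def tail_level_db(sig, at_sec):
    """RMS level (dB) of a 100 ms window starting at `at_sec`."""
    i = int(at_sec * sr)
    win = sig[i : i + sr // 10]
    return 20.0 * np.log10(np.sqrt(np.mean(win ** 2)))

room_a = toy_ir(rt60=0.4)           # the room baked into the sample
room_b = toy_ir(rt60=0.4)           # the convolution reverb's room
both = np.convolve(room_a, room_b)  # a room inside a room
```

Half a second into the tail, the doubly-convolved version (after normalizing both to the same peak) sits well above the single room, i.e. the combined space rings on much longer than either room was designed to.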


----------



## DennyB (Oct 27, 2021)

Bernard Duc said:


> Real-room convolution reverb is often considered better at creating a sense of room because it has all this information. It's also why it will often sound muddy when the audio you're processing already has a room baked in: you're basically putting a room inside a room. If you add reverb to wet samples, you're not creating a room from nothing, but rather adding some extra features to the room that already exists, and that's where many algorithmic reverbs are so good.


I found this to be super clarifying (as a newbie). Interested if others feel the same.


----------



## Trash Panda (Oct 27, 2021)

I told you I would get yelled at by more knowledgeable people.  



Living Fossil said:


> I have no clue why this notion is mentioned so often.
> Is there a popular Youtuber who makes this claim and everybody thought it's a good concept to tell to others without seriously challenging it?
> 
> Honestly, of course there are great Convo ERs, but there are also great algorithmic ERs.
> ...


I never said using a convolution reverb to place samples in a room is a rule, just that it's a common approach. I also mentioned that if you are using samples with baked in room information, such as Spitfire libraries recorded at AIR Lyndhurst, adding additional ERs is not needed.

I think the real issue in the "convo vs algo" debate for ERs is that it's mostly just people typing their opinions in text, which doesn't help the broader audience of impressionable lurkers hear the difference for themselves unless they are open to downloading demos.


----------



## ShemS76 (Oct 27, 2021)

Reverb is probably one of the most hotly debated effects in all of music production, but I will say I've seen some pretty good advice here that's giving me some ideas to play with. 

All the mixing and effects processing I've been learning about kind of makes me miss the old days of composing. Back then all you had to do was write a piece and then not be able to get it performed!


----------



## KEM (Oct 27, 2021)

Trash Panda said:


> I'll probably get yelled at by someone who knows better about these things, but here's a "quick" summation:
> 
> Reverb consists of two major pieces - Early Reflections (ER) and Late Reflections (LR) also known as the tail.
> 
> ...



I currently have all my orchestral instruments going to my Cinematic Rooms sends at -15 dB, but I'm thinking of going down to -20 dB or lower so there isn't as much buildup.


----------



## re-peat (Oct 28, 2021)

obey said:


> What are you guys' favorite algorithmic ER generators?



For, say, 80% of the common orchestral spatialization tasks, I use a combination of a decent reverb (any will do), an EQ, a stereo tool and, fairly often, a delay. For the remaining 20%, when things are a bit more critical, I load up IrcamSPAT.
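
For what it's worth, here is a crude numpy sketch of how far delay and level alone already go toward placing a source at a distance; the function and numbers are my own illustration of the general idea, not re-peat's actual chain.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # metres per second

def place_at_distance(sig, sr, distance_m, ref_m=1.0):
    """Apply two of the strongest depth cues a delay/level chain
    provides: arrival delayed by time-of-flight relative to a
    reference distance, and level attenuated by 1/r."""
    delay = int(sr * (distance_m - ref_m) / SPEED_OF_SOUND)
    gain = ref_m / distance_m
    out = np.zeros(len(sig) + delay)
    out[delay:] = sig * gain
    return out
```

A fuller chain would also roll off highs with distance (air absorption) and raise the wet/dry ratio as the source moves back, which is where the EQ and reverb mentioned above come in.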

Here’s a little video of SPAT in action.
(This is a very superficial demo — concerned almost exclusively with positioning — but SPAT is an incredibly sophisticated piece of software that offers parameters for just about any aspect of spatialization you can imagine, and a few dozen more for aspects which you can't imagine. Amazing tool.)

The orchestral backing is a bit rough in places, sorry about that, but as this demo is about SPAT, I gave the orchestral accompaniment only a minimum of attention.

Note that none of the orchestral instruments or sections use any additional reverb whatsoever. They’re mostly Spitfire (Sable strings, plus brass and woodwinds from the old BML series), and I only used the included mics of these libraries to define the space. Same with the tuba, which is from Berlin Brass and only uses the tree and surround mics, no reverb.

The trumpet and the tenor saxophone are both SampleModelling instruments, in other words as dry as can be, and they’re each going through their own instance of SPAT.

Sadly, the version of SPAT I work with, v3, is no longer available. It's been replaced by something even more complex, which now seems built more for post-production duties than for spatialization in the context of a music mix.

_


----------



## Living Fossil (Oct 28, 2021)

Bernard Duc said:


> But you don't actually disagree, you're saying the same as me. Real room convolution reverb is often considered better at creating a sense of room because it has all this information.


Actually, some years ago I replaced all convolution ERs with the algorithmic solutions I mentioned, even on completely dry libraries. I simply prefer the results.


----------



## Bernard Duc (Oct 28, 2021)

DennyB said:


> I found this to be super clarifying (as a newbie). Interested if others feel the same.


The ultimate rule however, and the one that makes everything simpler and more complicated at the same time, is that what matters is how it sounds. Convolution or algorithmic are simply technologies that can be used for different purposes.


Living Fossil said:


> Actually, some years ago I replaced all convolution ERs with the algorithmic solutions I mentioned, even on completely dry libraries. I simply prefer the results.


Yes, some algorithmic reverbs are very good at it, and they have the advantage of being more flexible (to my ears, processing on the IR doesn’t sound as good). But I would say they're a minority, while all the well-recorded IRs of real rooms have plenty of room information. It makes me think: it might be worth doing a test comparing different reverbs at placing a dry sound in a space. I would be happy to contribute a few convolution and algorithmic ones.


----------



## Rob (Oct 28, 2021)

Bernard Duc said:


> The ultimate rule however, and the one that makes everything simpler and more complicated at the same time, is that what matters is how it sounds.


that sums it up, I think... me, I just use my ear to apply reverb, no scientific principles.


----------



## Tralen (Oct 29, 2021)

I don't know if I'm against the grain here, but I will say that the major technique with reverb is simply not adding more of it.


----------

