# Reverb ER - final explanation needed



## pixel (May 16, 2015)

Hi. I have another (yeah, I know) reverb question.
I'm getting confused because different people have opposing views of early reflections (ER) and pre-delay.
Which settings are correct?

*A*
1-Source Far: Pre-delay: 5ms Wet: 80%
2-Source Mid: Pre-delay: 15ms Wet: 50%
3-Source Close: Pre-delay: 35ms Wet: 30%

*B*
1-Source Far: Pre-delay: 35ms Wet: 80%
2-Source Mid: Pre-delay: 15ms Wet: 50%
3-Source Close: Pre-delay: 5ms Wet: 30%

My thought is that setting A is correct. Is pre-delay the time for the sound to travel from the source to the wall? Or is it the time for the ER to travel from the wall back to the listener?
Also: more wet = more distance, right?


----------



## Hannes (May 16, 2015)

PreDelay has nothing to do with early reflections. It's the time gap between the original sound and the tail (though it may work differently in some reverbs).
The ERs sit between the original sound and the tail; they are the reflections off the walls and the floor, which is what e.g. VSS2 calculates.

As far as I know, the distance of the instrument is mostly noticeable through the delay and the loudness of the early reflections. The sound also changes with distance (fewer high frequencies, I think).

And the tail should be more or less the same for every instrument in the room. So more wet doesn't necessarily mean more distance...
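The distance cues Hannes mentions can be put in rough numbers. As an illustration (my own sketch, not from anyone's post), the level cue alone follows the inverse-square law: roughly 6 dB quieter per doubling of distance in the free field.

```python
import math

def level_drop_db(distance_m, ref_m=1.0):
    """Free-field level change in dB relative to ref_m (inverse-square law)."""
    return -20.0 * math.log10(distance_m / ref_m)

for d in (1, 2, 4, 8):
    print(d, "m:", round(level_drop_db(d), 1), "dB")
# 1 m: 0.0 dB, 2 m: -6.0 dB, 4 m: -12.0 dB, 8 m: -18.1 dB
```

The high-frequency cue (air absorption) depends on frequency and humidity, so it isn't captured by this one-liner; the point is only that level falls off predictably with distance.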

I'm working on a template with a lot of dry samples: I have an instance of VSS2 for every instrument and one send instance of the LX480 for the tail, and it works quite well so far...

Greets


----------



## Jason_D (May 16, 2015)

https://youtu.be/jSVdel5YabI?t=605


----------



## pixel (May 16, 2015)

Thanks, guys. Unfortunately, I still didn't get an answer to my question: is pre-delay (A) the time (distance) between the source and the wall, or (B) the time from when the sound hits the wall until the reflections reach the listener?
I know all the other reverb parameters (and their functions), but pre-delay still confuses me. Talking with people, I've found two different descriptions of what pre-delay does in a reverb (A and B above).
I'm probably complicating it unnecessarily, but I'm a detail maniac and I love to know everything about the technology I use :D
It seems that Vlado's video explains it best: https://www.youtube.com/watch?v=RoFItNTs8hQ


----------



## Nick Batzdorf (May 16, 2015)

I'm confused by what you're asking about proper settings, but predelay is the first seven paths to hit your ear: the direct sound and the first bounces off the side walls, front wall, floor, ceiling, and back wall. Those are all discrete "copies" of the original sound.

After that the bounces build up into a wash - the reverb.



> PreDelay has nothing to do with Early Reflections



It depends how you look at it. In nature it does, but in a reverb processor it's independent.


----------



## Nick Batzdorf (May 16, 2015)

Also, ERs are heard as part of the sound source, at least they are if they're within the first 50ms.


----------



## re-peat (May 17, 2015)

Pixel,

Put simply, predelay, as a parameter in a reverb, is the amount of time it takes before you hear the room’s response to a source signal. (And that response is not just the tail, but also the ER’s which always make up the first part of the response of a room to a source signal.)
So yes, if you insist on splitting it up: predelay is the sum of the time needed for the source to reach the surfaces of the room, PLUS the time required for the room’s reaction to reach your ears.
Note though, and this is relevant, that not all of a room’s surfaces are reached in the exact same amount of time. And that not all surfaces of a room are always equally reflective. And that your position in the room also determines which reflections will reach you first (although this is a parameter that complicates matters far beyond most practical uses in most mixes).
Which is why Early Reflections contain only reflections of just a few surfaces (not of the entire room): the first ones which respond to the source. It’s these reflections, which sound not entirely dissimilar to a cluster of slap-back delays, that contain important suggestiveness as to size (of the room) and position (of the source).

However, and this might be interesting: in a large room, and with the source being quite some distance away from the listener, there is no audible distinction anymore between the Early Reflections and the tail, as both simply dissolve into one diffused response. Nature of the blurry beast. It’s a mistake one often hears in mock-up mixes: far too much ER’s in a hall’s reverberation. Why is that a mistake? Because the distinct presence of ER’s suggests the very opposite of what the amount and the length of the rest of the reverb suggest. Put differently: Early Reflections, due to their inevitable less diffused immediacy, always generate a certain sense of confinement, which works miracles-of-spatial-suggestiveness in smaller spaces (rooms, chambers, smaller halls), but tends to create a certain unnatural and certainly undesirable feeling of being “boxed in” when present in (the response of) large spaces.
And this first mistake then often leads to a second one: somehow picking up on the fact that their reverb setting — with the illogical presence of ER’s — doesn’t give them the sense of spaciousness and depth that they’re looking for, people then try to solve the problem by simply adding more reverb. Very bad idea, invariably resulting in thrombosified mixes clotted with reverb — the wrong reverb — and still without any real and/or musical sense of space, position or depth.
It is, in my opinion, a good idea to remove the ER’s altogether, or at the very least start with their level at a barely audible setting, if you want to simulate great distance in a big, wide venue. Bring some in if you must (this will depend on the mix and/or the particular illusion you hope to create), but dial cautiously. 

More wet is not always equal to more distance, although the two are obviously linked. But wetness (and its diffusion and frequency spectrum) is also very much determined by the character of the room and its dominant surfaces. Only to say: there’s many more factors at play here — and they all contribute important details to the illusion of space, character of the room, distance and position — than just the amount of wetness. Very much worth it, I find, to study them all separately and carefully listen to how they each contribute a very specific characteristic to the process and the effect of spatialization.
And contrary to what many people think, tail length doesn’t always tell us the whole story about the size of a room either. If a room has very reflective surfaces, for example, you can have a surprisingly long tail even in a smallish space, because “the reflections will keep reflecting”.
Likewise, a big space can generate a remarkably short tail when it has very absorbent surfaces. In such cases, the setting for predelay might be more useful than the actual length of the tail, to give us some idea of the size.
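Re-peat's point about tail length versus room size can be illustrated with Sabine's classic reverberation formula (standard acoustics, not something from this thread): RT60 = 0.161 * V / (S * a), where V is the room volume, S the total surface area and a the average absorption coefficient. With invented but plausible figures, a small tiled room easily out-rings a large damped hall:

```python
def rt60_sabine(volume_m3, surface_m2, absorption_coeff):
    """Reverberation time in seconds via Sabine: RT60 = 0.161 * V / (S * a)."""
    return 0.161 * volume_m3 / (surface_m2 * absorption_coeff)

# Smallish but very reflective room (tiled surfaces; invented figures):
small_reflective = rt60_sabine(volume_m3=150, surface_m2=190, absorption_coeff=0.02)
# Large but very absorbent hall (drapes, padded seats; invented figures):
large_absorbent = rt60_sabine(volume_m3=4000, surface_m2=2000, absorption_coeff=0.45)

print(round(small_reflective, 2))  # about 6.36 s: a long tail in a small space
print(round(large_absorbent, 2))   # about 0.72 s: a short tail in a big space
```

So the tail alone can mislead about size, which is exactly why the predelay/ITDG can be the more reliable cue.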

_


----------



## Hannes_F (May 17, 2015)

What Piet says. When you move your head left and right while listening to your mix and it feels as if you are hitting a wall (too early), then it is time to reduce the ERs.

Also, the predelay figures, or the initial time delay gap (ITDG), are regularly assumed to be too high when it comes to concert halls. Read this paper:
http://www.aes.org/technical/heyser/dow ... eranek.pdf

Also, in direct response to your question: yes, but the figures should be lower. See this animation:
http://www.syntheticwave.de/ITDG.htm

HTH Hannes


----------



## phil_wc (May 17, 2015)

Came here to grab the knowledge. Great topic and explanation! Thanks all.


----------



## tokatila (May 17, 2015)

I know exactly what you are asking about pre-delay, since one can find contradictory information on this matter.

In a nutshell: do you increase the pre-delay when the source is further away, or do you decrease it?

So, which is "more" correct in a typical orchestral setting, where the strings are in front of the percussion (don't mind the exact values)?

A)
String section: Pre-delay 15 ms
Percussion: Pre-delay 50 ms

or 

B) 
String section: Pre-delay 50 ms
Percussion: Pre-delay 15ms


----------



## Hannes_F (May 17, 2015)

@tokatila
Whatever you like better, but for me it would be neither. 

Make it 15 ms for strings and 5 ms for percussion.

However, this is only true if you don't have any other ERs. If they are already there, then you can either try to stay away from them with your second reverb or creep in with a mixture of ERs.

But don't take exact values from an internet forum, try to experiment and listen to what does what.


----------



## tokatila (May 17, 2015)

Hannes_F @ Sun May 17 said:


> @tokatila
> Whatever you like better, but for me it would be neither.
> 
> Make it 15 ms for strings and 5 ms for percussion.
> ...



Don't mind the values :wink: The point was whether the pre-delay is shorter or longer for far-away sources. I think this is what the original poster was pondering.

Do you push sources further away by decreasing or increasing the pre-delay (forgetting other variables)? In your example you _decrease_ the pre-delay to push a source _further_ away (increase depth).


----------



## Hannes (May 17, 2015)

I just found a sheet about PreDelay that my teacher gave me a while ago - I think it's downloadable somewhere on the internet (in German, though). It was written by Eberhard Sengpiel, a renowned German recording engineer.
Here is what he says about the PreDelay (ITDG):

"Die Größe der Anfangszeitlücke ITDG wird vom Abstand der Schallquelle zum Mikrofon, bzw. zum Hörer bestimmt. Nahe Schallquellen ergeben eine längere ITDG und entfernte Schallquellen ergeben eine kürzere ITDG."

My attempt at a translation: :D
"The size of the initial time delay gap (ITDG) is determined by the distance between the sound source and the microphone, or the listener. Close sound sources produce a longer ITDG, and distant sound sources a shorter one."

So you should decrease the PreDelay to push the sound further away, and vice versa.
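Sengpiel's rule is easy to check with a little geometry. The sketch below (my own toy model, not from the handout) mirrors the source across a single reflective wall to get the first-reflection path, and computes the ITDG as the extra travel time over the direct path:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def itdg_ms(source, listener, wall_x):
    """ITDG in milliseconds for a single reflective wall at x = wall_x (2-D)."""
    direct = math.dist(source, listener)
    # Mirror the source across the wall: the image-to-listener distance
    # equals the length of the reflected path source -> wall -> listener.
    image = (2 * wall_x - source[0], source[1])
    reflected = math.dist(image, listener)
    return (reflected - direct) / SPEED_OF_SOUND * 1000.0

listener = (2.0, 0.0)
close = itdg_ms((1.0, 0.0), listener, wall_x=10.0)  # source near the listener
far = itdg_ms((8.0, 0.0), listener, wall_x=10.0)    # source pushed toward the wall
print(round(close, 1), round(far, 1))  # 46.6 11.7: closer source, longer gap
```

A source near the listener yields a gap of tens of milliseconds; move it back toward the wall and the direct and reflected paths converge, so the gap shrinks: exactly the "closer source, longer ITDG" rule from the quote.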

That's also what the animation of (the other) Hannes' link shows :wink: 

greets,
Hannes


----------



## pixel (May 17, 2015)

Oh, thank you so much for the answers! Now I will spend some time reading them carefully.
Guys you rock!


----------



## KEnK (May 17, 2015)

I pretty much do the close, mid, and far ER busses and a separate tail bus
that people refer to here (other things too).

But there are some things I'm still not certain of -
I've done some experimenting and found no definitive answer.

Q: Do people prefer algorithmic reverbs for ERs and convolution for the tail?
I've tried both; I'm not sure if one way is better.

Q: What about sending the ERs to the tail,
as opposed to separate sends/tracks for both ER and tail?
Again, both methods work, but I prefer sending the ERs to the tail.

Q: Slightly OT - I've been trying some more complicated mix bus routing:
separate bus compressors for low-end, mid-range, and high-end instruments,
sometimes with a little bus compression on the reverb returns only.
This can be good when used sparingly.

any thoughts?

k


----------



## phil_wc (May 17, 2015)

KEnK @ Sun May 17 said:


> Q- Do people prefer algorithmic reverbs for ERs and convolution for Tail?
> I've tried both, not sure if one way is better.
> 
> Q- What about sending the ERs to the Tail-
> ...



1. I know some people who use convolution for a realistic room, then send it to an algorithmic reverb for a controllable tail. I think this is a good idea if you want realism.

2. You should send the ERs to the tail because it's the same room: the tail reverb picks up the room sound and generates the tail from it. Then you just control the send level.


----------



## re-peat (May 18, 2015)

phil_wc @ Mon May 18 said:


> (...) if you want realistic (...)


'Realistic' doesn't enter into it. 'Realistic' disappears from the entire equation the moment you decide to load up a sample library to make music with.
Any consideration pertaining to ‘realism’ in mock-ups is only meaningful insofar as it doesn’t ignore (the consequences of) the true nature and identity of what a mock-up actually is.

Even if it were true that convolution reverbs can generate a spatial response that is closer to what happens in real life, than algorithmic reverbs can — something which I disagree with quite strongly by the way — that is entirely without meaning and value when working with source sounds that are, in themselves, completely artificial, as any sampled or modeled instrument is. (And not just artificial with regard to timbre, but artificial in just about every single one of their musical abilities, aspects and manifestations).

See, if I decide to use, say, SM's trumpet with Pianoteq, Ravenscroft, Berlin Woodwinds, HollywoodStrings, DimensionStrings, LASS or Sable or whatever (and hope to create a so-called “real-sounding” piece of music with these sounds), I have several *very* serious problems to begin with: none of these instruments/libraries sound and behave — let alone: interact with one another — in a realistic way. And that's just for starters. Some can be made to sound fairly decent and some even have a cunning ability to suggest the real instrument(s) which they are supposed to simulate, yes, but still: none of them come anywhere near to what any person equipped with good and honest ears would consider ‘real’. In short: the opening gambit of any mock-up production is one which presents mostly huge and fundamental problems (both technically and musically).
And no matter which type of spatialization I throw at all these problems — good, bad, cheap, expensive, convolution, algorithmic, whatever … — they won't go away. They just won't. Not with a Bricasti, not with QL Spaces, not with MIR, not with B2, not with SPAT, Waves, ReLab or Phoenix or whatever, not even if you were to play back those samples (or modeled sounds) in a real room and then capture the resulting interaction. Because all those problems are 100% intrinsic to the source (and increase dramatically when combining several of such sources).
The wise thing, it seems to me, is to accept that. And accepting that also means realizing that there is absolutely no point in trying to create a "realistic space" around these (combinations of) fake instruments. Because the attention-grabbing sounds of the end mix will always be the problematic ones: not the space (whether it is supposedly ‘real’ or not), but the clumsy mimicry of sampled and modeled instruments and their performances.

Realism (as the resulting sound of a source and its spatialization) is not something you can have in degrees. It's either there or it isn't. And in order for it to be there, both the source (as a sonic and musical presence) and the enfolding space, and the all-important interaction between these two, need to be real.
You can't have, say, 15% of such realism in a mix. And even if you could, it will never function or be perceived as such. And a certain degree of realism, assuming you managed to bring that in, will certainly not spread, like a beneficial virus, into the rest of the mix and make that more real as well. That simply doesn’t happen. (There is, to give a simple example, no difference in the realism of SampleModeling’s trumpet whether you hear it in a real, a convolution-based or an algorithmically-generated space. It always is and remains a fake trumpet. A pretty good fake trumpet, sure, but fake nonetheless.) In fact, it is the other way round: no matter how 'real' you may believe a convolution-generated space to be, that realism will *always* be compromised and even annihilated by even the lowest percentage of artificiality that a mix contains. (And in mock-ups, alas, that percentage is invariably and inevitably, frighteningly high.)

Just listen to any of the mock-ups posted in the Members' Composition section: before anything else, they're all — without exception — exhibitions of sonic, timbral and performance-related problems. All of them. There's not a single one that sounds good (as in: *really* good), there's certainly not a single one that even begins to sound 'real', and, more to the point: there is not a single one where whatever ‘realism’ the spatialization may be said to contribute narrows the gap between fake and real, not even in the slightest of ways.

The only sensible consideration, I feel, when working on an orchestral mock-up, should be: how can I make this parade of sonic, timbral and performance-related problems sound as good and enjoyable (and ‘musically projective’) as I possibly can. Not 'as realistic as I possibly can', because that is a futile and, forgive me, thoroughly foolish aim.

And in order to make a mock-up sound good and enjoyable, you have, I believe, to be able to distinguish between emulation which matters and emulation which doesn't.
Under 'emulation which matters', I would put: 
(1) choosing (and/or combining) your libraries tastefully, carefully and wisely, 
(2) programming your samples with insight, knowledge and musical understanding 
(3) applying your production tools in function and service of whatever the music is meant to communicate, NOT in function of some abstract, theoretical and totally irrelevant interpretation of what 'realism dictates you're supposed to do'.

And under 'emulation which doesn't matter', I place: 
(1) realistic spatialization
(2) anything to do with attempting to generate high-end audio (though this is a different topic)

A mix is only as convincing as its least convincing ingredient, and since the most prominent and ear-grabbing ingredients in a mock-up — its timbres, performances and sonic interactions — aren't very convincing to begin with (at least, to my ears they aren't), I really fail to see any value in trying to elevate the believability of an ingredient which is of far less defining importance — the enfolding space, to name the one which we’re discussing here — beyond making it simply _functional and non-distracting_.

Functional and non-distracting, those are the fellas for me.

So, what to do? Well, it is simple really. Accept the artificiality of a mock-up (even embrace and exploit it, I’d say), study the technique of using artificial reverberation (and how such a reverb functions in a mix: understand what it can, but also what it can't do) and somehow learn to use your tools so that you’re able to wrap your music in an idiomatic, communicative and ear-pleasing sound. And, like I said at the start, only be concerned about whether things conform to how ‘captured orchestral reality’ says such things are supposed to sound, to the extent that this doesn’t ignore the actual identity and ability of the materials and tools you choose to work with.

There are only two things in a mock-up that can ever be truly ‘real’: (1) the musical content and the talent behind it, and (2) good sound and the craft to arrive at that result. All the rest is, I fear, self- or fellow-member-delusional nonsense (complicating things that shouldn’t be complicated) or, worse, the misleading fable-spinning work of pseudo-experts and/or some developers.

_


----------



## KEnK (May 18, 2015)

re-peat @ Mon May 18 said:


> ...Even if it were true that convolution reverbs can generate a spatial response that is closer to what happens in real life, than algorithmic reverbs can — _something which I disagree with quite strongly by the way_ — that is entirely without meaning and value when working with source sounds that are, in themselves, completely artificial, as any sampled or modeled instrument is...


thanks peat

k


----------



## pixel (May 18, 2015)

I don't think anybody really believes that it's possible to recreate 100% realistic sound with the artificial tools we have today.
Simple things like mic bleed don't exist when working with sample libraries.
Reverb, second. You'd need some crazy omni-directional sound system to recreate the illusion of being in the venue (hall, stage, cathedral, etc.).

BTW re-peat, thank you for spending so much of your time giving us answers.


----------



## phil_wc (May 18, 2015)

Oh, I see. I didn't look at it that deeply. But I agree. :roll: 
I just meant the reverb only. I know it depends.


----------



## Nick Batzdorf (May 18, 2015)

Piet, I don't think this is right:



> And that response is not just the tail, but also the ER’s which always make up the first part of the response of a room to a source signal



Unless I've been wrong since the dinosaur era, predelay in reverb units is between the ERs and tail.

It could happen - I was wrong once - but I don't think so this time.


----------



## Nick Batzdorf (May 18, 2015)

Hm. If you look at Waves IR-1, it seems both are right. It lets you set separate predelays for the direct signal, ERs, and tail (or you can link them).


----------



## Nick Batzdorf (May 18, 2015)

Now I'm not so sure. You're probably right after all, Piet.


----------



## re-peat (May 18, 2015)

pixel @ Mon May 18 said:


> I think that nobody really believe that it's possible to recreate 100% realistic sound by artificial tools that we have today  (...)


I guess not but I know people, of an otherwise sane constitution and in perfectly good health, who actually measure their virtual rooms, carefully calculate their predelays and engage in all sorts of strange mathematics in order to set up the parameters of their reverbs as ‘realistically’ as possible, or so they maintain.
There also are people, so I’m told anyway, who use different reverbs for their ER’s and tails, believing as they do that this is the only way to get closest to reality.
There’s even a group of individuals who insist on knowing in which precise location their sample libraries were recorded and who would love nothing better than to get their hands on impulse responses from these places. All in the pursuit of increasing the ‘realism’ in their mock-ups.
And most bewilderingly of all, it appears that there are some who swear by the idea that, for example, the Dimension Strings spatialized with MIR will sound more ‘real’ than when spatialized with a common algorithmic reverb. Doubtful as it may be that the existence of such people will ever be proven, the rumour that they roam among us is quite strong nonetheless.

_


----------



## Beat Kaufmann (May 21, 2015)

re-peat @ Tue 19 May said:


> And most bewilderingly of all, it appears that there are some who swear by the idea that, for example, the Dimension Strings spatialized with MIR will sound more ‘real’ than when spatialized with a common algorithmic reverb. Doubtful as it may be that the existence of such people will ever be proven, the rumour that they roam among us, is quite strong nonetheless.
> 
> _


Well said...
Beat


----------

