# Early reflections - can someone set me straight here?



## salbinti (Jul 21, 2010)

Hi - I think I understand what early reflections are: the first sounds that reach your ear after the direct sound of the instrument (sample) itself. OK, but how does this work in the world of VIs? Say you have Altiverb - how do you set an "early reflection," as compared to, say, a normal reverb setting that starts as soon as the sample is triggered and then tails off? Doesn't the early reflection then just tail off like a "normal" reverb? And if you have early reflections, is it always necessary to have normal reverb too? Hope that makes sense.

Any real-world examples of how to use early reflections would be appreciated. Thanks!


----------



## Cinemascore (Jul 21, 2010)

The early reflections are usually contained within the first 80-100ms of a reverb signature, especially with convolution impulses. After that, the "late reflections" (the reverb tail) take over. In Altiverb, there are separate level controls for both the ER and LR (tail) portions of the reverb impulse.

The time difference between when the direct sound hits the ear and when the very first reflection does the same is called the ITDG. Both delays are easy to calculate if you know the distances involved (distance in meters divided by the speed of sound). By adding the appropriate ITDG delays to your sampled instruments, you can recreate depth more realistically than by using a reverb alone.
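The distance-to-delay arithmetic described above can be sketched in a few lines. The distances below (10 m direct path, 17 m shortest reflected path) are invented for illustration:

```python
# Sketch: direct-path delay, first-reflection delay, and the ITDG,
# computed from path lengths. All distances are illustrative assumptions.
SPEED_OF_SOUND = 343.0  # m/s, approximate value at ~20 °C


def delay_ms(distance_m: float) -> float:
    """Time for sound to travel distance_m, in milliseconds."""
    return distance_m / SPEED_OF_SOUND * 1000.0


# Example: listener 10 m from the source; the shortest reflected path
# (e.g. floor or side wall) totals 17 m.
direct = delay_ms(10.0)            # roughly 29 ms
first_reflection = delay_ms(17.0)  # roughly 50 ms
itdg = first_reflection - direct   # the gap between them, ~20 ms

print(f"direct: {direct:.1f} ms, "
      f"first reflection: {first_reflection:.1f} ms, "
      f"ITDG: {itdg:.1f} ms")
```

A pre-delay of roughly the ITDG value on the reverb send is the usual practical way to apply this to a sampled instrument.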


----------



## chimuelo (Jul 22, 2010)

It's what is referred to as the Initial Time Delay Gap.
Similar to pre-delay.
I prefer RAM attached to a DSP chip, like the hardware units have.
I still think that if software algorithmic reverbs could have their ERs stored in the CPU's cache (nanoseconds, also), it would yield much better results.


----------



## gmet (Jul 22, 2010)


A simple diagram to show the three phases:


----------



## Peter Emanuel Roos (Jul 22, 2010)

When I have time I will dive into my psychophysics literature, I believe ERs are from 50 msec to 150 msec.


----------



## Dan Mott (Jul 22, 2010)

chimuelo @ Thu Jul 22 said:


> It's what is referred to as the Initial Time Delay Gap.
> Similar to pre-delay.
> I prefer RAM attached to a DSP chip, like the hardware units have.
> The measurements are more accurate, and since it deals with nanoseconds instead of milliseconds, you don't have the latency of accessing system RAM.
> ...




WTF!!??


----------



## EnTaroAdun (Jul 22, 2010)

Yes, this makes absolutely no sense, since the timings of the reverb algorithm and the processing of the audio stream *are not influenced by the timings of the CPU and RAM.*



Emanuel @ 2010-07-22 said:


> When I have time I will dive into my psychophysics literature, I believe ERs are from 50 msec to 150 msec.


As far as I know, there's no definitive definition of what counts as ERs.
There's a continuous transition between the ERs and the tail, and where someone draws the line is always a bit subjective (it also depends on the room, of course).


----------



## Andrew Souter (Jul 22, 2010)

From the Aether 1.5 manual:

http://www.2caudio.com/products/aether/pdf/2CAudio_Aether_Manual.pdf

ERs:

The Early Reflections Engine provides discrete delay patterns, which provide perceptual cues that identify the nature of the given acoustic space and provide the brain with spatial information such as the position of the sound source. Early Reflections generally occur in the first couple hundred milliseconds and can be heard (or at least seen in a waveform editor) as distinct delay taps or initial reflections off of surfaces in the acoustic space. Most acoustic spaces will result in an increasing density and complexity of echoes as time passes in the Impulse Responses of the given space. Once the echo tap distribution becomes so complex and dense that it is difficult or impossible to distinguish individual echoes, the ER phase of the Impulse Response is over and the Late Reflections phase has begun. The ER phase normally only lasts a couple hundred milliseconds at most in real-world acoustic spaces, but Aether allows things to be exaggerated by offering a huge ER Size range in which ER taps may last up to a few seconds.

LRs:

As explained previously, Impulse Responses of real acoustic spaces generally show that delay tap density and complexity increases over time. Once the delay tap distribution becomes so complex and dense that it is difficult or impossible to distinguish individual delays, the Early Reflections phase of the Impulse Response is over and the Late Reflections phase has begun. The Late Reflections phase is characterized by a very dense and spatially diffuse distribution of delays, and is best described by statistical and stochastic methods instead of specific delay taps. The Aether LR Engine is a highly configurable and flexible construct which can model these types of processes with the ultimate flexibility and precision. It is responsible for providing the "Tail" of the reverb. LR parameters control the overall statistical characteristics of this complex and diffuse "Tail".

Maybe this is helpful?
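The manual's description of ERs as distinct delay taps can be illustrated with a generic multi-tap sketch (this is NOT Aether's actual algorithm; the tap delays and gains below are invented):

```python
# Generic early-reflection sketch: a handful of discrete (delay, gain)
# taps, each a delayed, attenuated copy of the dry signal.
SAMPLE_RATE = 48_000

# (delay in samples, gain) pairs -- illustrative values only.
# At 48 kHz these fall at 19, 31, 47 and 63 ms.
ER_TAPS = [
    (912, 0.60),
    (1488, 0.45),
    (2256, 0.35),
    (3024, 0.25),
]


def early_reflections(dry: list[float]) -> list[float]:
    """Sum delayed, attenuated copies of `dry`, one per tap."""
    max_delay = max(d for d, _ in ER_TAPS)
    out = [0.0] * (len(dry) + max_delay)
    for delay, gain in ER_TAPS:
        for i, x in enumerate(dry):
            out[i + delay] += gain * x
    return out


# Feed in a single impulse: the output shows the discrete taps,
# exactly the kind of pattern you would see in a waveform editor.
impulse = [1.0] + [0.0] * 10
er = early_reflections(impulse)
```

In a real reverb the taps would be derived from room geometry (or measured impulse responses) and filtered per reflection; this sketch only shows the discrete-tap structure the manual describes.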


----------



## Narval (Jul 22, 2010)

Justin M, spot on! That picture explains everything to anyone who has a bit of common sense.

No need to be anal about numbers. Of course the border between "early reflections" and "subsequent reverberations" is not marked with barbed wire. There's actually no border, and no "early" and no "subsequent." You think there really are such things? Wanna go semantic? OK - what does "early" mean? And what's "subsequent"? Subsequent to what? Why subsequent to 80ms and not to 150ms? Also, what's "direct"? Why does "direct" end at 40ms and not at 10ms? Why shouldn't "early" start earlier? Where is the point when something ends and something else starts? Btw, what's "something," and what's "something else?" Etc.

Justin's picture just hints at how sound behaves in a room, and also explains what's called what. Now, what you make of that and where you set the borders, that's entirely up to you.


----------



## bryla (Jul 27, 2010)

People should really read Mike Novy's book on this


----------



## Peter Emanuel Roos (Jul 27, 2010)

I wish I had the time to write a primer (eBook?) on psycho-acoustics. I have a degree in perception research / psychophysics - it would be great if I could manage to write an intro to this stuff for musicians, composers and mixers.

And yes, there are some important "timing values" involved in how our hearing works: the Haas effect, comb filtering in the ERs in the first 200 msec, how our brain uses info from the time differences between the left and right ear, AND how the shape of our ears (also sound "filters") is used in this processing... and so on. Great stuff IMO
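The comb-filtering effect mentioned above has simple arithmetic behind it: when the direct sound is summed with a single reflection delayed by tau seconds, cancellation notches appear at the frequencies where the reflection arrives half a cycle late. A minimal sketch, with an assumed 1 ms reflection delay:

```python
# Sketch: notch frequencies of the comb filter produced by summing the
# direct sound with one reflection delayed by tau seconds.
# Notches occur at f = (2k + 1) / (2 * tau), k = 0, 1, 2, ...
def notch_frequencies(tau_s: float, count: int = 4) -> list[float]:
    """First `count` cancellation frequencies for a delay of tau_s."""
    return [(2 * k + 1) / (2 * tau_s) for k in range(count)]


tau = 0.001  # 1 ms reflection delay (assumed for illustration)
print(notch_frequencies(tau))  # notches near 500, 1500, 2500, 3500 Hz
```

Shorter delays push the first notch higher in frequency, which is one reason very early reflections color the timbre rather than being heard as discrete echoes.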


----------



## wst3 (Jul 27, 2010)

Hi Peter,

Pretty cool field! Are you familiar with the work of the late (great) Carolyn A. "Puddie" Rodgers?


----------



## Nick Batzdorf (Jul 27, 2010)

> I believe ERs are from 50 msec to 150 msec



Again, it's going to depend on the size of the room!

150ms is a very large room.


----------



## Cinemascore (Jul 27, 2010)

bryla @ Tue Jul 27 said:


> People should really read Mike Novy's book on this


I couldn't agree more. ~o)


----------



## Peter Emanuel Roos (Jul 28, 2010)

Nick Batzdorf @ Tue Jul 27 said:


> > I believe ERs are from 50 msec to 150 msec
> 
> 
> 
> ...



Yes, if you consider them as first reflections only, but after the initial set of first reflections you can also expect echoes of those FRs! It's indeed a matter of definitions and the terms we use.


----------

