# Early Reflections for a Better Mockup?



## Frederick Russ (Feb 22, 2005)

I read with great interest Andy B's approach to positioning the brass dimensionally using various early reflections - something like early tight reflections, early loose reflections, and then of course the hall impulse. The result (when done right - I haven't tried it yet) is better dimensional positioning of some sections (closer/farther).

Seems like this could get somebody in trouble if overused or hastily done. Andy B is an exception to this, of course - so I wondered if others could share insights on making a better mockup through the use of early reflections?


----------



## Marsdy (Feb 22, 2005)

I'm willing to stand corrected, but I don't buy this early reflection/pre-delay business. These are parameters that digital reverb designers use to fool listeners into thinking they are hearing a real acoustic space, like the Lexicon "Spin" parameter that modulates the reverb tail. They are not real-world phenomena, and they're very different from using different convolution reverb instances to build a mix, which I think is what Andy B is talking about.

Early reflections as they are usually manifest in digital reverbs, namely a cluster of discrete delays, don't exist in the real world. Reverberation in the real world is much more diffuse and complex than a cluster of distinct delays. Granted, the first few milliseconds of a reverb contains most of the information that defines the size, shape and construction of an acoustic space but early reflections just fake this. You may get standing waves in a room with parallel surfaces but even here, you are hearing something much more diffuse and complex than discrete delay clusters.

This is one reason why good IRs can sound more real-world than digital reverbs, IMO - you don't get these fake early reflections. That's not to say digital reverbs can't sound fantastic; I'm as much a Lex/TC fan as anyone.

Pre-delay is also something that you don't really experience in the real world. It may take a finite amount of time for a sound to bounce off the walls and ceiling before it gets to the listener - which is the phenomenon pre-delay is supposed to fake - but rooms also have floors! Most of the time you will hear reverb almost instantaneously with the direct sound, assuming the source of that sound isn't suspended in mid air!

To reproduce distance, both the direct sound and the reverb need to be delayed, and this isn't what pre-delay in a digital reverb does. The distance between the front and back of the orchestra is not that great in terms of the speed of sound. In my experience, this distance is too small for delay tricks to be really effective in faking distance; in the end they simply clutter up the mix. Again, with pre-delay we are talking about a parameter designed to fake real-world phenomena.
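Delaying both the direct sound and the reverb by the source's travel time - as opposed to pre-delay, which shifts only the reverb - can be sketched in a few lines of Python/numpy (illustrative helper names; 44.1 kHz assumed):

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s at roughly 20 degrees C
SAMPLE_RATE = 44100

def distance_delay_samples(distance_m, sr=SAMPLE_RATE):
    """Samples of delay a listener hears for a source `distance_m` away."""
    return int(round(distance_m / SPEED_OF_SOUND * sr))

def place_at_distance(direct, wet, distance_m, sr=SAMPLE_RATE):
    """Delay BOTH the direct and wet signals by the source's distance,
    rather than delaying only the reverb (which is what pre-delay does)."""
    d = distance_delay_samples(distance_m, sr)
    pad = np.zeros(d)
    return np.concatenate([pad, direct]), np.concatenate([pad, wet])

# Back of the orchestra (~15 m) vs front (~3 m): the 12 m difference is
# only ~0.035 s, i.e. ~1543 samples at 44.1 kHz - a very small offset,
# which is Marsdy's point about delay tricks.
```

The tiny numbers involved illustrate why the effect is subtle at orchestral depths.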

In the 70s, engineers used tape delays to delay the feed to plate reverbs, and very nice it could sound too! (Dreamer by Supertramp has a lovely verb.) Faking that is where pre-delay is effective, but again, it's not as successful as what happens in the real world.

In a similar way, you could use different early reflection clusters from digital reverbs to pre-process audio before it gets to a convolution reverb. Again, I've not had much success doing this but I'd like to hear an example of it working in practice! It's certainly way down the list of things that make for a successful orchestral mockup IMO. I'm willing to be proven wrong of course.


----------



## lux (Feb 22, 2005)

well, after Dave it's difficult... this is my not-so-useful 1 cent

I have a very simple approach due to my non-engineering knowledge. 

I apply a simple dose of early reflections with the default Sonitus reverb in Sonar, and then a large ambience with about 60-70 ms of pre-delay (I owe Craig for that). Then I reduce the high rolloff to give the ambience a darker resonance, mostly due to my taste for dark, old-sounding recordings.

Then I apply some EQ, mostly by ear, keeping in mind that distant instruments have less low and high end.

Apart from Andy, whose results are great, the best spacing I've heard so far is in Simon Ravn's mixes, which sound close to perfect to my ears - the best result available in town, IMHO. If he's reading, it would be nice to hear some words from him.

Luca


----------



## Simon Ravn (Feb 22, 2005)

Lux - thanks a lot. Reading your procedure makes me dizzy. I don't bother doing any separate early reflections, EQs on those, separate verb, EQ on that etc etc. I generally apply an impulse (I recently started using impulses instead of my HW reverb) to each section of the orchestra, trying to keep to a maximum of 16 audio tracks when I have choir, solo voices, different percussion etc.

If I do a "bells and whistles mix" I do violins+violas, cellos+basses, VSL strings, VSL woodwinds, other woodwinds, trumpets, horns, trombones+tuba, percussion, choir - and maybe divide percussion into two if they are too different in recording style, like VSL and True Strike. I EQ each of those tracks separately and send them to 3-4 busses which each have an impulse reverb. Same impulse, just different levels of wet-dry ratio. Then I bounce that. Final step is to master it in Wavelab, where I apply some final EQ and a little compression maybe.

I don't want to spend more time than I already do mixing the pieces, so this is the "worst case scenario". Sometimes I just do all strings in one go, all brass too etc., if I don't think dividing it further will add much to the final mix.
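Simon's routing - the same impulse on each bus, with only the wet/dry balance varying - can be sketched roughly like this (toy IR and levels, purely illustrative; his actual busses live in a sequencer, not code):

```python
import numpy as np

def convolve_ir(dry, ir):
    """Full linear convolution of a dry signal with an impulse response."""
    return np.convolve(dry, ir)

def bus(dry, ir, wet_level):
    """One reverb bus: the same impulse everywhere, only the wet/dry
    balance differs per bus (0.0 = bone dry, 1.0 = fully wet)."""
    wet = convolve_ir(dry, ir)
    out = wet_level * wet
    out[: len(dry)] += (1.0 - wet_level) * dry  # dry portion, time-aligned
    return out

# Hypothetical section feeds: drier brass up front, wetter strings behind.
ir = np.array([1.0, 0.5, 0.25, 0.125])   # stand-in for a sampled hall IR
brass = bus(np.random.randn(1000), ir, wet_level=0.2)
strings = bus(np.random.randn(1000), ir, wet_level=0.5)
mix = brass + strings
```

Because every bus shares one IR, the sections sit in the same "room" and only their apparent depth differs.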


----------



## handz (Feb 23, 2005)

Simon: Thanks for sharing your mixing process with us! What reverb are you using?


----------



## lux (Feb 23, 2005)

Simon Ravn said:


> Lux - thanks a lot. Reading your procedure makes me dizzy



hehe... it has the same effect on me :? ... though I can't do more with my knowledge...



> I don't bother doing any separate early reflections, EQs on those, separate verb, EQ on that etc etc. I generally apply an impulse (I recently started using impulses instead of my HW reverb) to each section of the orchestra, trying to keep to a maximum of 16 audio tracks when I have choir, solo voices, different percussion etc.
> 
> If I do a "bells and whistles mix" I do violins+violas, cellos+basses, VSL strings, VSL woodwinds, other woodwinds, trumpets, horns, trombones+tuba, percussion, choir - and maybe divide percussion into two if they are too different in recording style, like VSL and True Strike. I EQ each of those tracks separately and send them to 3-4 busses which each have an impulse reverb. Same impulse, just different levels of wet-dry ratio. Then I bounce that. Final step is to master it in Wavelab, where I apply some final EQ and a little compression maybe.
> 
> I don't want to spend more time than I already do mixing the pieces, so this is the "worst case scenario". Sometimes I just do all strings in one go, all brass too etc., if I don't think dividing it further will add much to the final mix.



Thanks for sharing your procedure  

Luca


----------



## Nick Batzdorf (Feb 23, 2005)

> Early reflections as they are usually manifest in digital reverbs, namely a cluster of discrete delays, don't exist in the real world. Reverberation in the real world is much more diffuse and complex than a cluster of distinct delays.



My understanding is that there actually are discrete clusters of early reflections in the real world! It took a lot of research to get digital reverbs to sound as realistic as they do - which is often really good, but not as realistic as convolution reverbs.

What I wonder is whether impulse responses carry the early reflection information, i.e. whether it's inherent in convolution reverbs. To be honest, I've been trying to understand the process for a long time, and nobody has really been able to explain to me how you can compress what happens over time into a theoretically infinitely small point (which is what an impulse response is).

It's not hard to understand how the impulse shows how the sound got from "there to here," but not how long it took to evolve at different stages along the way.


----------



## Hans Adamson (Feb 23, 2005)

Nick,

That was a very good description of what I have also been wondering about. I don't think an impulse response can describe what happens over time. That's one of the problems I have with impulse responses replacing pedal-down samples in pianos, for instance.


----------



## Marsdy (Feb 24, 2005)

Nick Batzdorf said:


> > Early reflections as they are usually manifest in digital reverbs, namely a cluster of discrete delays, don't exist in the real world. Reverberation in the real world is much more diffuse and complex than a cluster of distinct delays.
> 
> 
> 
> My understanding is that there actually are discrete clusters of early reflections in the real world! It took a lot of research to get digital reverbs to sound as realistic as they do - which is often really good, but not as realistic as convolution reverbs.



Those real world early reflections, or whatever you want to call them, are a lot more dense and complex than the simple delay clusters most digital reverbs generate. Good IRs sound more real world precisely because they DO capture this initial complexity and density. At least that's my theory. 

There's this notion from digital reverb design that you get an initial build up of early reflections that decay into the tail and you balance one against the other. What I'm saying is that this is a means digital reverb designers use to fake reverberation rather than an accurate reproduction of what happens in the real world.


----------



## ComposerDude (Feb 24, 2005)

Nick Batzdorf said:


> What I wonder is whether impulse responses carry the early reflection information, i.e. whether it's inherent in convolution reverbs. To be honest, I've been trying to understand the process for a long time, and nobody has really been able to explain to me how you can compress what happens over time into a theoretically infinitely small point (which is what an impulse response is).
> 
> It's not hard to understand how the impulse shows how the sound got from "there to here," but not how long it took to evolve at different stages along the way.



Nick, here's my understanding of the process (and I welcome correction by anyone who might explain it better, like Peter Roos):

The _impulse_ is the infinitely small point, but the _response_ is what goes on and on and on -- the room's _response_ to that impulse. If you take a dry audio track of a single-sample spike and you convolve that with the impulse response, the resulting "tail" should look just like the impulse response!

However, the _amplitude_ of that tail will be different from the normalized impulse response, since the tail of the impulse response is _*scaled* by the height of your dry single-sample spike_.

Now extend this in the following thought-experiment: We're going to look at the result for consecutive "dry track" audio samples, one after the other, but in very slow motion.

The first dry sample at time "0" (t0), when convolved with the impulse response, will excite that response hereafter (call this the t0 tail), scaled by the amplitude of the t0 sample. This scaled impulse response tail will be "played out" to the listener over time.

The next dry sample at time "1" (t1), when convolved with the impulse response, effectively produces _another copy_ of the impulse response that's _now scaled to the amplitude of the t1 sample_. This t1 tail gets _summed with the t0 tail -- you're adding two scaled impulse responses -- AND the t1 tail begins one sample later_.

Continue this for t2: the t2 tail is the usual impulse response scaled now for the t2 sample, and the t2 tail gets summed with the t1 and t0 tails, with the t2 tail starting TWO samples after t0, and so on.
 
The immense amount of math (a 2-second reverb tail at 48 kHz sampling encompasses 96,000 such overlapping layers of sound in a "sliding window" of time) is why convolution reverbs weren't widespread until CPUs got fairly powerful. It's cool that Altiverb can do in software what other manufacturers needed dedicated hardware (e.g. the 777 reverb) to perform. Obviously clever algorithms, whatever Altiverb's doing...

Since what you hear in our hypothetical 2-second reverb tail is an overlapping shifted-by-one-sample addition of 96,000 scaled impulse responses, and since the impulse response is simply "how the room reacts to an impulse", therefore diffuse or discrete early reflections, reverb, etc. are all present and accounted for.
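The summation described above can be checked directly with a few lines of numpy (a toy four-sample "room", not a real IR):

```python
import numpy as np

ir = np.array([1.0, 0.6, 0.3, 0.1])      # toy "room response" (4 samples)

# A single-sample spike of amplitude 0.5: the output is just the IR
# scaled by 0.5 -- the "t0 tail" in the description above.
spike = np.array([0.5])
assert np.allclose(np.convolve(spike, ir), 0.5 * ir)

# A two-sample signal: the result is the t0 tail plus a t1 tail that
# starts one sample later -- exactly the overlapping sum described above.
x = np.array([0.5, -0.2])
t0_tail = 0.5 * np.concatenate([ir, [0.0]])    # starts at sample 0
t1_tail = -0.2 * np.concatenate([[0.0], ir])   # starts at sample 1
assert np.allclose(np.convolve(x, ir), t0_tail + t1_tail)
```

Extend the same pattern to 96,000 samples and you have the 2-second tail from the thought-experiment.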

-Peter


----------



## Nick Batzdorf (Feb 24, 2005)

> Those real world early reflections, or whatever you want to call them, are a lot more dense and complex than the simple delay clusters most digital reverbs generate.



That makes some sense. One of the things that separates really good digital reverbs from okay ones is how they sound with small spaces, in which the delays have to build up really quickly. My hunch is that it's the complexity rather than the density of the early reflections that isn't 100% right in fake reverbs.

But fake reverbs have a use too, as Peter Roos' demos prove.


----------



## Nick Batzdorf (Feb 24, 2005)

Peter, thanks very much. I was thinking backwards - the impulse vs. impulse response.


----------



## Hans Adamson (Feb 25, 2005)

Here's a question for you all:

Can an impulse response describe an ongoing, time-dependent effect? (I'm not talking about early reflections here, just the possible limitations of impulse response technology.) How would you go about sampling a Leslie cabinet, for a rotating speaker effect?

Is that possible?


----------



## Nick Batzdorf (Feb 25, 2005)

Altiverb can use sine wave sweeps instead of an impulse. I don't remember what the length limit is - either imposed by the software or by the limits of today's hardware - but the sweeps are long enough to capture a few seconds of a Leslie (plus whatever else is in the signal chain, including the room).

That should work.
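Altiverb's exact sweep-to-impulse program isn't documented here, but one standard deconvolution approach - dividing the recorded sweep's spectrum by the original sweep's spectrum - can be sketched as follows (toy sample rate and "room"; real measurements use long logarithmic sweeps and careful regularization):

```python
import numpy as np

def ir_from_sweep(recorded, sweep, eps=1e-12):
    """Recover an impulse response from a sine-sweep measurement by
    frequency-domain division (one common deconvolution method;
    not necessarily what Altiverb does internally)."""
    n = len(recorded) + len(sweep) - 1
    R = np.fft.rfft(recorded, n)
    S = np.fft.rfft(sweep, n)
    # Regularized division avoids blowing up where the sweep has no energy
    H = R * np.conj(S) / (np.abs(S) ** 2 + eps)
    return np.fft.irfft(H, n)

# Sanity check: pass a sweep through a known toy "room" and recover it.
sr = 8000
t = np.arange(sr) / sr
sweep = np.sin(2 * np.pi * 2000 * t ** 2)   # linear chirp, 0 Hz -> Nyquist
room = np.array([1.0, 0.0, 0.4, 0.0, 0.1])  # toy 5-tap impulse response
recorded = np.convolve(sweep, room)
est = ir_from_sweep(recorded, sweep)
```

The recovered `est` matches `room` closely because the chirp covers the whole band; a sweep that misses part of the spectrum (or a system that modulates, like a Leslie) breaks this recovery.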


----------



## Hans Adamson (Feb 25, 2005)

Peter,

I wasn't comparing emulating a Leslie effect with simulating a room reverb. I was comparing it to using impulses to emulate pedal-down in a piano, which has components that evolve over time, just like a Leslie.


----------



## Nick Batzdorf (Feb 26, 2005)

Altiverb comes with a program that transforms the sweeps into an impulse - whatever its magic - and turns your sample into something you can load into Altiverb.

There's a soundboard impulse in Giga 3, in fact the new GigaPiano uses it instead of pedal-down samples. That aspect of the piano works very well; the actual sound is, shall we say, highly specific.

I do know that impulse responses get the decay right, but that would just be a matter of knowing the RT60. The thing about pianos is that you can hold a note and hear a symphony while it decays - as you say. I don't think that's going to happen.


----------



## Marsdy (Feb 26, 2005)

I was under the impression that you couldn't make successful IRs from any sounds that have a modulating component - for example, chorus, flanging, or a Leslie for that matter. That's why you are supposed to turn off the Spin parameter (which modulates the reverb tail) when you sample a Lexicon. No idea why, though. You can sample EQs or the colouration of an old compressor or mic pre, apparently.
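The likely reason a modulating effect can't be sampled: convolution with a fixed IR assumes a linear, time-invariant system, and modulation breaks time-invariance. A shift test makes the distinction concrete (hypothetical toy systems, sketched in Python):

```python
import numpy as np

rng = np.random.default_rng(0)

def fir_room(x, ir=np.array([1.0, 0.5, 0.2])):
    """Time-invariant system: a fixed FIR 'room'. Fully capturable by an IR."""
    return np.convolve(x, ir)[: len(x)]

def tremolo(x, rate=5.0, sr=1000):
    """Time-varying system: amplitude modulation, like a Leslie's level
    wobble. NOT capturable by a single impulse response."""
    t = np.arange(len(x)) / sr
    return x * (1.0 + 0.5 * np.sin(2 * np.pi * rate * t))

def is_time_invariant(system, n=256, shift=37):
    """Shift test: delaying the input must simply delay the output."""
    x = rng.standard_normal(n)
    shifted_in = np.concatenate([np.zeros(shift), x])
    a = system(shifted_in)[shift : shift + n]   # response to delayed input
    b = system(x)[:n]                           # delayed response
    return np.allclose(a, b)

print(is_time_invariant(fir_room))  # the FIR room passes the test
print(is_time_invariant(tremolo))   # the modulated system fails it
```

The tremolo fails because the modulation envelope keeps moving regardless of when the input arrives - so no single snapshot (impulse response) can describe it.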

Going off on a tangent here but I wonder if anyone has ever tried to make IRs of the various human orifices. It would certainly be interesting looking at the pictures and Quicktime VR movies of the IR in Altiverb.


----------



## ComposerDude (Feb 26, 2005)

Marsdy, thanks for mentioning "modulating component" -- that's an excellent distinction of what we can and cannot model by impulse responses...

<edit: "cutting to the chase" re my earlier pedantic post...> Hans and Nick, I agree, the piano involves resonance phenomena which are not properly emulated by impulse responses.

-Peter


----------



## Rob Elliott (Mar 4, 2005)

Simon Ravn said:


> Lux - thanks a lot. Reading your procedure makes me dizzy. I don't bother doing any separate early reflections, EQs on those, separate verb, EQ on that etc etc. I generally apply an impulse (I recently started using impulses instead of my HW reverb) to each section of the orchestra, trying to keep to a maximum of 16 audio tracks when I have choir, solo voices, different percussion etc.
> 
> If I do a "bells and whistles mix" I do violins+violas, cellos+basses, VSL strings, VSL woodwinds, other woodwinds, trumpets, horns, trombones+tuba, percussion, choir - and maybe divide percussion into two if they are too different in recording style, like VSL and True Strike. I EQ each of those tracks separately and send them to 3-4 busses which each have an impulse reverb. Same impulse, just different levels of wet-dry ratio. Then I bounce that. Final step is to master it in Wavelab, where I apply some final EQ and a little compression maybe.
> 
> I don't want to spend more time than I already do mixing the pieces, so this is the "worst case scenario". Sometimes I just do all strings in one go, all brass too etc., if I don't think dividing it further will add much to the final mix.




Thanks Simon - very practical (and everyday usable) ideas in here. Right now I have just been processing whole sections at a time, but I like the idea of splitting up the strings as you do, and maybe the percussion. Thanks again.

Rob


----------



## Frederick Russ (Mar 5, 2005)

Marsdy said:


> Going off on a tangent here but I wonder if anyone has ever tried to make IRs of the various human orifices. It would certainly be interesting looking at the pictures and Quicktime VR movies of the IR in Altiverb.



LOL Marsdy - are you sure you're really not the dog in your pic?


----------



## Nick Batzdorf (Mar 5, 2005)

http://www.xs4all.nl/~fokkie/IR.htm

Among others, he has a sample of his own mouth.

Thank goodness he stopped there...


----------



## Marsdy (Mar 5, 2005)

WOOF!!!

Well he SAYS it was his mouth.... We just don't know for sure.

Some of the IRs that guy did were surprisingly useful - the bucket, for example. (I'm being serious here.)

... And on the subject of unusual IRs... how about the space between the ears of the owner/moderator of a certain other forum way less sophisticated than this one? That would be a nice 15-second cathedral-like verb if I ever heard one.


----------



## ComposerDude (Mar 5, 2005)




----------



## Nick Batzdorf (Mar 5, 2005)

One would think a large-diaphragm tube microphone would solve the problem of peristalsis. But you'd have to use a pop filter as well.

And a boom stand.


----------



## Nick Batzdorf (Mar 5, 2005)

Sideways, of course.


----------



## Marsdy (Mar 5, 2005)

At least a large-diaphragm tube microphone would be nice and warm.

...and talk about getting a crap sound


----------



## synergy543 (Mar 6, 2005)

Here is an interesting example of how a "box" (in this case a Lexicon 200) was used to closely emulate the ambience for an orchestral mockup that's surprisingly close to the original (except he removed the piano - which was the goal).

DPDan used GPO and GOS. Have a look and listen. He posts both the mockup and the original:

http://northernsounds.com/forum/showthr ... post268653


----------



## Nick Batzdorf (Mar 6, 2005)

Yeah, Marsdy, it would sound like ass.


----------



## KingIdiot (Mar 7, 2005)

I still believe that impulses aren't going to be the perfect option; only physical modeling will be able to best them, though.

ALL sounds, including real reverb, have modulating components - not directly over time in a fixed movement, but just about any real-world space also reacts differently to dynamics (volume). It's all subjective and may not be something that a lot of people notice, but it's (IMO) something that will ultimately make TRULY realistic room responses with JUST impulses impossible.

I love impulses, and I think there should be more available (and people like Peter are going out and helping this become a reality); I just don't think the technology is perfect yet. Until CPUs get 20 times more powerful, it's going to take someone like Aleksei from Voxengo to come up with something that could do this without dynamic convolution (which is killer for amplitude-based modeling/convolution but not going to be good for time-based stuff).

I've got some thoughts on how this can be done; it will just take a lot of research.


And then beyond all that, there's that "magic" sound in the room when players get the perfect take... I think people need to have speakers and a sweep tone ready on hand at all times... maybe you can capture the "magic" in an impulse if you sample the room right after one of those goosebump-giving moments.


I mean...it could be about heat


----------



## KingIdiot (Mar 8, 2005)

Why the hell is it that when I type something out and say it's impossible, or that someone else has to come up with an idea to do something, I take some time away from it, and then randomly something pops into my head on how to do it myself...

I was just thinking of impulsing some speaker cabinet and mic position variations, and was thinking of how a cabinet reacts differently to different volumes, and then just thought of something to try to get a somewhat dynamic response out of it using multiple impulses.

gargh, I hate my brain...

I'm one of those idea people, I HATE doing the work... garghhh...


----------



## Frederick Russ (Mar 8, 2005)

KingIdiot said:


> I was just thinking of impulsing some speaker cabinet and mic position variations, and was thinking of how a cabinet reacts differently to different volumes, and then just thought of something to try to get a somewhat dynamic response out of it using multiple impulses.
> 
> gargh, I hate my brain...
> 
> I'm one of those idea people, I HATE doing the work....garghhh..



...because it means you've got to try it while the rest of us wait with bated breath so we can too  Seriously, I'm interested to see where this one goes. Great idea!


----------



## Marsdy (Mar 8, 2005)

KI

If you get into speaker modelling or amp distortion modelling for that matter, what would be the advantage of using multiple IRs over the way Line 6 does it in the Pod? 

I guess you're right though, there is still a long way to go, and "dynamic convolution" sounds cool! Maybe this would be a way around the fact that a solo oboe excites a room in a very different way to an 8-piece horn section, for example, and current IR technology doesn't really allow for this. And there is still no really convincing way of "moving" the mic closer to or further from the source.

Also I'm still yet to hear an IR of a room that sounds remotely like hearing an overhead/ambience pair of mics over a drum kit. It always sounds like a reverb return!


----------



## KingIdiot (Mar 8, 2005)

The Pod is a killer unit, but it uses modeling, not convolution, to get its effect.

I don't think there'll be any benefit over the Pod really, but, well, there are some thoughts in my head, and an idea I've had for something I've been doing bloomed into an idea for a bit of "light" dynamic convolution.

I doubt it would be all that great for high-gain stage effects like guitar distortion, but for tube warmth and similar stuff - with subjective sound characteristics that change with amplitude (and some that change at peak volumes) - it might prove useful.

It might also be a proof of concept for some dynamic reverb options that aren't as CPU-intensive as 256-step dynamic convolution - but still based on convolution, not pure modeling or digital recreation.

BTW, this isn't based on some simple crossfading idea - you'd need more than that for reverb anyway.
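For context, the published "dynamic convolution" idea referenced in this thread (as used in dedicated hardware of the era) captures IRs at many drive levels and, for each input sample, applies the IR measured nearest that sample's amplitude. A deliberately naive sketch - toy IRs, hard-switching rather than the interpolation a real implementation would use, and not KingIdiot's undisclosed idea:

```python
import numpy as np

def dynamic_convolve(x, irs, levels):
    """Naive dynamic convolution: for each input sample, pick the IR
    measured at the nearest drive level and add its scaled, shifted
    copy to the output. O(N * len(ir)) per sample -- the CPU cost
    mentioned above is real.

    irs:    list of equal-length impulse responses, one per level
    levels: the amplitudes at which each IR was captured
    """
    levels = np.asarray(levels)
    ir_len = len(irs[0])
    out = np.zeros(len(x) + ir_len - 1)
    for n, sample in enumerate(x):
        idx = int(np.argmin(np.abs(levels - abs(sample))))
        out[n : n + ir_len] += sample * irs[idx]
    return out

# Toy nonlinear device: quiet hits see a dull response, loud hits a bright one
quiet_ir = np.array([0.5, 0.4, 0.3])
loud_ir = np.array([1.0, 0.2, 0.05])
y = dynamic_convolve(np.array([0.1, 0.9]), [quiet_ir, loud_ir], [0.1, 0.9])
```

The per-sample IR switch is what makes the technique level-dependent - and what makes it so much heavier than ordinary convolution.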


----------

