# Ultimate early reflections question .....



## Silence-is-Golden (Oct 19, 2015)

I have read several posts about using early reflections as one of the methods to place instruments, or groups of instruments (violins 1 and 2, violas, woodwinds, etc.), in a "concert hall".
The idea is that early reflections let you place them further front or back on this virtual stage, while panning places them left or right. Simple stuff (?!)

However... it obviously isn't that easy, so I hope that once again some of you are willing to share some of your setup secrets to help me on the way.
(Or some may say: make your life easy and spend some money on VSS2, the Ircam tools, B2 or the like.)

I have seen some tutorials as well, like Blakus's setup videos, which are quite helpful. But maybe I'm missing the specifics or the foundational knowledge to get what I need out of them. And hopefully without investing in more software or equipment. I have also read and tried out Beat Kaufmann's posts in some VI-Control threads, and Guy Rowland's "LASS fumble through" also showed some setup regarding ERs.

I use Logic X's Space Designer (convolution) with the Bricasti M7 presets as the overall concert hall reverb, and I am currently using the ToneBoosters reverb (algorithmic) for the early reflections. It has a nice preset which I tweak to taste.

Here is my list of questions:
- What do you actually do to place instruments further back? Increase the wet level of the ER reverb? Increase the ER level within the reverb? Use a pre-fader send so the channel fader can control the direct level as well?
- Should I use one ER setup, or several with differing room dimensions?
- Further away also means air absorption of the higher frequencies. Does that mean gently EQ-ing down the highs the further back an instrument sits? And at what frequency do you start?

Probably I'm still missing some things, but I am open-minded to anyone's advice and knowledge regarding this.

Thank you all in advance!


----------



## ZeeCount (Oct 19, 2015)

There are a whole bunch of different factors that control the perceived distance of a sound:

- the time difference between the direct sound and its first reflections (the smaller the gap, the further away the sound appears);
- the level of the reflections compared to the source (the energy of a sound wave drops with the square of the distance travelled);
- the stereo width of the signal (wider sounds are closer, narrower ones are further away);
- the amount of low frequencies (the proximity effect);
- the amount of high frequencies (air absorption is frequency-dependent and affects higher frequencies more than lower ones).

I'm actually playing with all of this myself. I've made a simple mathematical model of a 2D room and have calculated the delay times for what I call the first- and second-order reflections, as shown in this picture: https://gyazo.com/165788203bc628cdb1ff8c3eb9de45e6. I'm experimenting with mono delays for these reflections to see what results I can get.
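
For anyone who wants to try the same thing, the core of such a model is the image-source trick: mirror the source across each wall and measure the extra path length. Here's a minimal Python sketch (room size, source and listener positions are made-up example values, and wall absorption is ignored entirely):

```python
# First-order reflection delays in a simple 2D "shoebox" room,
# via the image-source method. All dimensions are illustrative.
import math

SPEED_OF_SOUND = 343.0  # m/s at roughly 20 degrees C

def first_order_delays(width, depth, source, listener):
    """Return reflection arrival times (ms) relative to the direct sound.

    Each wall reflection is modelled by mirroring the source across that
    wall and measuring the straight-line path from the image to the
    listener.
    """
    sx, sy = source
    direct = math.dist(source, listener)
    images = [
        (-sx, sy),              # left wall  (x = 0)
        (2 * width - sx, sy),   # right wall (x = width)
        (sx, -sy),              # front wall (y = 0)
        (sx, 2 * depth - sy),   # rear wall  (y = depth)
    ]
    delays = []
    for image in images:
        path = math.dist(image, listener)
        delays.append((path - direct) / SPEED_OF_SOUND * 1000.0)
    return sorted(delays)

# Example: a 20 m x 30 m hall, instrument mid-stage, listener near the front.
print(first_order_delays(20.0, 30.0, source=(10.0, 20.0), listener=(10.0, 5.0)))
```

Each delay could then drive one of the mono delay lines mentioned above; second-order reflections work the same way, using images of the images.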

As for your second question, ER setups and all that really depend on what you are trying to do. If you want as realistic a result as possible, you should have ERs for every instrument you are trying to spatialise. Whether this gives you a more _musical_ result, however, is up for debate.

Personally, in my current template I only use spatialisation on the Sample Modeling brass, which I run through B2 (thanks blakus). I then just run everything into another instance of B2, which provides a reverb tail for the whole mix. In terms of rolling off the high end, for Sample Modeling I usually apply a low-pass starting at around 4 kHz, as I have them sitting quite far back; I do this inside B2, as I have it set to 100% wet.

Hope this helps.


----------



## muk (Oct 19, 2015)

It is a big topic, and you can spend unhealthy amounts of money on plugins that help you achieve what you want. That being said, there are several factors that influence our perception of the distance of an instrument: air absorption (the further away, the more the high frequencies get damped), volume (the further away, the lower the volume), reverb (the further away, the more reverb), stereo width (the further away, the narrower), predelay (the further away, usually the less predelay for the early reflections)... You can try to simulate all of this with various plugins, but it will take some time and a lot of experimenting.
Check out Tokyo Dawn Labs' plugin 'Proximity'. It is completely free, and it lets you simulate the distance of an instrument easily with a few sliders and knobs. Basically it takes the parameters I listed above into account and manipulates them automatically according to the distance setting you choose.


----------



## chrysshawk (Oct 19, 2015)

I second what muk says above. At the end of the day, I have chosen to treat every virtual instrument separately, based on how it sounds with the mics used to record it. As such, I have come to believe there is no "one size fits all" approach to this, although I have to say I use VSS sparingly, since it does compromise the purity of the instrument as it was originally recorded.

My process typically involves A/B-ing all the various ways of setting up a particular instrument, then choosing the one I like best and sticking with it. This goes for all instruments/sections and any new instruments I add to the template. It is tedious for sure, but a lot of ear training and skill gets developed along the way.


----------



## Silence-is-Golden (Oct 19, 2015)

Thank you Zeecount, Muk and Chrysshawk.
So indeed this is quite an area to dive into. That's the reason it didn't seem easy to achieve.

@ZeeCount: B2 seems a viable option given the good results you and Blakus get... however, it is also a CPU-hungry plugin.
Stereo vs mono imagery is indeed also an approach to explore.

@muk: thank you for the predelay reference. These are things I don't yet understand but will look into further (I thought it would be the opposite... my confusion and lack of knowledge).
I have used Tokyo Dawn's Proximity, but for some reason it seems to me like it simply reduces volume (or decibels, as I believe it refers to).
It doesn't seem to add any reflections or distance effect as such. Do you have different experiences with setting it up? You make it sound so obvious. I will try again and see if it does something different in my new combination of reverbs.

@chrysshawk: are you referring to actual instruments recorded or virtual instruments regarding the VSS compromising effect of the sound?


----------



## muk (Oct 19, 2015)

The predelay is the time it takes a sound to travel to the nearest wall, and from there to the ear of the listener. Instruments further back are _closer_ to the rear wall than the instruments up front. Therefore it takes _less_ time for their sound to reach the rear wall, and hence the predelay should be lower. I know it's not really intuitive, but once you get the logic it should be clear.
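
To put some made-up numbers on that, here's a tiny one-dimensional Python sketch (listener at the front, rear wall 20 m away; all positions are illustrative, not measurements of any real hall):

```python
# Gap between the direct sound and the rear-wall reflection, for an
# instrument somewhere between the listener and the rear wall.
SPEED_OF_SOUND = 343.0  # m/s

def predelay_ms(instrument_pos, rear_wall=20.0):
    """Time gap (ms) between the direct sound and the rear-wall
    reflection, with the listener at position 0."""
    direct = instrument_pos
    reflected = (rear_wall - instrument_pos) + rear_wall
    return (reflected - direct) / SPEED_OF_SOUND * 1000.0

front_desk = predelay_ms(3.0)   # instrument near the front of the stage
back_row = predelay_ms(17.0)    # instrument near the rear wall

print(round(front_desk, 1), round(back_row, 1))
```

The instrument near the rear wall ends up with a far smaller gap, which is why the ER predelay should go down as you push an instrument back.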

Tokyo Dawn Labs' 'Proximity' can add some early reflections, but you will definitely have to support it with your own reverb. It can take care of volume, stereo width, air absorption, and delay for you. Experiment with each parameter on its own to see what it does, and whether you like it or not. But as its name suggests, it only helps with the proximity of the sound. It doesn't place the sound on a stage or in a room. I thought it could help, as you mentioned specifically that you have some difficulty pushing sounds further back. For that, Proximity is very good.
So the workflow would be something like this: you have placed an instrument on the stage horizontally with panning and reverb (early reflections). Now you want to push it further back. Simply insert Proximity before the reverb and set the distance. Maybe you'll want to adjust your reverb a bit after that (the further away, the more reverb).


----------



## ControlCentral (Oct 19, 2015)

chrysshawk said:


> ... I have to say I use VSS sparingly since it does compromise the purity of the instrument the way it was initially recorded...


I demoed VSS v1 and liked the concept, but I can understand your misgivings; it sounds like you would not stray from your manual approach. But I'm curious if anyone has tried the VSS v2 update? On the surface it seems like a fair-sized overhaul. They make it seem like the poor man's MIR...


----------



## Silence-is-Golden (Oct 19, 2015)

Terrific! Thank you muk.
Good instructions, which I will use and try out with Proximity and my ER reverb setup. And helpful knowledge regarding predelay.

New practice grounds already available...


----------



## KEnK (Oct 19, 2015)

Silence-is-Golden said:


> ...the pre-delay reference. These are things I don't yet understand but will look into further (I thought it would be the opposite... my confusion and lack of knowledge)...


Here's an incredibly clear vid demonstrating how to set this up.
It came out of a lengthy thread here at VI-Control.
Made by forumite Carl Ruessmann.



Also check out Beat Kaufmann's tuts on the subject.
He's made some very clear ones as well.


----------



## Silence-is-Golden (Oct 19, 2015)

At this moment I am hungry for knowledge, so thank you for offering this one.
I will have another look at Beat Kaufmann; I haven't found the tuts yet.

Thank you KEnK


----------



## Nick Batzdorf (Oct 19, 2015)

All of the above, plus another important factor with simulated ambience in general: clarity. If you run a booming orchestral bass drum and the strings through the same processor, there's a good chance you're going to get mud regardless of all else - whether it's ER or reverb. Then you have accurately-placed mush, which probably isn't the goal.

It's perfectly legal to suspend disbelief in recordings. As long as *some* elements are in plausible-sounding spaces, it still sounds real when you have all kinds of impossible things going on. That's especially true in pop production, where you always have lots of reverbs going on - it's unlikely you'd have the snare and lead vocal going through the same reverb - but even with orchestral music you can get away with some funny stuff.

Having said that, VSL's MIR is amazing. It uses individual impulse responses for each instrument.


----------



## chrysshawk (Oct 19, 2015)

The Mixing with Jake Jackson course covers how he does spatial placement. Virtuosity by Mike Verta also offers some approaches to it. I'd recommend both.

It is VSS2 I use, by the way. Yet it's usually a simple enough question: does it sound good with nothing but reverb? Does it sound good in VSS2? Will panning and/or the delay trick improve things? A/B until happiness ensues.

But the worst is when the signal is so treated with artificial ERs, reverbs, EQ-ing, and panning that the limited life in a sampled instrument is drowned in muddy spatial tools. They are, after all, the frame, not the picture.


----------



## germancomponist (Oct 19, 2015)

Nick Batzdorf said:


> Having said that, VSL's MIR is amazing. It uses individual impulse responses for each instrument.


And this is exactly how it works in the real world. Any room reacts differently to different frequencies, tones and volume levels...


----------



## Rasmus Hartvig (Oct 19, 2015)

This is a topic that I too have spent unhealthy amounts of time pondering, tweaking and A/B-ing.
There's a lot of good advice on this topic here (and in other threads), but I would add that a poorly balanced template is a FAR more obvious giveaway in a mockup than spatial placement is. So be sure you have the absolute best template balance you can get before throwing days and weeks of work at spatial stuff.
Also, a well-balanced template doesn't need a lot of advanced reverb techniques to sound realistic (very dry instruments are a separate case, and of course need more love).


----------



## Assa (Oct 19, 2015)

Rasmus Hartvig said:


> This is a topic that I too have spent unhealthy amounts of time pondering, tweaking and A/B-ing.
> There's a lot of good advice on this topic here (and in other threads), but I would add that a poorly balanced template is a FAR more obvious giveaway in a mockup than spatial placement is. So be sure you have the absolute best template balance you can get before throwing days and weeks of work at spatial stuff.
> Also, a well-balanced template doesn't need a lot of advanced reverb techniques to sound realistic (very dry instruments are a separate case, and of course need more love).



Very good advice, I absolutely second that!


----------



## Silence-is-Golden (Oct 19, 2015)

OK, from the last posts I understand that there is no need to overdo it, but to use one's ears and senses to find out what works and use that.
Although I am intrigued by the "illusions" that can be created, the goal indeed remains to create the illusion of a real orchestra (and/or extra instruments). That's why plugins like VSS2, MIR Pro or B2 may be well suited after all.

Nevertheless I am learning something regarding spatial placement with the idea of making it sound as real as possible. And as suggested, there is still some digging to do in the recommended tutorials.

Template balancing is another one I am working on, so thank you all so far. Terrific stuff!

PS: more considerations still welcome; as said, I am in learning mode with this.


----------



## ZeeCount (Oct 19, 2015)

Silence-is-Golden said:


> @ZeeCount: B2 seems a viable option given the good results you and Blakus get... however, it is also a CPU-hungry plugin.
> Stereo vs mono imagery is indeed also an approach to explore.



Yeah, B2 has some pretty crazy resource requirements when you run it using the extreme mode for early reflections. This is why I'm trying out delays: currently, with everything else going on in my template, I only have enough DSP power to run a single instance of B2 for all of my brass.

Have a look at the Oculus Audio SDK, as it comes with a spatialisation plugin. It has an early reflection algorithm built in that lets you specify the distance to the walls as well as how reflective they are.


----------



## KEnK (Oct 19, 2015)

Rasmus Hartvig said:


> a well balanced template doesn't need a lot of advanced reverb techniques to sound realistic


This would seem to apply only to the pseudo-orchestral thing.
Most people here are doing other things as well.
The principles being discussed here are quite universal.

k


----------



## Rasmus Hartvig (Oct 19, 2015)

@KEnK That is true. I only mentioned that point because the OP was specifically after a realistic concert hall sound.
Of course, for hybrid stuff and other things, anything goes.


----------



## muk (Oct 20, 2015)

ZeeCount said:


> Have a look at the Oculus Audio SDK, as it comes with a spatialisation plugin.



Unfortunately the Oculus Audio SDK is designed as a binaural plugin and will only work if you are listening on headphones. It is not suitable as an all-round positioning plugin.
About template balancing: here's a comprehensive guide by Thomas Bergersen that's well worth reading:

http://www.samplelogic.com/sequencingsamples.pdf


----------



## ZeeCount (Oct 20, 2015)

muk said:


> Unfortunately the Oculus Audio SDK is designed as a binaural plugin and will only work if you are listening on headphones. It is not suitable as an all-round positioning plugin.
> About template balancing: here's a comprehensive guide by Thomas Bergersen that's well worth reading:
> 
> http://www.samplelogic.com/sequencingsamples.pdf



That's only true for its stereo placement. It uses a room simulation to generate its early reflection patterns, and offers you the ability to specify the reflectivity of every surface in the room.

From the Oculus SDK website:

"The Audio SDK supports early reflections and late reverberations using a simple 'shoebox model,' consisting of a virtual room centered around the listener's head, with four parallel walls, a floor, and a ceiling at varying distances, each with its own distinct reflection coefficient."


----------



## muk (Oct 20, 2015)

Thanks for clearing that up. That means that you can use it to generate early reflections, but not for stage placement. As long as you leave everything centered it should be fine.


----------



## Silence-is-Golden (Oct 20, 2015)

Thomas Bergersen's writing on balance is interesting. So is the approach of creating a template that is almost balanced by default.
As I am already busy with this, it is good food for the template update.

So far I have tried out some combos of ERs and Proximity. Even though it works for creating distance and placement, the CPU meter rises gradually with each instance added. So I may need to replace them with other plugins. I will start with the native Logic X plugins as well, and see how far I get.

Carl's vid was indeed a clear help in how to work on spacing and placement.
Good stuff!
I am heading in the right direction so far, thanks again to all the useful input.


----------



## KEnK (Oct 20, 2015)

muk said:


> About template balancing: here's a comprehensive guide by Thomas Bergersen that's well worth reading
> 
> http://www.samplelogic.com/sequencingsamples.pdf



Anybody know where part 3 of this series can be found?

k


----------



## chimuelo (Oct 20, 2015)

Interesting that modern symphonic recordings go out of their way to avoid ERs.
Mics are placed around the symphonic hall and then in close proximity to the performers, who in most cases are in the middle of the lower mezzanine.
You have to search hard to find recordings where the performers are on stage, unless it's a live concert being recorded.

Next time I go see Joel Revzen conducting I'll ask him which reverb they use.


----------



## vicontrolu (Oct 20, 2015)

It's one of those things you can waste your time on forever.

Why don't you post some samples and we try our methods? Maybe a dry chord/tutti with stems for each section (I guess strings-brass-WW-perc would do it) from a different library, and see who glues it better and how.


----------



## Nick Batzdorf (Oct 20, 2015)

> Nevertheless I am learning something regarding spatial placement with the idea of making it sound as real as possible.



Just bear in mind that 99% of that is the MIDI performance, not what reverb processing you use.


----------



## Nick Batzdorf (Oct 20, 2015)

And .5% of the rest is the samples.


----------



## Silence-is-Golden (Oct 21, 2015)

Good morning to you all (for me it is morning).
@Nick Batzdorf: yes, for sure it is about how and what is played. I started with that some years ago, and now I increasingly feel the need for some 'air' in my music mixes. This thread has been useful in that.
@vicontrolu: that might be an interesting option. Presumably those who would like to join are all on Logic X? That saves a lot of time. Then my suggestion is to exchange email addresses in the PM section and I can send it via WeTransfer, complete with recorded samples.
@chimuelo: interesting contradiction regarding ERs.
Orchestra musicians want to play things as well as they can (like us, obviously), while in the samples world it is about recreating human errors and natural playing styles.
And apparently we want ERs and they don't (if that is what you hear).
Please post if you have had a chance to ask about reverbs.


----------



## re-peat (Oct 21, 2015)

Silence-is-Golden said:


> interesting contradiction regarding ERs


.
That's not a contradiction, Silence. If you want an open, airy sound, it's actually a very good idea to minimize ERs or even dispense with them altogether. Important as these things are in defining a smaller or medium-sized location (spaces where you're aware of the room's size and boundaries), they are of much less use (and are even prone to ruin the illusion) when the aim is to suggest more spacious environments. See, in large halls, ERs are weaker to start with and also tend to dissolve into the rest of the reflections and everything else that's going on.
An often-heard mistake: too many ERs in mixes which hope to sound big and large. People try to create depth by adding ERs, and they add yet more ERs to suggest even more depth, and then they suddenly find that no matter how much reverb they add to all of this, the whole thing somehow still sounds 'boxed-in' and strangely small-ish. That's the ERs doing their dirty work.

If you want listeners to be aware of the room in your mixes, then yes, ERs are the way to achieve that. If, on the other hand, you're striving for a more expansive, wide and open sound (in which the presence of walls is near inaudible), add ERs only very, very sparingly. Too few of them is a lot better than too many.
A more effective and convincing way to suggest depth in larger environments is to EQ and monofy the source and simply adjust the wet-dry balance. In short: blurring its presence. And maybe add a subliminal amount of diffused delays as well.

And Nick is quite right: reverb is, or at least should be, much less important than discussions such as these too often suggest. A well-programmed production, using wisely chosen sounds in a musically crafty arrangement, will work with just about any decent reverb available today, assuming it is applied with taste and insight. (And you *certainly* don't need things like VSS, a contraption which, in inexperienced hands, introduces far more problems to a mix than it solves.)
Whatever 'realism' you hope to suggest in a mock-up, reverb plays only a *very* small role in it. (Mainly because most mock-ups make the emergence of any hint of realism a near impossibility anyway.) If your mock-up doesn't sound halfway believable before adding reverb, it won't do so after adding reverb either. Is my point, I guess.

Place a SampleModeling woodwind in SPAT and the illusion of "a woodwind in a space" will never be spoiled by the latter, but always by the former. Process any sampled or modeled piano, orchestral library or sampled drum kit with any decent reverb, and any phonusbolonus in the resulting mix will always come from the virtual instruments, never from the reverb. (That is, if you have some basic understanding of, and flair for, working with reverb, of course.)
Or, to go even further: play LASS or HS or DimensionStrings, or whatever, back in the greatest-sounding, most sympathetically reverberant environment, and it'll still sound like a sample library (with a very nice reverb, in this particular case). Process, on the other hand, a recording of a real string section with even the most modest of virtual reverbs, and the result will still be entirely convincing and musically satisfying.
Only to say: don’t make reverb a problem. If it is, and continues to be, that is only an indication (and an extremely reliable one) that you have a problem somewhere else.

_


----------



## Hannes_F (Oct 21, 2015)

What Piet says, except that VSS is a lifesaver for me... I strongly reduce the ERs, though, or use the open space setting altogether.


----------



## Rasmus Hartvig (Oct 21, 2015)

I agree that VSS should be used sparingly. Running everything through it to bring cohesion didn't work at all for me, so I've scaled it back a lot. For wet libraries I think it sucks the life out of the samples, but it's usable for making Sample Modeling instruments sound better in unison.


----------



## Silence-is-Golden (Oct 21, 2015)

OK, so after reading re-peat's view on the usage of reverbs and ERs, there are different considerations.

With my latest test I did achieve more space and air. Maybe I'll listen again with 'new' ears to how spacious it sounds. I am in the trying-out phase, so it is all good practice.

I guess my only goal is to achieve as realistic an illusion as possible of music played by an orchestra. And from various sources I have seen/heard it being done with ERs, amongst other things.

I will try re-peat's suggestion and start from the bottom up, and see whether mono and dry-wet differences can achieve what I am after.

And clearly everyone chooses their own approach: VSS2, the Ircam tools, B2 reverbs, or none of these. It is all about whether it sounds good to you or not.

Muchas gracias everyone!

@re-peat (or anyone who knows about it): is there an EQ chart of some sort showing what frequencies need to be decreased, and by how much, if you want to use EQ for creating distance? I guess it simulates air absorption and the blurring of certain frequencies?


----------



## Rasmus Hartvig (Oct 21, 2015)

Silence-is-Golden said:


> is there an EQ chart of some sort showing what frequencies need to be decreased, and by how much, if you want to use EQ for creating distance? I guess it simulates air absorption and the blurring of certain frequencies?



You can make your own. There are a bunch of online tools for calculating air absorption (this one, for instance: http://resource.npl.co.uk/acoustics/techguides/absorption/).
Plug in a bunch of frequencies across the spectrum and try to find a small representative set for setting EQ points.
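
As a rough Python sketch of that idea (the absorption coefficients below are ballpark illustrative values for roughly 20 °C and 50% relative humidity, not numbers taken from the NPL calculator; swap in real figures from the tool):

```python
# Turn per-distance air-absorption figures into per-band EQ cuts for a
# virtual "push back" of a source. Coefficients are illustrative only.
ABSORPTION_DB_PER_100M = {
    1000: 0.5,    # dB of attenuation per 100 m, per band (ballpark)
    2000: 1.1,
    4000: 3.0,
    8000: 10.0,
    16000: 30.0,
}

def distance_eq(extra_distance_m):
    """Per-band cut (dB, negative) for pushing a source back by the
    given extra distance."""
    return {
        freq: -(coeff * extra_distance_m / 100.0)
        for freq, coeff in ABSORPTION_DB_PER_100M.items()
    }

# Pushing an instrument roughly 15 m further back:
for freq, cut in distance_eq(15.0).items():
    print(f"{freq} Hz: {cut:.2f} dB")
```

The cuts below 4 kHz come out tiny, which is why distance EQ should be very gentle.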

Haven't tried this myself so I don't know how well it will work.

Then again, when people start talking about air absorption emulation in a musical context, I tend to be 99% sure there are other, more worthwhile things to spend time on.


----------



## KEnK (Oct 21, 2015)

re-peat said:


> If, on the other hand, you're striving for a more expansive, wide and open sound (in which the presence of walls is near inaudible), add ERs only very, very sparingly. Too few of them is a lot better than too many.





Hannes_F said:


> I strongly reduce the ERs, though, or use the open space setting altogether


So in the case of minimizing or eliminating ERs altogether,
are you then using different settings on whatever hall reverb you're using?
Altering, perhaps, pre-delay, reverb EQ, pan field variance, diffusion or even size?

k


----------



## KEnK (Oct 21, 2015)

Silence-is-Golden said:


> is there an EQ chart of some sort showing what frequencies need to be decreased, and by how much, if you want to use EQ for creating distance?


I'm sure such a chart would be useful (and some people do like Tokyo Dawn's Proximity for this),
but it seems to me that every acoustic space is going to be different,
based not only on the dimensions of the room (including height)
but also on the materials in the environment.

An interesting case:
one of my jazz buddies and I sometimes perform acoustically (bass and guitar, no amps).
For that scenario, playing in a hard-tiled area or in front of a window is incredibly helpful.
But the same spot can be a nightmare for performing electrically.
Two spaces can be the same size, but the materials make a big difference.

k


----------



## re-peat (Oct 21, 2015)

*Rasmus,*

The problem with convolution based spatializers, I find, is that, unless they're used extremely carefully (and only with suitable source material), they can (and will) cause all sorts of trouble related to phase and imaging. (That might be why you find VSS "sucks the life out" of your samples.)
This unfortunate peculiarity was already apparent in Altiverb's built-in placement tool — the first convolution device to introduce such a feature, if I'm not mistaken — and from the mixes and examples I've heard, it still seems to be very much something to watch out for in VSS and even in the mighty MIR.
(SPAT, on the other hand, doesn't give any reason for such concern, perfectly solid as its output always is.)



Silence-is-Golden said:


> (...) is there an EQ chart of some sort (...)


.
*Silence,*

Such charts as you're inquiring about would be quite absurd to compile, because frequency absorption is room-dependent — one hall will absorb and reflect sound in an entirely different way than another, and it is highly unlikely that the results of measurements done in one place will be identical to those of the same measurements done in another location.
All this measurement business is entirely meaningless for our purposes anyway — at least, that is my opinion (and one not everybody agrees with) — because just about everything in what we do — making virtual instruments pseudo-perform in a virtual space, and hoping that the result will somehow come across as real instruments performing in a real space — is of such artificiality that the only relevant references and considerations should be the ones pertaining to the illusion rather than the ones pertaining to the reality.

In other words: use your ears, and always remember that we're not producing orchestral music, but electronic music that just happens to resemble orchestral music vaguely and superficially. (If we're lucky.)

In order to suggest distance, a violin from VSL will need a different treatment than the Fluffy violin, which in turn needs to be approached differently than the QuantumLeap Solo Violin... And if you use the Spitfire solo violin, you needn't do much spatial fiddling at all, other than enabling the most fitting mic perspective.
Every decision you make, or need to make, depends on context: what sort of piece it is, what libraries you use, what production tools you work with, and what sort of emotion, story or abstract musical content you hope to convey. There are no all-comprehensive rules and guidelines for any of these things, I'm afraid.

The only thing there is to do, apart from wisely accepting the reality of our unreality, is to master your tools — whatever they may be — to such an extent that they allow you to say what you want to say, as accurately and enjoyably as possible. This may sound like hollow talk, but it isn't, because the only TRUE realism you're ever going to get in a mock-up is the realism of your musical talent, vision and craftsmanship.

_


----------



## Silence-is-Golden (Oct 21, 2015)

Hmm, I was hoping to create a template with a starting setup that more or less works fine most of the time, including preset reverbs and correct placement of the various sections. Or the illusion of it.

Apparently it was my assumption that this is a common way of working, given the reference to Thomas Bergersen's balanced template.

Thank you re-peat.
All in all, there is no secret formula for making it sound good, other than my own knowledge, skill, ears and senses.
That is actually how I like it to be.

I will do some more experimenting and leave it at that when I think the result is good enough.

Again, can't say it enough, thank you all for contributing to this thread.


----------



## Hannes_F (Oct 21, 2015)

Silence-is-Golden said:


> is there an EQ chart of some sort showing what frequencies need to be decreased, and by how much, if you want to use EQ for creating distance?



I made this chart for you as an orientation. The attenuation is less than one would perhaps think.

Note: this is only for the direct sound. In practice, a good recording has about the same level of indirect sound as direct sound, and for the indirect sound the distances are bigger, plus the walls/ceiling/floor attenuate entirely differently, of course.


----------



## Hannes_F (Oct 21, 2015)

KEnK said:


> So in the case of minimizing or eliminating ERs altogether,
> are you then using some differing settings on whatever hall reverb you're using?
> k



That is the charming thing about the LX480: it does not add ERs unless you explicitly want it to (and even then only 4 or so). The reason is that back when the Lexicon L480 was designed by David Griesinger (I think), he was of the opinion that ERs can too easily get in the way of a good orchestral mix. Comes as a surprise, no?

Practically, I avoid everything that gives even a hint of boxiness. I use custom-recorded HD room IRs, but only sparingly, and I personally never add single ERs.


----------



## Nick Batzdorf (Oct 21, 2015)

re-peat:



> The problem with convolution based spatializers, I find, is that, unless they're used extremely carefully (and only with suitable source material), they can (and will) cause all sorts of trouble related to phase and imaging.



That's probably because they're interacting with the recorded ERs.

What does work well - and I wish I'd invented it - is setting up a few shared ERs and then running them through a common tail: one for each string section, one for the woodwinds, brass, etc.


----------



## Silence-is-Golden (Oct 21, 2015)

Thank you Hannes, good to have a look at and use as a guideline.
Interesting lessons today: first I am all gung-ho about ERs and now I learn to abandon them... good learning!

And another reverb to put into consideration. It's algorithmic, I believe? Relab is the manufacturer/developer?

Is it (Mac) CPU friendly?


----------



## Silence-is-Golden (Oct 21, 2015)

@Nick Batzdorf : can you explain a little further Nick?

What ERs are you referring to and what do you mean with shared ERs and running them through a tail?
Do you mean: signal -> ER-ER-ER (etc.) -> reverb tail -> output?

And what ERs do you then use/ create?

Just to keep the box of knowledge open ....


----------



## Hannes_F (Oct 21, 2015)

Silence-is-Golden said:


> Thank you Hannes, good to have a look at and use as a guideline.
> Interesting lessons today: first I am all gung-ho about ERs and now I learn to abondon them......good learning!



Silence, as a word of warning, don't take anything I write as complete but just as a hint to start your own survey. You'll find that my opinion differs from the majority often but that is not meant to make the other opinions invalid.

A true gem from the Lexicon L480 manual:
*The Early Reflection Myth*
The importance of early reflections in reverberation has become accepted as indisputable fact. We call it a myth. Much of the myth of early reflections is a result of attempts to emulate the sound of discrete reflections from the floor, stage area, and ceiling of a real hall. This sounds reasonable in theory, but it has been our experience that the resulting preechoes are much different from the early reflections present in real halls, and recorded music is often better off without them.

The reason for the difference is not difficult to discover. Early reflections in artificial reverberation are usually discrete - simply a delayed version of the original sound. Transients such as clicks or drums are clearly heard as discrete reflections, resulting in a coarse, grainy sound. But the reflective surfaces of real halls are complicated in shape, and the reflections they produce are smoothed or diffused. Their time and frequency are altered, making them much more interesting. In a very good hall, discrete reflections are hard to identify as such.

Another major disadvantage of discrete early reflections is that the same reflection pattern is applied to every instrument which is fed into the reverberation unit, and each instrument has its timbre altered in exactly the same way. In a real hall, every instrument has a different set of early reflections, and each instrument will have its timbre altered in a different way.

Some engineers find any type of early reflection undesirable. In classical music, many recordings are now made with the orchestra in the middle of the hall, with the special intention of avoiding early reflections. Too much early reflected energy makes the sound muddy, and does not add to richness or spaciousness. This is in part because reflections and reverberation also exist in the playback room.

The 480L reverberation algorithm still offers the option of adding early reflections (preechoes) but we have made them diffused clusters of preechoes (etc.)

Lexicon (David Griesinger) 1988


----------



## ZeeCount (Oct 21, 2015)

Hannes_F said:


> I made this chart for you as an orientation. The attenuation is less than one would perhaps think.
> 
> Note: This is only for the direct sound. In practise a good recording has about the same level of indirect sound as of direct sound, and for indirect sound the distances are bigger plus the walls/ceiling/floor attenuate entirely different of course.



Here's a website that gives a whole bunch of absorption coefficients for different materials and different frequencies if you haven't seen them.

http://www.acousticalsurfaces.com/acoustic_IOI/101_13.htm


----------



## chimuelo (Oct 21, 2015)

On my "dated" PCM70 of course I love the Concert Hall.
Far from realistic but sounds fantastic.
Then I have tiled floor. Wooden floor.
Bathtub. Podium stage left/right.
An excellent foley effect.
Piano sounds like crap but vocals are quite realistic.

Agree that I just don't see ERs as an effect for capturing space useful for music.
Voice... most definitely.


----------



## KEnK (Oct 21, 2015)

Hannes_F said:


> That is the charming thing about LX480: It does not add ERs if you don't explicitly want it to (and even then only 4 or so). The reason is that back in time when the Lexicon L480 was constructed by David Griesinger (I think) he was of the opinion that ERs can too easily get into the way of a good orchestral mix. Comes as a surprise, no?
> 
> Practically I avoid everything that gives even a hint of being 'boxy'. I use custom recorded HD room IRs, but only sparingly, and I personally never add single ERs.


Interesting Hannes-
For the longest time I used only various sized hall type algo reverbs,
then moved to convolution-
Then started reading about the Front-Mid-Rear ER theory here.
I have been getting good results w/ it, but still haven't settled on a particular reverb.
I do like the quasi-dimension feel I get from it.
Interesting to see other people not buying into the currently accepted dogma.

k


----------



## rayinstirling (Oct 22, 2015)

Here's a thing: the more years I spend processing instruments (both real and VSTi), mixing and mastering audio, and the more effects I've accumulated in that time, the less I now actually use.
Once your "ears" know what they would like to hear and you have the experience to know which effect will give them what they want, less is definitely more.


----------



## jadedsean (Aug 9, 2016)

Hi guys,

I'm a newbie to the forum and I was hoping I could get some advice from you guys. I'm relatively new to the recording process in general, but I am a keen learner with a passion for music, so I hope someone can help me. Can anyone shed light on some aspects of orchestration for me? I know this topic has come up a multitude of times on this forum, but as a newbie it's quite daunting.

I have watched so many videos on ERs but am still none the wiser, possibly because I'm still learning my DAW (Reaper). I have purchased some masterclasses with Mike Verta, which I feel are invaluable, as they've given me great insight and a head start, but the novice in me is still struggling with certain aspects of ERs. Mike uses a delay effect to create a panning effect. I have tried this but cannot achieve the same result he is getting (probably because I don't have the same delay plugins as him and I am unsure of the parameters on different delay plugins).

I have also watched a great video from this forum by Carl Ruessmann which gives a different angle on the topic. What I am confused about with that tutorial is the routing: I understand everything until he mentions stereo channel settings, and I'm not quite sure what that means. If someone could enlighten me as to the meaning of this I would be very grateful. With Mike's tutorial he uses a delay to create a panning effect, whereas Carl uses a reverb; both use a pre-delay to emulate ERs. Can someone please help? Thanks for reading, guys, and apologies in advance for my complete lack of understanding.

Sean


----------



## re-peat (Aug 10, 2016)

jadedsean said:


> a great video (...) from Carl Ruessmann which gives a different angle on the topic


Hi Sean,

That's a video tutorial to watch with plenty of skepticism, in my opinion. For reasons I explained *here*, the methods it suggests strike me as highly questionable.

The important thing to remember about ER’s — in my opinion at least — is that they’re the ingredient of reverberation which suggests _confinement_. ER’s can only come from (reflective) surfaces, so the simple fact that ER’s are audibly present, means that there must be reflective surfaces in the vicinity too. Hence: a certain feeling of confinement.
Now, when trying to create the illusion of rooms, chambers and other smaller spaces, such a suggestion is obviously an essential requirement for a convincing reverb(eration), but if you're aiming for a big, open, spacious sound, the presence of ER’s is often counterproductive, because the last thing you want your reverb to do in such cases is convey the idea that your instruments are closed in by reflective surfaces.

Which is why I will always reduce the ER’s quite drastically whenever working on a mix that needs to sound open, expansive and spacious.
You have to be careful here though, because when you take away the ER’s completely, you also run the risk of taking away some of that important connection, or 'glue', between the source and the tail, and with some reverbs that can sound a bit strange. (Sometimes it works better to lower the level of the ER’s only marginally, and reduce the Early Filt Freq a bit more instead, resulting in more diffused and blurry ER’s.)

To complicate matters even further: the formula for the best ER-settings also varies depending on the instruments. Percussion, for example, is often served better with less ER-reduction, even in big spaces, because, being sounds that have a lot of impact and energetic transients, they tend to bounce off walls much more noticeably than strings, horns or woodwinds. It’s very difficult, in other words, to place, say, a snare drum at the back of an orchestra convincingly without ER’s being a part of its reverberation.

And here, another parameter needs to be considered: _pre-delay_. Increasing the pre-delay also helps a lot in suggesting a bigger environment, because the length of the pre-delay is obviously directly related to the distance between the source and the reflective walls. More pre-delay = more distance = bigger space.
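[As an editorial back-of-envelope sketch of that relationship: the pre-delay corresponds to the extra travel time of the first reflection compared to the direct sound. The distances below are made up for illustration, and a speed of sound of 343 m/s at room temperature is assumed.]

```python
SPEED_OF_SOUND_M_S = 343.0  # at roughly 20 degC

def predelay_ms(direct_m: float, reflected_m: float) -> float:
    """Pre-delay = extra travel time of the first reflection
    (reflected path minus direct path) expressed in milliseconds."""
    return (reflected_m - direct_m) / SPEED_OF_SOUND_M_S * 1000.0

# Small room: listener 3 m from the source, first reflection travels 8 m.
print(predelay_ms(3, 8))    # ~14.6 ms
# Large hall: listener 10 m from the source, first reflection travels 30 m.
print(predelay_ms(10, 30))  # ~58.3 ms
```

The longer reflection paths of a big hall translate directly into a longer pre-delay, which is why raising pre-delay reads as "bigger space".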

A great way to learn about the complex relationship between source, reverb, ER's, pre-delay, tails, size and space is to experiment first with single instruments (plus your reverb of choice of course) before working on an entire virtual orchestra, and something like a marimba is ideal for such purposes: full and snappy sound, not too much energy in the initial transients yet distinctly percussive nonetheless. Other good candidates, in my experience, are staccato double-reed instruments (oboes, bassoons, ...)

And a completely different but also very helpful exercise can be to try to place an instrument to fit with an orchestral recording whose sound (and spatial characteristics) you like, and find out what sort of reverb you need (and which settings work best) to make the instrument sound as if it belongs to that recording.

But I agree, it can all be a bit daunting and quite difficult to grasp at first, yes. Which is why I think you ought to keep things as simple as possible, especially at the beginning. And certainly not let yourself get confused by (in my view, totally useless) complexifications such as those presented in that video mentioned above.



----------



## Ashermusic (Aug 10, 2016)

+1 to what re-peat said.


----------



## jadedsean (Aug 10, 2016)

Thanks for all the information, re-peat, you really know your stuff. I've read so many arguments for and against on this forum that it's possible I'm more confused than when I started, but I think I'll take your advice and keep things simple to start off. Perhaps if I improve I will be back asking more questions and getting answers that I may never understand. Thanks again for your time.


----------



## pixel (Aug 10, 2016)

re-peat said:


> ER’s can only come from (reflective) surfaces, so the simple fact that ER’s are audibly present, means that there must be reflective surfaces in the vicinity too.



re-peat, you just opened my eyes even more. Thank you! It was the missing piece in my giant reverberation puzzle.


----------



## wst3 (Sep 24, 2016)

Whattayaknow... sometimes it is good to be old<G>!

I'm not disparaging ANY of the tools we have today for creating a sense of space. With a couple of exceptions, based solely on personal taste, I can't remember the last time I wasn't tempted to purchase a license for a reverb, delay, or spatialization tool. They are remarkably capable, and fascinating, and the only thing preventing my disk drive from exploding is my wallet!

But I am tremendously grateful that my first spatialization tools were a pair of reverb springs and a pair of delays. And I'm going to make the outlandish suggestion that if you are struggling with reverb, you might want to strip down to a reverb and a delay to help you wrap your head around things.

In fact after re-reading this thread I may just do the same.

Meanwhile, back at the ranch: my first reverb was a pair of MicMix springs, and I still have one of them, but it doesn't get a lot of use because I'm lazy. My first delays were a pair of Ibanez AD-80s, yup, stomp boxes, one of which was stolen, but the other one is still on my pedal board.

Back then I used to read Computer Music Journal and the Journal of the AES cover to cover, and I read some paper that inspired me to tinker. I ended up with six reverb springs (the MicMix plus four more PAiA reverbs I built) and four delay lines: two CompuEffectrons, which I still use, and a couple of multi-tapped delays, I think ADA, but this was a long time ago. I built a passive matrix mixer that let me route audio back and forth between the various devices, and created some really lush reverbs, which was cool enough. But I was also able to add pre- and post-delay signals directly to the mix, and that gave me the ability to place tracks left to right and, much to my surprise (if in a limited manner), front to back!

Then came the Yamaha SPX-90 and Rev-7. Many of their reverb algorithms included a Pre-Delay setting. That was a bit of an eye opener.

The rest is a blur, and admittedly I'm probably over-thinking things these days, but I am certainly not at a loss for tools!

Which means it should be pretty easy to build a similar learning setup today.

Let's take a simple string or brass quartet as an easy (??) starting point. In fact, as I type this, it might be even easier to take four synth tracks, but choose whatever you like. I would encourage traditional acoustic instruments (or synth-based caricatures thereof) over a power trio or other form of rock band: we're used to hearing a string quartet without reinforcement, whereas most bands end up using reinforcement, and while there may still be a sense of position, for the most part it is artificial.

So, four tracks (make them mono if you can)... if they are all from the same developer/series, then you probably don't need to do anything to make them sound like they are in the same room. If you do need to place them all in a single space, insert a reverb on each track and dial in to taste. Hopefully you can skip that step for now.

Create eight aux busses (for starters): the first four each get a delay, the second four each get a reverb. Create a ninth aux buss and put a different reverb on it.

Feed the four original tracks to the 2-mix.

Feed the four delay lines to the 2-mix.

Experiment with increasing the delay time on each aux, without touching anything on the actual tracks. You should hear the instruments moving about in the sound field. Now start playing with the pan controls on the channel, the send to the aux, and the aux itself. You'll be (I hope) astounded at the range of movement.

Your sense of space will likely be a tad artificial at first - we haven't added reverb yet! So let's do that. For a starting point (and it is arbitrary) I set the auxes to unity gain and control levels via the sends. If everything is going smoothly you should hear an instant change, from kinda blah to a physical space (or an approximation thereof) and some sense of placement.

Here comes the fun (if you like tearing your hair out) part. Add three more sends to each channel so that each instrument can send to each reverb. And of course start sending each instrument to all four reverbs - if all goes well you should be able to add some definition to the placement of the instruments in your sound stage.

You can try doing the same for the four delay auxes, but truth is now you are playing with so many variables that it is better to let the computer do the heavy lifting (e.g. Splat, MIR, VSS, etc).

Oh, once you have the 4 tracks and 8 auxes balanced more or less as you like, send everything to the ninth aux. I find myself setting those sends between -12 and -15 dB; a little bit goes a long way. Then send that last aux to the 2-mix, and it ought to glue everything together nicely and provide the real sense of room. I really can't explain why that last instance of reverb seems to dominate, sorry.
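[Those send levels can be sanity-checked with a quick dB-to-linear conversion. This is an editorial sketch; the routing labels are just illustrative names for the topology described above, not anything from a specific DAW.]

```python
def db_to_gain(db: float) -> float:
    """Convert a fader/send level in dB to a linear gain factor."""
    return 10 ** (db / 20)

# Illustrative summary of the topology: 4 tracks feed the 2-mix directly,
# plus 4 delay auxes and 4 reverb auxes; everything also feeds one shared
# "glue" reverb aux at a modest send level.
sends = {
    "track -> 2-mix":     db_to_gain(0.0),    # unity
    "aux -> glue reverb": db_to_gain(-12.0),  # "between -12 and -15"
}
print(sends["aux -> glue reverb"])  # ~0.25 linear
```

A -12 dB send is only about a quarter of the signal in linear terms, which is why that last shared reverb can glue things together without swamping the mix.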

I don't think most of us have the time or patience to do this for a real project, and that isn't the intent. Rather I hope it helps those wrestling with various reverb strategies and parameters get a better idea of what things do, and what can be done.

To put this in perspective, here's what I'm using today:

I am experimenting with VSS-2 and MIR as a way to place instruments. VSS-2 shows a lot of promise, but as someone mentioned, if you use too many instances the coloration starts to be a problem. Haven't figured out if I can work around that yet. I also use Ocean Way Studios for a sense of "room" if I need it. For really dry libraries, or close mics I can place them in OWS and they blend together, although I'm limited to two rooms (not a hardship!). And then I use a variety of delays...

I use Reverberate for convolution - I've only scratched the surface, but I like it. I used Pristine Space until the hassle of using a 32-bit DLL in a 64-bit world became too much. I use Reverberate for early reflections and tails and everything in between. Mostly I use it on sections, because it is a little too much of a CPU hog to use on every track.

I use the UAD EMT Plate in an aux send, and pretty much everything goes through it.

I use the UAD Lexi 224, and Zynaptiq Adaptiverb in aux sends, but lately Adaptiverb has been growing on me, and I find I'm using it in busses too. It's also a bit of a resource hog.

Other reverb and delay plug-ins get added as needed... as I mentioned before I have a bit of a problem.


----------



## tack (Sep 24, 2016)

wst3 said:


> You can try doing the same for the four delay auxes, but truth is now you are playing with so many variables that it is better to let the computer do the heavy lifting (e.g. Splat, MIR, VSS, etc).


Splat is a wonderful typo. It's the sound that the combination of my hand and face produced when I first learned its price.

Just to add to your "etc" I find that EAReverb2 does a fine job at positioning -- much better than VSS2 in my opinion -- and it's certainly priced better than MIR and Splat.


----------



## wst3 (Sep 24, 2016)

tack said:


> Splat is a wonderful typo. It's the sound that the combination of my hand and face produced when I first learned its price.



Hmmm... typo, or Freudian slip? Intentional??? I'll never tell!



tack said:


> Just to add to your "etc" I find that EAReverb2 does a fine job at positioning -- much better than VSS2 in my opinion -- and it's certainly priced better than MIR and Splat.



I keep reading comments on EAReverb 2, and yet I've been unable to pull away from my insane laser focus on VSS2. Well, a huge thanks to you, and to whatever circumstances conspired to get me to download the EAReverb 2 demo. Part of it was the very generous/fair upgrade path, but I think I'll end up starting with the full version.

I know how my afternoon will be spent! Thanks Tack!


----------

