# Considering Valhalla Room or Breeze



## musicalweather (Jan 9, 2016)

Hello everyone,
Sorry to add to the abundance of discussions about reverb... I'm interested in getting an algorithmic reverb (I already have a convolution one that I'm very happy with - SIR2) and am looking at Valhalla Room and Breeze (currently on sale for $75). I'm wondering what your experiences and opinions are on these two reverbs. I'm particularly interested in hearing from those who use these reverbs with sampled orchestral instruments.

I've been reading a lot on various forums about each one, and plan to download demos of each.

I compose music almost exclusively with sampled instruments, in most cases orchestral. But I also use sampled drums, bass, and guitars when needed. Here's an example of an orchestral track: 

Thanks for any info.


----------



## TGV (Jan 10, 2016)

Nice track, although perhaps the attack on the trumpets is a bit unnatural. What stood out most though, is that much of the brass sounds dull. Perhaps your speakers or headphones make everything sound very bright?

Anyway, Valhalla Room is great, except for really long reverbs. I don't have Breeze, but I do have B2, and that does sound better, fuller. However, Valhalla Room is easy to play with, and B2 isn't. Breeze is apparently simpler than B2, but if you want to color your sound with a reverb and tweak it, Valhalla Room would be my first choice.


----------



## transverb (Jan 10, 2016)

I also have a lot of love for Valhalla... I have VVV and Plate - both brilliant. Worth giving VRoom a demo run. Another one to consider, just to throw a spanner in the works, is Acon Verberate - also worth a demo, and they also have a limited CM edition. I don't use it while tracking but during mixing. 

If you decide on Breeze and don't mind buying second hand... then check out the KVR marketplace... there's been one bouncing around in there for a while.


----------



## musicalweather (Jan 10, 2016)

TGV said:


> Nice track, although perhaps the attack on the trumpets is a bit unnatural. What stood out most though, is that much of the brass sounds dull. Perhaps your speakers or headphones make everything sound very bright?


Thanks for the feedback on the track, and your thoughts about the reverbs. Yeah, the trumpets were cobbled together from a variety of poor sources, thus the problematic sound. Since writing this track, I've acquired HW Brass Gold, so hopefully some of these brass problems have been resolved. 

I'm realizing that my orchestral sounds in general are probably too reverberant and consequently the clarity and brightness gets lost. I'm thinking of changing my approach and using dry, or drier, samples and then being much more careful about what and how much reverb I use. It could also be that my monitors aren't giving me a flat response.

You write that Valhalla colors the sound. Does it always do that? Actually, I'm looking for something that is pretty transparent...


----------



## KEnK (Jan 10, 2016)

musicalweather said:


> I'm realizing that my orchestral sounds in general are probably too reverberant and consequently the clarity and brightness gets lost. I'm thinking of changing my approach and using dry, or drier, samples and then being much more careful about what and how much reverb I use.


Well written track! I liked the development and how well all the various ideas flowed from one to the other.
You know your stuff- for sure. I didn't think it was "too reverberant", btw.

I also prefer dry samples to the baked in room sound-
So I use a 3-ER approach - Near, Middle and Far - all feeding into a Tail.
If that's what you are also doing, then the question is-
Are you using the convolution for Tail or ER?
Personally I use algos for ER and Convo for tail, but just as many people take the opposite approach.

I like Valhalla, but tend to not try it for ERs. I think of it as a very lush sound.
I tend to use it more for smaller pop type music than orchestral. (Latin, rock, R&B, pseudo ethnic etc)
I don't think of it as something to reach for when I'm trying to emulate a "real space".
(Maybe this is my limitation)
But I do use it a lot- especially on guitars. (My main instrument)

I haven't used Breeze but Aether has been my go-to reverb for a long time.
Supposedly Breeze is a simpler version of Aether.
If that is true, I can only imagine it is excellent.
Aether has a lot of knobs- so it's extremely tweakable which is what I look for.

Another reverb you may want to look into is Melda's Mverb- very cheap!
I'm still in the "checking it out" phase, been too busy to really get a handle on it.
What I like right away is its spatial positioning system- very cool.
I've read some not-so-good user reviews on Gearslutz and KVR,
but I think it's because people are often thrown by Melda plugs-
There is a lot of tweakability and many people object to the interface.
Melda makes good stuff though.
So far I think it's an excellent reverb and may be my new "go-to"

Hope my comments have been helpful.
It really boils down to what you'll use an algo reverb for.

k


----------



## wst3 (Jan 10, 2016)

Not too terribly long ago I trialed (is that a word) reverbs from 2CAudio, Exponential Audio, and Valhalla. I was in the market for a "better" algorithmic reverb - actually, I'm still searching for a reverb to replace Wizzo-Verb...

My take, and I might get some of the names reversed but that will be obvious enough...

From a sound perspective I am fascinated that three reverbs at three very different price points from three different companies sound so good, and so similar. I did end up ranking them, and I thought the Exponential reverbs edged out the others, but I'd be hard pressed to spend the difference between Phoenix and Room or R2 and Vintage... in my studio, with my monitors (and ears) the differences were very subtle, and almost entirely in the very last few ms of the tail. Which is not to discount 2CAudio - they also sound great, but not quite as great as the Exponential reverbs, so they sort of fell out of the running.

From a workflow perspective I preferred the Valhalla plugins - they lean towards homely I suppose, but they are really easy to use. And at $50/each it's difficult to justify the more expensive alternatives.

Oddly enough LiquidSonics released V2 of Reverberate around this time, and I was really impressed, so I upgraded. It's still a resource hog, but it is getting close to that magical "best of algo & convo". And I thought that killed my reverb budget.

And then a few days ago PSP released their 2445, and it's a really convincing Lexi clone, and a LOT easier to tweak than the UAD version.

So for now I'm using the UAD Plate 140 and Ocean Way Studios plugins, plus Reverberate 2 and PSP2445 as my stable of reverbs. I'd still like to add either the Valhalla or Exponential plugins, but both are going to have to wait a bit.

Did I have a point here?

If you cannot immediately hear a difference between the Valhalla and either the 2CAudio or Exponential Audio plug-ins, then get the Valhalla reverbs. They sound really good, and the differences between them and anything else are subtle, and probably lost in most listening environments. If you can hear the difference then you gotta go with the one that sounds best to you. Even if it means saving pennies for a little longer.

And I too applaud your composition and your production.


----------



## tack (Jan 10, 2016)

I've become less of a fan of ValhallaRoom over the past month or two. VR does what the name implies: it simulates a _room_, and so there is some "early energy" in the signal. But even with early send at 0, there is something about the quality of some of the reverb algorithms that my ear doesn't like when there are instruments with sharp attacks (like a glock).

I would turn (and have done) to something else for room simulation and early reflections (namely EAReverb2). But I really, really like ValhallaPlate for tails. It was buttery and consistent and reminded me of Ircam Verb v3's tails, which I also quite liked.

But as Bill said, the Valhalla plugins are so sensibly priced, if you like what you hear from the demos, you're not risking much in buying them.


----------



## musicalweather (Jan 10, 2016)

Thanks for the lengthy response, K. 

I'm just now starting to take a more complicated approach to reverb. Up until recently, I've been using the EWQLSO Platinum library, which gives you the baked-in reverb. I believe even the "close" mics are not the same as _dry_. So I've generally not applied a separate reverb when using that. But now I've got Cinematic Strings 2, which gives you a dry option, and HW Brass Gold, which unfortunately does not have a dry option (at least I think that's right - someone correct me if I'm wrong). Not even sure if it's a good idea to add a third party reverb to that, but I'll probably give it a try.

But I would like to try the algo + convo technique. 

I actually prefer a plugin with fewer controls. I don't want to spend so much time tweaking, plus I don't think I'm particularly skilled at it. I think that's why the Valhalla and Breeze reverbs appeal. 

Thanks also, Transverb, for your feedback. I'll check out Verberate.


----------



## KEnK (Jan 10, 2016)

musicalweather said:


> I've been using the EWQLSO Platinum library, which gives you the baked-in reverb. I believe even the "close" mics are not the same as _dry_.


Yes, I used to use SO too- I think you're right about that.
If you ever want to look at another string lib-
look at LASS. Especially if your preference is dry.
Still my fav.

k


----------



## musicalweather (Jan 10, 2016)

Thanks, Tack and Bill, for your responses. I've looked at EAReverb 2, but it's out of my budget at the moment. They have a light version ("SE") that's slightly cheaper than the Valhalla Room, but it's been hard to find any review of it. I was impressed with the sound examples and videos of Phoenix, but am not keen on having to use iLok. I have an iLok dongle on my slave machine, but this plugin would have to go on my master machine, which is quite old and doesn't have many USB ports. I guess having it authorized to the machine rather than the dongle is an option...?


----------



## Beat Kaufmann (Jan 10, 2016)

*YOU SHOULD BE ABLE TO PRODUCE DIFFERENT DEPTHS WITH YOUR NEW REVERB*

There are always a lot of posts about reverbs because these are plugins with their own sound and handling, and often huge differences between them.
I have owned and used most of the important reverbs since I started working with samples in 2002.
So *I also use Breeze, B2, Phoenix and all the other algos.* *They all do a good reverb-tail job.*

*But*
*Keep in mind that you need to produce different depths with your new "reverb"*, for positioning your dry brass instruments, for example, at the rear of the stage... My experience is that you can get nicer depths (without a lot of tail) with good IRs in convolution reverbs than with algos.
So if you are going to test different reverbs, *check out how well you can simulate distances of instruments from close to far*, and not only the sound.

Such differences in depth should be possible. Please notice in my example that the reverb tail stays more or less the same throughout...
Further: check out these possibilities with percussive signals. Sometimes you can get a sort of depth with predelays or other echo tricks. But as soon as you have a timpani at the back, you get horrible results with such wrong "echo depths".
BTW: In reality, the farther away an instrument plays, the smaller the predelay (the time between the direct sound and the first reflections) becomes.
So in reality, getting distance isn't a matter of adding predelay either.
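
To put rough numbers on that, here is a small Python sketch (the hall dimensions are my own illustrative assumptions, not Beat's figures), computing the gap between the direct sound and the on-axis rear-wall reflection:

```python
# Hypothetical geometry: the gap between the direct sound and the
# rear-wall reflection depends on how far the PLAYER sits from the
# back wall, not on how far the listener is.
C = 343.0  # speed of sound in air, m/s

def rear_wall_gap_ms(src_to_wall_m):
    # the reflection travels to the wall and back, so the extra path is 2x
    return 1000.0 * 2.0 * src_to_wall_m / C

front_desk = rear_wall_gap_ms(9.0)   # front of stage, ~9 m to the back wall
rear_row = rear_wall_gap_ms(1.5)     # back of stage, ~1.5 m to the back wall
# the CLOSE instrument gets the longer gap (~52 ms vs ~9 ms), which is
# why a larger predelay tends to read as "closer"
```

So the numbers shrink as the player moves toward the back wall, matching the counter-intuitive rule above.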

Good IRs (like the one I used) contain such real recorded, natural distances. They can do a very good job of placing instruments on the stage, which leads to nice mixes. Example. 
BTW: EAReverb2 (an algo) is specifically designed to create such (even dry) depths. Give it a try as well if you can download a demo.

_All the best
Beat_


----------



## emid (Jan 10, 2016)

@Beat Kaufmann what do you think about Reverberate's built-in ER system (if you have tried it)? Are there any particular settings you recommend for depth creation? There is also an option for positioning. Any suggestions on how to use it and how accurate it sounds? Thank you very much.

OP: Nice track. I am also in a somewhat similar situation, and you cannot beat Beat when it comes to reverb


----------



## KEnK (Jan 10, 2016)

Beat Kaufmann said:


> My experience is that you can get nicer depths (without a lot of tail) with good IRs of convolution reverbs than with algos.


Thanks for your input here Beat.
Still working on the depth thing, though reasonably close now.
I will look again at using convos for ERs instead of algos

k


----------



## Penthagram (Jan 10, 2016)

I normally use a mix of QL Spaces, sometimes Reverberate, with the Valhalla reverbs: Valhalla Shimmer for creating beds and soundscapes, and the other ones for more natural tones. I find them really amazing tools at a very, very competitive price. You cannot go wrong with them. Also, maybe a month ago, Softube put their TSAR reverb at 29 dollars. I have tested it these last few days and really liked it too. I have downloaded the 2CAudio demos, and they are really amazing, good-sounding reverbs with lots of possibilities for sound design as well, but maybe the learning curve is a bit harsher there than with Valhalla  as always there is no right or wrong :D


----------



## musicalweather (Jan 10, 2016)

Beat Kaufmann said:


> *But*
> *Keep in mind that you need to produce different depths with your new "reverb"* for positioning your dry brass instruments for example at the rear of the stage... My experience is that you can get nicer depths (without a lot of tail) with good IRs of convolution reverbs than with algos.
> So if you are going to test different reverbs *check out how good you can simulate distances of instruments from close to far* and not only the sound.
> 
> Such differences in the depth should be possible. Please observe with my example that the reverb tail is always the same more or less...



Thanks very much, Beat -- this has been very helpful. I'm not sure I know how to move a sound from front to back with a convolution reverb, as you did in your very cool example. SIR2 has separate controls for the dry and wet signals, so I guess you can do it that way. Are there other methods of moving the sound from front to back? In any case, I will definitely follow your advice as I try out various demos.


----------



## musicalweather (Jan 11, 2016)

KEnK said:


> So I use the 3 ERs for Near, Middle and Far all feeding into a Tail approach.
> If that's what you are also doing, then the question is-
> Are you using the convolution for, Tail or ER?
> Personally I use algos for ER and Convo for tail, but just as many people take the opposite approach.
> k



K: I would like to try this approach, but I don't know how I would use the convolution reverb I currently have (SIR2) for _ER_s. As far as I know, there's no control of the ERs in SIR2. One can alter the pre-delay. 

I'd also like to know how to set up separate reverbs -- one for the ER and one for the LR -- for a sound signal. Do you just insert the reverbs as effects in a channel (turning off the respective ER and LR of each reverb)? Clearly I'm a novice at this, and could probably hunt around the web to find answers to all this, but you guys are so helpful!

Thanks again for your help!


----------



## tack (Jan 11, 2016)

musicalweather said:


> I'd also like to know how to set up separate reverbs -- one for the ER and one for the LR -- for a sound signal. Do you just insert the reverbs as effects in a channel (turning off the respective ER and LR of each reverb)? Clearly I'm a novice at this, and could probably hunt around the web to find answers to all this, but you guys are so helpful!


The main reason for separating ER and LR is so that you can use a shared LR plugin for all tracks. The key benefit here is CPU, but depending on the reverb function, there _could_ be an acoustic benefit too. For ERs and other room placement, you use an FX insert. For LRs, you'd have a separate reverb bus and use a send.

Alternatively, you might want a separate bus for ERs if you want to have multiple tracks with the same room placement. A pair of flutes, say. Another reason could be if your room placement/ER plugin allows multiple inputs for independent placement (such as Ircam SPAT). But in this case, be careful you're not also routing a dry signal up to your master in addition to your ER reverb. (You might want some dry, but _usually_ that's accomplished by a wet mix setting in your ER reverb.) Then instead of a send to LR on the instrument track you'd send to LR from your room placement bus.

Hope that made sense.
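
For the visually inclined, the routing can be sketched as a toy Python/numpy program (the tap times, gains, and "reverbs" below are made-up stand-ins, not any real plugin):

```python
import numpy as np

SR = 48000
N = SR  # one second of audio

def er_insert(dry, wet=0.3):
    """Stand-in for a per-track ER plugin: dry/wet mix plus two early taps."""
    out = (1.0 - wet) * dry
    for t, g in ((0.012, 0.5), (0.019, 0.35)):  # made-up tap times/gains
        d = int(t * SR)
        out[d:] += wet * g * dry[:-d]
    return out

def tail_bus(sends, rt60=2.0):
    """Stand-in for the shared LR plugin: sum the sends, add a crude tail."""
    out = np.sum(sends, axis=0)
    d = int(0.050 * SR)              # feedback-comb delay
    g = 10 ** (-3 * 0.050 / rt60)    # loop gain for a -60 dB decay in rt60 s
    for i in range(d, len(out)):
        out[i] += g * out[i - d]
    return out

# each track gets its own ER insert (its own "position")...
flute = np.zeros(N); flute[0] = 1.0        # clicks as toy signals
violin = np.zeros(N); violin[100] = 1.0
flute_ch = er_insert(flute, wet=0.4)       # farther back: more ER
violin_ch = er_insert(violin, wet=0.15)    # closer: less ER

# ...but both share ONE tail reverb, fed by post-channel sends
tail = tail_bus([0.5 * flute_ch, 0.5 * violin_ch])
master = flute_ch + violin_ch + 0.3 * tail
```

The point is the topology, not the DSP: the ER sits in each channel's insert path, while one tail instance serves every channel via sends.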


----------



## KEnK (Jan 11, 2016)

Hi Musicalweather-

I'm afraid another long-winded response is required. I'll do my best 

First some links-
Here is a vid tut by Carl Ruessmann. 
I always point people to it when they want to know about the ER/Tail routing thing.
It's one of the most clear and concise descriptions I've seen. (Sadly, Carl no longer posts here)

More about that later.

It came from this 2011 thread, a very detailed informative discussion worth looking at.
http://www.vi-control.net/community/threads/confused-about-er.20664/

Here are some free IRs to expand your palette if you haven't done so already.
Most people speak quite highly of this collection.
http://www.samplicity.com/bricasti-m7-impulse-responses

As far as why I use algos for ERs instead of convos,
I was having some cpu issues when running a dense template-
So I took the time to match an algo verb to the Logic Space Designer convos I was using at the time.
It worked ok for me, but I want to try convos for ERs again.
_
I don't know how I would use the convolution reverb I currently have (SIR2) for ERs. 
As far as I know, there's no control of the ERs in SIR2._

What you'd do w/ a convo is use a smaller (or specific) stage or room sound as the ER,
instead of a large-sounding hall- and you can experiment w/ altering the length.
Then you route that to the longer Tail.
You need to decide the environment and the listener's perspective, and choose the IRs based on that.
I look for something that gives me a sense of the dimensions of a "room", w/o being too large.
Sometimes I want to hear a little "slapback echo", sometimes not-
Also the "brightness" of the room can be decided upon here- wood walls? Cement? etc
That's what convos are great at. 
You basically want to choose an IR that gives you a picture of the stage.

_I'd also like to know how to set up separate reverbs -- one for the ER and one for the LR_

The routing is clearly explained in the vid above, but briefly-
You would set up 4 separate reverb channels- 3 for ERs (Near, Middle and Far), and then one for the LR or Tail.
You use sends from the inst or group tracks to decide where on the stage your instruments sit.
Example- Violins-Near, Woods-Middle, Perc-Far. You then use an additional send to feed the Tail
I've used both a separate inst send for this as well as a feed from the 3 ERs to the Tail.
Both work, it depends what I'm looking for.
One thing I always do is use an EQ on the reverb return channels-
This helps keep too much low-end mud from building up.
Most people think the EQ should come 1st- there is a difference, but I decide on a case-by-case basis.

The big mystery about this method is how to set the pre-delay- 
Again this is clearly explained in the vid-
and much of the theory is discussed in the thread.
Another thing people had issue w/ the last time I linked Carl's vid was the concept of using a "pre-fader send".
You'll see what this means in the vid-
What happens when you do it this way, is the Volume fader becomes what you use to set the distance-
But then you might not be able to use the Volume fader w/o moving your inst forward and back on the stage.
There are a couple of workarounds- using cc#11 (Expression) instead of cc#7 (Volume)-
or you keep your reverb send "post-fader" and adjust the theoretical distance by the send amount.
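
If it helps, the fader/send arithmetic behind the pre-fader trick boils down to a few lines (toy linear gains, not any particular DAW):

```python
# pre-fader vs post-fader send, with toy linear gains
fader = 0.5   # channel volume fader
send = 0.3    # reverb send knob

dry_to_master = fader       # the dry level always follows the fader
wet_post = fader * send     # post-fader send: reverb level follows the fader too
wet_pre = send              # pre-fader send: reverb level ignores the fader

# with a PRE-fader send, pulling the fader down raises the wet/dry ratio,
# which the ear reads as the instrument moving further back on the stage
ratio_now = wet_pre / dry_to_master            # 0.3 / 0.5  = 0.6
ratio_fader_halved = wet_pre / (fader * 0.5)   # 0.3 / 0.25 = 1.2
```

That's exactly why the fader turns into a distance control in Carl's setup.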

Much of this will make more sense after you've seen the vid and get your hands dirty w/ some experimentation 
I recommend 3 dry percussion tracks for the initial attempts at working out this puzzle.

Feel free to ask more questions-

k


----------



## musicalweather (Jan 11, 2016)

Thanks so much, K. I watched the video and think I understand the concept, although, honestly, your written explanation was clearer to me. I also read the previous discussion that was linked in your post. 

I'm still trying to understand the relationship between pre-delay and ER. The higher the number of ms for pre-delay, the closer an instrument is perceived to be? I guess this is because the listener has more time to perceive the original sound source before the early reflections begin to smear that perception a little bit (although as I understand, the ERs have not bounced off a lot of surfaces, and therefore, are similar in sound to the original signal). Is this right? 

Before this I had always thought that pre-delay gives you a sense of a larger room. Longer pre-delay = bigger room. But maybe that's pre-delay as it relates to a reverb tail? I'm a little confused. I have the Izhaki book, so I'll take a look at that, too.

I look forward to experimenting with this, though I probably won't get to that until next week.

Thanks again for all your help.


----------



## KEnK (Jan 11, 2016)

musicalweather said:


> I'm still trying to understand the relationship between pre-delay and ER. The higher the number of ms for pre-delay, the closer an instrument is perceived to be?


Yes- It does seem very counter-intuitive.
Perhaps easier to grasp if you imagine only the rear wall reflections-
Then you can visualize the distance between source and reflection growing smaller
as the source gets further away from the listener.
But let's not forget that the differences in ms are very small- 50ms, 42ms and 36ms in Carl's vid.
These are the kind of numbers most people are using. Those are truly minute differences-
I think what creates the sense of 3D has as much or more to do w/ the ER amount.
It's a simulation of 3D, a trick played on the ear by "math".
But it seems to work.
Such reflections are how we perceive distance in a room w/ our eyes closed.
So there is a kind of audio physics about it.

I still use pre-delay on large reverbs to separate the source from the reverb.
This has more to do w/ clarity of the sound than perceived distance.

k


----------



## muk (Jan 12, 2016)

What KEnK wrote is very much the state-of-the-art reasoning behind using predelay. However, thinking it through, I cannot help finding it rather askew. The 1st violins should have a higher predelay than, say, an oboe, because the oboe is closer to the back wall. Therefore the delay between the direct sound and the reflection from the back wall will be smaller for the oboe. So far so good.

But a building usually has more surfaces than just a back wall. A floor, for example. All instruments are approximately at the same distance from the floor. And if they aren't exactly leaning on one of the sidewalls - or sitting on a chair that is dangling mid-air from the ceiling, for that matter - the closest surface will always be the floor. The delay for this will be somewhere between 3 to 6ms, give or take. And that's the very first ER, and the same for all instruments. I don't understand why this reflection shouldn't count.
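
For what it's worth, the floor-bounce delay is easy to check with the mirror-image trick (the source and ear heights below are my own assumptions):

```python
from math import sqrt

C = 343.0  # speed of sound, m/s

def floor_bounce_delay_ms(distance_m, src_h=1.5, ear_h=1.2):
    """Delay of the floor reflection relative to the direct sound."""
    direct = sqrt(distance_m**2 + (src_h - ear_h)**2)
    # mirror the source below the floor to get the reflected path length
    reflected = sqrt(distance_m**2 + (src_h + ear_h)**2)
    return 1000.0 * (reflected - direct) / C

near = floor_bounce_delay_ms(3.0)    # ~3 ms at close range
far = floor_bounce_delay_ms(10.0)    # ~1 ms further back
```

So the floor reflection arrives within a few milliseconds for every instrument, long before any back-wall reflection.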

The other reflections that seemingly don't count are the ones from the sidewalls. If you look at an orchestra on stage, you will find that the rear desks of the strings usually are much closer to the sidewall than the woodwinds are to the backwall. Here's an example:






(taken from this informative page: http://andrewhugill.com/manuals/seating.html)

Here most of the desks of violins 1 are closer to the left sidewall than the woodwinds are to the backwall. That means that even if we were to ignore the very first early reflections (the ones from the floor), even the second reflections for many of the strings will come in on a shorter delay than the ones for the winds. And the delay times will vary greatly between the individual desks of the strings, which makes setting a single one for all of the strings a rather blunt decision in my eyes.

What you are doing if you set a large predelay for the strings is basically ignoring the floor and sidewalls, and pretending that there are no reflections coming from these. I'm not convinced that that's a very sound basis for your reverb decisions. But as always, if you are happy with the results: go for it. I'd simply advise to not take anything for granted and decide for yourself with your own set of ears.


----------



## Patrick (Jan 12, 2016)

muk said:


> But a building usually has more surfaces than just a back wall. A floor, for example. All instruments are approximately at the same distance from the floor. And if they aren't exactly leaning on one of the sidewalls - or sitting on a chair that is dangling mid-air from the ceiling, for that matter - the closest surface will always be the floor. The delay for this will be somewhere between 3 to 6ms, give or take. And that's the very first ER, and the same for all instruments. I don't understand why this reflection shouldn't count.



My guess would be that the ERs from the floor "don't count" because they are not reflecting the sound towards the listener in a significant way, compared to the back of the stage. The same would go for the sides of the room, to a slightly lesser extent. I am just thinking out loud here, so please, anybody, correct me if this reasoning is utter nonsense 

Thank you for this informative thread by the way, I am learning a lot. And thank you KenK and Beat for explaining so much in such a comprehensive way and pointing to even more sources!


----------



## KEnK (Jan 12, 2016)

muk said:


> What KEnK wrote is very much the state-of-the-art reasoning behind using predelay. However, thinking it through, I cannot help finding it rather askew.


Hah! Funny muk- 

What you say is also entirely true, in the _real_ world.
But in fact there are no walls, floors or ceilings in a stereo speaker set up,
just as there are no space ships in a sci-fi movie.
It's a simulation, a trick of math.

Let's for a second think about the Haas effect.
Where a single minuscule delay will seem to cause a center image to move left or right.
Higher delay values will cause the image to appear further afield.
In this case the "movement" is left/right.
But it does show a relationship between small delays and stereo image. (Position)
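
The effect is easy to reproduce in a few lines of numpy (a toy click signal; 1 ms is just an example value in the typical Haas range):

```python
import numpy as np

SR = 48000

def haas_pan(mono, delay_ms):
    """Delay one channel slightly; the image pulls toward the earlier channel."""
    d = int(round(delay_ms * SR / 1000.0))
    left = mono
    right = np.concatenate([np.zeros(d), mono])[:len(mono)]
    return np.stack([left, right])

click = np.zeros(1000)
click[0] = 1.0
stereo = haas_pan(click, 1.0)  # right lags by 1 ms (48 samples): image shifts left
```

Both channels carry the same level; only the tiny time offset moves the image.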

I wouldn't discount this theory just because the confines of only working
within L/R parameters don't take into account 3-dimensional reality.
We simulate it as best as we can.
There are no violins or pianos being played when you hear a recording.
That is an illusion too. (Even when real instruments were recorded)

I did say in my post that I think the near/far illusion has as much to do w/ ER amount
as this ephemeral pre-delay effect. Whatever works, works.
Lotsa people are doing this now.
I'm just willing to explain this theory and link to posts and vids to others who want to know about it.

Now if I can get my virtual hyper-drive to function I'll have a good day

k


----------



## muk (Jan 12, 2016)

Hey KEnK, no need to justify your post. I think you gave a very nice summary of the state of the art, and I didn't mean to attack you, or anybody using that technique. It's just that, thinking it through, I couldn't really follow the logic behind it. A while ago I thought about opening a thread about that, but decided against it because I thought it wouldn't be of much interest. Experimenting with different settings, all it did for me was make everything sound wetter overall. But personally I didn't perceive any increase in depth. Then again, that might be different for different people, and I am not using the 3-depths approach anyway. As I wrote, experiment with the settings, and go with whatever sounds good to you. As you wrote, many people go with the approach you detailed, so it seems it does sound good to many ears.


----------



## KEnK (Jan 12, 2016)

No offense taken my brother-
You made an interesting point worth discussing 
By all means- begin another audio science thread.
That's why we're here

k


----------



## Beat Kaufmann (Jan 14, 2016)

"It counts the result" and not the way how it was produced... this I can accept.

But then only use percussive signals to test the produced depths.
Those transient signals will quickly expose wrongly produced depths as a sequence of echoes (produced with predelays and unsuitable algo reverbs).
I do not say that all algo reverbs do a bad job here (check out EAReverb2 for example).
But as soon as you use a predelay (echo) for getting a sort of depth, you depart from the way depth is produced in reality (see the post of muk above).

As mentioned before: there are some IRs (not all) which were originally recorded very far from the sound source. Most of them can simulate distance without using annoying echoes.
How to find them?
You need to go through all your IRs to find those which can place a sound source far away (check them at 100% wet).
Choose such a "far IR" and only use the first milliseconds of it (fade out to -60dB at 100ms-300ms).
Check it then with a percussion instrument. If you don't get a lot of echoes, you have just found a perfect and natural IR for creating nice depths.
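
That fade-out recipe can be sketched as a simple envelope applied to an IR array (numpy; the 100ms-300ms window and -60dB floor are from the recipe above, the exponential shape of the fade is my assumption):

```python
import numpy as np

SR = 48000

def shorten_ir(ir, fade_start_s=0.10, fade_end_s=0.30, floor_db=-60.0):
    """Keep the start of the IR (the ERs), fade it down to floor_db over
    the given window, and silence everything after."""
    out = ir.astype(float).copy()
    i0, i1 = int(fade_start_s * SR), int(fade_end_s * SR)
    # exponential (log-linear) fade from 1.0 down to 10**(-60/20) = 0.001
    out[i0:i1] *= np.logspace(0.0, floor_db / 20.0, i1 - i0)
    out[i1:] = 0.0
    return out

ir = np.ones(SR)          # dummy 1-second "IR", just for illustration
er_only = shorten_ir(ir)  # load the result into your convolution reverb
```

With a real far-recorded IR in place of the dummy array, the result keeps the distance cues of the ER portion and leaves the tail to the algo reverb.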

*If all went OK you should get such a result: Depth-Example Close to Far*
I mixed from *Dry 100% ER-Wet 0%* .... *Dry 0% ER-Wet 100%* and *added a bit of algo reverb for the tail over all* (fixed around 20% wet).
Please try to reach this with the "Ruessmann technique"...

Further: You can enhance far-placed instruments by using an EQ to damp the high (and possibly also the low...) frequencies even more.

*So back to the Title of this Thread:*
Use Valhalla or Breeze - both are super reverbs for providing the tail over all. Search for a convolution reverb (IR) which can produce the depth(s). Maybe you will find one within your DAW...?

Beat


----------



## KEnK (Jan 14, 2016)

Beat Kaufmann said:


> I do not say that all Algoreverbs are doing a bad job here (checkout EAReverb2 for example).
> But as soon as you use a predelay (echo) for getting a sort of depth you leave the way how "depths" are produced in reality (See the post of MUK above)...
> 
> ...You need to go through all your IRs for finding those which can produce a soundsource far away (Check them with 100%Wet).
> ...


Beat-

Thanks for your input here. I'm always trying to learn something.
I never thought of tapering an IR length to use it as an ER.
I always just looked for something on the short side naturally.
I started using algos for ERs because of cpu issues,
but I'll look at this again.

@ muk 

k


----------



## emid (Jan 14, 2016)

Beat has mentioned shortening IRs in some other thread too - completely forgot. But now there's a 'how to', which is even better. Thank you.

@Beat Kaufmann forgive me, but have you seen my other post regarding the built-in Reverberate ERs please? I'd really appreciate any thoughts, because this is what I currently have and I may be missing some great potential in this plugin. There is also a bank of new 'Fusion IRs' claimed to create a lively, organic reverberation that is impossible to achieve with traditional static convolution.


----------



## Beat Kaufmann (Jan 14, 2016)

emid said:


> ... forgive me but have you seen my other post regarding built in Reverberate ER please? ...


Oops, yes... I saw it, but had no time for an answer then, and forgot... I'm sorry. 
I do not own Reverberate... As I mentioned: with a convolution reverb it is not mainly the plugin that makes the difference, it is its IR library. So I really do not know what you get with Reverberate, but I see that you could obviously shorten the IRs with the "Envelope Shape" if you only want to use the first part of an IR and do the rest with a nice algo reverb.
If the IRs are suitable for producing nice depths, you can check them out as I wrote in my post above.
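The envelope-shortening trick can be sketched in code. This is a hypothetical illustration (the sample rate, fade points, and the noise stand-in for an IR are my assumptions, not anything from Reverberate itself): fade the IR linearly in dB down to -60 dB between 100 and 300 ms, then truncate, leaving only the early-reflection part.

```python
import random

SR = 8000  # small sample rate to keep the sketch light (an assumption)

def shorten_ir(ir, fade_start_ms=100, fade_end_ms=300, sr=SR):
    """Keep only the ER part of an IR: leave the first 100 ms untouched,
    then fade linearly in dB down to -60 dB by 300 ms, and truncate there."""
    start = fade_start_ms * sr // 1000
    end = fade_end_ms * sr // 1000
    out = list(ir[:end])
    n_fade = end - start
    for i in range(n_fade):
        db = -60.0 * (i + 1) / n_fade        # 0 dB -> -60 dB across the fade
        out[start + i] *= 10.0 ** (db / 20.0)
    return out

random.seed(0)
ir = [random.gauss(0, 1) for _ in range(SR)]  # 1 s of noise as a stand-in IR
er_only = shorten_ir(ir)
# the last fade sample sits a factor of 10**(-60/20) = 0.001 below the original
```

The remaining tail is then supplied by an algo reverb, as described in the posts above.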

Beat


----------



## emid (Jan 14, 2016)

Thank you very much, Beat. I mentioned Reverberate because if you are familiar with it, it is easy to explain a particular function. For example, when you say fade out to -60 dB at 100 ms-300 ms: if I understand correctly, the -60 dB is achieved by lowering the "sustain" of the ADSHR envelope. But thank you anyway. I will play around with it and see what I can get.


----------



## creativeforge (Jan 14, 2016)

Excellent resources and tutorials shared on this thread!


----------



## Beat Kaufmann (Jan 15, 2016)

emid said:


> -60dB...


-60 dB (a factor of 1/1000) is an often-used value in connection with reverbs. A reverb time of 2.3 s (RT60) means that the fading tail has passed -60 dB at 2.3 s...
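For reference, the arithmetic behind those numbers (a small sketch; the exponential-decay model is the standard idealization of a reverb tail, not something specific to any plugin here):

```python
import math

# -60 dB corresponds to an amplitude factor of 1/1000
factor = 10 ** (-60 / 20)                      # 0.001

# RT60 = 2.3 s means the tail has dropped by 60 dB after 2.3 s.
# For an ideal exponential decay, amplitude(t) = 10 ** (-3 * t / rt60).
rt60 = 2.3
def level_db(t):
    """Level of the decaying tail, in dB relative to its start."""
    return 20 * math.log10(10 ** (-3.0 * t / rt60))
```

So halfway through the reverb time the tail is 30 dB down, and at t = rt60 it is exactly 60 dB down.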




emid said:


> ...when you say fade out to -60dB at 100ms-300ms...



I used the following setup for checking out Reverberate LE...






The "church-small01" IR would do a good job. But unfortunately the release of the "Envelope Shape" does not behave as it should (the violet curve would be great). It fades out faster and faster, so that the ER part seems to stop suddenly. This does not sound very good...
...which means Reverberate LE is not very useful for our task of "producing natural ERs".

A plugin which has both in it - just to show "my" reverb concept once more - is the VSL Hybrid Reverb:






If you do not own it - no problem: you can use any convolution reverb with a useful IR and any algo reverb for producing the tail. 
BTW: the tail in the image above could be delayed a bit more, so that it forms while the ERs are fading out.

_Beat_


----------



## emid (Jan 15, 2016)

Many thanks again, Beat, for taking the time to download the demo for the explanation. The image clears up a lot of confusion, and now anybody who has this plugin can experiment with different IRs. Highly appreciated.

In LE you can't modify the curves, which gives a blunt, snapping type of sound that is of course unnatural. The full version, however, gives you the option to modify the curves. Two curves are available. Have a look please (I chose the curve that you described above):









Just for the sake of interest, below is an image of the "ER" system I was talking about. Besides ER, it also has a separate "Tail" option, available through the drop-down menu. I think ER and Tail are available in LE too (not sure). Please look at the parameters: you can even increase/decrease the size of the hall, position the instrument, and play around with the distance, all with respect to the ERs. This is what I was referring to.


----------



## KEnK (Jan 15, 2016)

Beat-

Another question, if you would be so kind.
When using 3 ERs for the near, mid and far effect-
Are you using the exact same ER for all 3?
Varying only the predelay and Dry/Wet balance using the reverb send?
or are you perhaps altering the ERs? (making them a little longer or denser perhaps?)

I bought Reverberate a long time ago because I read it was light on cpu.
Never used it much as I became very familiar w/ Space Designer.
I was also able to get an ER curve close to the purple line in your diagram.

I'm very intrigued by your last post

Thanks

k


----------



## MarcelM (Jan 15, 2016)

Sorry to interrupt here, but I have a question about Reverberate: can you actually disable the tail completely? If yes, how do you do it?

A friend of mine is using HOFA reverb and he can disable the tail there. I want to buy one of those two and cannot make my decision yet. 

Thanks to Beat at this point for all the useful information he is writing here in the forums. I have really learned a lot already!


----------



## emid (Jan 15, 2016)

Heroix said:


> can you actually disable the tail completely? if yes, how do you do it?



Yes, you can. From the drop-down menu on the right side (see below) you can choose the whole IR (File), the ER, or the Tail separately. Simply find a suitable IR and choose ER. Reverberate will load the ER of the chosen impulse and disable its tail.







Also check my other post (#33), where an ER is loaded without its tail.


----------



## Beat Kaufmann (Jan 15, 2016)

KEnK said:


> Beat-
> 
> ...Are you using the exact same ER for all 3?
> Varying only the predelay and Dry/Wet balance using the reverb send?
> ...



Hi KEnK
Listen once more to my marimba example above. 
As I mentioned, I had an algo reverb (fixed at 15-20% wet) so that it fits the dry/ER signal(s) in the master output channel. 
With its tail, this reverb nicely "glues" together all the different signals coming from the different depths. 
This is a huge advantage compared to using tails already in each depth section. 

To create different depths, you use different ratios between dry and ER. 
One possibility is to prepare such different depths in different bus channels (also called group channels in Cubase). 
Adding EQs or compressors or whatever into each depth will help to enhance that depth. 
Four depths for large orchestras with choir or organ is not too many; for a Baroque setup you can probably work with just two depths.
Having such different depth groups, you can easily route each instrument through its corresponding depth. 
Important: do not use the Send function! 

And now to answer your question: with *one* good IR you can build all the depths from close to far, *without using any additional predelay* (which produces echoes with percussion instruments). Observe the first 300 ms of IRs and you will sometimes find a sort of echo already built in. These are the real early reflections. They are naturally recorded, and they are what finally gives us the natural depths... 
Nevertheless, there are IRs which are not suitable for this job - for example, IRs from algo reverbs, plate halls, bathrooms, etc. 
Therefore you need to select one which can do a good job in our case. 

Beat

_
Keep in mind that this reverb concept is not the Holy Grail. But it is one way of getting transparent mixes with samples, because most such mixes suffer from "not enough different depth". _
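A rough sketch of this bus layout in code. The group names and gain values here are my own illustrative choices, not Beat's exact settings, and the algo tail is left as a placeholder for whatever reverb you use:

```python
def mix_depth(dry, er, dry_gain, er_gain):
    """One depth group: blend the dry signal with its (pre-convolved) ERs."""
    return [dry_gain * d + er_gain * e for d, e in zip(dry, er)]

# dry/ER ratio per depth: close is mostly dry, far is mostly ER (illustrative)
DEPTHS = {"close": (1.0, 0.1), "mid": (0.6, 0.6), "far": (0.1, 1.0)}

def master_mix(groups, algo_tail, tail_wet=0.2):
    """Sum the depth groups (routed via outputs, not sends), then add a single
    common tail so all depths decay in the same room."""
    n = len(next(iter(groups.values())))
    bus = [sum(g[i] for g in groups.values()) for i in range(n)]
    tail = algo_tail(bus)
    return [(1 - tail_wet) * b + tail_wet * t for b, t in zip(bus, tail)]

# toy signals: an impulse as the "dry" source, a made-up ER pattern
dry = [1.0, 0.0, 0.0, 0.0]
er = [0.0, 0.5, 0.3, 0.1]
groups = {name: mix_depth(dry, er, dg, eg) for name, (dg, eg) in DEPTHS.items()}
out = master_mix(groups, algo_tail=lambda bus: bus)  # identity stands in for the tail
```

In a DAW this corresponds to routing each instrument's output into one depth group, summing the groups, and putting the algo reverb on the master (or a submix bus) at a fixed wet amount.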


----------



## KEnK (Jan 15, 2016)

Thanks again Beat for your detailed response!

Some experimentation is in order as I've always tended to route things a little differently.
I do a lot of "smaller group pieces"- jazz, funk, latin, world, rock, etc.
So I tend to group instruments by function rather than distance.
A bass & perc group, a guitar or horn group, etc.
I've always used dedicated channels for various reverbs,
but I've always used them 100% wet,
varying the amount w/ the main fader and the sends from each instrument.

I will try out what you propose here-
using inst groups w/ variable wet/dry balance for stage position.
I suspect I may end up liking that better than the way I do it.

Thanks for all your help.

k


----------



## emid (Jan 16, 2016)

Thank you again Beat.

As Beat says, "send" should not be used, which leaves only one routing option: the "output" of the channels. So the proposed routing pattern would be like this:

Create FX tracks with the same ER but different dry/wet settings - say, 3 channels (ER dry/wet settings for close, mid, and far). The outputs of the instruments are then routed to their respective FX channels depending on their position on the stage. A final "tail" with a 15-20% mix level sits on either the master or a submix bus, taking all the signals from the FX channels again through the FX outputs. No pre-delay is used at any level.

Unless the routing is different from what I understood, this seems to be a pretty realistic setup. Thanks to Beat - the magic lies in the ER dry/wet setting.


----------



## Beat Kaufmann (Jan 16, 2016)

Hello again
The "reverb concept" which I suggested here is of course meant more for orchestral mixes. 
Nevertheless, you can draw a stage layout for any jazz or other music ensemble as well. 
Then you will have at minimum a front and a back - which means two depths. 
To keep instruments in the front, use only a little bit of tail (without ERs, if possible), 
and for instruments more in the back you could create a "depth". 
In such situations it can be OK to keep the bass out of any room treatment (dry), to keep it as powerful as possible.

Here is an example (a real recording) of such a jazz small-room mix (Hoff Ensemble, from the album "Quiet Winter Night"). 
As you can make out from this beautiful recording, we have a "close" and a more "far", and the acoustic bass is integrated into the room.

With rock and other music styles, we are used to audio tracks where all the instruments are mixed in "their own rooms", so to speak: the guitars have their own (spring) reverbs, and the EP and the drums as well... there is no common room - a matter of taste, of course, and of what we have been used to hearing for years. https://vimeo.com/92351315 (Example)

All the best
Beat


----------



## KEnK (Jan 16, 2016)

To Beat and Emid-

The last 2 posts make things crystal clear.
I've understood the routing and the "theory" behind this idea,
but there were a few missing pieces to the puzzle.
Pretty sure I have them now. 

Most interesting is the idea of _not_ using pre-delay when choosing convolution for ERs.

I've noticed this thread has had more than 1300 views!
Great information here.

Thanks once again

k


----------



## MarcelM (Jan 16, 2016)

Great stuff, but I've got a question: 

what would you do if you were mixing a drier library with a wet one using this concept?

Putting the dry library into a space first comes to mind, but I'm not sure how people solve this in general when creating depth.

edit:

is it also okay to use reverb sends in the depth busses for the tail and then one reverb on the master for final glue?


----------



## emid (Jan 16, 2016)

Heroix said:


> is it also okay to use reverb sends in the depth busses for the tail....



No, use outputs only (as per Beat's reverb model).



Beat Kaufmann said:


> Important: Do not use the Send function!


----------



## MarcelM (Jan 16, 2016)

Hmm, okay. I thought he was only talking about the ERs.

So what would be the right way if you work with dry and wet libraries? Put them in a space first? Puuuh


----------



## Chandler (Jan 17, 2016)

I've been reading this forum for a while, but this is my first post. I've been experimenting with different ways to add depth using reverb also. I think I've found a way that might help others. I'll add some samples below and any feedback is appreciated. 

https://od.lk/d/Nl82OTgyNDEyNV8/depthtest.mp3
This uses WIVI Band, so it is completely dry, with no sense of depth at all. I used the same phrase 4 times (sorry, I'm a terrible keyboard player): close, medium depth, far depth, and finally completely dry. In retrospect I could have made the differences in depth more dramatic, but let me know what you think.

https://od.lk/d/Nl82OTgyNDA0NV8/drumsdepth.mp3
I automated the plugins here, so it starts close, then gradually moves farther back, and then returns to the original position. 

This is a different method from the one recommended in this thread (although that method obviously works), and I believe it uses much less CPU. Please let me know what you think of the sound.


----------



## Beat Kaufmann (Jan 18, 2016)

Heroix said:


> ... so what would be the right way if you work with dry and wet librarys? put them in a space before? puuuh



Puuuh, yes, not an easy situation - especially not with several different wet libraries. It might work to adapt dry samples to fit into a wet library, but the other way round... puuuh. In any case, a reverb providing the tail over all (in the output channel) will glue all the libraries together a bit. But it could be a very "wet" matter in the end. 
Solutions:
- Save money 
- Search for good (dry) MIDI sounds
- Change your hobby: what about stamp collecting? 

Beat


----------



## nas (Jan 21, 2016)

I have both Breeze and Valhalla Room. They are OK, but not my favorites. Perhaps you might consider looking at Ircam Verb Session - it is a really nice algo reverb that works very well for virtual orchestras. 

http://www.fluxhome.com/products/plug_ins/ircam_verb_session-v3

That and a good convo will get you a lot of mileage.


----------



## devonmyles (Jan 22, 2016)

A big thanks, Beat Kaufmann - very helpful posts in this thread.
OT: I also found the tutorials on your website regarding VSL
to be very helpful!


----------



## musicalweather (Jan 22, 2016)

Wow, thank you all for all this information and your thoughts on this topic. Very helpful! (I'm sorry I haven't replied sooner; have had a busy schedule in the last week or so.) I'm planning to reread it all more carefully this weekend and try out some of the techniques that have been suggested.

As far as my choice, I've been doing some pretty thorough but _very_ unscientific and subjective testing of these reverbs. I think that if I test them thoroughly enough with a wide variety of virtual instruments, I'll get a good _general_ sense of what these reverbs are like. Here's what I've discovered so far:

Valhalla Room seems too colored for what I want, so I've ruled it out. Right now I'm really on the fence between 2CAudio Breeze and Acon Digital Verberate. Both can achieve a pretty transparent sound, which is what I prefer. Verberate seems a little more transparent. Its preset settings can also be very subtle; sometimes it was hard to hear the effect without turning up the reverb slider. But it seems it would be easy to quickly find a suitable reverb for orchestral/acoustic instruments in particular. Very smooth, very transparent. It works especially well with _dry_ instruments (well, the same is probably true of Breeze). Unfortunately, even the close mic position in EWQLSO is not dry, so you're adding reverb to reverb. But one can get some decent results in that case too.

Breeze and Verberate are similar in their relatively small number of controls, which I like. I don't like to spend a lot of time tweaking. Breeze comes with an excellent manual -- very well written and useful, but daunting in its length. It would take a while to really understand the discussion of each control. Verberate's manual is much more of a quick read. I've only skimmed through it, so I can't say whether it's very helpful or only adequate.

Breeze really seems to shine with instruments like guitar, bass, and synths. Using some of these instruments, I found it could beautifully liven up the sound. Some of its presets can be colored, and it also offers more creative possibilities, going beyond reverb into creative effects. It has many more presets than Verberate. Kinda cool, though ultimately I don't know if I would take advantage of that.

I'm sure there may be some important technical differences between the two, but I'm still learning about how to use each one.

So.... hmmmm. They're actually comparable in price (Breeze is currently on sale for $75; Verberate goes for $99).

To Nas: I took a look at Session Verb. Its price is higher than I'd like to go, but I thought I'd download the demo and try it out. Ugh, nothing but aggravation. I followed the procedure to download it and get it authorized through my iLok account. But Digital Performer would not load it, even though I did a re-install of it. VE Pro recognized it, but when I tried to open it, it crashed the program. I've sent a support request to Flux but haven't heard anything back. Gah.


----------



## Snoobydoobydoo (Oct 20, 2020)

musicalweather said:


> K: I would like to try this approach, but I don't know how I would use the convolution reverb I currently have (SIR2) for _ER_s. As far as I know, there's no control of the ERs in SIR2. One can alter the pre-delay.


Diggy di diggy.
I'd like to know this too, with 7th Heaven.
I mean, there's the option to choose different ER patterns, but nothing else.
Convolution reverbs in general don't have many options, or any options at all, to fiddle with the ERs -
just the predelay, as shown in the Carl Ruessmann video on page 1 of this thread.


----------

