# Convo for ERs, Algo for Tails, or vice versa?



## bcarwell (Feb 25, 2016)

As a reverb/delay newb I read somewhere that one practice is to employ convolution reverb for the ERs and algorithmic reverb for the tails, or is it vice versa?

Could somebody explain which it is, and why one form of reverb may be better suited for ERs and the other for tails? And is this a common practice, or just something used in specific instances? Is it more common to just stick with one or the other generally?

Tnx for any explanations,

Bob


----------



## Hannes (Feb 25, 2016)

I think every combination is possible as long as you are happy with the sound.
Some people only use algorithmic or only convolution reverbs, and some people use both - there's not really a general rule, I think it's a matter of taste...
But it also depends on which libraries you have - if they are already recorded at the right position, you don't really need ERs...

I mostly use an algorithmic reverb for ERs (if the samples are very dry) in combination with QL Spaces. And sometimes I add another algorithmic reverb if I need more tail...


----------



## KEnK (Feb 25, 2016)

Read this excellent thread.
Lots of good info, especially from Beat Kaufmann.
It changed my opinion.
http://vi-control.net/community/threads/considering-vahalla-room-or-breeze.50957/

k


----------



## muk (Feb 25, 2016)

As Tastenklopfer wrote, any combination is possible. The general wisdom was to use convolution for ERs and algorithmic reverb for tails. That's how VSL Hybrid Reverb works, for example.
The reasoning behind it is that a convolution IR is a snapshot of a room, kind of like an aural photograph. Therefore you can use it for the ER part to give the impression of the source being in that room. Remember that the ERs contain a lot of information about a room, like size, damping qualities etc.

On the other hand, a convolution IR is static, again like a photograph. It captures the response of the room at one exact moment, but it doesn't change over time, and it acts completely linearly with the volume of the source. A real-world tail is a bit more complex than that, though, which is where the modulation of algorithmic tails comes in. It adds some variance to the tail so that it doesn't sound as static.
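The two behaviours described here can be shown with a toy sketch (pure Python, not production DSP; all numbers and parameter names are made up): direct convolution always applies the same static IR and scales perfectly linearly with input level, while an algorithmic tail can modulate over time.

```python
import math

def convolve(dry, ir):
    """Direct convolution: every input sample triggers the same, fixed IR."""
    out = [0.0] * (len(dry) + len(ir) - 1)
    for n, x in enumerate(dry):
        for k, h in enumerate(ir):
            out[n + k] += x * h
    return out

def algo_tail(dry, delay, feedback, mod_depth, length):
    """Feedback delay with a slowly modulated gain - a crude stand-in
    for the modulation that keeps algorithmic tails from sounding static."""
    out = list(dry) + [0.0] * (length - len(dry))
    for n in range(delay, length):
        g = feedback * (1.0 + mod_depth * math.sin(0.3 * n))
        out[n] += g * out[n - delay]
    return out

ir = [1.0, 0.5, 0.25]            # a static 'snapshot' of a room
print(convolve([1.0], ir))       # [1.0, 0.5, 0.25]
print(convolve([2.0], ir))       # [2.0, 1.0, 0.5] - perfectly linear in level
```

Doubling the input exactly doubles every sample of the convolved output, whereas the modulated tail's response varies over time, which is the whole distinction in miniature.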

That's the theory behind it, and all sounds reasonable and fine. In practice however, you can do it exactly the other way around and get completely valid results. Or you can go with convolution only, or algorithmic only. It all depends on what you want to achieve, and on your particular tastes.


----------



## re-peat (Feb 25, 2016)

Have to disagree with you there, KEnK. Very little useful info in that thread, in my opinion. If anything, plenty of misinformation and confusing pseudo-information.

- - - - - - - 

As for the opening question: the technique of combining different types of reverbs for different parts of the reverberation — usually adopted (rather thoughtlessly and somewhat foolishly, it always seemed to me) as part of the quest for more realism in mock-ups — is, in my view, complete silliness. People who swear by it often have no idea what they’re doing, because if they did, they’d know that it makes no difference whatsoever — certainly not to the degree of ‘realism’ of a mock-up — what type of reverb you’re using. (To anyone who disagrees: prove it, please.)

Convolution-based ER’s, or tails, don’t bring more realism to a mock-up, nor are they better at suggesting depth than algorithmically-generated ER’s. And the opposite is also true: algorithmic reverbs aren’t intrinsically better or more capable either.

Realism or depth isn’t dependent on the type of reverb, it’s dependent on how one uses reverb and, much more so, on dozens of other far more important choices and decisions most of which have nothing to do with reverb whatsoever (orchestration, balancing, choice of libraries, programming, a good knowledge of the tools you’re using, etc. …)

A solid understanding of what ER’s are, how they work and what they contribute to the illusion of space, is MUCH more important than whatever type of reverb you choose to generate these ER’s with. Likewise, a good insight into how sound and its surrounding space interact, and how that interaction needs to be manipulated in order to suggest various degrees of depth, will have much more of a positive influence on the results you achieve than any choice of reverb(s) in itself might do.

It’s simple really: get yourself one or two good reverbs — there’s plenty of them about, and it doesn’t matter what type —, really get to know them inside out (experiment extensively with every single parameter until you know exactly what they all do and don’t do) and get completely comfortable using them, and you have everything you need to address every possible spatial challenge that a mock-up might ever present you with. And you’ll be able to do it in a musically and technically wholly satisfying way. 

Furthermore, an added bonus of such knowledge — the knowledge that makes you the master of your tools (as opposed to the ignorance which results in the opposite) — is, that it’ll allow you to arrive at good solutions in the simplest and most efficient way. And a wise foundation, in my opinion, to build this knowledge on, is understanding and accepting the extent of the power that reverb has in enhancing a mock-up: what it can do, but just as important, what it can’t.

_


----------



## rayinstirling (Feb 25, 2016)

Wise words indeed Piet but.........................I'm still a sucker for spending my money on something new even when it's an emulation of something old


----------



## tonaliszt (Feb 25, 2016)

I've actually been having good results with a synthesised IR for the early reflections. I don't know if it's the best way, but I haven't heard about anyone else doing it yet.


----------



## emid (Feb 25, 2016)

I would strongly suggest reading the link provided by @KEnK and trying it yourself. There's lots of solid information with practical examples there, provided by @Beat Kaufmann. There are many methods to achieve a good result, and from time to time they differ. One of them is what Beat explains in that thread.


----------



## KEnK (Feb 25, 2016)

re-peat said:


> Have to disagree with you there, KEnK. Very little useful info in that thread, in my opinion. If anything, plenty of misinformation and confusing pseudo-information.


Luv ya Peat ! I do 

I agree w/ what you've said in your post- re: knowledge being the primary thing.
But there were some things I learned in that thread.
Or at least things worth experimenting with.
Never thought about shortening a longish IR to make a reasonable ER.
The idea of using the same ER for various distances-
Also not using predelay when using a convo ER.
The importance of using a mixed wet/dry signal when routing it that way.
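One of those ideas, shortening a long IR to get a usable ER, can be sketched in a few lines. This is a hypothetical illustration, not Beat's actual method; the sample rate, the 80 ms early-reflection window and the 10 ms fade are all assumptions.

```python
SAMPLE_RATE = 48000  # assumed

def truncate_ir_to_er(ir, er_ms=80, fade_ms=10):
    """Keep only the first er_ms of an impulse response, with a short
    linear fade-out so the truncation doesn't click."""
    n_keep = int(SAMPLE_RATE * er_ms / 1000)
    n_fade = int(SAMPLE_RATE * fade_ms / 1000)
    er = list(ir[:n_keep])
    # fade the last fade_ms linearly down towards zero
    for i in range(max(0, n_keep - n_fade), len(er)):
        er[i] *= (n_keep - i) / n_fade
    return er

long_ir = [1.0] * SAMPLE_RATE       # stand-in for a 1-second hall IR
er_ir = truncate_ir_to_er(long_ir)
print(len(er_ir))                    # 3840 samples = 80 ms at 48 kHz
```

The truncated IR could then be loaded into a convolution plug-in as an ER-only impulse; whether that sounds convincing is exactly what the thread debates.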

You may agree w/ none of that, and I may very well (after enough experimentation)
agree w/ you,
but it does seem like there are plenty of ideas to chew on in that thread.

Disregarding the algo vs convo argument,
curious if you find the routing concepts flawed.

thanks

k


----------



## re-peat (Feb 25, 2016)

KEnK said:


> curious if you find the routing concepts flawed


I do, I’m afraid. On top of which I also read very little in any of Beat’s contributions (in that thread) that I subscribe to.

My main objection to all of it is that it disregards the intrinsic artificiality of a mock-up mix — which, in my view, is the worst place you can start from on your way to a good-sounding mix — and also that it is foolishly inconsistent: on the one hand demanding photographic realism for certain aspects of the mix yet, on the other hand, totally ignoring the all-pervading fakeness of most other aspects of it.

(I'm reminded of one of the responses in the MIR-vs-Origami-vs-SPAT-vs-VSS thread: evaluating the SPAT-mix, someone said: “Wonderful sound, but I think it does not necessarily sound like reality.” To which my first reaction is: “And the samples do?”)

Anyway, back to that video. Apart from the very unhealthy fact that you end up with certain amounts of direct signal in at least three different spots (100/0-mix in the instrument’s track, 50/50-mix in one or more of the ER buses, 60/40-mix in the tail bus), resulting in a near uncontrollable balance between dry sound and the room response, the whole thing also strikes me as unnecessarily and even dangerously complicated. Perhaps able to achieve satisfying results, yes — provided it’s a very static mix of instruments that all have similar spatial characteristics, to begin with — but certainly no more so than far simpler approaches can give you as well.

Say you want to pan your clarinets? How do you pan the direct signal that’s part of the ER-bus and the tail-bus? You can’t, can you?

And since the sends are pre-fader, any level adjustments you make on the instruments’ tracks won’t be reflected in a corresponding level change of dry signal in the ER-bus or the tail-bus … 
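The pre-fader objection is easy to put in numbers. The toy mixer below is an illustration of the general point, not any DAW's actual engine: with a pre-fader send into a 50/50 wet/dry ER bus, pulling the channel fader down changes the direct path but not the dry copy sitting inside the bus, so the overall dry/wet balance drifts.

```python
def channel_output(sample, fader, send_level, bus_dry_mix):
    """Total dry signal reaching the master from one channel."""
    direct = sample * fader            # post-fader direct path
    send = sample * send_level         # PRE-fader send: ignores the fader
    bus_dry = send * bus_dry_mix       # dry portion inside the 50/50 ER bus
    return direct + bus_dry

full = channel_output(1.0, fader=1.0, send_level=1.0, bus_dry_mix=0.5)
cut = channel_output(1.0, fader=0.5, send_level=1.0, bus_dry_mix=0.5)
print(full, cut)   # 1.5 1.0 - the fader halved the direct path,
                   # but the bus's dry copy didn't move at all
```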

Not to mention the fact that the video also shows little understanding of what ER’s actually are and how their presence, definition and behaviour differ, significantly so, depending on the distance between source and listener. (It’s even more complicated than that actually, because the intensity and character of ER’s is also very much source dependent — percussion, for example, generating an entirely different ER-behaviour than, say, strings or woodwinds. Which is another thing that the technique, as demoed in that video, doesn’t allow for.)

Or, say you bring in an instrument that already has some ambience baked into its samples? What do you do then? How do you make it fit? It may remain smooth sailing if you stick to VSL instruments exclusively, but what if you bring in, say, a Berlin or an 8dio Claire woodwind? Or a Cinesamples horn section? Or a Spitfire harp?

And, let’s say, halfway through your mix, you want to compare different sizes of room/chamber/hall, to find the one that works best for your piece (or to try and match your reverb with the baked-in space of some more ambient library which you suddenly decided to use): you’ll not only have to adjust the size in four different instances of your reverb plugins (and you’ll need to adjust the length of the tail as well), but you’ll have to adjust the predelay settings too (since the length of the pre-delay is partly determined by the size of the venue). That’s dozens of parameters that need to be carefully adjusted before you can do something as simple as comparing one size or type of space with another …

And these are just the first things that spring to mind watching that video. Again, I’m not saying that it is impossible to arrive at satisfying results with that method, I’m simply saying that I find it a depressingly inflexible formula — of a complexity (and, paradoxically, rigid bluntness at the same time) which is, given the context, never justified by the results it yields — and moreover fraught with tedious impracticality, and also inviting all sorts of sonic problems that something as problematic-in-itself as a mock-up already is, can well do without.

_


----------



## Guy Rowland (Feb 26, 2016)

What Piet said. Sorry to say, that video is a shocker.

Routing-wise, I still keep it stupidly simple - one bus for ER, one for tail. That's it. This business of 50/50 wet / dry with sends, or different predelays, is the road to madness. If you want to improve on my basic method (which is surprisingly flexible), go for VSS / MIR / SPAT and do it in a properly controlled way. Being able to easily adjust the parameters from everything from ambient to anechoic sources - and then mix with them - is key.
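That two-bus routing reduces to very little arithmetic. A minimal sketch, with made-up instruments and send levels: each track keeps its dry signal on its own fader and sends post-fader to one 100%-wet ER bus and one 100%-wet tail bus.

```python
def mix(instruments):
    """instruments: list of (sample, fader, er_send, tail_send) tuples."""
    dry = er_bus = tail_bus = 0.0
    for sample, fader, er_send, tail_send in instruments:
        post = sample * fader
        dry += post
        er_bus += post * er_send       # post-fader sends track level changes
        tail_bus += post * tail_send
    # er_bus and tail_bus would each feed a 100%-wet reverb; we just sum here
    return dry, er_bus, tail_bus

dry, er, tail = mix([(1.0, 0.8, 0.3, 0.2),    # a dry library: some ER + tail
                     (1.0, 0.8, 0.0, 0.1)])   # a wet library: a touch of tail only
```

Because the sends are post-fader, riding an instrument's fader scales its contribution to both buses proportionally, so the dry/wet balance stays put as you mix.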

As for convo / algo, again I find it makes little difference - there's good and bad examples in both. Practicalities like the number of instances, CPU-use, flexibility of control can be of greater consideration.


----------



## dreamnight92 (Feb 26, 2016)

I always use convo for ER and algo for tail. This works nicely for me.


----------



## re-peat (Feb 29, 2016)

One couldn’t wish for a better example of everything that can and, Murphy-wise, will go wrong as a result of too complex a reverb setup (and the “3 ER’s”-technique in particular) than *this mix*.

All the effort that went into this … and for what? To end up with a mix that suffers from severe phase-problems (that unfocused, hollow and constantly strangely whirling chorus-y sound), a mix that has no solid stereo image and that has — and isn’t this particularly sad? — MUCH less depth and scale than the unprocessed, out-of-the-box libraries actually allow for …

_


----------



## Silence-is-Golden (Feb 29, 2016)

re-peat said:


> One couldn’t wish for a better example of everything that can and, Murphy-wise, will go wrong as a result of too complex a reverb setup (and the “3 ER’s”-technique in particular) than *this mix*.


I presume you have notified the creator of that piece, so he knows you are using it in this way in another thread? Also so he can benefit from the feedback he asks for.

I think that would be a respectable thing to do.
Thanks


----------



## Silence-is-Golden (Feb 29, 2016)

Guy Rowland said:


> What Piet said. Sorry to say, that video is a shocker.
> 
> Routing-wise, I still keep it stupidly simple - one bus for ER, one for tail. That's it. This business of 50/50 wet / dry with sends, or different predelays, is the road to madness. If you want to improve on my basic method (which is surprisingly flexible), go for VSS / MIR / SPAT and do it in a properly controlled way. Being able to easily adjust the parameters from everything from ambient to anechoic sources - and then mix with them - is key.
> 
> As for convo / algo, again I find it makes little difference - there's good and bad examples in both. Practicalities like the number of instances, CPU-use, flexibility of control can be of greater consideration.


I am glad you posted this, Guy, because after many trials and errors this seemed to me like the best/simplest direction to go with various wet/dry libraries.
I am still learning how to do it well, and have gladly received some useful feedback from various members here.


----------



## EastWest Lurker (Feb 29, 2016)

I keep it even simpler: one instance of a convolution reverb for each orchestral section, and one instance of an algorithmic verb, for gloss and breathing, that they all send a little to.


----------



## Guy Rowland (Feb 29, 2016)

EastWest Lurker said:


> I keep it even simpler: one instance of a convolution reverb for each orchestral section, and one instance of an algorithmic verb, for gloss and breathing, that they all send a little to.



Actually that's more reverbs than my ultra-light version - I count 5 versus my 2! And IMO that would only work if you only have stuff from the same libraries in each section. If you need to blend, say, VSL's flute with Spitfire's Clarinet, you'd definitely need an ER for the VSL at the very least.


----------



## EastWest Lurker (Feb 29, 2016)

Guy Rowland said:


> Actually that's more reverbs than my ultra-light version - I count 5 versus my 2! And IMO that would only work if you only have stuff from the same libraries in each section. If you need to blend, say, VSL's flute with Spitfire's Clarinet, you'd definitely need an ER for the VSL at the very least.




No, you simply send smaller amounts from a wet library instrument to the verb and greater amounts from the dry libraries.

Anyway, I stay clear of wet libraries generally, no disrespect to their quality.


----------



## Vin (Feb 29, 2016)

EastWest Lurker said:


> I keep it even simpler: one instance of a convolution reverb for each orchestral section, and one instance of an algorithmic verb, for gloss and breathing, that they all send a little to.



And my setup is even simpler: one (algorithmic) reverb on a send (100% wet, of course), and if I use a really dry library (SM/VSL), I'll put an ER reverb (also algorithmic) on an insert. Works great for me - less is definitely more in my case, after spending countless hours with different spatialization and convolution plugins and overthinking it. Good ol' panning + send to reverb bus to taste, and that's it.


----------



## Silence-is-Golden (Feb 29, 2016)

To the last 4 posts: this is a great help!
I was increasingly worried I needed more and more tools, but as my efforts proceed, it turns out to mean fewer and fewer tools - just a few good reverbs used well.
The effort now is to set it up well and listen to how it sounds.

Btw: my easy integration of VSL libs is putting an instance of MIRx on them and voila, wet they have become, placed in position, and ready to be joined easily with other libs.

PS: great to see you are posting again Jay!


----------



## afterlight82 (Feb 29, 2016)

How you bus your reverbs is more a matter of what you need where. I'm set up so I can print dry or wet easily, since I almost always pass my stems to a mixer and want to be able to give them that option quickly, without a headache. Beyond that, where you source the signal from (assuming it isn't changing the signal sent, e.g. post- or pre-EQ, delay or whatever)... who cares, beyond "does it sound good?".

A little bit of delay effect goes a long way, especially with clarifying early reflections.

I've also found of late that unless someone is doing something breathtakingly wrong with reverb, much of what people obsess about in terms of reverb algorithms and space etc. is in fact at least partially, and often entirely, down to _balance_. Same thing with EQ (even more so with EQ). You should be able to produce a good result with either convolution or algorithmic verb and judicious automation and EQ of reverb sends _and_ returns. The reverb algorithm does not the music make.


----------



## Guy Rowland (Feb 29, 2016)

EastWest Lurker said:


> No, you simply send smaller amounts from a wet library instrument to the verb and greater amounts from the dry libraries.
> 
> Anyway, I stay clear of wet libraries generally, no disrespect to their quality.



Truthfully, I don't think that's a good system at all. You should use ER for blur/diffusion and tail for, well, tail. Just adding a lot of tail to a dry library to make it blend with wet isn't a good idea IMO.

However, for those using ER as inserts, that would work great. It's potentially a lot more instances though. Multiple sends to one ER is super-versatile and efficient - the only down side is it screws up your stems, so you'll need to export one stem at a time.


----------



## Silence-is-Golden (Feb 29, 2016)

Guy Rowland said:


> Multiple sends to one ER is super-versatile and efficient - the only down side is it screws up your stems, so you'll need to export one stem at a time.


Are you willing to explain the last part regarding screwing up stems?
Why is that different from the other approaches?
I guess if you want the stems free of any ER or tail you simply disable them?


----------



## Guy Rowland (Feb 29, 2016)

Silence-is-Golden said:


> Are you willing to explain the last part regarding screwing up stems?
> Why is that different from the other approaches?
> I guess if you want the stems free of any ER or tail you simply disable them?



Yes indeed, but that would likely produce lousy stems. Let's say you have that VSL flute and Spitfire clarinet. If you just cut all the ER and tail, the flute will be completely dry, while the clarinet will be bathed in magnificent Air. You can't suck the ambience out of Air, so the only solution really is to export each section one at a time with the reverbs active.

The less CPU-efficient but quick-to-render method would be to have ER and Tail for each section - then you can batch process stems.


----------



## germancomponist (Feb 29, 2016)

I never liked convo reverbs! They sound sterile and unnatural, at least to my ears!


----------



## Vin (Mar 1, 2016)

re-peat said:


> I do, I’m afraid. On top of which I also read very little in any of Beat’s contributions (in that thread) that I subscribe to.
> 
> My main objection to all of it is that it disregards the intrinsic artificiality of a mock-up mix — which, in my view, is the worst place you can start from on your way to a good-sounding mix — and also that it is foolishly inconsistent: on the one hand demanding photographic realism for certain aspects of the mix yet, on the other hand, totally ignoring the all-pervading fakeness of most other aspects of it.
> 
> ...



Great post as always, re-peat. I'd be curious about your reverb(s) of choice and your setup. If I remember correctly, you're a fan of SPAT?


----------



## Hannes_F (Mar 1, 2016)

This whole ER/tail topic has taken on a life of its own over the years. In my perception it started in 2008, when Steven (SvK), a much-valued member with golden ears, wrote a little tutorial about how to colour dry samples with a dose of early reflections:
http://vi-control.net/community/threads/tutorial-applying-early-reflections-to-get-that-sound.9139/

I was inclined then to comment that while this can work in some cases it is certainly not the only way to bliss, but I did not want to be negative since many seemed to like the result. However, note that SvK himself wrote in that very thread "i do not believe in using ER's on libraries with baked in halls ..... you're asking for trouble...", something that has been widely ignored subsequently.

With time more and more people picked that up and published methods, tutorials and even books about the topic of spatialisation and colouring of samples with reverb, especially early reflections and asymmetrical delays. In nearly all of them there is a vast amount of half-true and even false information that keeps being copied over and over.

So one thing to consider: if you feel 'boxed in' while listening to your mix, if the result changes considerably while you move your head a little left and right of your ideal listening position, or if you feel _any_ pressure on your ears, then it is time to mute all the added ERs and check whether this is still the case. If not, then your ripped-out-of-context, generously-and-conflictingly-applied ERs have messed up your mix.


----------



## Guy Rowland (Mar 1, 2016)

Hannes_F said:


> I was inclined then to comment that while this can work in some cases it is certainly not the only way to bliss, but I did not want to be negative since many seemed to like the result. However, note that SvK himself wrote in that very thread "i do not believe in using ER's on libraries with baked in halls ..... you're asking for trouble...", something that has been widely ignored subsequently.



I'd very much agree with that. ER is really for dry samples. I think I have just a smidge on LASS, which is pretty dry, but it's really for the likes of your Sample Modelling, VSL etc.


----------



## Silence-is-Golden (Mar 1, 2016)

Hello Hannes,
Since you commented on a thread some time ago (I believe it was the "ultimate ER question" thread or so), I carefully read a part where you referred to a Lexicon 480L manual, where it was said that using ER's in a "sampled environment" was too much out of context. Mainly because of the walls, floor, ceiling and the placement of instruments relative to them, so much diffusion and so many other reflections occur that it is not reproducible by using ER's within a sample-based setting.
So what you posted here is another addition to the clear message to be drawn from all this (at least that is what I draw out of it): ER's may not be a good path to follow up on.

I am still trying to learn the methods whereby spatial placement of virtual instruments becomes possible, with some good tools and some guidelines, listening to what the results give and adjusting according to what the piece needs.
(There is probably a reason why so many struggle with this part of mixing and mock-ups.)

With the help of another member of this forum I am trying to get to a reasonable sound for LASS with reverb, and I am now using newly purchased IRs from Numerical Sound. You have mentioned before that you also use custom IRs. Do you by any chance use these as well?


----------



## playz123 (Mar 1, 2016)

In the past I experimented with two approaches....the ones outlined by Guy and Jay. Overall I tend to use the one Jay described most often, even though I do use fewer reverbs on occasion. Based on what I like and what I hear, I do prefer convo on ERs and a plate for tails/"gloss". On the other hand, one approach may work most of the time, but it's important to keep in mind it doesn't work in every situation.


----------



## Hannes_F (Mar 1, 2016)

Silence-is-Golden said:


> Hello Hannes,
> Since you commented on a thread some time ago (I believe it was the "ultimate ER question" thread or so), I carefully read a part where you referred to a Lexicon 480L manual, where it was said that using ER's in a "sampled environment" was too much out of context. Mainly because of the walls, floor, ceiling and the placement of instruments relative to them, so much diffusion and so many other reflections occur that it is not reproducible by using ER's within a sample-based setting.
> So what you posted here is another addition to the clear message to be drawn from all this (at least that is what I draw out of it): ER's may not be a good path to follow up on.



Dear Silence-is-Golden,
that would be too much of an absolute, of course. Nothing is just black or just white.
I don't use the Numerical Sound IRs currently in my setup.

I was going to elaborate more on my view but it is again too combative for me in this thread, sorry.


----------



## Guy Rowland (Mar 1, 2016)

Silence-is-Golden said:


> So what you posted here is another addition to the clear message that is to be got from all this (at least this what I draw out of it), ER's may not be a good path to follow up on.



Eh? That certainly wasn't anything like what Hannes posted, as I read it. The issue highlighted - with which I'd strongly agree - is not to add ER to wet samples. Oh, and the notion that ERs aren't suited to samples would be bizarre, at best.

Again, if you're just blending - say - Cinesamples with ProjectSAM with Spitfire, then you likely won't need any ER at all - that process has occurred naturally in the recordings of the samples. If you've got dry samples to blend in, that's a whole different story. Effectively what SPAT, VSS, etc. do (as I understand it) is micromanage the basic tools of pan (and width), ER and tail, with a level of finesse far exceeding my simple method [EDIT - oh, and delay]. I wouldn't recommend throwing out ER as a tool any more than I would either of the others, except when it's understood that that information is already baked into the samples. It's exactly the same (and more obvious) with the pan pot - if it's a violin stereo sample already panned in place, it would be daft to pan it AGAIN in the same direction, you'd just be placing the player in the wings, the poor thing. Exactly the same with ER - don't do it twice, but if it's not done once it's unlikely to sound convincing (in the context of an orchestral recording).
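The double-pan point can be put in numbers with a toy balance control (an illustration only; real pan laws are more refined): applying the same rightward setting twice keeps attenuating the far channel, shoving a sample that was already recorded in position further out than intended.

```python
def balance(stereo, pos):
    """Naive stereo balance: pos in [-1, 1]; attenuates the far channel."""
    left, right = stereo
    if pos > 0:
        left *= 1.0 - pos     # panning right: turn the left channel down
    else:
        right *= 1.0 + pos    # panning left: turn the right channel down
    return (left, right)

sample = (1.0, 1.0)              # stereo sample, centred
once = balance(sample, 0.5)      # (0.5, 1.0): now sits right of centre
twice = balance(once, 0.5)       # (0.25, 1.0): panned AGAIN - in the wings
print(once, twice)
```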


----------



## EastWest Lurker (Mar 1, 2016)

Guy Rowland said:


> Truthfully, I don't think that's a good system at all. You should use ER for blur/diffusion and tail for, well, tail. Just adding a lot of tail to a dry library to make it blend with wet isn't a good idea IMO.
> 
> However, for those using ER as inserts, that would work great. It's potentially a lot more instances though. Multiple sends to one ER is super-versatile and efficient - the only down side is it screws up your stems, so you'll need to export one stem at a time.



Guy, you certainly are entitled to your view. But I like my end result and so do my clients. And when it comes to discussing the aesthetics of audio in a DAW, "should" is the most useless of words. That said, if I used a lot of wet libraries, which I don't, I might have to change my approach, but I stick to mostly dry libraries as I said, and I find my approach works very well with them.


----------



## Guy Rowland (Mar 1, 2016)

Your clients were happy before you got UAD, right? 

There does seem to be a lot of confusion about this - wet libraries DON'T need ER. If you're using primarily the Hollywood Series, that doesn't really need ER either - that sounds similar to LASS in terms of space, you have the ER already but not really any tail (certainly not a long tail). However, if you're also using any completely dry stuff like VSL and sitting them alongside the Hollywood Series....well go on, give it a try. Ease in a little ER and see what happens. What have you got to lose?


----------



## EastWest Lurker (Mar 1, 2016)

Guy Rowland said:


> Your clients were happy before you got UAD, right?
> 
> There does seem to be a lot of confusion about this - wet libraries DON'T need ER. If you're using primarily the Hollywood Series, that doesn't really need ER either - that sounds similar to LASS in terms of space, you have the ER already but not really any tail (certainly not a long tail). However, if you're also using any completely dry stuff like VSL and sitting them alongside the Hollywood Series....well go on, give it a try. Ease in a little ER and see what happens. What have you got to lose?



For orchestral stuff I use essentially 3 libraries: the Hollywood series, the old Sonic Implants Symphony, and Kirk Hunter's Concert Strings & Brass.

I _tried_ the approach you mention and virtually every _other_ approach there is to try. I heard no appreciable benefit with the libraries I use, so I went back to this more simple approach.

What I am still experimenting with, and trying to decide how much I like, is creating depth with the re-micing in the UAD Ocean Way plug-in. There is something there for sure, but I am not nailing it yet.

BTW Guy, and I swear to you, I don't mean this in an unfriendly way or as any kind of attack, but do you have a website? I would love to hear some of what you actually do as I don't remember having heard any.


----------



## Guy Rowland (Mar 1, 2016)

I do, Jay - it's there in the sig. You might wanna jump straight to the Bandcamp stuff though - https://guyrowland.bandcamp.com/


----------



## Silence-is-Golden (Mar 1, 2016)

Guy Rowland said:


> Eh? Certainly wasn't anything like what Hannes posted, as I read it. The issue highlighted - with which I'd strongly agree - is not to add ER to wet samples. Oh, and the notion that ERs aren't suited to samples would be bizarre, at best.
> 
> Again, if you're just blending - say - Cinesamples with ProjectSAM with Spitfire, then you likely won't need any ER at all - that process has occurred naturally in the recordings of the samples. If you've got dry samples to blend in, that's a whole different story. Effectively what SPAT, VSS, etc. do (as I understand it) is micromanage the basic tools of pan (and width), ER and tail, with a level of finesse far exceeding my simple method [EDIT - oh, and delay]. I wouldn't recommend throwing out ER as a tool any more than I would either of the others, except when it's understood that that information is already baked into the samples. It's exactly the same (and more obvious) with the pan pot - if it's a violin stereo sample already panned in place, it would be daft to pan it AGAIN in the same direction, you'd just be placing the player in the wings, the poor thing. Exactly the same with ER - don't do it twice, but if it's not done once it's unlikely to sound convincing (in the context of an orchestral recording).



Yeah, you are absolutely right. In my mind I was thinking about the 3-ER method that I abandoned.
So yes, ER's are useful where needed and not to be discarded. And dry libraries are where they may be best applied, with care.

And if I take on board the various indications put here by members regarding ER's and the 'reality' of working with VIs, I can put into perspective my earlier 'blissful ignorance' about emulating reality with something like a sampled instrument.


----------



## EastWest Lurker (Mar 1, 2016)

Guy Rowland said:


> I do, Jay - it's there in the sig. You might wanna jump straight to the Bandcamp stuff though - https://guyrowland.bandcamp.com/



Thanks Guy, now I have a frame of reference for your views.


----------



## Guy Rowland (Mar 1, 2016)

EastWest Lurker said:


> Thanks Guy, now I have a frame of reference for your views.



(psst... that link has been there for years you know. And I'm pretty sure you said awfully nice things about a track I did once upon a time  )


----------



## EastWest Lurker (Mar 1, 2016)

Guy Rowland said:


> (psst... that link has been there for years you know. And I'm pretty sure you said awfully nice things about a track I did once upon a time  )



It would not surprise me at all.


----------



## Altine Jackson (Mar 1, 2016)

Guy Rowland said:


> (psst... that link has been there for years you know. And I'm pretty sure you said awfully nice things about a track I did once upon a time  )



Yes, I thought reading this page gave me deja vu... I couldn't resist digging this up: http://vi-control.net/community/threads/do-you-guys-layer-different-string-libraries.22570/page-2#post-3588333 

(Frequenting this forum for years and attempting to find useful posts has definitely taught me how to search efficiently!)


----------



## Ashermusic (Mar 1, 2016)

Clearly I forgot, senior moment


----------

