
Need help with early reflections...

What plugins would you recommend that create early reflections well? :)

My favorites..





But as others have said, you can mimic early reflections in lots of interesting ways... and of course you can experiment with the IRs of nice rooms, as Beat Kaufmann suggested, which takes less expertise but maybe more time hunting for the right IRs, presuming your IR plugin can truncate off the tails.
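To make the IR-truncation idea a bit more concrete, here is a minimal Python sketch of keeping only the early part of an impulse response and blending it under a dry track. The file names, the 80 ms cutoff and the 0.3 blend level are purely illustrative assumptions, not anyone's actual settings:

```python
# Minimal sketch: keep only the early reflections of an impulse response.
# "hall_ir_mono.wav" and "dry_strings_mono.wav" are hypothetical mono files.
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

sr, ir = wavfile.read("hall_ir_mono.wav")
ir = ir.astype(np.float32)

er = ir[: int(sr * 0.08)].copy()              # keep roughly the first 80 ms
fade = int(sr * 0.005)                        # 5 ms fade-out so the cut doesn't click
er[-fade:] *= np.linspace(1.0, 0.0, fade)

sr2, dry = wavfile.read("dry_strings_mono.wav")   # same sample rate assumed
dry = dry.astype(np.float32)
wet = fftconvolve(dry, er)[: len(dry)]
mix = dry + 0.3 * wet                         # early reflections tucked under the dry signal
```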
 
I'm going to confuse you even more, unfortunately, but it is necessary. If you are a sample-based composer, reverb takes on a whole new type of function, so the end result you are trying to achieve, and how you work that into your template and workflow, are crucial.

You have to pay close attention to what kind of sample you are using. For samples recorded in a large room, hall, etc., the early reflections are baked into the sound. Any attempt to add more will just muddy it up and confuse the ear. There is also a fair amount of late reflection, so your reverb has to perform a different function. On "wet" samples, reverb acts more as a glue, or adds a longer tail. For that kind of reverb, which is nearly 80% of what I use, I tend to use the verb just as a masking agent and to enhance the room a little. Just about any verb will do that well; even, believe it or not, rather cheap ones.

On closely recorded or drier samples, solo recorded instruments, overdubs, etc., reverb plays a crucial role. But I've found that reverb can't create that sense of space on its own. You have to take into account air absorption, relative loudness vs. distance, etc.

Since I use just a smattering of dry samples or close-mic overdubs, I tend to put the early reflections on the track. I use specifically designed room verbs for that. My thinking is that first you put the instrument on the stage, set the balance (relative loudness) and position, and adjust the EQ (for air absorption related to distance), etc. Then I send that full package to my main reverb for the hall sound and the tail, and to blend it with the rest of the ensemble.
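For what it's worth, the distance cues mentioned above (relative loudness and air absorption) can be sketched in a few lines of Python. The 1/distance gain law and the lowpass numbers below are only illustrative assumptions, not my actual settings, and the input is assumed to be a mono float array:

```python
# Rough sketch of distance cues: quieter and duller the further back it sits.
import numpy as np
from scipy.signal import butter, lfilter

def place_at_distance(dry, sr, distance_m, reference_m=1.0):
    gain = reference_m / distance_m                    # ~6 dB quieter per doubling of distance
    # crude "air absorption": pull the lowpass cutoff down as distance grows
    cutoff_hz = float(np.clip(18000.0 * reference_m / distance_m, 2000.0, 18000.0))
    b, a = butter(2, cutoff_hz / (sr / 2), btype="low")
    return gain * lfilter(b, a, dry)

# e.g. a close solo instrument versus one seated ten metres back:
# near = place_at_distance(dry, 48000, 1.5)
# far  = place_at_distance(dry, 48000, 10.0)
```

That staged, per-track result is what would then be sent on to the shared hall reverb for the tail, as described above.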

I find that using sends for early reflections tends to put everything through the same early-reflection pattern, so you lose some of the spacing.

Also, one can get too reliant on reverb; reverb alone can't handle much. I'm sure we've all suffered from over-reliance on reverb in a mix. Work first on the ensemble aspects of balance, blend and spacing of instruments, then add reverb, and you will find that the reverb tends to work where before you were having trouble.

I remember scoring a film where, because of time constraints, I had to do the god-awful trick of just slapping a reverb across the entire orchestral bus. I was actually surprised that, because I had set up a well-balanced template for the film and already had everything in its own space, the generic catch-all reverb worked fairly well.

I will provide links so that you can hear. The usual caveat: I'm not posting these links for any reason other than demonstration purposes. The music was completed five years ago and I have no interest in rehashing these pieces, so please keep comments to the topic at hand.

Generic reverb example (reverb just across the orchestral bus):



Explanation of how to take a close-mic'd sample and blend it into a hall:

 
I wonder, what is the difference between a mix of the two channels and a dry/wet mix of the reverb? Isn't that the same? Couldn't I just turn the dry/wet knob?

Rowy,

As Averystemmler already pointed out, whether there is going to be a difference in sound between the send and insert techniques depends on the actual reverb plugin: specifically, on whether the dry signal is allowed to pass through unmodified or not. With most conventional reverbs I know, it is, and there is indeed no difference in the final 'blend result' between working with the send method or the dry/wet insert method.
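To illustrate why, here's a tiny numerical sketch (not any specific plugin; `reverb()` is just a placeholder wet-only process): when the dry path passes through untouched, a dry/wet insert and a 100%-wet send bus end up with the same dry-to-wet balance, differing only by an overall gain.

```python
import numpy as np

def reverb(x):
    # placeholder wet-only reverb: a short burst of decaying noise
    rng = np.random.default_rng(0)
    tail = np.exp(-np.linspace(0.0, 6.0, 2000)) * rng.standard_normal(2000)
    return np.convolve(x, tail)[: len(x)]

dry = np.random.default_rng(1).standard_normal(48000)

mix = 0.25                                     # dry/wet knob on the insert
insert_out = (1.0 - mix) * dry + mix * reverb(dry)

send_level = mix / (1.0 - mix)                 # equivalent send amount
send_out = dry + send_level * reverb(dry)      # dry stays at unity, wet returns on a bus

print(np.allclose(insert_out, (1.0 - mix) * send_out))   # True: same blend, a constant gain apart
```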

So, if your computer can handle dozens of instances of your reverb plugin of choice, by all means, keep using the insert method. No problem.

Except …

… there are a few very good reasons why it remains advisable to get yourself familiarized with the send technique as well, chief among them being this: say, half-way through a mix, you change your mind about the type, colour or length of the reverb. What then? Not an uncommon thing, as I'm sure you know. And a few run-throughs later, you change your mind again? Or you want to change the pre-delay? Or the high-frequency decay?
If you use the send method, you will only have to edit the settings in one or maybe two reverb instances, whereas if you've restricted yourself to the insert method, you could well be looking at having to change the settings in several dozen reverb instances. The former (changing a single send instance) is quick and pleasant to do; the latter (changing numerous insert instances) is tedious in the extreme and takes all the fun out of the mixing process. Not to mention the fact that it will also make you lose your focus and concentration, until eventually you no longer bother with it and find yourself settling for a result you know could have been better.

That alone already makes it worthwhile, I believe, to also have the send method in your arsenal of mixing techniques.

Me, I always use the insert method for spatializers (the perfect example of a plugin where it makes no sense whatsoever to use a send), but for all reverbs that are shared by multiple tracks, I will invariably choose the send route.

 
But the benefit of using a send for reverb is that you can compress the reverb signal and EQ it separately from the track. You can also EQ the incoming signal before the reverb, which might be an effect you are after. Lastly, if your track also has a delay, and that delay is set up as a separate send, you can then send a touch of the delay to the reverb send to add glue to the overall sound.
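As a rough sketch of that routing (with made-up placeholder effects, not any particular plugins): the reverb return gets its own EQ, and a touch of the delay return is also fed into the reverb.

```python
import numpy as np
from scipy.signal import butter, lfilter, fftconvolve

def lowcut(x, sr, hz=300.0):
    b, a = butter(2, hz / (sr / 2), btype="high")       # EQ applied only to the wet return
    return lfilter(b, a, x)

def simple_delay(x, sr, ms=350.0, feedback=0.35):
    d = int(sr * ms / 1000.0)
    y = x.copy()
    for n in range(d, len(y)):
        y[n] += feedback * y[n - d]
    return y

def reverb(x, sr):                                      # placeholder wet-only reverb
    rng = np.random.default_rng(0)
    tail = np.exp(-np.linspace(0.0, 6.0, sr)) * rng.standard_normal(sr)
    return fftconvolve(x, tail)[: len(x)]

sr = 48000
track = np.random.default_rng(1).standard_normal(sr)

delay_return  = 0.2 * simple_delay(track, sr)
reverb_input  = track + 0.3 * delay_return              # a touch of the delay feeds the reverb too
reverb_return = lowcut(reverb(reverb_input, sr), sr)    # EQ the reverb separately from the track

mix = track + delay_return + 0.25 * reverb_return
```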

Reaper processes the FX chain from top to bottom, so I can start with an EQ, then a delay, and end with a reverb, all in one chain. Perhaps that's why sending a signal through different channels is not for me. I can get the same results with a method that is closer to older, more sequential methods.

I can learn a procedural programming language but not an object oriented programming language.
 
… there are a few very good reasons why it remains advisable to get yourself familiarized with the send technique as well, chief among them being this: say, half-way through a mix, you change your mind about the type, colour or length of the reverb. What then? Not an uncommon thing, as I'm sure you know. And a few run-throughs later, you change your mind again? Or you want to change the pre-delay? Or the high-frequency decay?

Ah... that's not how I work. I don't like big computers and large screens, so I work on a laptop. I can hide a laptop in a desk drawer. The downside is that I have to work in several steps.

The first thing I do is render the instruments solo and really dry. If a VST has a bit of reverb, I don't use it. Session Strings Pro 2 is perfect: it's dry and precise.

Then, with a 4-part piece for strings, I have 4 dry waves, 24-bit, 48 kHz. That is my material. I don't use filters yet. To widen the sound I combine two copies of each wave, one panned 4% left, one 4% right. Again, I have 4 waves. This trick is very important; without it, the reverb wouldn't cut it.

In the final mix I use filters per instrument (per track) to improve the sound. The violins sound a bit harsh, so I tone down certain frequencies (no, not just a plain high cut). I fine-tune the violas and the basses. I balance the general sound. Then I start working the volume envelopes per instrument.

After that, in the FX chain, I put Valhalla Vintage Reverb to work with a subtle short reverb and a relatively longer tail (I learned that from an experienced music producer).

That's it. You can imagine my surprise when I read somewhere that you need to send your reverb. It's like you managed to paint a nice picture with acrylic and then you get told that you should have used oil paint. A nice picture is a nice picture.
 
Reaper processes the FX chain from top to bottom, so I can start with an EQ, then a delay, and end with a reverb, all in one chain. Perhaps that's why sending a signal through different channels is not for me. I can get the same results with a method that is closer to older, more sequential methods.

I can learn a procedural programming language but not an object oriented programming language.
There have been several other really good posts since my last one, which offer some other reasons why the send method is considered best practice. But you are correct that you can run all inserts in series. Many people do this to keep it simple. But your statement above is not accurate and, depending on your plugin chain, can potentially lead to a muddy, unusable end product.

For example, it is typical that an electric guitar will have a delay mixed in series before a reverb. Think of how a guitarist normally routes his pedals before the amp. While there are numerous ways guitarists can set up their pedals (in parallel and/or in series), for the most part the end product would be similar to your description above. So in this case, we can say that your statement is true. That said, there are other guitar mixing techniques where a delay would come after everything, via a send, to simulate two guitars rather than one.

But for something like vocals, it is very common for a vocal track to be sent to multiple delays of different types as well as multiple reverbs. This gives the vocals a fullness and width that cannot be achieved by processing everything in series. If processed in series, a delay would flow into a delay, then into another delay, then into reverb 1, then into reverb 2, etc. This would create mud: a very thick, yucky mud. Manny Diaz typically routes his vocals to four separate delays and uses one or two reverbs (depending on the type of production). You've heard this sound on countless #1 hits, but it is impossible to reproduce by stacking the plugins in series (procedurally). Dave Pensado recently posted a video on his YouTube channel demonstrating how he used seven separate reverbs to achieve the end product of a recently mixed song. This would also be impossible to do in series.
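A schematic way to see the difference (the "effects" below are trivial placeholders, and the send/return levels are arbitrary): in series, each effect processes the previous effect's output, while with sends every effect hears the clean vocal and only the returns are summed.

```python
import numpy as np

def fx_chain_series(signal, effects):
    out = signal
    for fx in effects:
        out = fx(out)                        # each effect only ever hears the previous one's output
    return out

def fx_sends_parallel(signal, effects, send_levels, return_levels):
    out = signal.copy()
    for fx, s, r in zip(effects, send_levels, return_levels):
        out = out + r * fx(s * signal)       # every effect processes the clean signal
    return out

# placeholder "effects" just to make the example runnable (100%-wet copies)
slap    = lambda x: np.concatenate([np.zeros(4800), x])[: len(x)]      # ~100 ms echo
quarter = lambda x: np.concatenate([np.zeros(24000), x])[: len(x)]     # ~500 ms echo
verb    = lambda x: np.convolve(x, np.exp(-np.linspace(0, 5, 6000)))[: len(x)]

vocal = np.random.default_rng(0).standard_normal(48000)
series_mix   = fx_chain_series(vocal, [slap, quarter, verb])           # the reverb only hears the delays
parallel_mix = fx_sends_parallel(vocal, [slap, quarter, verb],
                                 [0.3, 0.2, 0.25], [1.0, 1.0, 1.0])    # each return sits under the clean vocal
```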

With orchestral sections, it is very common for composers to combine the insert and send techniques where they will place an insert reverb on each section that only has early reflections (no tail). They will adjust parameters so the strings sound closer than the brass (for example). Then, all of the sections are bussed to at least one reverb where the parameters have been adjusted so that this reverb is "tail only." This makes the instruments sound like they are sitting in different places on stage, but are in the same room. Of course this is a technique to be used with dry samples (e.g. VSL) and wouldn't be used for something like Spitfire's Orchestral collection which already has the reverb "baked into the recording."
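A bare-bones sketch of that insert-plus-send combination (the `early_reflections` and `tail_only_reverb` helpers below are invented stand-ins, and all the numbers are arbitrary): each section gets its own early-reflection treatment on the track, and everything then shares one tail-only reverb.

```python
import numpy as np
from scipy.signal import fftconvolve

def early_reflections(x, sr, distance_m):
    # stand-in ER generator: a few discrete echoes, later and softer for distant sections
    out = x.copy()
    for k, t_ms in enumerate((11, 19, 29, 41)):
        d = int(sr * (t_ms + 2 * distance_m) / 1000.0)
        out[d:] += (0.4 / (k + 1)) / distance_m * x[:-d]
    return out

def tail_only_reverb(x, sr, seconds=2.2):
    rng = np.random.default_rng(0)
    n = int(sr * seconds)
    tail = np.exp(-np.linspace(0.0, 7.0, n)) * rng.standard_normal(n)
    tail[: int(sr * 0.08)] = 0.0                  # strip the first ~80 ms: tail only
    return fftconvolve(x, tail)[: len(x)]

sr = 48000
strings = np.random.default_rng(1).standard_normal(sr)   # pretend dry sections
brass   = np.random.default_rng(2).standard_normal(sr)

strings_staged = early_reflections(strings, sr, distance_m=3.0)   # sounds closer
brass_staged   = early_reflections(brass, sr, distance_m=9.0)     # sounds further back

stage_bus = strings_staged + brass_staged
mix = stage_bus + 0.2 * tail_only_reverb(stage_bus, sr)           # one shared room for everyone
```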

Other orchestral mixing techniques may send a dryly recorded orchestra to three or four separate reverbs to simulate "close", "mid", and "far" mic positions to reproduce a sound like you might achieve out-of-the-box from a Spitfire library.

Hope that clarifies. It certainly can be confusing to understand all of these nuances when starting out with mixing, but these are techniques that will benefit you greatly down the line. Great results can be achieved with both approaches, but they sound distinctly different. And as @re-peat mentioned, the ease of changing verb settings and such is maybe one of the most straightforward reasons to practice the send technique.

Also, the good thing about VI-Control is that it is a community that rallies around members trying to learn and grow their techniques. So don't shy away from techniques you don't know yet. Maybe it isn't important for you now, but when the time comes, I'm sure members here will be happy to keep helping you learn this part of the craft.
 
I'm going to confuse you even more, unfortunately, but it is necessary. If you are a sample-based composer, reverb takes on a whole new type of function, so the end result you are trying to achieve, and how you work that into your template and workflow, are crucial.

I don't think I'm a sample-based composer. I write music on a piece of music paper, set it in Finale, edit it and export a MIDI file. The MIDI is pretty balanced and has the most important information embedded, but it's just a start. I import the MIDI into Reaper and then the work really starts.

That's what I hate most. A composition doesn't cost me more than a couple of hours and all I need is a piano, music paper and a pencil. The production costs me several days. I shouldn't do that to myself.
 
The early reflections determine to a large extent whether we hear an instrument as close to us or far away. They also tell our brain whether it is a small room or not.

Anyone remember the Roland D50 synth? It came out in the late '80s when memory for samples was very expensive, so companies used as little as possible when they wanted to create real instrument emulations. The DX-7 is the ultimate example.

Anyway, the D50 attached a synthesized tail to a sampled attack. It worked for the same reason that ERs tell you about the room: our brains figure out what's going on very quickly.

There are reverbs with a sampled attack and "algorithmic" (meaning synthesized) tail. I believe the one in VSL's Vienna Suite does that.
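Conceptually, that hybrid approach looks something like the sketch below: keep the sampled first part of an IR and splice a synthesized, exponentially decaying noise tail onto it. The file name, the 100 ms split point and the level-matching are illustrative guesses, not how Vienna Suite (or the D50) actually does it.

```python
import numpy as np
from scipy.io import wavfile

sr, ir = wavfile.read("room_ir_mono.wav")        # hypothetical sampled mono IR
ir = ir.astype(np.float32)

split = int(sr * 0.1)                            # keep the sampled first ~100 ms
head = ir[:split]

rt60 = 2.5                                       # synthesize the rest as decaying noise
n = int(sr * rt60)
rng = np.random.default_rng(0)
tail = rng.standard_normal(n).astype(np.float32) * np.exp(-6.9 * np.arange(n) / n).astype(np.float32)

# crude level match so the synthetic tail picks up roughly where the sampled head leaves off
tail *= np.abs(head[-200:]).mean() / (np.abs(tail[:200]).mean() + 1e-12)

hybrid_ir = np.concatenate([head, tail])         # sampled attack, synthesized tail
```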
 
I don't think I'm a sample-based composer. I write music on a piece of music paper, set it in Finale, edit it and export a MIDI file. The MIDI is pretty balanced and has the most important information embedded, but it's just a start. I import the MIDI into Reaper and then the work really starts.

That's what I hate most. A composition doesn't cost me more than a couple of hours and all I need is a piano, music paper and a pencil. The production costs me several days. I shouldn't do that to myself.
Okay.

By sample-based I mean that you don't have an orchestra at your immediate disposal, so you have to mock it up for other people to hear with samples. I didn't mean to slight you in any way.

I am also a pen and paper composer, but to compete in commercial music, or even concert music these days, a good sample mockup is a must.

The key to quick mockups that don't take you days or weeks is to have a good, balanced template to work from. My main point is that this will take more than good reverbs.

The best mockups I've heard come from those who can really handle a library well. If you are frustrated by the amount of time it takes (and believe me, I'm with you on that one), focus a lot of attention on the libraries you use and what their strengths and capabilities are. The finer points of reverb are kind of a moot point these days. Not to deter you: it is still important, but it was more important when VSL was the only game in town 20 years ago. Not today. Unless you are recording live stuff, and even then you'd want to do that in a good space first.
 
Okay.

By sample-based I mean that you don't have an orchestra at your immediate disposal, so you have to mock it up for other people to hear with samples. I didn't mean to slight you in any way.

I see. English is not my native language, so I have to translate all this difficult stuff into Dutch. That might give people the impression that I'm a bit slow.
 
Joël, I never updated the old Stereo Room plugin to SP2016. Did you use this older plug-in as well? Have you found any substantial difference, in this specific use case, to warrant an upgrade?
I used the other one in the past; I only use the new one now. The vintage mode is perfect, not too intrusive. The older one is more chorusy, but still very usable.
 