# Techniques for Mix Glueing?



## ModalRealist (Jun 3, 2014)

I find that, even when working solely with samples recorded in the same original space, I have trouble getting a certain "uniformity" or "glue" in the final sound. I've tried judicious applications of covering reverb, subtle EQ as a "glue", and so on, but have very hit-and-miss success at making the final mix sound like a recording that actually took place at one and the same time.

Am I missing some trick? Or is it pretty much just the application of the things I've mentioned with sufficient experience that results in a better final mix?

P.S. Speaking here almost exclusively about orchestral music, for the record.


----------



## ghostnote (Jun 4, 2014)

VCC is great for glueing, as well as TDR. Den has shared a TDR preset here on the forum which I think works great with orchestral stuff:
http://www.vi-control.net/forum/viewtop ... t=#3737022


----------



## Jem7 (Jun 4, 2014)

A good transparent compressor on the master bus, good EQ on the individual stuff, and a good reverb setup.


----------



## bryla (Jun 4, 2014)

You mention 'even when working with samples' as if that should make it easier. Working with a real recorded orchestra is the easiest thing to glue! The players play according to each other and according to the space. Good orchestration and a good space through a Decca tree don't need anything else.

Samples by their nature don't glue! There isn't even any bleed between mics, and the overtones of different libraries excite the halls they were recorded in differently.

What you must try to do is match the timbre of the instruments. Not by applying the same EQ or even the same reverb, but by making each instrument and section individually sound as if they were sitting in the same hall. EQ and possibly exciters and saturators can be the way to go. Most film mixes contain 80% hall mics (according to Dennis Sands - not applicable to Elfman). This also means you have to match the dynamics of the group. Sidechain compression or master bus limiters can help you contain the dynamics. When the brass or percussion section really kicks in, the woodwinds and strings can't keep the same dynamic size they had before.
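The sidechain idea above (pulling the strings and winds down when the brass gets loud) can be sketched numerically. This is a toy illustration, not anything from the post: the window, threshold, and ratio values are hypothetical, and a real compressor adds attack/release smoothing.

```python
# Toy sidechain ducker: the brass envelope pushes the strings' gain
# down. All parameter values are hypothetical; a real compressor would
# add attack/release time constants and run on streaming audio.

def envelope(signal, window=64):
    """Crude peak envelope: max absolute value over a trailing window."""
    return [max(abs(x) for x in signal[max(0, i - window):i + 1])
            for i in range(len(signal))]

def duck(target, sidechain, threshold=0.5, ratio=4.0):
    """Reduce `target` gain while `sidechain` exceeds `threshold`."""
    out = []
    for t, e in zip(target, envelope(sidechain)):
        if e > threshold:
            # Keep the threshold plus the overshoot divided by the ratio.
            gain = (threshold + (e - threshold) / ratio) / e
        else:
            gain = 1.0
        out.append(t * gain)
    return out

strings = [0.4] * 100                # steady string pad
brass = [0.0] * 50 + [1.0] * 50      # brass entry halfway through
ducked = duck(strings, brass)
# Strings hold 0.4 until the brass enters, then drop to 0.25.
```

In a DAW this is just a compressor on the string bus keyed from the brass bus; the sketch only shows the arithmetic behind the gain reduction.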

These are just some of my thoughts on how to make samples glue. It's been a while since I mixed samples, because lately I have only been recording and mixing real orchestral recordings. Hope it helps.


----------



## alexmshore (Jun 4, 2014)

I find using a channel strip plugin can help with this, and a little crosstalk also. Other than that: mix busses, always.


----------



## re-peat (Jun 4, 2014)

If you bring samples from different libraries together, and add to that all the sonic issues related to trying to match spaces, not to mention the difficulty of making the sonic ‘stamps’ of different libraries/developers co-exist, forget about glueing (for all the reasons Thomas explained so well, to start with). In such cases, you can’t glue, you wrap. You wrap your audio in the sonic equivalent of cellophane ― either by adding still more reverb, or by using virtual summers, channel strips, saturators, tape simulators and other such trickery ― and hope your listeners won’t be too put off by the clumsy results. But take that virtual cellophane away, and your mix falls apart again into all its ill-matching bits and pieces.

Andy Blaney, using nothing but Spitfire material, never has to worry about glueing. Nearly all of his demos sound almost as cohesive as a real orchestra does. That’s the big advantage of sticking to one brand only (especially if it is as fine a brand as Spitfire’s).
His spaces are consistent, his depth and/or perspective is consistent, his sounds are consistent, his stereo image is consistent (and solid), the sonic stamp of the entire orchestra is consistent … the whole thing simply is what it is: effortlessly natural-sounding, nothing doctored or forced, and nothing requiring glueing.

But the moment you add a bit of Cinesamples or LASS here, bring in some VSL there, add some solo work with SampleModeling stuff, fill some holes with 8dio or ProjectSAM, and put the cherry on top by having Hollywood Strings sing out the main melody or whatever, you’re in trouble. Big trouble. Glue-wise, I mean. Especially if certain percentages of that lot then get sent to reverb1 for early reflections and to reverb2 for tails (or whatever it is people think they need to do to create spatial consistency and realism).
What you have here is pure artificiality multiplied to such an absurd degree, that there is simply no glue in existence with which to restore even a minimum of cohesion and natural blending.

_


----------



## AC986 (Jun 4, 2014)

It's true that using something like Spitfire Audio libraries and the way they are recorded means you don't need to worry about 'glue'. 

The only downside of this is that, because of the nature of samples, you will eventually have the same sound all the time. That may not be regarded as a problem, mind you, by some writers. If you're going to have the same sound all the time, make sure it's a good sound.


----------



## ModalRealist (Jun 4, 2014)

Thank you all for the advice; it is very much appreciated.

I would like to point out that in my original post, I did specify that I _am already_ talking about libraries recorded in the same space: e.g. Spitfire, or in my case, the Hollywood series. My problem with "glue" persists _even though_ I am using uniformly recorded libraries.


----------



## Oliver_Codd (Jun 4, 2014)

Awesome post re-peat! Couldn't have said it better. 

Since modal realist is addressing orchestral libraries that are already recorded in the same space, I have a couple of additional thoughts. 

Firstly, make sure your template is truly balanced. Mock up a familiar piece you like to check that the dynamics match, etc.

Your orchestral mix from this point on is largely going to be a result of your arrangement, orchestration decisions and programming. 

Make wise choices on mic mixes for each instrument group before reaching for the reverb. 

Once things are more or less balanced, start sending groups to a good reverb if need be. If you're using Spitfire products, you probably won't need much. If you're using only one reverb, like a Bricasti, send individual sections to an EQ that is bussed directly to the reverb, so that you can filter out frequencies that are muddying the ambient mix.
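The "EQ in front of the reverb send" routing can be shown with a toy signal chain. Nothing below models a real Bricasti: a one-pole high-pass and a feedback comb are stand-ins, chosen only to show why filtering the send keeps low-end energy out of a shared tail.

```python
# Toy "EQ before the reverb send": a one-pole high-pass trims lows
# from the send so they don't muddy the shared tail. The feedback
# comb is a stand-in for a real reverb, purely for illustration.

def one_pole_highpass(signal, coeff=0.95):
    """y[n] = coeff * (y[n-1] + x[n] - x[n-1]): a crude low cut."""
    x_prev = y_prev = 0.0
    out = []
    for x in signal:
        y = coeff * (y_prev + x - x_prev)
        x_prev, y_prev = x, y
        out.append(y)
    return out

def comb_reverb(signal, delay=50, feedback=0.5):
    """Feedback comb filter: the simplest possible 'tail'."""
    buf = [0.0] * delay
    out = []
    for i, x in enumerate(signal):
        y = x + feedback * buf[i % delay]   # y[n] = x[n] + fb * y[n-delay]
        buf[i % delay] = y
        out.append(y)
    return out

dry = [1.0] * 200                  # a sustained low note (mostly DC)
wet_plain = comb_reverb(dry)
wet_eqd = comb_reverb(one_pole_highpass(dry))
# The EQ'd send puts far less low-frequency energy into the tail.
```

The filter values are arbitrary; the point is only that the send, not the dry channel, gets the cut.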

If things still need gluing, the first thing that comes to mind is very subtle multiband compression on individual sections to tame midrange build-up that results in a bloated sound. Then possibly a very, very subtle compressor on top of the entire orchestra.

Wouldn't it be great if we only had to think about composing?? :D


----------



## cortlandcomp (Jun 4, 2014)

Has anyone had any success with things like Slate's VCC and Satson, in terms of glue?


----------



## marclawsonmusic (Jun 4, 2014)

re-peat @ Wed Jun 04 said:


> What you have here is pure artificiality multiplied to such an absurd degree, that *there is simply no glue in existence with which to restore even a minimum of cohesion and natural blending*.
> 
> _



This all sounds rather hopeless, especially for folks like me who are still trying to get some basic engineering chops.

I can understand that the best outcome is achieved by recording everything in the same room. However, many film scores today are recorded in multiple rooms, sometimes by multiple engineers, and combine live players with samples, synths, solo players, ethnic instruments, vocalists, etc.

And those recordings still have a fairly homogeneous sound... at least the end product sounds pretty good to my ears...

Does one need to hire Alan Meyerson or Shawn Murphy or Dennis Sands to achieve this result? Or is there some sort of middle ground? Surely working with samples isn't that much different from taking recordings / stems from multiple studios and combining them into a final mix, right? I can understand how it might be difficult, but is it really impossible? Is it really as hopeless as Piet describes? :(


----------



## bryla (Jun 4, 2014)

Even though libraries are recorded in the same space, they are not recorded simultaneously. My post above tries to describe ways to go about that, since libraries will never glue the way recordings do.


----------



## marclawsonmusic (Jun 4, 2014)

bryla @ Wed Jun 04 said:


> Even though libraries are recorded in the same space they are not recorded simultaneously.



OK, I get that, but how is this different from comping a vocal track, recorded over several days (because the vocalist was nervous), across half a dozen or more takes?

That's a lot of splices, crossfades, and edits (kind of like samples?)... and one cannot say that the final vocal performance was recorded simultaneously - either on its own OR with the other instruments.

Yet, this is done every day in pop / rock music.

Please understand that I do not disagree nor am I trying to be combative. I greatly respect the opinions represented in this thread and am genuinely trying to understand why this is being described as an impossible task.

Respectfully,
Marc

PS - I can also accept the possibility that I "just don't get it" (and maybe can't at this point). I know that some of these things can only be comprehended as your skills and ears grow through experience.


----------



## Oliver_Codd (Jun 4, 2014)

I think it's important to stress that you can still get an amazing, cohesive-sounding mix without all of the elements sounding like they are in the same room. A lot of the time it's preferable to have more spatial contrast, so they'll record percussion in a studio, the orchestra on a scoring stage, and the choir in a very large space, etc.

Our brains are used to hearing the core of the orchestra (strings, brass, winds, perc) as a whole, in one environment, so it's quite noticeable when different sections sound sonically detached. 

You can do a lot to eliminate this sense of detachment, and still achieve wonderful results (Thomas Bergersen's mockups come to mind), but that's not the same thing as making it sound as if it was recorded in the same room. Distinguish between the two, and act accordingly.


----------



## givemenoughrope (Jun 4, 2014)

re-peat @ Wed Jun 04 said:


> ... using nothing but Spitfire material … nothing requiring glueing.
> 
> But the moment you add a bit of Cinesamples or LASS here, bring in some VSL there, add some solo work with SampleModeling stuff, and fill some holes with 8dio or ProjectSAM...
> _



Right, that's almost a given, but what about when you start with LASS, 8dio's close mics, VSL, etc.? Trying to glue those with reverb, or wrap them in cellophane, while easier, still isn't easy, for me anyway.


----------



## re-peat (Jun 4, 2014)

marclawsonmusic @ Wed Jun 04 said:


> (...) Surely working with samples isn't that much different from taking recordings / stems from multiple studios and combining them into a final mix, right? I can understand how it might be difficult, but is it really impossible? (...)


With samples you have to negotiate your way over and through many, many, many more hoops and hurdles before reaching the ear of your audience than you do with live recordings, Marc.
*Firstly*, there’s the way the samples were recorded (not always ideal); *secondly*, you have to consider what was recorded (do the captured timbres and articulations speak the exact same language as your music does?); *thirdly*, there are the sonic consequences of converting those recordings into a user-friendly sample library; *fourthly*, there are your programming skills (did you manage to program an authentic and convincing performance?); *fifthly*, there are your mixing skills; *sixthly*, there’s the interference from other libraries (see my previous post); *seventhly*, there are the spatial issues (see above as well); *eighthly*, there’s the fact that the sound in samples is ex-parrot-like dead; *ninthly*, there are the frequency conflicts and build-ups that result from stacking dozens and dozens of audio recordings; *tenthly*, there’s the fact that an average DAW-based studio ― with its diversity of plug-ins, its sometimes questionable audio hardware and monitoring, its on-the-fly samplerate conversions, its cumulative digital bussing and summing, etc., the lot often being run by someone with only moderate engineering skills at best ― is not always the most quality-audio-friendly environment, …

And this is just the beginning of what is a list running well into the several dozens of similar considerations.

With live audio, you have almost none of the above to consider. And even if you do in certain situations, well-recorded live audio is much, much, much stronger, has more 'integrity', and can withstand outside interference much, much, much better than sample-based audio can. Not only musically, but technically as well.

I could ruin a recording of a real string section by converting it several times to mp3 and back to wav, messing a few times with its samplerate, sending the file through a guitar amp and recording it back, and then, as the final coup de grâce, posting it on Soundcloud, and you would STILL, even though the audio quality had horribly deteriorated by that point, recognize that it’s a real string section. And you’d still be able to enjoy the music and the performance. Try the same thing with virtual strings and you'll have people bombarding you with excuses after even the very first conversion: that the reason for their poor sound and lack of realism is the mp3 conversion, or that Soundcloud messed with their audio, or whatever. But those are utterly silly excuses. The real damage occurred much earlier than that: the moment they loaded up their sample libraries.

To experience how powerful and easy to work with recorded live sound is (compared to sample-based audio), just try, if you have the opportunity, recording, say, a guitar and a cello, or a flute and a piano, or some other combination; then mix that and experience the effortless, totally natural way in which these sounds find their place in the mix. You don’t even have to be a great engineer or a top-rate producer to get good results. And even if you process it badly through inexperience or injudicious use of whatever tools you feel like using, it’ll still sound totally real and convincing. And you don’t have to think about glueing or anything. All the glue you need is already part of that wonderful thing that is living sound.

EDIT: 
I agree with Oliver: if you’re in the situation of having to bring together source material from widely different sources (and most of us are, I believe), simply accept, even try to embrace, the artificiality of combining these sonically incompatible elements, and try to somehow get ‘a decent sound’ with whatever it is you’ve thrown together. It's difficult but doable, and even if it lacks that natural, cohesive strength and powerful simplicity that comes with using ‘one brand only’, that doesn’t mean it can’t still sound pretty good _in its own way_.

_


----------



## marclawsonmusic (Jun 4, 2014)

As always, I appreciate your thoughtful reply, Piet.

I think I am beginning to understand.



> Our brains are used to hearing the core of the orchestra (strings, brass, winds, perc) as a whole, in one environment, so it's quite noticeable when different sections sound sonically detached.



So, getting a truly "natural" or "authentic" sound may not be possible, but it is still possible to achieve a _musical_ result.

It sounds like the key is not trying to force your samples into doing something they were never intended to do.

Cheers,
Marc


----------



## clarkus (Jun 4, 2014)

Hi Alex - What do you mean by "mix busses always"?

thanks


----------



## clarkus (Jun 4, 2014)

Hi, re-peat - Some of us are marrying synths and samples from a variety of sources with acoustic sound sources (i.e. audio files). The question of how to make those live together well in an acoustic environment that is extremely artificial by nature is a great one. I appreciate your insights (you just gave me some the other day about the use of compression). I'm just pointing out that we aren't all writing mock-ups of orchestral music, where you have the choice of getting everything sourced from the same developer.


----------



## Oliver_Codd (Jun 4, 2014)

marclawsonmusic @ Wed Jun 04 said:


> As always, I appreciate your thoughtful reply, Piet.
> 
> I think I am beginning to understand.
> 
> ...



I honestly think the words "natural" and "authentic" should be thrown out the door as soon as you start creating music in a computer. As you said, focus on musicality. Don't forget that simple mixing techniques like adding EQ or reverb to a live recording are already adding an artificial element to the mix, but one that's improving the emotional delivery in one form or another. Go listen to Thomas Bergersen's "An Awfully Big Adventure" demo on the SampleModeling website. Then search for the same track on YouTube, but look for the version that was recorded for the Two Steps From Hell album. One is a mockup with samples recorded in different spaces, the other is a live recording of a full orchestra recorded at once in a hall. Both sound incredibly musical to me.


----------



## Udo (Jun 4, 2014)

Although good ears and knowing what to listen for are obviously key requirements, using your eyes can be very helpful. I'm surprised no one mentioned a product like Nugen's Visualizer.


----------



## jeffc (Jun 4, 2014)

I'll throw a few thoughts, just for kicks:

- I agree that "natural" should be thrown out the window. It's rare to have a purely orchestral score with no synths/sweetening, etc. Most of us are mixing disparate sources anyway; as long as it sounds good, who cares if it sounds real.

- I think too many people have too many plug-ins and over-process everything because they can. In my experience working with pro mixers, I'm amazed at the results they get with really stock stuff. Not 10 plugins on every track. Sometimes using the stock Digi EQ and stock compressors, and just through restraint in processing. With good monitoring and ears, they are able to dial in great sounds and depth of field with simple old-school panning, delay, EQ and compression, minimally applied.

- Also, and this is a big one: what are you mixing for? Are we talking about a mix for a film or TV show? Or are we talking about a stereo track? Because the stereo track you're comparing to isn't close to the mix that's in the film. Again, you'd be shocked to see how moderate the levels are on a stemmed-out mix that goes to the dub stage. Not every stem is limited and Ozoned to 0 dB; they peak really low, maybe -12, with a lot of dynamics. They're usually mixed with the dialogue in, so you can hear how a mix is really going to play with the sound design and dialogue. How it sounds cranked to 11 by itself isn't really an issue when mixing a film. Of course, if there's a soundtrack, that stereo file is then mastered and brought up to a decent level, but again, a good mastering guy does it with some really basic old-school tools. Not everything out there today has the latest tape saturation/vintage comp/multiband compressor on it. Sometimes people forget that a lot of this stuff is being sold to us, so of course it's marketed to sound like we need everything new under the sun. But I don't think so. There were some pretty great recordings made way before all that stuff even existed. And ironically, now we're getting plugins that try to make it all sound like that vintage stuff again. Go figure. A bit of a tangent, but just a thought...


----------



## marclawsonmusic (Jun 4, 2014)

Oliver_Codd @ Wed Jun 04 said:


> I honestly think the words "natural" and "authentic" should be thrown out the door as soon as you start creating music in a computer. As you said, focus on musicality. Don't forget that simple mixing techniques like adding EQ or reverb to a live recording are already adding an artificial element to the mix, but one that's improving the emotional delivery in one form or another. Go listen to Thomas Bergersen's "An Awfully Big Adventure" demo on the SampleModeling website. Then search for the same track on YouTube, but look for the version that was recorded for the Two Steps From Hell album. One is a mockup with samples recorded in different spaces, the other is a live recording of a full orchestra recorded at once in a hall. Both sound incredibly musical to me.



Hi there, Oliver...

Great point and I agree that both versions of that tune sound very musical. 

I think I might have misinterpreted some of the earlier comments as saying that there is no way to achieve a _musical_ result using blended libraries. If that's the case, then I think I might have wasted a lot of money... :shock: 

I learned a long time ago that it's the player who makes the music... not the gear. I once saw this Bela Fleck and The Flecktones show where the drummer (Future Man) played the most wicked drum groove using only some brushes and a *piece of paper*.

At that point, I realized that a creative person will create with whatever is available... and I didn't really need that $4K set of DW drums after all :-D

Anyway, cheers and thanks.
Marc


----------



## marclawsonmusic (Jun 4, 2014)

With respect to the OP (sorry ModalRealist for derailing your thread a bit)... 

Is there any way you could post an example so we can hear what it is you feel is not working? If you are using the same set of libraries, it could be an orchestration thing... or programming, or something else other than solely "mix".

Best,
Marc


----------



## waveheavy (Jun 5, 2014)

ModalRealist @ 4/6/2014 said:


> Thank you all for the advice; it is very much appreciated.
> 
> I would like to point out that in my original post, I did specify that I _am already_ talking about libraries recorded in the same space: e.g. Spitfire, or in my case, the Hollywood series. My problem with "glue" persists _even though_ I am using uniformly recorded libraries.




Definitely sounds like a mixing-approach issue, not the libraries.

First thing to check, always, is phase issues between tracks. Can we be certain that those who recorded even the same sample libraries did so with the same mic setup every time? I don't think so. Listen to the mix in mono to hear if anything drops out. If it does, there are tracks out of phase. You can save yourself time by doing most of your initial mix in mono (EQ, compression, and levels). There's a neat little plugin called Auto-Align that's a huge time saver for correcting phase between tracks. Otherwise, you'll have to manually nudge problem tracks backward or forward to correct any phase issues.
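The mono fold-down check described above is easy to demonstrate numerically. A small sketch, with sine waves standing in for tracks (illustrative only; in practice you just flip your monitor controller to mono and listen):

```python
import math

def rms(signal):
    """Root-mean-square level of a block of samples."""
    return math.sqrt(sum(x * x for x in signal) / len(signal))

def mono_fold(left, right):
    """Sum a stereo pair to mono (simple average of the channels)."""
    return [(l + r) / 2.0 for l, r in zip(left, right)]

n, fs = 1000, 44100
tone = [math.sin(2 * math.pi * 440 * i / fs) for i in range(n)]

in_phase = mono_fold(tone, tone)               # identical channels
flipped = mono_fold(tone, [-x for x in tone])  # polarity inverted

# In-phase material keeps its level in mono; polarity-flipped
# material cancels completely: that's the "drop out" to listen for.
```

Real phase problems are usually partial (a time offset rather than a full polarity flip), so the cancellation is frequency-dependent rather than total, but the mechanism is the same.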

Secondly, just about everything needs some EQ and compression. You can use a frequency analyzer plugin to check an instrument's frequency range and reduce problem frequencies first (like snare hum, usually somewhere around 240-350 Hz). The area around 250-400 Hz is a range where almost all instruments contain energy, so a lot of 'mud' can build up there.

Nothing wrong with cutting frequencies as long as it doesn't make the instrument sound unnatural (unless you want it to). 

Horns like a small boost around 150 Hz. Violins often need a slight dip around 2 kHz, and a shelf boost from there up to create 'air' (this often depends on the sample library too).

Looking at instruments with a frequency analyzer, low-cut anything below their lowest frequency, even if you don't think you hear anything there. Just the slightest hum over many tracks can build up when they play together. Samples shouldn't have a problem with low-frequency hum, but you never know. Mixers do this low cut anyway just to make sure. It's the little things that add up.
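That low cut is just a high-pass filter. As a sketch of what the plugin is doing, here are the standard RBJ audio-EQ-cookbook high-pass coefficients run through a direct-form-I biquad; the 40 Hz cutoff is an arbitrary example value, not a recommendation from the post:

```python
import math

def highpass_coeffs(cutoff_hz, fs, q=0.707):
    """High-pass biquad coefficients from the RBJ audio EQ cookbook."""
    w0 = 2 * math.pi * cutoff_hz / fs
    alpha = math.sin(w0) / (2 * q)
    cw = math.cos(w0)
    a0 = 1 + alpha
    b = [(1 + cw) / 2 / a0, -(1 + cw) / a0, (1 + cw) / 2 / a0]
    a = [-2 * cw / a0, (1 - alpha) / a0]
    return b, a

def biquad(signal, b, a):
    """Direct-form-I biquad filter."""
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in signal:
        y = b[0] * x + b[1] * x1 + b[2] * x2 - a[0] * y1 - a[1] * y2
        x1, x2, y1, y2 = x, x1, y, y1
        out.append(y)
    return out

b, a = highpass_coeffs(40.0, 44100.0)  # cut everything below ~40 Hz
rumble = [1.0] * 4000                  # DC offset standing in for subsonic junk
cleaned = biquad(rumble, b, a)
# The step edge passes; the output then decays toward zero,
# i.e. steady low-frequency content is removed.
```

A DAW EQ's low-cut band is doing exactly this kind of filtering per track, which is why even inaudible rumble summed over many tracks is worth removing at the source.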

Like others have said, depending on the instrument type, compression may be needed to balance them against each other.

For reverb, see my post in the reverb tails thread. I explain Fab Dupont's reverb method, which is based on how we actually hear reverberation, and on what's needed because things are recorded close-miked and 'dry' nowadays.

A single compressor on a group buss for each group of similar section instruments can help with the glue (violins for one, brass for another, percussion for another, etc.). A final EQ on each group might help some too.

A compressor on the master buss shouldn't need to do much if the track levels were peaking in the low yellow during the initial rough mix.
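For the curious, the gain computer inside such a bus compressor can be sketched in a few lines. This is a toy feed-forward design with hypothetical threshold/ratio values, not any particular plugin:

```python
import math

def db_to_lin(db):
    return 10 ** (db / 20.0)

def compress(signal, threshold_db=-12.0, ratio=2.0, release=0.999):
    """Toy feed-forward compressor: envelope -> gain computer -> gain.
    Instant attack; `release` is a one-pole decay on the envelope."""
    thresh = db_to_lin(threshold_db)
    env = 0.0
    out = []
    for x in signal:
        env = max(abs(x), release * env)
        if env > thresh:
            over_db = 20 * math.log10(env / thresh)
            # Reduce the overshoot by (1 - 1/ratio) dB per dB over.
            gain = db_to_lin(-over_db * (1 - 1 / ratio))
        else:
            gain = 1.0
        out.append(x * gain)
    return out

passage = [0.1] * 100 + [0.9] * 100  # quiet bars, then a loud hit
squeezed = compress(passage)
# The quiet part passes untouched; the loud part is pulled down
# (0.9 in becomes roughly 0.48 out at 2:1 over a -12 dB threshold).
```

The point of the gentle settings: material already sitting below the threshold is left alone, which is exactly why a well-balanced rough mix needs so little from the master bus compressor.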

Hope that helps some.

Dave


----------

