
What makes a mix sound wide if it's bad practice to pan orchestral samples?

thevisi0nary

Senior Member
I know that orchestral samples are usually pre-panned, but when I reference my tracks against a soundtrack that I like, the soundtrack is usually still much wider.

I am wondering what kind of techniques people are using with their sample libraries to get them to really fill out the space while still having presence and separation.
 
Nothing wrong with panning orchestral samples further, in my opinion. Just be sure your DAW is using balanced panning (reducing the volume of one channel relative to the other) rather than stereo panning (where one channel gets gradually bled into the other - this will cause issues). Also be sure your arrangement is well balanced from left to right, otherwise it can sound a bit weird if you go a bit mad with it.
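
To make the distinction concrete, here's a rough Python/numpy sketch of the two behaviours as I understand them (real DAW pan laws add equal-power compensation and differ in detail, so treat this as an illustration only):

```python
import numpy as np

def balanced_pan(stereo, pos):
    """Balance-style pan: pos in [-1, 1]. Only attenuates the far
    channel; never mixes signal between channels, so any timing
    difference between left and right is preserved."""
    left, right = stereo[:, 0].copy(), stereo[:, 1].copy()
    if pos > 0:        # panning right: turn the left channel down
        left *= 1.0 - pos
    elif pos < 0:      # panning left: turn the right channel down
        right *= 1.0 + pos
    return np.column_stack([left, right])

def stereo_pan(stereo, pos):
    """'True' stereo pan: gradually bleeds one channel into the
    other, which collapses inter-channel timing differences."""
    left, right = stereo[:, 0], stereo[:, 1]
    if pos >= 0:       # panning right: left bleeds into right
        return np.column_stack([left * (1.0 - pos), right + left * pos])
    else:              # panning left: right bleeds into left
        return np.column_stack([left + right * -pos, right * (1.0 + pos)])
```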
 
What soundtrack is your reference? In a traditional orchestral style, the feeling of width often comes from recording and orchestration more than mixing techniques.

For example, spaced mic pairs like "outriggers" or a Decca tree will give a better 3D spatial image than coincident mic techniques. And capturing a bit of the room in those mic positions also adds more feeling of width to the recording. So if you can use Decca mic positions or similar in your template over close mics, that could be worth a try.
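
To get a feel for the time differences a spaced pair captures, here's a quick back-of-the-envelope Python sketch - the geometry is completely made up, purely for illustration:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, at roughly room temperature

def arrival_delay_ms(source, mic_left, mic_right):
    """Time-of-arrival difference (right minus left) in milliseconds
    for a source and a spaced mic pair; coordinates in metres."""
    d_left = math.dist(source, mic_left)
    d_right = math.dist(source, mic_right)
    return (d_right - d_left) / SPEED_OF_SOUND * 1000.0

# Hypothetical geometry: outriggers 6 m apart, first violins
# about 3 m left of centre and 2 m in front of the mic line.
print(arrival_delay_ms((-3.0, 2.0), (-3.0, 0.0), (3.0, 0.0)))  # ~12.6 ms
```

A coincident pair would give essentially 0 ms for the same source, which is part of why its spatial image feels flatter.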

But even more important could be orchestration. Layering too much can actually smooth out the high frequencies that would have given clarity and power to each instrument, and the result becomes more muddy and vaguely centered. Instead, assigning separate notes of a chord to individual wind instruments, or individual string sections, will spread the music in stereo, and it will also allow the brightness/clarity of each instrument to shine on its own even when the entire orchestra is playing.

Great examples of these points are the Andy Blaney demos for Spitfire Symphonic Brass. I've heard those were done using outriggers only, but I don't really know.

Do you feel like these points might explain some width that you're hearing in your reference track, or do you feel it's something else?
 
You can experiment with different mic positions. Wide room mics in combination with panned close mics can help sometimes.
 
Instead, assigning separate notes of a chord to individual wind instruments, or individual string sections, will spread the music in stereo, and it will also allow the brightness/clarity of each instrument to shine on its own even when the entire orchestra is playing.
^ This. A big factor I've found in attaining clarity, depth and fullness.
Re. panning, I find this plugin invaluable:
https://www.bozdigitallabs.com/product/pan-knob/
 
Most of the mixers I've worked with go for the Waves S1 on the mix bus. I tried it myself for my writing template; it's powerful, but you need to be restrained with it - just give things a little bit of a boost if needed.

For orchestral music you can pan the close mics more extremely. This only works if the samples are kept in their natural recording phase (so the close mics arrive ahead of the hall mics). Keep the close mics in the mix but quite low - as low as you can put them while still hearing a difference when you bypass them. That will give you that large stereo width and separation: you hear the attack of the close mic, which then immediately "resolves" into the ambient mics, giving the impression they are panned more than they are.

Just keep in mind that any changes you make to close mics should be done before you use any stereo enhancement; stereo imaging should always be done last, and only IF you need it - not just "because".
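
If it helps to see the idea in code, here's a toy numpy sketch - the pan position and the -15 dB close level are just placeholders for "as low as you can while still hearing the difference":

```python
import numpy as np

def mix_close_and_hall(close, hall, close_pan=-0.6, close_db=-15.0):
    """Toy mix of a mono close mic against stereo hall mics.

    Assumes both arrays are in their natural recorded alignment, i.e.
    the close mic's attack genuinely arrives ahead of the hall mics
    (no phase alignment applied). close: mono array; hall: (n, 2).
    """
    gain = 10.0 ** (close_db / 20.0)          # close mic kept very low
    # balance-style pan of the mono close signal (-1 = hard left)
    left_g = gain * min(1.0, 1.0 - close_pan)
    right_g = gain * min(1.0, 1.0 + close_pan)
    n = min(len(close), len(hall))
    out = hall[:n].astype(float).copy()
    out[:, 0] += close[:n] * left_g
    out[:, 1] += close[:n] * right_g
    return out
```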
 
Great info here. Thanks for sharing.
 
A quick thing I use often is to place a stereo enhancer on the hall reverb bus. A gentle 110% works wonders.
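
As far as I know, most stereo enhancers boil down to some variant of a mid/side gain trick; a minimal Python/numpy sketch of that "110%" might look like this:

```python
import numpy as np

def widen(stereo, width=1.1):
    """Simple mid/side width control: width=1.0 leaves the signal
    unchanged, 1.1 is the gentle '110%' mentioned above."""
    mid = (stereo[:, 0] + stereo[:, 1]) * 0.5
    side = (stereo[:, 0] - stereo[:, 1]) * 0.5 * width
    return np.column_stack([mid + side, mid - side])
```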
 
Just be sure your DAW is using balanced panning (reducing the volume of one channel relative to the other) rather than stereo panning (where one channel gets gradually bled into the other - this will cause issues).

What kind of issues? 8-/

Quite the contrary: Using "balance" instead of a proper panning device will very likely ruin the sound, because you lose 50 percent of the information. As a (very obvious) example, imagine the recording of a piano in full stereo: when you just lower the volume of the right side to make it appear to sit left on the stage, you won't hear much of the all-important mid- and treble-range any more.
 
Quite the contrary: Using "balance" instead of a proper panning device will very likely ruin the sound, because you lose 50 percent of the information.
A relatively close mic'd piano is more an exception, I would say.

I'll need to ramble a bit to explain (for the benefit of others reading), but hey ho.

So, there's two main ways we perceive directionality: level difference and time difference. Basic example being how a mono sound can be made to sound more left by decreasing the level of the right channel, which would be using level difference.

But then, with stereo recordings, time difference comes into play. With 1st violins the sound is coming from the left, and since sound doesn't travel instantaneously, that sound will be picked up by the left microphone slightly before the right - only by a few milliseconds, but on playback the brain can perceive that the sound is coming from the left. The Haas effect is a way to utilise that on mono sources to create directionality.

In such a situation both microphones receive a similar level of signal, so the directionality is coming more from time difference than level difference.

If you used true stereo panning, you'd be bleeding the slightly delayed signal from one channel into the other, which would, if anything, somewhat ruin the effect of time difference. Using balanced panning you just reduce the level of one channel, so the time difference is preserved. For instruments with room/hall mics, that's why I'd say balanced is better.
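
You can hear the time-difference cue in isolation with a little experiment - this Python snippet (scipy/numpy; the file name is my own choice) renders the same burst to both channels at identical level, with the right channel just 2 ms late:

```python
import numpy as np
from scipy.io import wavfile

SR = 48000
rng = np.random.default_rng(0)
env = np.linspace(1.0, 0.0, SR // 10)
burst = rng.standard_normal(SR // 10) * env   # 100 ms decaying noise burst

delay = int(0.002 * SR)                       # 2 ms: Haas territory
left = np.concatenate([burst, np.zeros(delay)])
right = np.concatenate([np.zeros(delay), burst])  # same level, just later

stereo = np.column_stack([left, right])
stereo /= np.abs(stereo).max()
wavfile.write("haas_demo.wav", SR, (stereo * 32767).astype(np.int16))
# On playback the burst should localise clearly to the left, even
# though both channels sit at exactly the same level.
```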

The reason I say a close mic'd piano is more an exception is because the source doesn't stay central, and each microphone isn't necessarily receiving a similar level - the left microphone will pick up bass notes louder than the right, since the left is closer to those strings. Likewise the left would pick up treble notes quieter, since those strings are further away. So in such a situation it'd be better to use true stereo panning so that the levels stay consistent across the range.
 
I think both perspectives are valid depending on the scenario. Sounds like Chappell is considering Decca tree / spaced microphone setups where the room is quite present in the recordings. Sounds like Dietz is considering very dry recordings, especially if they're recorded with coincident mic setups that minimize the delay between the channels.

Feel free to correct me if I'm mischaracterizing the assumptions from either of you =)
 
Sounds like Dietz is considering very dry recordings, especially if they're recorded with coincident mic setups that minimize the delay between the channels.
In a similar vein, yes. Not necessarily "very dry", and not just coincident, but also all kinds of small A/B (like ORTF setups). - I rarely feel the urge to pan recordings derived from a full-blown Decca tree. ;-D
 
There are a lot of hybrid soundtrack mixes where the center is left for dialogue. So the signals are panned harder, but often compensated by corresponding signals on the opposite side - like first violins on the left and seconds on the right, or additional textures, synth doublings etc. That makes the mix feel wider than a traditional orchestral mix.
 
So, there's two main ways we perceive directionality: level difference and time difference. Basic example being how a mono sound can be made to sound more left by decreasing the level of the right channel, which would be using level difference.


So this is generally true, and I don't mean to be pedantic, but actually, the two primary localization cues that our brain uses to determine the source of a sound are time delay and frequency response.

The latter is a result of what's called the auditory shadow. Any frequencies higher than about 2 kHz will be blocked by your head. So if a sound is coming from 90° to your left, that sound will reach your right ear at about the same volume, except for those frequencies that are blocked by the auditory shadow.

You can test this by taking a mono signal and splitting it into two channels coming out of your left and right speakers at equal volume. Then roll off the frequencies above 2 kHz on the right channel and it will sound very much like the sound source is on your left.

The reason is that the human head is on average about 5 inches wide. Frequencies whose wavelengths are shorter than 5 inches will be blocked by your head, whereas wavelengths longer than 5 inches will bend around it. Sound waves above roughly 2 to 2.5 kHz have wavelengths around or below 5 inches, so they get blocked.
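
Here's that test as a Python snippet (scipy/numpy; the file name and the 4th-order filter are just my choices) if anyone wants to try it:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, lfilter

SR = 48000
rng = np.random.default_rng(1)
noise = rng.standard_normal(SR * 2)        # 2 s of white noise

# Roll off everything above ~2 kHz on the right channel only,
# mimicking the head shadow for a source at hard left.
b, a = butter(4, 2000.0, btype="low", fs=SR)
right = lfilter(b, a, noise)

stereo = np.column_stack([noise, right])
stereo /= np.abs(stereo).max()
wavfile.write("head_shadow_demo.wav", SR, (stereo * 32767).astype(np.int16))
# Despite both channels playing at full level, the noise should
# appear to come from the left.
```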

The other primary localization cue is the time delay between each ear, as you already mentioned. But again, a difference in level is not required for the localization to occur.

Indeed, it takes an extreme difference in level for localization to occur. If you take your split mono source which is running to two separate channels on a mixer and merely pull the volume down on one channel, localization will only begin to occur when the difference is quite drastic.

But what is happening is that if, for example, you turn down the right channel, then what you're really doing is allowing the other localization cues to take effect.

I mean, if the sound source is only coming from your left speaker then you are not simulating localization cues - you actually have a genuine sound source on your left side. So the time delay and the frequency roll-off are allowed to have their effect.

When you have a mono sound source running to two speakers, each speaker has its own set of localization cues - time delay and auditory shadow. But these localization cues are masked by the other speaker, so it sounds like it's coming from the middle, sort of.

So lowering the level of one side is actually just removing this masking effect.

This all might seem a bit academic, but it actually has real-world consequences. When you have a mono source coming out of two spatially separated speakers, each ear is hearing a delayed version of the opposite speaker combined with the non-delayed speaker on its own side. This creates a kind of fuzziness that we've all just gotten used to. If you want to hear a mix without this fuzziness, listen to some of the old Beatles records where they panned everything hard left or hard right.

We all find this panning method a bit novel these days, but it was actually responsible for making the Beatles tracks sound incredibly punchy. When you're in a room and the bass is coming out of one speaker and one speaker only, clarity ensues.
 
@David Chappell - this is a very interesting topic and has been thoughtfully / respectfully debated. I like it.
Maybe if it goes further / new questions come up I might chime in with a few ideas/explanations of my own. But - @Dietz has - er - how to put this - a rather in-depth knowledge and unique wisdom (and a massive amount of experience) when it comes to this stuff... ;)
 
But - @Dietz has - er - how to put this - a rather in-depth knowledge and unique wisdom (and a massive amount of experience) when it comes to this stuff... ;)

Huh! Thanks for the flowers ... ;) ... but I still learn something new from every production I do, and from every discussion - after more than 30 years in this business. So please don't hesitate to share your thoughts!
 
Awesome advice to digest here. What do people think about panning the close mics only? Will this create phase issues if room mics are introduced? Also, when it comes to panning mics with room sound, is it optimal to use stereo balance for panning, or not?
 