# Mixing orchestral music in surround (part live/part sampled)



## Ian Livingstone (Jul 28, 2005)

I've got to mix a score in 5.1 in a few weeks - live choir/strings/brass, but sampled percussion, harp and wind. A couple of questions if anyone can help out - my head's spinning! I've done quite a bit of surround mixing with sound design, FMV etc, and electronic tracks, but never orchestral in 5.1.

1. Percussion is all sampled (True Strike and EWQLSO Platinum) - if I do 3 separate passes and record each mic position one at a time, will I get phase problems? (as opposed to recording all 3 positions at the same time). My percussion template has maxed out Giga3's and Kontakt2's RAM just with the surround mics, so having all 3 positions loaded simultaneously isn't an option right now. I'm recording my Gigas/Kontakts straight into Nuendo3 via ADAT - I can't do this with Giga's sync record unfortunately as it causes stuck notes (don't ask me why - that's a whole new can of worms).

2. For the live stuff I will have the multi-track stems (Decca tree, surround mics etc) - I'm assuming I'll need to add more verb as it's a fairly small stage, not a hall. Can anyone recommend a good surround convolution reverb (VST format)? I was looking into GigaPulse VST but I think you need 3 instances to get surround as it's just a stereo plugin - or am I talking bull? I can't tell from the Tascam site. Wizoo W5 has true 5.1 I/O so looks nice... Can't afford Waves IR2 right now.

3. What should I put through the LFE channel - is it worth filtering everything and putting a bit of everything in, or would it be more effective to have the occasional gran cassa / timp on the stabs?

4. What should I put through the centre - one engineer told me "_I usually make a phantom center to get a better spread. And also It's strange if a solo instrument suddenly appears at the right side of the screen, so in that case I'm put the solo normally in the center. But if it's brass or another type of instruments which is in the vocal range and we have dialog going on then I work with that phantom center (share L and R and work with reverb routed to C) As delivery format,_". Can anyone elaborate on this phantom centre as I can't quite get my head around it!

5. For the soundtrack CD (if it happens) is it worth me remixing everything again just in stereo, or would a 6to2 plugin do it ok without phase problems?

6. Talking of phase - what's the best way to check phase problems which might occur by adding mics together, when it's played in stereo (i.e. if a DVD downmixes)? Nuendo3 has a 6to2 plugin so I can use my ears for problems, but is there a way of seeing the phase visually in a 5.1 mix in case I miss something?

That's it for now!

Thanks in advance - please reply even if you don't know the answer to all of these questions - it all helps 

Ian


----------



## Thonex (Jul 28, 2005)

With Platinum, do as you say... but use only the Left *or* Right mic (not stereo) for the center channel. F mics can be assigned to front L,R with a little S mic blended for a "lusher" sound. For the surrounds, use the S mics exclusively. Use your judgment on how much signal to send to the surrounds.

For the live stuff, set up a stereo verb for the front L,R and another one for the surround L,R. Send some of the front L,R verb to the center channel. Blend to taste.

For the LFE channel, send what you want into it and use an elliptic-type EQ (very steep pole... almost vertical) set to low-pass around 110-120 Hz. I would monitor to see what needs to be included. Some music purists say nothing except cannons from the 1812 Overture should go to the LFE... I say use it judiciously. Try to send discrete parts in: string basses if needed, timps etc. You need good monitoring for this though. Make sure your speakers are calibrated... your sub as well.
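The steep low-pass described above can be sketched with a standard DSP library. This is purely a hypothetical illustration - the filter order, ripple figures and sample rate are my own choices, not anything specified in the thread:

```python
import numpy as np
from scipy import signal

fs = 48000      # sample rate in Hz (assumed)
cutoff = 115    # corner frequency, in the 110-120 Hz range suggested above

# An 8th-order elliptic low-pass: a very steep transition band, close to
# the "almost vertical" slope described in the post.
sos = signal.ellip(8, 0.1, 80, cutoff, btype='low', fs=fs, output='sos')

def lfe_feed(mono):
    """Low-pass a mono feed for the LFE bus."""
    return signal.sosfilt(sos, mono)

# Sanity check: a 60 Hz tone passes almost untouched, 1 kHz is gone.
t = np.arange(fs) / fs
low = lfe_feed(np.sin(2 * np.pi * 60 * t))
high = lfe_feed(np.sin(2 * np.pi * 1000 * t))
```

In practice you'd dial this in on the DAW's own EQ rather than in code - the sketch just shows how sharp an elliptic slope can be compared to an ordinary shelving filter.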

In Nuendo, you can mess with the convergence of the LRC channels to see how much Center channel you want. The more I talk to people the more opinions on the subject I hear. Just use your judgment.

I'd do a separate Stereo mix for the stereo release. Absolutely.

Read this AES pdf: http://www.aes.org/technical/documents/AESTD1001.pdf

Hope some of this helps.


----------



## Ian Livingstone (Jul 29, 2005)

Andrew - thanks - that's all great advice 

Cheers,
Ian


----------



## groove (Jul 30, 2005)

Hi Ian,

One thing you should decide before mixing is: are you going to use the 5.1 format to really send instruments to unusual channels (rear left or right etc...) or only to give a surround reverb effect?

As a dubbing mixer myself I've heard some interesting mixes with weird instrument positions, but usually it is a stereo mix with extra-wide stereo reverb. (I wouldn't get a plug-in but rent a good Lexicon 480 or TC Electronic M6000 if you can.)

Then I would do separate stems of percs / harmonic section / soloist.
Check with the mixer (of the movie) if he has some time - it will be more flexible for him to fill in with dialogue, FX etc...

As a composer myself, I just mixed a score mainly with a complete lib (EWQLSO Gold) - you can check it here (it's only the music though, not the full mix):
http://stefmail2.free.fr/add_temp_c.mov

It is not 5.1, but we rarely used the L&R channels together, so we could exaggerate the instrument positions because it was interfering with FX too much. Some French horns sound strange to me but we were short on time...

Get a phase meter, absolutely - your ears might be great but some weird things come out of those libs!!!

For the LFE channel I agree with Thonex - filter hard, even at 100 Hz.

I'll get back to you anytime you need for mixing tricks, but I have to admit I'm not accurate on how to perfect the use of Platinum in such a situation.

stephane


----------



## Marsdy (Jul 30, 2005)

Hi Ian



> 1. Percussion is all sampled (True Strike and EWQLSO Platinum) - if I do 3 separate passes and record each mic position one at a time, will I get phase problems? (as opposed to recording all 3 positions at the same time). My percussion template has maxed out Giga3's and Kontakt2's RAM just with the surround mics, so having all 3 positions loaded simultaneously isn't an option right now. I'm recording my Gigas/Kontakts straight into Nuendo3 via ADAT - I can't do this with Giga's sync record unfortunately as it causes stuck notes (don't ask me why - that's a whole new can of worms).



Unless you can get all three passes sample accurate they won't be phase coherent. This doesn't matter too much UNLESS the 5.1 mix is collapsed to stereo. There's always going to be some "jitter" on satellite PCs that stops sample accuracy. The only way to guarantee phase coherence is to run the samples in your DAW, where they should be sample accurate.
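To see why non-sample-accurate passes bite when the mix is collapsed, here is a minimal numpy sketch (the 24-sample offset is an invented stand-in for inter-take jitter): summing a signal with a slightly late copy of itself comb-filters, and at 48 kHz a 0.5 ms offset happens to null 1 kHz completely.

```python
import numpy as np

fs = 48000
t = np.arange(fs // 10) / fs
tone = np.sin(2 * np.pi * 1000 * t)   # a 1 kHz test tone

offset = 24                  # 24 samples = 0.5 ms = half a cycle at 1 kHz
late = np.roll(tone, offset)

# This sum is effectively what a stereo fold-down does with two passes:
summed = tone + late

# Past the rolled-in samples, the two copies are 180 degrees apart
# and cancel almost perfectly.
residual = np.max(np.abs(summed[offset:]))
```

Real jitter varies note by note, so instead of one clean null you get a comb filter whose notches move on every playback - which is exactly the smearing described later in the thread.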



> 2. For the live stuff I will have the multi-track stems (Decca tree, surround mics etc) - I'm assuming I'll need to add more verb as it's a fairly small stage, not a hall. Can anyone recommend a good surround convolution reverb (VST format)? I was looking into GigaPulse VST but I think you need 3 instances to get surround as it's just a stereo plugin - or am I talking bull? I can't tell from the Tascam site. Wizoo W5 has true 5.1 I/O so looks nice... Can't afford Waves IR2 right now.



Dunno about the PC sorry.



> 3. What should I put through the LFE channel - is it worth filtering everything and putting a bit of everything in, or would it be more effective to have the occasional gran cassa / timp on the stabs?



Personally I'd leave the LFE channel and the centre channel with no music in, especially if it's going to be released on DVD. People do really stupid things with their home cinema systems and you have no control over where they put their centre speakers or they might have their sub turned up all the way and so on. It's safer to leave well alone. Many mixers like to keep the centre channel free for dialog and the LFE channel free for, you guessed it, low frequency effects!



> 4. What should I put through the centre - one engineer told me "I usually make a phantom center to get a better spread. And also It's strange if a solo instrument suddenly appears at the right side of the screen, so in that case I'm put the solo normally in the center. But if it's brass or another type of instruments which is in the vocal range and we have dialog going on then I work with that phantom center (share L and R and work with reverb routed to C) As delivery format,". Can anyone elaborate on this phantom centre as I can't quite get my head around it!



I thought a phantom centre image was what you get in the middle of stereo speakers! There's no actual source of sound even though it sounds like there is. 



> 5. For the soundtrack CD (if it happens) is it worth me remixing everything again just in stereo, or would a 6to2 plugin do it ok without phase problems?



Absolutely do a separate stereo mix!! All the collapsed 5.1 mixes I've ever heard have been pooh. If you do some clever routing and bussing in your DAW you can make life easier for yourself here. Our mutual buddy Keith has done this in Pro Tools so he's the dude to talk to about this.



> 6. Talking of phase - what's the best way to check phase problems which might occur by adding mics together, when it's played in stereo (i.e. if a DVD downmixes)? Nuendo3 has a 6to2 plugin so I can use my ears for problems, but is there a way of seeing the phase visually in a 5.1 mix in case I miss something?



You'd need to get the mix down to stereo first unless there's a 5.1 phase scope out there - I don't know of one. There are plenty of stereo phase scope plug-ins; Waves do one, I think. I've been using the rather dinky one that comes with Logic. If you're not sure how to interpret a phase scope, send it a stereo mix with one side phase-reversed - you'll soon see what "out of phase" looks like! Listening to a stereo mix in mono is a great way to check for phase problems with your ears.


----------



## synergy543 (Jul 30, 2005)




Ian Livingstone said:


> 4. What should I put through the centre - one engineer told me "_I usually make a phantom center to get a better spread. And also It's strange if a solo instrument suddenly appears at the right side of the screen, so in that case I'm put the solo normally in the center. But if it's brass or another type of instruments which is in the vocal range and we have dialog going on then I work with that phantom center (share L and R and work with reverb routed to C) As delivery format,_". Can anyone elaborate on this phantom centre as I can't quite get my head around it!
> 
> Thanks in advance - please reply even if you don't know the answer to all of these questions - it all helps
> 
> Ian


I wonder if by "phantom center" he was referring to an equal balance of Left and Right as opposed to actually using the center speaker? 

I know that Tomita often mixed sections in three parts using a phantom center for stereo mixes. It was treated as three distinct sections (strings left, center, right) and created a very rich sound. This is a very interesting idea as it allows you to place instruments in the phantom center without having music physically coming out of the center speaker - thus not clashing with the dialog. (Just as a 3-way speaker system sends different frequency components to different drivers, the 5.1 surround system splits sounds in a similar way - LFE, music in stereo L/R, ambience and FX in the rears, dialog in the center - for the most part.)


----------



## PolarBear (Jul 31, 2005)

I'm absolutely no expert on this, but perhaps consider using Voxengo's Pristine Space and some good impulses from Ernest Cholakis or Peter Roos.


----------



## Ian Livingstone (Aug 1, 2005)

stephane, Dave, Synergy543, PolarBear - thanks - some good stuff there 



> Unless you can get all three passes sample accurate they won't be phase coherent. This doesn't matter too much UNLESS the 5.1 mix is collapsed to stereo. There's always going to be some "jitter" on satellite PCs that stops sample accuracy. The only way to guarantee phase coherence is to run the samples in your DAW, where they should be sample accurate.



Bah - that's what I suspected. Yes, it will be collapsed to stereo on the DVD so I may have to avoid this. I'll give it a go anyway and try Nuendo's 6to2 to see what happens.


> I wonder if by "phantom center" he was referring to an equal balance of Left and Right as opposed to actually using the center speaker?



Dunno - I will get him to clarify. I thought he meant have the reverb from the L/R in the centre but not any of the dry signal, so it'd still leave space for the dialogue... Will let you know when I've spoken to him - this is the guy at Technicolour studios who's dubbing, so I think he knows his stuff...

Cheers,
Ian


----------



## Thonex (Aug 1, 2005)

Ian Livingstone said:


> bah - that's what I suspected - yes it will be collapsed to stereo on the DVD so I may have to avoid this. I'll give it a go anyway and try Nuendo's 6to2 to see what happens



Ian,

Just do separate stereo stems for C, F and S and play them all back in stereo... you shouldn't have any problems. I combine the F and S mic sounds on some things in a stereo mix to "beef it up" and have never noticed any problems. 

Let us know how it works for you.

T


----------



## Ian Livingstone (Aug 1, 2005)

Thonex said:


> Ian,
> 
> Just do separate stereo stems for C, F and S and play them all back in stereo... you shouldn't have any problems. I combine the F and S mic sounds on some things in a stereo mix to "beef it up" and have never noticed any problems.
> 
> ...



Andrew - just to clarify - the issue I'm worried about is that I won't be recording all 3 mic positions at the same time, but one after the other - so they won't be sample accurate. If I had the RAM I'd have all 3 mics in my template and record all 3 simultaneously, as EW suggest you do.

So are you saying I should be fine doing them one at a time?

I think in theory Dave had a point with this



> The only way to guarantee phase coherence is to run the samples in your DAW, where they should be sample accurate.



but are you saying that in practice it's still ok?

Cheers,
Ian


----------



## synergy543 (Aug 1, 2005)

Ian Livingstone said:


> Thonex said:
> 
> 
> > Ian,
> ...


Hi Ian,

Remember what happens to sound in the real world. It takes about 3 milliseconds to move one meter. So if one of your speakers is 1/3 meter further away, or if you slightly shift your head to the side, the time difference between speakers can easily be a millisecond or more off. So unless you want tapping drum sticks from all five speakers to sound as if they are "inside" the listener's head, I wouldn't worry too much about recorded tracks not being sample accurate. Give it a try and I think you'll find it's not nearly as critical as so many other factors.
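The arithmetic above is just distance divided by the speed of sound - a quick sketch of the numbers (343 m/s is my assumed figure for room temperature):

```python
SPEED_OF_SOUND = 343.0   # metres per second, roughly, at 20 degrees C

def acoustic_delay_ms(distance_m):
    """Time in milliseconds for sound to travel the given distance."""
    return distance_m / SPEED_OF_SOUND * 1000.0

one_metre = acoustic_delay_ms(1.0)      # about 2.9 ms per metre
speaker_off = acoustic_delay_ms(1 / 3)  # ~1 ms for a speaker 1/3 m out of place
```

At 48 kHz, that 1 ms of listener-position slop is already about 48 samples - far more than the few samples of inter-take offset being worried about here.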

A few short tests should confirm whether your approach is satisfactory or not. Empirical confirmation is the best test.


----------



## Thonex (Aug 1, 2005)

Ian Livingstone said:


> Thonex said:
> 
> 
> > Ian,
> ...



I did it that way with EWChoir... no problems.

Also, if you record stems... and you hear phasing (which I doubt you will) then you can shift the tracks accordingly.

But ideally, yes... all the mic positions on the same machine sharing the same MIDI channel, and they should be phase accurate... at least it is here on my DAW.

On a side note... phasing and comb filtering is everywhere... first reflections phase with the initial wave and so on... I think EWQLSO is pretty forgiving with that kind of stuff...

Just try it out in stereo... if it sounds good in stereo and mono... then Bob's your uncle... otherwise... do all the mic combos sharing the same computer... which is ideal... but what a PITA.

T


----------



## Ian Livingstone (Aug 1, 2005)

ok guys - thanks 

Hope it's worth all the effort compared to just choosing one mic position and using a good surround convolution reverb. I know Nick/Doug/Maarten will tell us it is, but then again Herb says otherwise 

Cheers,
Ian


----------



## synergy543 (Aug 1, 2005)

Ian Livingstone said:


> ok guys - thanks
> 
> Hope it's worth all the effort compared to just choosing 1 mic position and using a good surround convolution reverb. I know Nick/Doug/Maarten will tell us it does, but then again Herb says otherwise
> 
> ...


Let us know how it goes Ian. Also, you might want to check the following thread on Quicktime 7 and surround (also works with QT6 to some degree).

http://www.vi-control.net/forum/viewtopic.php?t=1627


----------



## Marsdy (Aug 1, 2005)

The problem is with percussive transients. I've pretty much compensated for all the latency in my system between my Mac DAW and PCs running GS and Kontakt. However, you can't do anything about the jitter you get with PC MIDI interfaces and even the time stamped stuff at the Mac end isn't 100%.

I can definitely hear this on transient material. It is not so much an issue in 5.1 although it's still apparent, the percussive transients get slightly smeared to my ears. It's a very real problem when you downmix to stereo.

I did some tests with a sample accurate click in Logic and then MIDI playing the same click in GS on the PCs. There's no apparent latency but there's definitely phasing going on and it varies, some clicks phase cancel but most don't unfortunately. However, this is only really an issue with percussive material as far as I'm concerned. 

You'd think that you'd be able to get sample accuracy between machines in this day and age. MIDI sucks!!!


----------



## synergy543 (Aug 1, 2005)

Marsdy said:


> The problem is with percussive transients. I've pretty much compensated for all the latency in my system between my Mac DAW and PCs running GS and Kontakt. However, you can't do anything about the jitter you get with PC MIDI interfaces and even the time stamped stuff at the Mac end isn't 100%.
> 
> I can definitely hear this on transient material. It is not so much an issue in 5.1 although it's still apparent, the percussive transients get slightly smeared to my ears. It's a very real problem when you downmix to stereo.


OK, so why not record tracks separately and then shift them in the sequencer afterwards so that all attack envelopes line up precisely? This should be easy enough to do and would be as good as recording them all at the same time, except for a very slight amount of MIDI latency over which you have no control. However, you could even slice the tracks up and go through phrase by phrase to tweak the spots that you want to tighten up (by slightly moving tracks to align the attacks). This is about as close as you can get to compensating for MIDI latency - I've actually heard of engineers (maybe BT?) "time-aligning" entire projects.
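The "shift until the attacks line up" step can even be automated: cross-correlate a reference track against a late take and the peak gives you the offset in samples. A toy numpy sketch - the impulse "click" and the 37-sample lag are made up purely for illustration:

```python
import numpy as np

def find_lag(reference, take):
    """Return how many samples `take` lags `reference`, found at the
    peak of their full cross-correlation."""
    corr = np.correlate(take, reference, mode='full')
    return int(np.argmax(corr)) - (len(reference) - 1)

# A percussive "click" and a copy of it arriving 37 samples late
click = np.zeros(4800)
click[100] = 1.0
late = np.roll(click, 37)

lag = find_lag(click, late)       # recovers the 37-sample offset
aligned = np.roll(late, -lag)     # lines back up with `click`
```

This only fixes a static offset per phrase, of course - it can't undo jitter that varies note by note, which is the residual problem discussed below.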


----------



## Ned Bouhalassa (Aug 1, 2005)

This may sound a little 'trad', but how about just adding a sync beep 2 seconds before each cue stem/mic mix? You could then align the tracks to the sample, right?


----------



## José Herring (Aug 1, 2005)

I don't think you'll ever be able to completely get rid of jitter, though I think you can get it down to a point where it really doesn't matter.

How about MIDI over LAN - do you use that, Marsdy? It seems to me that that would reduce the possibility of excessive jitter.

I think the problem with jitter actually lies in that uncertainty principle thingy I remember studying some time ago. The more precisely you try to locate something in time and space, the more tiny variations seem like huge gaps.

Seems to me that timing, whether MIDI or digital, is always going to be a best guess within knowable parameters rather than a dead-on event.

Ah yes, time and space constantly moving. Kind of annoying.

Jose


----------



## Marsdy (Aug 1, 2005)

synergy543 said:


> Marsdy said:
> 
> 
> > The problem is with percussive transients. I've pretty much compensated for all the latency in my system between my Mac DAW and PCs running GS and Kontakt. However, you can't do anything about the jitter you get with PC MIDI interfaces and even the time stamped stuff at the Mac end isn't 100%.
> ...



MIDI latency on PCs can be surprisingly large, often in double figures. On my PCs it is significantly larger than my soundcard's latency, for example. Static latency itself is easy to compensate for, of course.

The issue is MIDI "jitter" on a note-by-note basis, which is inconsistent from take to take and easily enough to destroy true phase coherence. It's totally impractical to fix this in the real world!

I know if I have the close mic percussion from EWQLSO sample accurate in my Mac, it doesn't sound right layered with the stage and hall mics coming from my PCs when monitoring in stereo. In surround it doesn't sound as bad but I'm sure I can hear the transients get a little messed up and it's inconsistent with each playback.


----------



## synergy543 (Aug 1, 2005)

I think Ned's suggestion of using a sync click at the beginning of tracks is a good idea to get various machines or tracks aligned. MIDI latency can be adjusted by slightly shifting tracks until attacks align. 

MIDI jitter is another beast that's more difficult to tackle. But is this really the biggest problem? And is it really that severe? As long as you're not folding down two tracks with identical material you shouldn't hear phasing. And why should you need to fold the 5.1 mix down? It would be better to make a separate stereo mix. Another solution would be to compositionally place critical percussive sounds so they mainly come from one speaker, and balance them on the other side with a different sound - I haven't tried this but it might be worth trying if nothing else is acceptable.
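For reference, the fold-down being debated is just a weighted sum. A sketch of the common ITU-style coefficients (centre and surrounds mixed in at -3 dB, LFE dropped) - actual 6to2 plug-ins may use different weights:

```python
import numpy as np

def fold_down(L, R, C, LFE, Ls, Rs, k=0.7071):
    """ITU-style 5.1-to-stereo fold-down: centre and surrounds mixed
    into the fronts at -3 dB, LFE discarded (a common default)."""
    Lo = L + k * C + k * Ls
    Ro = R + k * C + k * Rs
    return Lo, Ro

# Centre-only content lands equally, 3 dB down, in both output channels
silent = np.zeros(8)
Lo, Ro = fold_down(silent, silent, np.ones(8), silent, silent, silent)
```

The sum is exactly where near-identical material in, say, L and Ls gets recombined - which is why jitter between mic passes that is inaudible in discrete 5.1 can turn into audible comb filtering after the fold.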

I'm not trying to make the problem seem insignificant but do try to distinguish between what you do and do not have control over and work within that framework.


----------



## Marsdy (Aug 1, 2005)

MIDI jitter IS the biggest problem because (a) there's not much you can do about it and (b) it can lead to phase issues that CAN be heard and cause problems, like in the example I mentioned above.

Time aligning to compensate for fixed latency is easy enough to do usually. In my setup, for example, there is no latency to speak of - Logic compensates automatically for it to sub-millisecond accuracy during playback. BUT... it's not phase/sample accurate on external computers because of MIDI jitter. It's close enough for the occasional "hit" to phase cancel when I've tested it. This is fine as long as phase coherence isn't a concern - and it is if you use multiple mic sets.

Even if you are working in stereo this can still be an issue if you have one mic set sample accurate in your DAW and another on a satellite PC, or mic sets split across two PCs. You CAN hear it and it's not consistent between playbacks. The way I work round it is to keep all the drums and percussion sample accurate in Logic if I need to layer mic sets - and in all honesty, it's easier just to use Altiverb.


----------



## FirmamentFX (Oct 4, 2005)

Just going back to the centre / phantom centre issue. If the music is for the big screen, I have often found that post engineers do not like instruments to be in the C channel (which is mostly used for locking vocals to the screen). Using a phantom centre helps to keep the music in the background.

Or Center


----------



## Ian Livingstone (Oct 4, 2005)

FirmamentFX said:


> Just going back to the centre / phantom centre issue. If the music is for the big screen, I have often found that post engineers do not like instruments to be in the C channel (which is mostly used for locking vocals to the screen). Using a phantom centre helps to keep the music in the background.
> 
> Or Center



FirmamentFX - thanks - this is quite an old thread and I've mixed my project now - but yes, you're right, the post engineer didn't like much in the centre other than reverb. Most of my cues were OK, but there were a few where I'd taken the C from the Decca tree and mixed it into the centre, and also taken close mics from EW Perc and True Strike for the centre - in which case he spread that back out to the L/R and Ls/Rs to create more space on the scenes with a lot of dialogue.

Ian


----------



## FirmamentFX (Oct 4, 2005)

Oh yeah - August 1st. Sorry didn't see that... :oops: 

:D


----------

