# Do you create music for 5.1 or 7.1?



## Joram (Jun 9, 2015)

We all know how great surround sound can be for music, yet a lot of composers seem to work mostly in stereo. There are good practical reasons for that: a surround set-up is more costly, mixing in surround requires specialized knowledge, and quite a few customers (TV production companies, for example) are not very interested in 5.1 or 7.1.

As a mix engineer I am interested in how you look at 5.1 and 7.1 surround sound. What are the pros, and what are the cons? What problems have you encountered? How does writing and composing for 5.1 or 7.1 influence your work?

It would be great if you filled in the poll and let us know what your experience is.


----------



## Daryl (Jun 9, 2015)

I work primarily in library, and around 8 years ago we decided to offer 5.1 mixes as well as stereo for a rather expensive double album. So far nobody has wanted the surround mixes. I think that says it all for me.

D


----------



## Joram (Jun 11, 2015)

Daryl, I have the same experience. I regularly mix for libraries, and although I have offered surround mixing, it seems there is no market for it. Not so strange, as libraries are not often used for feature films.


----------



## charlieclouser (Jun 11, 2015)

For my first feature (about 13 years ago) I set up for 5.1, composing and mixing in Logic and printing to ProTools on a separate computer. Signal passed from Logic to PT via 24 channels of ADAT LightPipe, so I had three stems plus a composite mix. Monitoring was done on a Dynaudio AIR system, which has 5.1 support, a dedicated volume remote with the ability to set preset volume levels and individual channel mutes, direct AES inputs, and CAT5 networking between all the speakers. Works like a dream.

I've used this setup ever since - all my features have been delivered in this fashion, although I expanded the channel count from Logic to PT up to 48 and now with MADI I'm at 64 channels, so with 48 I could deliver seven stems plus a composite mix, and now I'm at nine stems plus composite mix with a couple of stereo pairs left over.

Logic has severely limited surround capabilities - basically if you want to output multiple stems then you can't make use of surround panning and must fake it using multiple stereo and mono output arrays, much like you would when mixing for surround on an old SSL 4k where you'd use the multitrack busses at the top of the channel strip to route to your layback recorder. Since I had done this on the SSL back in the day it was not a head-scratcher to figure it out in Logic, and I was able to set it up and get working quickly.

Because of these limitations with Logic, I'm basically mixing to a front stereo pair, with an additional rear stereo pair, and occasionally using sends to route some things to the Lfe channels or (more rarely) the center channels of each stem. Essentially, I'm mixing in Quad. On most of my stuff I just leave the center channel empty, and the only thing that goes to a stem's Lfe would be kick drums, sub booms, and bass. No orchestral sounds go to the Lfe. I'm not doing orchestral simulations, my scores are very "hybrid" or totally electronic, so I never have a true 5.1 recording "off the floor" that I'm trying to recreate on the dub stage - so the limitations of Logic don't bug me at all.

For the type of features I am doing, the score is heavy on the sound design - with lots of ping-pong delays in quad, weird reverbs in the back, and sounds that swoop from back to front for accents. I often "quad track" instruments, much like you'd record two performances of a rhythm guitar in a stereo rock mix, I record four performances and lay them out in hard-panned quad. Leaving the center channel empty has never been a problem on the dub stage - in fact, on my movies the mixers usually appreciate this as that means they've got the center all to themselves for dialog and fx. If they need a phantom center for the score they just fade a little of the L+R into the center, but this is very rare. Sometimes they will take a strong kick drum or bass part and put it in the center and Lfe, but usually those elements are on separate stems so if I switch over to plain old quad it won't be a problem.

I may re-arrange my output matrix and just use Quad for everything - this would give me sixteen stems going from Logic to PT, and I'd just use bass management on the Dynaudio rig instead of a discrete Lfe channel. 

Recently I saw some of Junkie XL's videos describing his setup, and he's basically doing everything in quad. Front pair, back pair, done. I called up Alan Meyerson to ask his opinion on this and see if it was a crime to just eliminate the center and Lfe, and his opinion was mixed. He told me it's not a problem per se, but that on a JXL score there is a mixture of 5.1 and quad sources - the synth and sampled stuff is all in quad, but the live orchestra is in 5.1 - but since that's all "downstream" from JXL's rig it's fine. He said if I'm doing all in-the-box and hybrid stuff that I needn't worry too much about just moving to quad, as this gives the ability to deliver more stems in a given channel count, and the mixers can then isolate elements from those stems and distribute them to center, Lfe, or even 7.1 or Atmos arrays easily.

For television it's all stereo. They just splash some of my score into a surround reverb on the stage and it sounds fine. 95% of the listeners will not have a surround rig set up to watch a show on FOX anyway. For these stereo deliveries I use my same output matrix but only put signals into the L+R pair of each stem, and record each cue into ProTools as only four stereo pairs, skipping all of the surround, center, and Lfe channels. This way all my channel names don't have to change - my default position is full 5.1 on every stem and if the project calls for narrower stems I just skip the unneeded channels. For the moment I will ignore 7.1 until a project calls for it. I have the monitoring all set up, and with my MADI rig I can output to seven 7.1 stems plus a composite 7.1 mix, so that's not a problem, but all of the projects I've been on have had 5.1 as a mix destination on the dub stage so far.

The most important thing is that I don't RELY on the surrounds being there at all - I make sure the mix sounds as intended when the surrounds are muted. If they are audible, great... but if the guys on the dub stage decide that it's too crazy with all the quad ping-pongs and stuff and they kill the surrounds I won't be butt-hurt, since I've got mixes that sound just fine without the surrounds. The surrounds are just extra sauce for me.

If I were doing "legitimate" surround recordings of my sources, like tracking orchestra at Air Lyndhurst or whatever, then I might be more concerned with how the dubbing mixers deal with my surround channels, but since I'm just making a hybrid chaotic mess it really doesn't matter that much. The quad ping-pong delays and quad-tracked prepared pianos do sound cool though....


----------



## Tanuj Tiku (Jun 11, 2015)

Thank you for your detailed explanation Charlie. Always helpful!

I am getting into surround soon. My room will be ready by August and I have never done this before.

My first question is printing stems. Why do you print them in Pro Tools? Is it because the film mix engineers want a Pro Tools session from you or is there any other advantage?

In Mumbai, they are happiest with WAV files, because not everybody is using the latest Pro Tools and they have trouble loading sessions a lot of the time.

But it's easier in stereo. In surround, I am not sure. At the moment, I am just thinking of printing internally in Cubase. Adding a Pro Tools system will cost more money too, not to mention a good clock!

What are your thoughts on this? 

Second question:

Some people even claim that printing via your AD-DA is better, because what you have been hearing is what gets printed. But I think maybe it's a bit of overkill, and we are working hard to make the system as transparent as possible; any changes to the sound should be minimal. It will change anyway at the dub stage. I am planning to get the Orion 32 with MADI, so if I do go for a Pro Tools rig (I can only afford Native at this time), it will all be printed digitally via RME.

Third question:

How long did it take for you to be confident to use surround on a film? I am a little worried about using it too soon on a project unless I understand what I am doing. It is a new thing for me.


Tanuj.


----------



## charlieclouser (Jun 11, 2015)

Tanuj Tiku @ Thu Jun 11 said:


> My first question is printing stems. Why do you print them in Pro Tools? Is it because the film mix engineers want a Pro Tools session from you or is there any other advantage?
> 
> In Mumbai, they are most happy with wave files because not everybody is using the latest Pro Tools and have trouble loading sessions lot of times.
> 
> ...



1 - I print to a separate machine running ProTools because back when I set all this up for the first SAW movie in 2003 or so it was not practical to print back to empty tracks in the Logic machine - we were running old Mac G4 machines then! Plus, I already had a separate ProTools machine all set up from the days of making rock records - I'd use ProTools to track drums, bass, and guitar at other studios, and then I could easily exchange sessions with those outside rooms, and I'd mix in ProTools as well. I'd use the Logic+Ableton+Reason machine for my own remixes and electronic stuff, and when doing rock records that would be the synth and loop station, printing those things over to the separate ProTools machine as the tracks were built up. So, I already had things set up like this, and when I started doing scores I just did everything in Logic and I had this ProTools machine just sitting there, all connected via LightPipe, so it was a simple matter to use it as a "layback recorder" or "mix down deck". 

It is definitely the case that they are all using ProTools on the dub stages here in LA, but I don't actually give them my session data since they would have to remap all the i/o channels, etc. - so I just give them time-stamped WAV files. I do, however, get a copy of their empty session before I start printing mixes, and then I load that up, change the i/o settings to match my setup, and use that modified version of their template as my template. I do this so there is no mis-match between the various session parameters like frame rate, offset, etc. But in the end they just import my WAV files into their sessions on the dub stage. For each reel, I also print a 2-pop and a tail-pop, and merge them into one gigantic long mono audio file. Then they can lay that into their session and verify that it matches what they are seeing on the stage so there is no confusion about things being in sync. When I have printed a set of stems for a cue, I rename the files like this:

SAW3-1m07-HORRORSHOW.Ls

...where "SAW3" is the project title; "1m07" indicates reel (or act, in the case of television) one and cue seven; "HORRORSHOW" would be the cue title; and ".Ls" indicates the channel within a surround or stereo set of files. For surround the file suffixes are L, R, Ls, Rs, C, and Lfe. I also do NOT "re-start" the cue numbers with each reel or act. Some folks would have the first cue in reel/act two be designated "2m01" but I do not like this - I use sequential numbering from the first cue to the last, so if there are 9 cues in reel one then the first cue in reel two is 2m10. I also make sure to use "leading zeros" so that things alphabetize correctly in folders on the desktop - I never use 2m7; instead it would be 2m07.
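That numbering scheme is simple enough to sketch in a few lines of Python. This is purely an illustration of the convention described above - `cue_ids` and the per-reel counts are hypothetical names, not anything from Charlie's actual setup:

```python
def cue_ids(cues_per_reel):
    """Generate sequential cue IDs like '1m07', continuing the count
    across reels instead of restarting it at each reel."""
    ids = []
    cue = 1
    for reel, count in enumerate(cues_per_reel, start=1):
        for _ in range(count):
            ids.append(f"{reel}m{cue:02d}")  # leading zero so names alphabetize correctly
            cue += 1
    return ids

# With 9 cues in reel one, the first cue of reel two comes out as 2m10:
print(cue_ids([9, 3]))
```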

As each cue is printed I move all the files for that cue into a subfolder, and name the folder like this:

SAW3-1m07=01.11.22.00

...and then I make a "zip" file of that folder and upload it to the stage. That way they have an absolute indication of the timecode that the files should start at, without the need for a text file or ProTools session, in case the time-stamping doesn't work (which has happened once or twice). Also, after they "un-zip" the file they still have the original "zipped" version, so if they screw something up it's like a backup copy that they can "un-zip" again if they need it.

I also make sure that files always start on a whole second - that is, never on xx seconds and 18 frames or whatever. This simplifies things since now we only need to know minutes and seconds in order to place the cue correctly. It also means that I always have a little bit of "dead air" before the start of each cue, so there is no danger of chopping off the very beginning of a cue that has a hard start, which could happen if you are trying to have ProTools "punch in" exactly where the first kick drum hit is or whatever. If the first bit of audio for a cue happens at 01.11.14.22 then I will set ProTools to punch in at 01.11.14.00 and then I have 22 frames of dead air. Sometimes I will even back it up another second, like if a cue has a hard start at 01.11.14.01 then I will set the punch-in to 01.11.13.00 so there is one second and one frame of dead air at the top.
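The whole-second punch-in rule above is easy to mechanize. Here is a hedged sketch in Python - the `min_lead_frames` threshold is my own assumption for when to back up the extra second, since Charlie decides that case by case:

```python
def punch_in(tc, fps=24, min_lead_frames=12):
    """Given a cue's first-audio timecode 'HH.MM.SS.FF', return a punch-in
    point on a whole second, backing up one extra second when the natural
    floor would leave too little dead air before a hard start."""
    hh, mm, ss, ff = (int(x) for x in tc.split("."))
    total = ((hh * 60 + mm) * 60 + ss) * fps + ff
    start = (total // fps) * fps        # floor to the whole second
    if total - start < min_lead_frames:  # e.g. a hard start at ...14.01
        start -= fps                     # back up one more second
    s, f = divmod(start, fps)
    m, s = divmod(s, 60)
    h, m = divmod(m, 60)
    return f"{h:02d}.{m:02d}.{s:02d}.{f:02d}"

print(punch_in("01.11.14.22"))  # 01.11.14.00 -> 22 frames of dead air
print(punch_in("01.11.14.01"))  # 01.11.13.00 -> one second and one frame
```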

So even though I use ProTools, I don't actually exchange PT sessions with the stage - as you do, I just give them time-stamped WAV files - but ProTools is a great way to create those WAV files and know for sure that they are actually time-stamped.

Another reason I like to use the separate PT machine is because most of the time my cues overlap - the first cue is trailing off while the second one is fading in. Using two machines lets me have the PT session as a "whole reel" session, so when cue #2 is loaded into Logic I can go ten seconds before the start of cue #2 and hit play in ProTools and hear the tail end of cue #1 playing while Logic starts cue #2. This really helps me fine-tune the transitions between overlapping cues. A bonus is that the PT session is a representation of the whole reel, and I can store alternate versions, demos, and rough mixes on muted tracks in that session at their appropriate time code points. If the director is there and wants to preview and approve the project, I can just play the whole reel from ProTools without needing to stop and start and wait while loading up each cue in Logic.

I don't use an external clock like a Big Ben, Rosendahl, or Antelope unit. When working with two machines I just use the Avid SYNC peripheral to send MIDI TimeCode to a MIDI interface on the Logic machine, with Word Clock coming from the SYNC unit to the word clock input on the MOTU audio interfaces on the Logic machine. Although using MTC and word clock to sync the two machines works, I tend to favor sending actual LTC from the XLR jack on the back of the SYNC unit into the LTC input on a Unitor mk2 MIDI interface on the Logic machine. This way the timecode is a big strong audio signal on a balanced audio cable, not MIDI (which could get weird or fail) and this is bulletproof. I verified that using MTC does work, since the Unitors are discontinued and may stop being compatible with modern computers someday, but I actually have a bunch of them and I'm still using the LTC on a mic cable since it's foolproof.

I will say that when I am just doing television stuff I don't bother with the ProTools machine at all - I just bounce directly within Logic. For television I usually only give them three stems plus a composite mix, and I just color-code my regions in Logic to represent which stem each region should be a part of. Then I can select one of the drum regions, for instance, and use a command in Logic called "select equal colored objects" which selects all the rest of the drum regions, then I use the "object solo" function in Logic which is like muting or deleting all the un-selected objects. Now, when I do a bounce, only those objects will play, even though the entire mixer and all plugins and effects are wide open. I repeat this twice more for the other two stems, then un-solo all objects and bounce the composite mix. This does mean that I have to bounce each cue four times for three stems and a composite mix, but it's not a big deal really. I never use off-line bounce because I want to hear each stem play all by itself as it bounces - I like each stem to be a complete-sounding subset of the whole piece of music, and this helps me ensure that each one sounds okay. This "object solo" function is unique to Logic, and I rely so heavily on it that this is one reason I might not ever be comfortable with Cubase.

With television shows, the director never comes over to preview cues, so being able to play a whole reel or act in one continuous pass is not needed - this is why I don't bother with ProTools on a tv series. For previewing I just upload stereo mixes to the picture editor or music editor and they lay it against picture and the director can preview cues in the edit bay or via download on the PIX system, which everybody seems to use now. 

Anyway, I don't think it's a disadvantage if you just want to bounce within Cubase and not bother with having a separate ProTools system... but since I already have the systems it does make some things a little more convenient as I described above.

2 - In my system there are no analog cables in use, anywhere. My speakers have AES digital inputs. The only analog wire in my room is inside the speaker cabinets, between their built-in power amps and the drivers themselves. When I am working in Logic, the speakers are connected to Logic via AES, and when I need to start printing stems, I have a switch box that lets me connect ProTools directly to the speakers so I am monitoring through ProTools - this is how I can hear the overlap between cues. But for the first few weeks of a project, when I am just writing in Logic I hit the switch so that Logic goes right to the speakers and I don't have to boot up ProTools just to monitor through it. I switch Logic back to internal clock when not using ProTools, and when I DO want to print stems to PT I just set Logic's audio clock to slave to word clock coming from the Avid SYNC, tell it to slave to incoming LTC/MTC, and that's it.

I am a firm believer in an all-digital setup. For many years, before this was practical, I spent way too much money and time on Apogee AD-8000 units and so forth, and I hated all the comparisons and fiddling around trying to get the best A>D and D>A. I love being all-digital. No noise, no ground loops, no hiss, no hum, no oxygen-free speaker cables, no need to be careful about running AC power cables next to audio cables, none of those hassles. I haven't had a noise issue since I started using the Dynaudio AIR system with AES inputs. It's great. I will never go back to analog outputs between my rig and the speakers. Never.

3 - I had done some crude surround mixing on analog SSL 4k consoles before my first score, but when I set up my rig in 2003 I jumped right in, and although I've increased my channel count some so that I can deliver more stems, the basic concept, layout, and channel arrangement has remained unchanged for 13 years - and I am in no hurry to change it ever again! Keep in mind that I am not dealing with legitimate recordings of acoustic 5.1 sources like live orchestra or whatever - it's basically me using front and back pairs of signals, quad ping-pong delays, front and back stereo reverbs, etc. to create an immersive sound space. As I mentioned above, due to the limitations of Logic's surround implementation I don't have any "joystick" panners like you see in Cubase or ProTools, and I can't freely automate things swirling around the room - I am actually just doing front pair plus back pair and using sends to direct certain things to the center and Lfe channels. It's a bit crude compared to some setups, but I find it's actually faster and easier to do things this way. I don't have to mess with actual surround reverbs - I just use two instances of a stereo reverb for front and back, and use slightly different programs on each to create a little depth. It also means that I am not limited to only using those effects plugins which are available in surround versions - I can use any old stereo effects and just use two at a time. This is why it will be a simple switch over to just plain old quad routing if I decide to do that in order to gain more stems (16 quad stems across 64 outputs versus ten 5.1 stems with four channels left over in the same 64 outputs).
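The channel arithmetic behind that quad-versus-5.1 trade-off is worth spelling out - just a quick worked example of the numbers quoted above:

```python
outputs = 64   # MADI channel count from Logic to ProTools
quad = 4       # L, R, Ls, Rs
five_one = 6   # L, R, C, Lfe, Ls, Rs

print(outputs // quad)            # 16 quad stems, nothing left over
print(divmod(outputs, five_one))  # (10, 4): ten 5.1 stems plus 4 spare channels
```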

I dove right in to surround on my first feature and I do things basically the same way as I did 13 years ago. In my setup, the rear speakers are NOT tiny little things with 6" woofers - they are the same as my front speakers, and I think this helps somewhat - although I did just buy a set of gigantic Dynaudio AIR25 3-way dual-woofer monitors and AIRBase-24 dual-12" subwoofers for the front, so when I get those set up then the rear speakers will be a bit smaller as they are AIR15 single-10" woofer 2-ways - but the Dynaudio AIR series sound remarkably consistent across the range, the bigger ones are just louder but don't really have a significantly different "sonic footprint" so I don't think it will be a problem.


----------



## clisma (Jun 11, 2015)

Mr Clouser, your post is an absolute treasure of useful improvements to my current workflow. Thanks so much for the detailed share: I feel like I've just had an exhaustive course on proper organization for the modern film composer!

Your post is now a sticky for my current project. 
Luc


----------



## charlieclouser (Jun 11, 2015)

Happy to help, guys....

A couple of things I left out:

- In my examples of how I name files for delivery to the dub stage, I completely spaced out the part about naming each stem. In my examples above, I described a typical file name as something like:

SAW3-1m07-HORRORSHOW.Ls

This is incorrect, as it does not indicate whether that file is the left surround of the drum stem, keys stem, or what. In actuality, the file would be named:

SAW3-1m07-HORRORSHOW-DRM.Ls

Where the "DRM" indicates that that file is the left surround channel of the DRUMS stem. My short stem names are DRM, KEY, ORK, MIX when dealing with three stems and a composite mix, but I put a "z" before the word "MIX" so that the MIX files appear at the end of any list in the Finder. Otherwise they would stack up as DRM, KEY, MIX, ORK and this would be confusing. When dealing with more than three stems, my typical stem name suffixes would be:

DRMa = drums A
DRMb = drums B
KEYS = synths / keyboards
METL = bowed metals
Obrs = orchestral brass
Ostr = orchestral strings
Ownd = orchestral winds
VOX = orchestral choirs
WILD = crazy fx shit and wild-card stem
zMIX = the composite mix

Even though they never actually use the composite mix files on the dub stage, I always include them in any delivery so that my music editor has them on hand in case he needs to give a director/producer reference mixes or whatever.
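The "z" trick works because of plain character-code ordering, which a quick sort illustrates. Python's default string sort is a reasonable stand-in for the Finder here, though the Finder's exact collation may differ slightly:

```python
stems = ["DRMa", "DRMb", "KEYS", "METL", "Obrs", "Ostr", "Ownd",
         "VOX", "WILD", "zMIX"]

# Lowercase 'z' sorts after every uppercase letter in ASCII, so the
# composite mix always lands at the bottom of the list.
print(sorted(stems)[-1])  # zMIX
```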

- When dealing with multiple revisions of a cue, I name them as follows:

SAW3-1m07v2-HORRORSHOWv2-Ls

The "v2" indicates that it is the second version of that cue. The first one is usually not named "v1" since I'm secretly hoping there won't ever BE a "v2" - but if there is, then the original one appears first in any list, with successive ones below it. I put the "v2" in both the cue number and the title, so that later if I'm making CDs or whatever and I strip off the "SAW3-1m07v2" from the front of the name I still know what version I'm dealing with.

- Sometimes there is an "overlay" for a cue - this is just a single string line or sound design element or whatever. Like if they're on the stage and already have the cues edited and mixed but they just want one more little zing or something. In cases like this I will reprint the whole cue, stems and composite mix, for my own use but they might not want to replace the cue in their session and have to re-do any edits, etc. In this case I would print JUST the overlay, and name it:

SAW3-1m07olay-HORRORSHOW OLAY-Ls

This clearly indicates that these files are just an overlay, and not a complete cue. But I always reprint the whole cue with the overlay included, and often give that a "v2" or whatever for in-house clarity.

Being strict and uncompromising with file naming conventions will save your ass down the line when you're rooting around in folders searching for the keys stem of the fourth revision of a cue. I recently had to compile the mixes from my first-ever tv series, before I really had this shit figured out, and it was a nightmare - looking at folders full of files named "FL183-2m05-MIX" with no cue titles in the filenames, and no copies of the original spotting notes to indicate cue titles, it was a mess. Luckily I found the paper printouts of the spotting notes in a box in the garage, but most of the time I don't keep them. So be vigilant and uncompromising in your file naming, boys - no matter how much of a hurry you're in, don't skimp or rush through it or you'll regret it later.

Those first few characters in the file names, where "SAW3" is in my example, are also important to keep short. I never use more than five characters there - movies get abbreviated, so "Resident Evil: Extinction" (which was the third movie in the franchise) was abbreviated to RE3, and "Death Sentence" was abbreviated to "DS". For television, I include the episode number, so LV214 indicates "Las Vegas, season two, episode 14" and Ns511 indicates "Numb3rs, season five, episode 11". If the production uses a different numbering scheme that DOESN'T clearly indicate season and episode number, just use whatever number they assign to that episode, since it's asking for trouble if you are giving them files with your own episode numbering scheme when every single other document they've got that deals with that episode has a different number on it.

In the old days, various versions of the MacOS and Windows had a limit to how long file names could be - I think it was 32 characters. Logic had it even worse - EXS24 couldn't deal with samples whose file names were longer than 24 characters. So I've always tried to keep things nice and tidy. No cue titles like, "ESTEBAN GIVES ESMERELDA HER KEYS BACK" or crap like that. It should be "ESTEBAN KEYS" or something short like that. You'll thank me later.

I also make sure to never re-use a cue title, and I do mean NEVER. I never have a cue called "Car Chase" or "Shootout" or basic-ass titles like that, since it's inevitable that I'll score many "Car Chase" scenes over a long career. My music editor creates the spotting notes using a Filemaker database template that is set to NOT allow duplicate cue titles - it even checks ALL other projects we've ever worked on, so if I did a cue ten years ago called "NIKKI LEAVES" we're never allowed to use that title again, EVER - if this week's episode has a character named Nikki who does, in fact, LEAVE... well, we'll just have to think of a different, unique title for the cue.

This "never re-use a cue title" restriction has really worked out great over the years. Now, whenever I'm talking with my agents about what cues they should put on a submission reel, we can just say, "How about FOURTH CENTURY followed by BLISTERS followed by CHARM BOYS followed by HELLO ZEPP?" and we both know exactly what pieces of music we're talking about. I think this might be a holdover from my years making records - a band would never have two songs with the same title, and you'd always try to avoid using a song title that another band had used. So it was sort of a reflex action - but it really, REALLY helps down the road when you're going through archives and such. I just did a tally and in just the last 13 years I've done 8,644 cues - and each one has a unique title. Crazy.

Food for thought.


----------



## lachrimae (Jun 11, 2015)

I was reading through your generously insightful post when it occurred to me that I was hearing your music in the background (wife watching Wayward Pines). 
Get out of my head Mr Clouser! 







Seriously, thanks for sharing...


----------



## charlieclouser (Jun 11, 2015)

How dare you be looking at a website while half-watching Wayward Pines?!?! Wayward Pines demands your full attention!!!

I love me some Tim and Eric... but Dr. Steve Brule is my favorite.


----------



## Joram (Jun 12, 2015)

Thank you , Charlie, for the extensive explanation!


----------



## AR (Jun 12, 2015)

Hey guys! I mix in 5.1 all the time nowadays. I would mix in 7.1, but somehow Cubase still doesn't support it. Is anyone else here frustrated that Steinberg hasn't implemented this feature in Cubase (and only Cubase)?


----------



## Tanuj Tiku (Jun 12, 2015)

charlieclouser @ Fri Jun 12 said:


> Tanuj Tiku @ Thu Jun 11 said:
> 
> 
> > My first question is printing stems. Why do you print them in Pro Tools? Is it because the film mix engineers want a Pro Tools session from you or is there any other advantage?
> ...




WOW! Thank you ever so much Charlie! That helped a lot! 

Couple of more questions ~o) 

1. I have no sub in the room. My designer was not in favour of it, and neither is Dolby for music as such, so for better or worse I am not going to have one. Is it better to send a little to the LFE once in a while, or in such a case should I just leave it for the dub stage?

2. How can I listen back to music released in 5.1 and watch movies in 5.1? 

I am unable to find any information on how to play back a Blu-ray etc. in surround via RME on my system. And let's say that works - what happens to the sub information? Is there a way to route the sub into the front speakers, or will I never hear what's going on in the sub?

The only reason I want surround playback and Blu-ray is to learn from them.

My front speakers are full range and will go down to 20 Hz. 


Thank you once again!


Tanuj.


----------



## charlieclouser (Jun 12, 2015)

Tanuj - 

1 - I wouldn't worry about sending things to the Lfe if I were you, and leave it for the dub stage guys to figure out if they want to. If you give them enough stems, they can pick some elements to send to the Lfe. I have heard some opinions from mixers who say that no music should ever be routed to the subs in the theater, but if you're doing Mad Max Fury Road then that might not be the case! I have done a few scores with very pounding electronic / industrial beats, and I did route the kick drum (hard sampled kick on the DRM stem) and the synth bass (on the KEY stem) to the Lfe channels on those stems, and in a couple of cases they actually did use these channels on the dub stage, but not much. I actually was glad to hear that JXL is just using quad and ignoring the center and Lfe channels as this is basically what I've been doing for a long time. I use my center speaker to play back dialog and fx from the video, and I don't route any music there at all, and I actually like just using my subs in bass management mode, where signals below 50hz are extracted from the front L+R and sent to the sub. So in your case, as long as you can split up your music into enough stems I wouldn't just route things to the Lfe channels if you can't monitor them while doing so. If all your drums are mushed together into one stem then it might be more problematic for the mixers on the dub stage to just route the whole drum stem to the sub, but if you can put the things you want to go to the Lfe on their own stem, or at least separate things a little bit, then the mixers could easily push a bit of those elements to the Lfe channels on the stage.

2 - For listening to 5.1 sources and comparing to my mixes, I actually have a DVD player that has analog 5.1 outputs connected to analog inputs on my MOTU audio interface.  No Blu-Ray here yet. I have those inputs routed directly to the appropriate speakers in the MOTU CueMix software. I have this connected via analog because there are problems connecting sources like these via digital - the receiving device usually needs to support some form of copy protection so you can't just record a direct rip of a Blu-Ray into your DAW. TOSLink optical and HDMI both support some form of copy protection, and usually a receiving device like an RME interface will correctly report that it is a digital input capable of recording, so the sending device will say, "But you're not allowed to record my output digitally! Now disabling digital output..." This is why I use an old Onkyo home stereo DVD player that actually has 6x analog outputs, and I don't care that it means I have another layer of D>A and A>D in my chain. It also means I can record or sample things off of DVD! I guess in your case you'd want to route the Lfe channels from the Blu-Ray to your front speakers only and just leave it at that.


----------



## Tanuj Tiku (Jun 12, 2015)

Thank you Charlie! I have been meaning to find many of these answers for months and you have basically answered all of them. Thank you very, very much. I am going to mark these posts to go through them again before I begin working in surround.

Now...off to look for an analog DVD/Blu-ray player!!


Tanuj.


----------



## nogills (Jul 7, 2021)

charlieclouser said:


> For my first feature (about 13 years ago) I set up for 5.1, composing and mixing in Logic and printing to ProTools on a separate computer. Signal passed from Logic to PT via 24 channels of ADAT LightPipe, so I had three stems plus a composite mix. Monitoring was done on Dynaudio AIR system, which has 5.1 support, dedicated volume remote with the ability to set preset volume levels and individual channel mutes, direct AES inputs, and CAT5 networking between all the speakers. Works like a dream.
> 
> I've used this setup ever since - all my features have been delivered in this fashion, although I expanded the channel count from Logic to PT up to 48 and now with MADI I'm at 64 channels, so with 48 I could deliver seven stems plus a composite mix, and now I'm at nine stems plus composite mix with a couple of stereo pairs left over.
> 
> ...


Just popping in, 6 years later, to say your posts Charlie have really really helped me with questions I've had a while now. Thank you!


----------

