# Working with surround and Spitfire libraries



## europa_io

Hi - 

I've been working with a 5.1 (well, 5.0 really) approach with Spitfire Libraries for some time now, as I prefer the sound I can get, and with the thinking that I can always down-mix to two channels but can't do the reverse satisfactorily (other than fudging it with an up-mix plugin).

But it is a workflow overhead and just makes things a bit over-complicated when I don't have minions like the big kids. 

Recently I've been getting asked to deliver stereo stems or mixes to the dub for TV work, and they do a 5.1 upmix with Halo Upmix or similar if their delivery format requires it. So I kind of wonder - why bother with the extra effort for small sonic benefit?

Options: 
Stick with 5.1 right up to the last step, as it has the most flexibility and the most satisfying sound while working (with gallery mics panned to the rear, for instance), albeit with more pain and less access to really nice stereo-only reverbs etc.

Or, work with a satisfactory 2 channel mix of instruments end-to-end which includes gallery mics blended into the front from the outset, and work 2 channel throughout. 

I'm trying to convince myself to do the latter...

What do you think?

Interested in your approach @christianhenson - I don't think it's something you've covered in one of the Spitfire videos?

Thanks!


----------



## Sekkle

Hi 

Yeah, I've tried both approaches and found that routing everything in 5.0 just created more work for not-so-much return. Especially when mixing with other libraries and stereo/mono instruments like synths, it becomes problematic. I don't have an all-Spitfire template, so this probably contributed to my frustration. I also found that the gallery mics in the surrounds easily get lost once on the dub stage, especially once atmos/FX come in and 3 dB is dropped, and I think they sound better in the front if the extra reverb is needed.

My current approach is to compose and produce everything in stereo and, once the cues have been approved, mix everything in 5.1 as required. When I get to this stage I bus everything out to the main stems for surround mixing, which then filter down into the final delivery stems. I use either surround verbs like Phoenix/R2, or two stereo reverbs - one front and one rear. It offers more consistency when trying to blend different libraries and instruments.

I also use Halo, mainly for synth/sample-based and ambient sound-design sounds. I found that using it on orchestral samples that are mainly geared for a stereo mix meant they became too thinned out, spread throughout the soundfield, as it essentially derives and spreads everything out through all the channels from the stereo source. I find it also does this with most sounds, and I plan to experiment with Trevor Morris's technique of only adding the centre and surrounds to the existing stereo mix (essentially adding the extra information to the untouched stereo sound). I need to test this though, as I wonder if there could be phase issues when folding back to stereo. One thing Halo is great for, though, is deriving an LCR from the stereo instruments that folds back to stereo perfectly, as long as the correct coefficients are used in exact mode (-3 dB centre in my case).
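Halo's exact-mode maths isn't public, so the following is only a minimal sketch of why that kind of upmix can fold back perfectly (the function names and the steering amount are hypothetical): derive the centre from the mid signal, then pre-subtract from left/right exactly what the -3 dB fold-down will add back.

```python
import math

def upmix_lcr(left, right, amount=0.5, centre_gain=1 / math.sqrt(2)):
    """Derive L/C/R from a stereo pair so that the standard fold-down
    (L + g*C, R + g*C) reconstructs the original stereo. `amount`
    steers more or less of the mid signal into the centre;
    `centre_gain` is the fold-down coefficient (-3 dB here)."""
    l_out, c_out, r_out = [], [], []
    for l, r in zip(left, right):
        c = amount * (l + r)               # centre derived from the mid signal
        l_out.append(l - centre_gain * c)  # pre-subtract what fold-down adds back
        r_out.append(r - centre_gain * c)
        c_out.append(c)
    return l_out, c_out, r_out

def folddown(l, c, r, centre_gain=1 / math.sqrt(2)):
    """LCR -> stereo fold-down with a -3 dB centre coefficient."""
    return ([li + centre_gain * ci for li, ci in zip(l, c)],
            [ri + centre_gain * ci for ri, ci in zip(r, c)])
```

Whatever `amount` is chosen, the subtraction and the fold-down cancel term by term, which is the property that makes the stereo deliverable safe; a discrete centre (e.g. a real centre mic) carries no such guarantee.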

I guess after testing different surround approaches on the last few projects, I've come to realise that the surrounds are immersive ear candy and are probably best treated that way. The inconsistencies of theaters and playback systems mean that there's no guarantee that the audience will even hear it as intended. Also, things can end up being changed during the sound mix, so if you have important information in the surrounds or even the centre (like the gallery mics, for example), it could easily be lost with the pulling down of a fader. So now I try and make sure the really important stuff is in the front left/right/centre (where applicable) and then expand out from there when needed.


----------



## europa_io

Thanks Sekkleman for taking so much time to respond. Much appreciated. 

Yes - you're right it's probably just immersive ear candy. That's exactly my addiction!  My brain is telling me there is little discernible benefit when this is folded down (though I can tell the difference), but my ears/"heart" are telling me it feels so much nicer.

I have yet to find an approach, or develop the balancing skills, to make a stereo downmix from a mix that stayed surround until the very last step sound as satisfying as a stereo-all-the-way mix.

There might be a little emperor's-new-clothes-effect going on inside me as I A-B the options, but more than a little bit of me is saying to myself "trust my ears, not just my brain".

Thanks Sekkleman. It would be great to hear back about your experiments with Trevor Morris's technique at some point. 

Cheers.


----------



## JohnG

Sekkleman said:


> routing everything in 5.0 just created more work for not-so-much return




Another way to do this is a "poor man's surround," also known as "not really surround but...." This may sound dumb but hear me out because it gives the dub stage all the control they may want and may still be (somewhat) satisfying to you.

Compose however you like -- if you like listening in surround, great! -- but PRINT into two discrete stereo pairs:

1. Pair 1 is the "main" stereo track,

2. Pair 2 is your (the composer's) proposed surround set of tracks (left and right rear speakers).

The problems with delivering a 5.1 mix are, as you may agree, numerous. The stage may put your mix through a black box you don't own, or with settings you can't anticipate or feel are wrong -- but even if you're there, your chance to intervene is politically an issue and may be unsuccessful anyway.

Even if they are using a 5.1 algorithm or a box you _do_ have today, they might decide without telling you to route the music for next week's episode through the new, exciting "BlamiSurroundO-Maximiser."

Other problems with trying to supply 5.1 music abound -- not infrequently, they don't want anything musical in the sub channel as they reserve that for "booms" and other Real Loud SFX; they will somehow manage to get your 5.1 stems mixed up or out of phase or simply canceling each other in some unexpected way (this happened on a feature film to me -- the director called and asked what the h___ I was doing because music had been strangled, epically).

With the "two pair" approach, you know that there won't be phasing problems even if something is off -- latency on the rear send, for example, that doesn't quite match the left and right front sends.
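The cancellation risk John is avoiding here is easy to show: when two feeds carry the *same* material and one arrives late, summing them cancels at predictable frequencies (a 1 ms slip puts 500 Hz exactly half a cycle out of phase), while discrete rear material has nothing correlated to cancel against. A small pure-Python illustration with hypothetical values:

```python
import math

SR = 48000  # sample rate

def sine(freq, n):
    return [math.sin(2 * math.pi * freq * i / SR) for i in range(n)]

def rms(x):
    return math.sqrt(sum(s * s for s in x) / len(x))

n = SR // 10        # 100 ms of audio
delay = SR // 1000  # a 1 ms latency mismatch (48 samples)

tone = sine(500.0, n + delay)  # at 500 Hz, 1 ms is exactly half a period
front = tone[delay:]           # the front feed
rear = tone[:n]                # identical material arriving 1 ms late
summed = [f + r for f, r in zip(front, rear)]

print(round(rms(front), 4))   # healthy level in the front pair alone
print(round(rms(summed), 4))  # near-total cancellation once summed
```

At other frequencies the same offset combs rather than fully cancels, but the point stands: keeping the rear pair as discrete material means a slipped channel just arrives late instead of hollowing out the fold-down.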

When using the two pair approach, sometimes there might be little or nothing in the surround stem. I usually reserve it for quasi-effects, or maybe shimmery tremolo high strings, a faint choir sound or doubling of synths with choir, or something else musical that, while nifty, is not going to wreck the stereo mix if it's poorly adjusted, hard to hear, or even inaudible.

Wow -- too long a post but anyway that's my take.

If you are working with a major studio, these considerations may not apply.

Kind regards,

John


----------



## JohnG

Sekkleman said:


> The inconsistencies of theaters and playback systems mean that there's no guarantee that the audience will even hear it as intended. Also, things can end up being changed during the sound mix, so if you have important information in the surrounds or even the centre (like the gallery mics, for example), it could easily be lost with the pulling down of a fader. So now I try and make sure the really important stuff is in the front left/right/centre (where applicable) and then expand out from there when needed.



Excellent advice in an excellent post.



Sekkleman said:


> The inconsistencies of theaters and playback systems mean that there's no guarantee that the audience will even hear it as intended.



I've experienced this one; the rear speakers in the local Ultraplex can be plus or minus 6dB (or more) from spec, and even if they are within spec range, they may not sound anything like what we anticipated.


----------



## Sekkle

europa_io said:


> Thanks Sekkleman for taking so much time to respond. Much appreciated.
> Cheers.



No worries, I hope it helped a bit!



JohnG said:


> I've experienced this one; the rear speakers in the local Ultraplex can be plus or minus 6dB (or more) from spec, and even if they are within spec range, they may not sound anything like what we anticipated.



Yeah, absolutely. Even the difference between hearing my 5.1 mixes on a dub stage or in a theatre compared to near-field monitoring in my studio was quite an eye (ear) opener, as it sounds quite different in a large room with multiple surround speakers.



JohnG said:


> they will somehow manage to get your 5.1 stems mixed up or out of phase or simply canceling each other in some unexpected way (this happened on a feature film to me -- the director called and asked what the h___ I was doing because music had been strangled, epically).



This is also a concern for me. There's always room for human error when importing stems, depending on the system and workflow, and it's something I've learnt to always check. As you say, channels being slipped out of phase, either in the sound mix or via latency in a badly calibrated theatre, can also be an issue. I once read an interview with Alan Meyerson where he mentioned he never pans/positions sounds between the front and rear channels for this exact reason. Anything going to the rear is a discrete audio stream, so even if it's slipped out of sync, it will still work. This is something that's made me really wonder about using Halo for upmixing anything into the surrounds. If the channels ended up out of sync, it could cause weird phasing issues, especially if it's then folded back to stereo for deliverables. I've still been using it for ambient beds; however, I'm using surround reverb more and more.

After going into surround mixing pretty gung-ho when I first started, I've ended up being pretty conservative so as to minimise the chance of running into all of these issues. It's not a bad thing though; it just means there are more limitations, which forces us to be creative with how we do it! It's a constant learning experience, that's for sure.


----------



## charlieclouser

I have a bunch of posts on this topic on this forum - and I'm by no means an expert, or even doing it "right" - but I agree with what JohnG said, and I basically do "quad" instead of true surround. This is mainly because Logic still only has a single set of "surround" outputs, so doing surround AND stems involves the same kind of work-arounds that you'd need to use when attempting surround on a big stereo console like an SSL 4k - basically using tons of stereo pairs.

So what I wind up with is front pair / back pair and the center and LFE usually empty. (not always, but...)

While composing I basically leave the rear speakers off, and then only during mixing do I get them fired up. My rear pair usually contains not the actual rear mics from fancy libraries, but instead is just a different / longer reverb that's fed from sends off of the front pairs of various instruments. I also use a rear ping-pong delay with different settings than the front ones. This lets me have some sounds that have longer hang time in the rear speakers which can sound very huge. 

In some crazy sound-design-y cues I also "quad track" instruments so there's four different performances spread across the four channels - much like you'd double-track guitars and then hard-pan them left and right in a stereo mix. This can sound amazing.

But I'm always cognizant that the re-recording mixers may lower my rear pair or discard them entirely at any point, and maybe even toss them and then build their own version with an up mix plugin - so I make sure the mix sounds the way I like it when I'm only listening to the front pair. If they do use my rear pair it's a happy bonus.

For television I only do stereo, mostly due to the time constraints - and the workflow is so much easier!


----------



## Sekkle

Hey Charlie, I've read all of your posts about this over the years and it's been great getting insights into how you deal with surround! It's definitely had an influence on me. I think the approach that you and JohnG take is a really smart move. I'll probably try it on my next project.

One thing I've never been able to find a definitive answer on is the whole centre channel debate. All the sound mixers I've delivered to have always insisted on centre channel information as they worry about the proximity effect in a large theatre. That being said I also have colleagues who have worked with other mixers that don't want anything in the centre. I guess it's really down to the individual mixer at the end of the day. 

After trying heaps of ways to create a centre channel via discrete methods, it always ended up feeling less 'stereo' and immersive when folded back for the stereo mix deliverables. Since most people are going to hear the mix in stereo, this was always a concern. Finally I figured out that Halo offers a 3.0 upmix from stereo sources in exact mode that folds back perfectly to the original stereo sound. The cool thing is it has a slider that allows you to choose the percentage of the sound that gets sent to the centre. The other interesting thing is that the frequencies it derives from the stereo and puts in the centre are kind of rolled off and don't get in the way of dialogue. It can also be automated around dialogue, although I've never actually used this feature. After a bit of trial and error I found a sweet spot where the mixer would see and hear the centre info and be happy. It meant I could still write everything in stereo and treat the centre channel as part of the whole surround mix at the end. I usually only put the drum bus and bass bus stems through this 3.0 process, as they tend to be the main elements that really make any difference due to their low frequencies. The only other things that go in the centre are featured lead vocals or solo instruments, which are always discretely panned.

It seems that we don't really have any control over what happens to the music mix once it's delivered in stems. All we can do is deliver stems that have the least chance of losing important musical information, which is predominantly in the left and right channels.


----------



## JohnG

Sekkleman said:


> One thing I've never been able to find a definitive answer on is the whole centre channel debate. All the sound mixers I've delivered to have always insisted on centre channel information as they worry about the proximity effect in a large theatre. That being said I also have colleagues who have worked with other mixers that don't want anything in the centre. I guess it's really down to the individual mixer at the end of the day.



Yes. Talk to the dubbing mixer early on if you can. Otherwise I leave the centre empty and let them winkle something in there if they like.



Sekkleman said:


> It seems that we don't really have any control over what happens to the music mix once it's delivered in stems



Sadly, very true. 

I once attended a dub of a project that was not my own -- the composer himself was not there, and I don't even remember his name now -- but as we were listening the music sounded wrong, so I asked the producer (politely) whether all the tracks were turned on, and then _he_ asked the dubbing mixer. The dubbing mixer said, "no." Of the approximately eight tracks, he had muted all but two. Although they did turn some of the tracks back on after that, they didn't turn them all on, and they were not at unity.

Yikes.


----------



## JohnG

charlieclouser said:


> I also use a rear ping-pong delay with different settings than the front ones. This lets me have some sounds that have longer hang time in the rear speakers which can sound very huge.



Great idea. Thanks Charlie.


----------



## charlieclouser

Yes, always talk to the re-recording mixers before deciding on your channel layout if at all possible. Most of the movies I've done are not exactly subtle, delicate, immersive soundscapes - there's always a ton of sound design, and it's usually guns, torture machines, and people screaming as their arm is getting ripped out of the socket! For that reason, when I ask if it's okay that I leave the center channel empty of music, most of the mixers have breathed a sigh of relief and said either, "I won't tell anyone if you don't" or "Oh thank you thank you thank you!"

They've always told me that if I don't have any "legitimate" center info, such as the center mics from a multi-channel orchestral recording, and if I'm just creating center information by folding things from the front stereo pair into the center, that it's fine if I just let them do it via Halo or some similar process. They can better judge when and how much of the L+R to fold into the center to eliminate a hole in the audio when it's played in a big theater.

If I'm creating center or LFE info by picking and choosing elements to play up the center or send to the subs, like bass, solo cello, or whatever, again the mixers can do this better if they're given enough stems that have the desired elements isolated. On earlier films I did, when I only printed three (!) 5.1 stems, I would send some elements within a stem to that stem's center or LFE channels - and this worked, but was more difficult for the mixers to "untangle" than it would have been if I had just given them more stems with fewer channels per stem. In a 48-channel delivery, some mixers might prefer twenty-four stereo stems, others eight 5.1 stems, and still others twelve quad stems. So it really depends on the material and the personnel - and it definitely pays to have detailed conversations and even send rough prints of early cues if the mixers have time to check them out. I try to do that if I can - send the mixers a cue or two a couple of weeks before the dub and let them throw them up in the room. Of course, the big boys might have a lot of back and forth with the dub stage as the score comes together; on most of the projects at my schedule and budget level I'm lucky if I can make this happen, but it always makes me feel a little more comfortable when the mixers have checked out a cue or two before I print the whole score.
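A trivial sanity check on those stem layouts - the groupings differ, but each fills the same 48-channel budget:

```python
# Channel widths per stem format
CHANNELS = {"stereo": 2, "quad": 4, "5.1": 6}

# The three 48-channel groupings mentioned above
layouts = [(24, "stereo"), (12, "quad"), (8, "5.1")]

for count, fmt in layouts:
    total = count * CHANNELS[fmt]
    print(f"{count} x {fmt} stems = {total} channels")  # each prints 48
```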


----------



## charlieclouser

JohnG said:


> Great idea. Thanks Charlie.



The quad-ping-pong thing can work really well on a wide variety of source material, and can even produce a "spinning" effect if the delay times are lined up right. I don't use an actual surround or quad version of an effect plugin, just two normal stereo delays - but it can sound wicked!


----------



## dgburns

@charlieclouser and @JohnG and gang,

Not wanting to derail too far, but one thing I found interesting was to use the surround panner inside Kontakt. I found that I was able to make things ‘surround’ that were not really intended to be, sometimes to good effect. Playing with some of the panner parameters gets interesting. I tried it with ‘una corda’, for example, and it was fun - not quite like a room-verb surround, not quite front stereo/back stereo, but something else instead - and useful for some things. Maybe it would work on orchestral stuff that was stereo only? Haven't tried that yet.

Also, I find reaching for true surround synths, like Absynth and Structure, interesting as well. You have to watch Absynth's outputs - they don't match Logic's, so you need to swap them to match. Absynth does some fun things like circular swirling etc.

I'm set up to work in quad and use the internal Logic busses, but I simply turn off the center channel speaker icon. When you use a multi-mono plugin on a surround track, you can set the A, B and C parameters to control different channels - A for the fronts and B for the rears, for example - all in the one plugin. So this way you can set up, say, Crystallizer from Soundtoys with slightly different settings for front and rear, all in the one plugin, and get nice quad effects without having to set up two aux channels etc.


----------



## charlieclouser

Since Logic's "surround" capability is only useful when outputting a single composite surround mix from Logic, you can't use the nifty little surround panners if you need to output multiple stems at once to multiple arrays of hardware outputs. This is a really serious problem for me, since I need to send multiple surround stems to an array of hardware outputs to get the audio over to the ProTools print rig in real time.

A year ago I had Clemens and Jan-Hikkert from the Logic team over at my place to show them why this was such a big problem, and why my output sub-master matrix looks the same today as it did 15 years ago. While it took a few minutes to demonstrate and describe how I route multiple surround stems over to the separate ProTools print rig, they understood immediately and completely - and then they spoke to each other in German for about 30 seconds and proposed a way to solve the issue by adding a new feature to Logic, with incredibly simple and minor changes to the user interface.

Basically, Clemens said, "What if the Project Settings > Audio > I/O Settings dialog had a tabbed interface, with the ability to define the hardware outputs for Surround Busses A through Z? Then, in the pop-up when setting the output for any Audio Object, where you now have a single choice for Surround, you would have Surround A through Surround Z, whose actual hardware outputs are defined in the Preferences. Would 26 Surround Outputs be enough?"

My jaw dropped and I had to resist the urge to hug them both.

At first I imagined we'd have to deal with a crazy grid interface like the i/o settings pages in ProTools, which is very 1990's and makes my head spin. But, as usual, Clemens had a simple, flexible solution that is very "Apple-like" in its simplicity. 

I have no idea if or when this might get added to Logic, but it's clear that the team understands the issue and has already figured out a graceful way to implement it with minimal changes to the user experience. Those guys are just top shelf.

Clemens also told me that, other than this issue with the output configurations, all audio pathways inside Logic are capable of n-channel widths with no restrictions - so it's not like they need to rewrite the audio engine or whatever.

My fingers are starting to hurt from keeping them crossed since that meeting.


----------



## dgburns

charlieclouser said:


> Since Logic's "surround" capability is only useful when outputting a single composite surround mix from Logic, you can't use the nifty little surround panners if you need to output multiple stems at once to multiple arrays of hardware outputs. This is a really serious problem for me, since I need to send multiple surround stems to an array of hardware outputs to get the audio over to the ProTools print rig in real time.
> 
> A year ago I had Clemens and Jan-Hikkert from the Logic team over at my place to show them why this was such a big problem, and why my output sub-master matrix looks the same today as it did 15 years ago. While it took a few minutes to demonstrate and describe how I route multiple surround stems over to the separate ProTools print rig, they understood immediately and completely - and then they spoke to each other in German for about 30 seconds and proposed a way to solve the issue by adding a new feature to Logic, with incredibly simple and minor changes to the user interface.
> 
> Basically, Clemens said, "What if the Project Settings > Audio > I/O Settings dialog had a tabbed interface, with the ability to define the hardware outputs for Surround Busses A through Z? Then, in the pop-up when setting the output for any Audio Object, where you now have a single choice for Surround, you would have Surround A through Surround Z, whose actual hardware outputs are defined in the Preferences. Would 26 Surround Outputs be enough?"
> 
> My jaw dropped and I had to resist the urge to hug them both.
> 
> At first I imagined we'd have to deal with a crazy grid interface like the i/o settings pages in ProTools, which is very 1990's and makes my head spin. But, as usual, Clemens had a simple, flexible solution that is very "Apple-like" in its simplicity.
> 
> I have no idea if or when this might get added to Logic, but it's clear that the team understands the issue and has already figured out a graceful way to implement it with minimal changes to the user experience. Those guys are just top shelf.
> 
> Clemens also told me that, other than this issue with the output configurations, all audio pathways inside Logic are capable of n-channel widths with no restrictions - so it's not like they need to rewrite the audio engine or whatever.
> 
> My fingers are starting to hurt from keeping them crossed since that meeting.



That would be awesome indeed! Wonder how they will deal with parent/child bussing scenarios, like when you want to send something to output surround ‘D’ but only to certain channels...like subsets quad, LCR or rears only... Protools does this elegantly imho


----------



## charlieclouser

dgburns said:


> That would be awesome indeed! Wonder how they will deal with parent/child bussing scenarios, like when you want to send something to output surround ‘D’ but only to certain channels...like subsets quad, LCR or rears only... Protools does this elegantly imho



Yeah, that's one aspect of PT that is just totally handled. It's ugly, but it works and there's no scenario it can't deal with. 

I would imagine that in Logic one way to do it would be to have the surround busses A-Z appear as input choices for Aux Objects, and if that Aux has fewer output channels than the input source... well, I don't know what happens then. Then things get complex. I could do without the whole parent/child thing in Logic, since I'm printing to PT anyway I can just do it over there.


----------



## rlw

I can't thank everyone enough for this post. So much good advice. I have been a composer for over (ugh?) years, but I'm relatively new to soundtrack and 5.1 mixing. Quick question: how do you handle stems from Logic Pro X to Pro Tools, since timecode is not embedded in Logic's WAV files? I know it's been discussed on other threads but I was unable to locate them. Thanks for any help.


----------



## europa_io

Agreed! Thank you to everyone for their great advice.


----------



## charlieclouser

rlw said:


> I can't thank everyone enough for this post. So much good advice. I have been a composer for over (ugh?) years, but I'm relatively new to soundtrack and 5.1 mixing. Quick question: how do you handle stems from Logic Pro X to Pro Tools, since timecode is not embedded in Logic's WAV files? I know it's been discussed on other threads but I was unable to locate them. Thanks for any help.



Well, I use two systems, side by side. Logic spits out 48 channels of audio via MADI and Pro Tools (on a separate computer) records that via the Avid MADI interface. Timecode comes out of the SyncHD peripheral on the Pro Tools machine and goes into the Unitor-8 on the Logic machine. Easy peasy.

But if you're talking about doing bounces within Logic and then importing those files into Pro Tools, you can do a couple of things:

- some folks like to embed the timecode start point into the file name of each and every audio file. This is pretty foolproof but makes for a bit of a mess when viewing files in the finder. But it is foolproof.

- what I do, even with the files recorded by PT (which do have time stamps embedded) is to put all of the files for a given cue into a folder, and then embed the timecode startpoint into the FOLDER name, but not into all of the file names. So each cue will have a folder named something like "SAW4-2m14v2=02.08.22.14" and inside that folder is a bunch of files named things like "SAW4-2m14v2-LEG TRAP-DRMstem.L" etc., where the production name is "SAW4", it's version 2 of the cue "2m14", the cue title is "Leg Trap", and that file is the left channel of the drum stem. When I do it this way, I can right-click the folder to make it a zip file and then send that folder to my music editor. When he unzips it, he gets a folder with the time stamp in the folder name, AND he still has the original zip which has that info in its name as well, in case he moves the files out of the original folder and deletes the empty folder. I also am frequently sending updated versions of cues etc. and this way even when I send just one cue as a folder full of wav files, instead of as a whole Pro Tools session, the info is always there.
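That folder convention is easy to script. A hypothetical sketch (the helper names are mine; the `SAW4-2m14v2=02.08.22.14` scheme is from the post) for stamping the start point onto the folder name and pulling it back out later:

```python
import re

def folder_name(project, cue, version, start_tc):
    """Build a cue folder name with the timecode start point after '='."""
    return f"{project}-{cue}v{version}={start_tc}"

# HH.MM.SS.FF at the end of the folder name
TC_RE = re.compile(r"=(\d{2})\.(\d{2})\.(\d{2})\.(\d{2})$")

def start_timecode(folder):
    """Recover (hours, minutes, seconds, frames) from a folder name."""
    m = TC_RE.search(folder)
    if not m:
        raise ValueError(f"no timecode in {folder!r}")
    return tuple(int(g) for g in m.groups())

name = folder_name("SAW4", "2m14", 2, "02.08.22.14")
print(name)                  # SAW4-2m14v2=02.08.22.14
print(start_timecode(name))  # (2, 8, 22, 14)
```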


----------



## rlw

charlieclouser said:


> Well, I use two systems, side by side. Logic spits out 48 channels of audio via MADI and Pro Tools (on a separate computer) records that via the Avid MADI interface. Timecode comes out of the SyncHD peripheral on the Pro Tools machine and goes into the Unitor-8 on the Logic machine. Easy peasy.
> 
> But if you're talking about doing bounces within Logic and then importing those files into Pro Tools, you can do a couple of things:
> 
> - some folks like to embed the timecode start point into the file name of each and every audio file. This is pretty foolproof but makes for a bit of a mess when viewing files in the finder. But it is foolproof.
> 
> - what I do, even with the files recorded by PT (which do have time stamps embedded) is to put all of the files for a given cue into a folder, and then embed the timecode startpoint into the FOLDER name, but not into all of the file names. So each cue will have a folder named something like "SAW4-2m14v2=02.08.22.14" and inside that folder is a bunch of files named things like "SAW4-2m14v2-LEG TRAP-DRMstem.L" etc., where the production name is "SAW4", it's version 2 of the cue "2m14", the cue title is "Leg Trap", and that file is the left channel of the drum stem. When I do it this way, I can right-click the folder to make it a zip file and then send that folder to my music editor. When he unzips it, he gets a folder with the time stamp in the folder name, AND he still has the original zip which has that info in its name as well, in case he moves the files out of the original folder and deletes the empty folder. I also am frequently sending updated versions of cues etc. and this way even when I send just one cue as a folder full of wav files, instead of as a whole Pro Tools session, the info is always there.


You guessed correctly that I was referring to bouncing for the Pro Tools editor. I like your folder approach of putting the timecode on the folder rather than in every file name. Again, I greatly appreciate your willingness to share your advice.


----------



## charlieclouser

My approach is perhaps not state-of-the-art, but it's the way I've been doing things for 15 years or so and so I just stick with those naming conventions - partly because some of the movies and tv series I've done lasted for so freaking long, and partly because I've worked with the same music editor for so long. When you have a system that works, it's more painful to change it up than it is to live with some slight inconveniences.

But on some revision-heavy projects, where I was sending new versions of cues over to the dub stage as they were mixing, the "zipped folder with start point in the folder name" approach was very slick and convenient when we had to move "at speed". Having the cue title embedded in every individual file name is just something I've always believed in - I hate just seeing folders full of files with only numbers and letters in their names; years later I can't make head or tail out of that mess without the cue titles in there! Naming the files like this:

(project name or season+episode) - (cue number+version) - (cue title) - (stem name) - (channel suffix)

also helps things to alphabetize neatly when viewed in list mode in the MacOS Finder windows. 

I may not have mentioned that I also embed the version number into BOTH the "cue number" AND the "cue title" fields, so version 4 of a cue would have files with names like:

SAW4-2m14v4-Leg Trap v4-DRMstem.L

Probably redundant, but there are instances, later in time and further downstream, where the filenames get reduced or stripped down - such as when the cues are dumped into my agents' music library - and the "cue number" might get tossed, so it's nice to still have the version in the cue title field. There are folders on my drives with files that have reduced names like:

SAW4-Leg Trap v4.mp3

So it's nice to have that version info in the cue title. Some uses don't need every molecule of info.
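For anyone who wants to automate this naming scheme, here's a rough Python sketch of a builder and parser for names in that shape. The function names and the exact field split are my own illustration, not Charlie's actual tooling:

```python
# Assemble / split file names of the form:
# (project)-(cue number+version)-(cue title+version)-(stem)stem.(channel)
def build_name(project, cue, version, title, stem, channel):
    return f"{project}-{cue}v{version}-{title} v{version}-{stem}stem.{channel}"

def parse_name(name):
    base, channel = name.rsplit(".", 1)               # strip channel suffix
    project, cue_ver, title, stem = base.split("-", 3)
    return {"project": project, "cue": cue_ver,
            "title": title, "stem": stem, "channel": channel}

print(build_name("SAW4", "2m14", 4, "Leg Trap", "DRM", "L"))
# -> SAW4-2m14v4-Leg Trap v4-DRMstem.L
```

One caveat with a sketch like this: a cue title that itself contains a hyphen would confuse the parse, so the dashes-as-separators convention only holds if titles stay dash-free.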


----------



## rlw

I agree with your reasoning, and at my age I like keeping the details in the file names as you outlined. Makes perfect sense to me. I have one question, though: I didn't quite grasp why the version number is put in both the "cue number" and the "cue title".


----------



## charlieclouser

rlw said:


> I agree with your reasoning, and at my age I like keeping the details in the file names as you outlined. Makes perfect sense to me. I have one question, though: I didn't quite grasp why the version number is put in both the "cue number" and the "cue title".



See the last paragraph above. Basically, I think the cue title should also include what version it is just in case the cue number gets removed from the file names further down the line, like when making demo reels, compilations, album releases, etc. Also, sometimes, a v2 of a cue is a completely different piece of music from the v1, and this way they have distinct titles without creating a whole new cue name, which could get (more?) confusing.

That approach may not be "right" or industry standard, but I adopted it years ago and I'm too old to switch now!


----------



## dgburns

charlieclouser said:


> See the last paragraph above. Basically, I think the cue title should also include what version it is just in case the cue number gets removed from the file names further down the line, like when making demo reels, compilations, album releases, etc. Also, sometimes, a v2 of a cue is a completely different piece of music from the v1, and this way they have distinct titles without creating a whole new cue name, which could get (more?) confusing.
> 
> That approach may not be "right" or industry standard, but I adopted it years ago and I'm too old to switch now!



I was always putting the numbers first, but upon thinking more about this, your way makes more sense. If you put the 2m14 first, you don't group the show files together. I think your way = better. BUT I'm not sold on the double version; I think one version is enough, FWIW. I never cared for dates in the stem files myself.

so- from now on I will do-

My Great Show Season 2_cue 14 of reel 2_version 5_cue title_stem 

-to -

MGSs2_2m14v5_spanked_drum.L 

I find it funny how we all abbreviate the show title and call it that so quickly from that point on.


----------



## charlieclouser

My prefix for production name / season / episode / whatever uses a two- or three-character abbreviation for the show, and then up to three digits for season and episode number. For instance, a show called "Las Vegas" got abbreviated to "LV", and a show called "NUMB3RS" got abbreviated to "Ns", and then season two, episode twelve gets scrunched down to "212", so a typical file name would be:

Ns212-3m27-Code Book

Sometimes the show production team might have an internal numbering scheme for episodes that doesn't exactly conform to my scheme - in that case I'd use their scheme so that my numbers match theirs. 

Also, some folks like to restart the cue numbers at each reel break or act; I still don't do this. I keep the numbers incrementing the whole way through a show, so if 1m09 is the last cue in reel / act one, then the first cue in reel / act two would be 2m10. I really prefer this method.

With all of these rules observed, even when you dump all of the cues from seven years of episodes into a single folder, they will sort in correct chronological order. Nice and clean.
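To illustrate the point (my example names below, not real cues): with a fixed show prefix, zero-padded season/episode numbers, and cue numbers that never reset, a plain lexicographic sort is already chronological, at least while reel numbers stay single-digit:

```python
# Names in (show)(season+episode)-(reel)m(cue)-(title) form sort
# chronologically as plain strings, assuming single-digit reel numbers
# and consistent zero-padding of season/episode.
cues = [
    "Ns212-3m27-Code Book",
    "Ns212-1m09-Teaser Out",   # illustrative title
    "Ns101-2m14-Pilot Chase",  # illustrative title
]
for name in sorted(cues):
    print(name)
# Ns101-2m14-Pilot Chase
# Ns212-1m09-Teaser Out
# Ns212-3m27-Code Book
```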


----------



## dgburns

charlieclouser said:


> My prefix for production name / season / episode / whatever uses a two- or three-character abbreviation for the show, and then up to three digits for season and episode number. For instance, a show called "Las Vegas" got abbreviated to "LV", and a show called "NUMB3RS" got abbreviated to "Ns", and then season two, episode twelve gets scrunched down to "212", so a typical file name would be:
> 
> Ns212-3m27-Code Book
> 
> Sometimes the show production team might have an internal numbering scheme for episodes that doesn't exactly conform to my scheme - in that case I'd use their scheme so that my numbers match theirs.
> 
> Also, some folks like to restart the cue numbers at each reel break or act; I still don't do this. I keep the numbers incrementing the whole way through a show, so if 1m09 is the last cue in reel / act one, then the first cue in reel / act two would be 2m10. I really prefer this method.
> 
> With all of these rules observed, even when you dump all of the cues from seven years of episodes into a single folder, they will sort in correct chronological order. Nice and clean.



First off, apologies for the total thread hijack, but I must admit a special fascination with cue numbering. Total nerd that I am for this stuff!!

So, I always wondered why I saw other guys' cue lists with cues numbered so high; I always reset at the reel. BUT, another reason not to reset is, again, to keep as much of the cue name as unique as possible. So, again, I agree that not resetting is better. Will do from now on.

'Ns212-3m27-Code Book'

And that's one abbreviated cue name lol. 

One thing that I always wondered about was what to name them on cue sheets. Sometimes it's just 'underscore' for the whole show with a comped time, sometimes each cue with its own name. Theme separate, obviously. I've been favouring just using one entry for bg instr underscore and lumping it all into one entry. I used to enter each cue in, until I came across the 'Q' system from Viacom. Sorry, rambling off topic again.


----------



## charlieclouser

dgburns said:


> First off, apologies for the total thread hijack, but I must admit a special fascination with cue numbering. Total nerd that I am for this stuff!!
> 
> So, I always wondered why I saw other guys' cue lists with cues numbered so high; I always reset at the reel. BUT, another reason not to reset is, again, to keep as much of the cue name as unique as possible. So, again, I agree that not resetting is better. Will do from now on.
> 
> 'Ns212-3m27-Code Book'
> 
> And that's one abbreviated cue name lol.
> 
> One thing that I always wondered about was what to name them on cue sheets. Sometimes it's just 'underscore' for the whole show with a comped time, sometimes each cue with its own name. Theme separate, obviously. I've been favouring just using one entry for bg instr underscore and lumping it all into one entry. I used to enter each cue in, until I came across the 'Q' system from Viacom. Sorry, rambling off topic again.



I really prefer the constantly-ascending cue number method - that way, if I know there are 59 cues I've got to do, and I'm on 6m58, well... I'm almost done. Another note is that some shows have a "teaser", which is like a mini-act-one that actually comes before the main titles. Since most of the picture editors still call act one "act one", we number the cues in the teaser as "Zero-M-whatever". So on shows that have these "teasers" I wind up with cues that are named "Ns212-0m02" etc.

Obviously, source cues and other miscellany throw off those numbers here and there, but still.... it's way more confusing for me to restart the numbers with each reel / act - if that's what they teach in film composing school, then I'm glad I never went!

As abbreviated as my file names might be, even if we delete the actual cue title, then "Ns212-3m27" is still an absolutely unique name that leaves no grey area about where the hell it came from.

I was looking at my BMI Repertoire page one time, and wondering why I couldn't find any of my lovely abbreviated and skillfully manipulated cue titles for something like 6,000 television cues. I called them on an actual telephone, and they explained that what I should be looking for was "roll ups" which are kind of what you're describing - a single entry that represents the entire underscore from a television episode. Lo and behold, everything was there - but rolled up into whole-episode line items.

As to how to fill out the cue sheets - I'm a little ashamed to admit that I've never filled out a cue sheet in my life. My music editor has always done this for me. Gives him something to do besides endlessly eating fun-sized Snickers bars on the dub stage I guess! But I believe that he does list each cue with its full title and then it's the BMI system (or systems like RapidCue that Fox uses) that "rolls them up" into whole-episode items. I basically tell my music editor, "Don't mess it up, man.... don't screw me on this.... or imma come find you in ten years and I'll be very broke and very angry and have nothing to lose, so...."

Another aspect to my cue naming system that we adopted right from the start was to ensure that the actual "name" portion of the cue title is ALWAYS unique. The way we do this is to use a Filemaker template to create the spotting notes, instead of just using Microsoft Office or some dedicated Spotting Notes application. It was my music editor who was already doing it this way, so that he had separate data fields for every piece of data. But, since Filemaker is actually a database program, the spotting notes from everything we did together were actually part of a single, larger document - and this document could do an amazing thing: "Force Unique Values". With this checkbox turned on for the "Cue Title" field, the program would not allow us to re-use a cue title, even years later - and even across multiple concurrent series on different tv networks! He had a single Filemaker database file for all of the different shows we did together, so that if Las Vegas had a cue called "Bring It", we were prevented from naming a cue from Numb3rs with the title "Bring It". This. Is. Amazing.

On the first series I ever did, by the fifth episode I wound up with about a dozen cues called "Car Chase" - and I realized that this just ain't gonna work. So now I always think up a cool, quirky, unique, and easy-to-remember title for every cue - one that will immediately bring to mind the scene or episode it was used in. We try to do this right during the spotting session, and when my music editor types in the title, Filemaker checks that the title hasn't been used before - EVER. If the spotting session is moving too fast, I can trust him to make up titles because I know they will be unique thanks to Filemaker. So, yeah - "Force Unique Values" for the win.
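For those without Filemaker, the "Force Unique Values" behavior is easy to sketch in a few lines of Python. This is just an illustration of the idea, not the actual database setup; the class name is mine:

```python
# Reject any cue title that has ever been used before, across all shows,
# mimicking Filemaker's "Force Unique Values" on the Cue Title field.
class CueTitleLog:
    def __init__(self):
        self._used = set()

    def add(self, title):
        key = title.strip().lower()   # treat "Bring It" / "bring it" as the same
        if key in self._used:
            raise ValueError(f"cue title already used: {title!r}")
        self._used.add(key)

log = CueTitleLog()
log.add("Bring It")     # fine the first time
# log.add("Bring It")   # would raise ValueError, even for a different show
```

The point being that a single shared log across every show is what makes the guarantee global, exactly as with the single shared Filemaker file.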

And, as the inverse of the example at the top of this post, even if we delete "Ns212-3m27" from the file name, leaving just "Code Book", there will be no duplicates or confusion since there has only ever been one single cue with that title.

So, yeah... a good system and a good music editor make for clean spotting notes that allow me to avoid confusion.


----------



## dgburns

charlieclouser said:


> I really prefer the constantly-ascending cue number method - that way, if I know there are 59 cues I've got to do, and I'm on 6m58, well... I'm almost done. Another note is that some shows have a "teaser", which is like a mini-act-one that actually comes before the main titles. Since most of the picture editors still call act one "act one", we number the cues in the teaser as "Zero-M-whatever". So on some shows I wind up with cues that are named "Ns212-0m02" etc.
> 
> Obviously, source cues and other miscellany throw off those numbers here and there, but still.... it's way more confusing for me to restart the numbers with each reel / act - if that's what they teach in film composing school, then I'm glad I never went!
> 
> As abbreviated as my file names might be, even if we delete the actual cue title, then "Ns212-3m27" is still an absolutely unique name that leaves no grey area about where the hell it came from.
> 
> I was looking at my BMI Repertoire page one time, and wondering why I couldn't find any of my lovely abbreviated and skillfully manipulated cue titles for something like 6,000 television cues. I called them on an actual telephone, and they explained that what I should be looking for was "roll ups" which are kind of what you're describing - a single entry that represents the entire underscore from a television episode. Lo and behold, everything was there - but rolled up into whole-episode line items.
> 
> As to how to fill out the cue sheets - I'm a little ashamed to admit that I've never filled out a cue sheet in my life. My music editor has always done this for me. Gives him something to do besides endlessly eating fun-sized Snickers bars on the dub stage I guess! But I believe that he does list each cue with its full title and then it's the BMI system (or systems like RapidCue that Fox uses) that "rolls" them into whole-episode items. I basically tell my music editor, "Don't mess it up, man.... don't screw me on this.... or imma come find you in ten years and I'll be very broke and very angry and have nothing to lose, so...."
> 
> Another aspect to my cue naming system that we adopted right from the start was to ensure that the actual "name" portion of the cue title is ALWAYS unique. The way we do this is to use a Filemaker template to create the spotting notes, instead of just using Microsoft Office or some dedicated Spotting Notes application. It was my music editor who was already doing it this way, so that he had separate data fields for every piece of data. But, since Filemaker is actually a database program, the spotting notes from everything we did together were actually part of a single, larger document - and this document could do an amazing thing: "Force Unique Values". With this checkbox turned on for the "Cue Title" field, the program would not allow us to re-use a cue title, even years later - and even across multiple concurrent series on different tv networks! He had a single Filemaker database file for all of the different shows we did together, so that if Las Vegas had a cue called "Bring It", we were prevented from naming a cue from Numb3rs with the title "Bring It". This. Is. Amazing.
> 
> On the first series I ever did, by the fifth episode I wound up with about a dozen cues called "Car Chase" - and I realized that this just ain't gonna work. So now I always think up a cool, quirky, unique, and easy-to-remember title for every cue - one that will immediately bring to mind the scene or episode it was used in. We try to do this right during the spotting session, and when my music editor types in the title, Filemaker checks that the title hasn't been used before - EVER. If the spotting session is moving too fast, I can trust him to make up titles because I know they will be unique thanks to Filemaker. So, yeah - "Force Unique Values" for the win.
> 
> And, as the inverse of the example at the top of this post, even if we delete "Ns212-3m27" from the file name, leaving just "Code Book", there will be no duplicates or confusion since there has only ever been one single cue with that title.
> 
> So, yeah... a good system and a good music editor makes for clean spotting notes that allow me to avoid confusion.



Funny how being organized helps enable us to be creative. Good thoughts in there Charlie.


----------



## JohnG

Charlie, you think 5x as hard about pretty much everything. Sometimes I worry about you. Sometimes I worry about me....

I often put the SMPTE number into the cue names too. That can help; as long as you have the version of the film / reel you're on and a SMPTE location, it's harder for the stage to complain, "We couldn't figure out where this went, so we just put it over the scene with the helicopter exploding where you can't hear anything."

I restart numbers with each reel. I'm not sure it makes much difference.


----------



## charlieclouser

Since I usually print to a separate ProTools rig instead of bouncing inside Logic, the resulting BWAV files ARE time-stamped - and it actually works, much to my surprise - the files snap to the correct location when imported into the ProTools rig on the dub stage, as long as they push the right button or hold the right modifier key or whatever.

But when I'm doing a tv series and not printing stems, or only printing like three stems, I don't bother to boot up the ProTools print rig and I just bounce inside Logic using "region solo" to isolate the color-coded elements in each stem. This saves me the hassle of endlessly re-assigning tracks (and both / all four of their effects sends) to the appropriate stem busses. When printing tv cues as only three or four stems I don't obey any hard and fast rules about where things should go - sure, drums are supposed to go on the drum stem, but I often have cues with no drums, and in those cues I might move some other cool sound over to the empty drum stem just so it's isolated. Makes it easier for the editor and mixers to isolate or remove something, and with only four stems it's not like they'll spend much time searching for which stem that one little "zoink" sound is on.

But on those tv series I still just put the bounced files into a folder, name the folder "Ns212-3m27=01.26.12.00", zip it, and stick it in DropBox or whatever. If something goes sideways, the music editor still has the original zip file with the timecode in the folder name, and it's still in the DropBox folder, so there's a few places he can look for the timecode numbers. 
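That hand-off is easy to script. Here's a rough Python sketch of the "folder named with the timecode start, then zipped" step; the paths and function name are illustrative, not Charlie's actual setup:

```python
import shutil
from pathlib import Path

def package_cue(stem_files, cue_name, start_tc, out_dir):
    """Copy the stems into a folder named '<cue_name>=<start_tc>'
    and zip that folder, e.g.:
    package_cue(files, "Ns212-3m27", "01.26.12.00", "delivery")"""
    folder = Path(out_dir) / f"{cue_name}={start_tc}"
    folder.mkdir(parents=True, exist_ok=True)
    for f in stem_files:
        shutil.copy2(f, folder / Path(f).name)
    # make_archive writes '<folder>.zip' next to the folder and
    # returns the path to the new zip file
    return shutil.make_archive(str(folder), "zip", root_dir=folder)
```

Because the timecode rides in both the folder name and the resulting zip name, it survives even if the individual wav files get moved around later, which is the whole point of the scheme.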

Another thing I do is to only start my files at whole-second intervals. I always leave up to one second of dead air at the top of the files (but no more than that), starting my bounce at the nearest whole-second point before the actual audio. This does two things - it means that my audio is never starting hard at the top of the file, so no clicks or chopped-off attacks - and it means that since diddling with frames is out of the equation on my end, if there's any confusion about where a cue should start, it can be moved left and right by whole-second intervals and it should click into place. Any frame-by-frame adjustment of the start point on the dub stage means that it's a music editor tweak for their purposes, not an attempt to try to find where I thought it should sit. Simplifies things a little.
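In code, that snap-to-the-previous-whole-second rule is nearly a one-liner. A sketch, assuming "HH.MM.SS.FF" timecode strings as used in the folder names above:

```python
# Start the bounce at the whole second before the audio begins, so the
# file opens with under one second of dead air and can be nudged into
# place in whole-second steps. Assumes "HH.MM.SS.FF" timecode strings.
def bounce_start(audio_start_tc):
    hh, mm, ss, _frames = audio_start_tc.split(".")
    return f"{hh}.{mm}.{ss}.00"

print(bounce_start("01.26.12.07"))  # -> 01.26.12.00
```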

All of my organizational techniques come from my record-making years, when we were generating an absolute hailstorm of drum samples, weird guitar noises, live drums recorded in various studios, a million alternate vocal takes, and then putting the song aside for a year (or three!), and eventually coming back to it and risking the Wrath of Reznor™ if something couldn't be found. Never mind the road cases full of SyQuest cartridges and floppy discs for the Akai samplers, with twelve-character filename limits .... labelling was crucial. So I got pretty good at housekeeping!


----------



## animatione

Something seems too difficult for me here.

I have Spitfire Studio Strings.

How can I route the C1, C2, T1, T2, A, and O microphones (in Kontakt and in Logic)? I do not know how to do this.

I would like, for example, the O mic to be routed to the rear channels. How can I assign the O output to go to Rear Left and Rear Right?


----------



## dgburns

@animatione I don’t know the definitive answer; maybe @EvilDragon could shed some light. But I’ll take a stab at some possible workarounds.

The patch might not be set up to do more than stereo. If you want to keep it to one instrument, you might be able to edit the patch to send the groups (mics) to their own outputs, then set up the output section to accommodate them.

Failing that, you could also load one instance per mic in one Kontakt multi, all set to receive on the same MIDI channel?


----------



## X-Bassist

animatione said:


> Something seems too difficult for me here.
> 
> I have Spitfire Studio Strings.
> 
> How can I route the C1, C2, T1, T2, A, and O microphones (in Kontakt and in Logic)? I do not know how to do this.
> 
> I would like, for example, the O mic to be routed to the rear channels. How can I assign the O output to go to Rear Left and Rear Right?



First, set up your Kontakt outputs (you seem to have that figured out). Then click on each mic letter on the Spitfire interface for a pop-up menu where you can select its output. You may then need to save and reopen the session to have the extra Kontakt outputs show up in your DAW. Then you route those to surround record tracks. Until then, it’s probably best to set up the Kontakt outputs as stereo pairs (L+R, SL+SR, C+Sub only if you need it).

If using VE Pro, you’ll need to route Kontakt to VE Pro channels to DAW tracks. It helps to set this up as a template, then import tracks with all the routing preset.


----------



## animatione

Thank you very much for your kind help.

1. Could you tell me step by step what exactly to do in VE Pro, please? This is so important to me; I feel as if I'm entering a new era. I have a ready template, so I'm just wondering what to modify. I use Logic.

And 2. For the non-VE Pro route: wow, I never knew you could do anything by clicking C1, C2, etc. What outputs shall I create in the instance? 1|2 as St.1 and 3|4 as St.2? How shall I assign those mics? I want to use only four channels, with no center and no subwoofer (because I do not have a subwoofer).


----------

