# Surround (5.1, Quad) worth it?



## iobaaboi (Jun 22, 2017)

I am curious to hear some thoughts and experiences regarding making the jump to surround monitoring. I've been contemplating doing so myself for a while, mostly to be able to pan the "A" mics of my Spitfire libraries to the surrounds for a fully immersive experience. 

I am currently a student and do hope to one day work professionally in the media scoring industry, so I know surround is where I will eventually want to end up. Is it better to start working in 5.1 earlier, since the workflow/template will change accordingly? 

How many of you working composers (TV, Indie films, features) are in surround (5.1 or Quad?)? How do you normally deliver your cues?

Any personal experience on this topic from anyone who's willing to share is much appreciated!


----------



## charlieclouser (Jun 22, 2017)

For me, television is stereo and film is 5.1 - and it's been this way for 14 years or so. I am kind of sorry that I ever told any producers that I could / would do surround, since the logistics are so much more of a hassle than just plain old stereo - although it does sound pretty cool to hear some of my immersive drone-scape cues happening in surround in a big theater.

When I deliver 8 stems, the difference between 16 playback tracks (for stereo) and 48 (for 5.1) can mean a whole lot more upload time, disc space, etc. And, since I'm in Logic, the logistics of outputting surround stems are not nearly as streamlined as they are for users of Cubase - but I set up my routing scheme 14 years ago, before Cubase had many of the surround features that it now has, and I've got it down cold, so I don't mind. If you're in Logic, it can be a bit of a hassle and you are locked out of using the joystick-like surround panners to output to stem outputs (for now.... hint hint!). If you're in Cubase, the logistical framework for outputting surround stems is already there and works well. So which DAW you're using will determine how much pain you are likely to experience.
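The track-count arithmetic above is easy to sanity-check. Here is an illustrative sketch (the function names and the size estimate are my own, not part of anyone's actual template):

```python
# Illustrative sketch: 8 stems printed in stereo vs. 5.1.
STEMS = 8
CHANNELS = {"stereo": 2, "5.1": 6}

def playback_tracks(fmt: str, stems: int = STEMS) -> int:
    """Total printed audio channels for a given delivery format."""
    return stems * CHANNELS[fmt]

def print_size_gb(fmt: str, minutes: float, sr: int = 48_000, bits: int = 24) -> float:
    """Approximate uncompressed size of one cue's stems, in GB."""
    bytes_total = playback_tracks(fmt) * minutes * 60 * sr * (bits // 8)
    return bytes_total / 1e9

print(playback_tracks("stereo"))          # 16
print(playback_tracks("5.1"))             # 48
print(round(print_size_gb("5.1", 3), 2))  # ~1.24 GB for one 3-minute cue
```

Tripling the channel count per stem is exactly where the extra upload time and disc space come from.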


----------



## Gerhard Westphalen (Jun 22, 2017)

For me, surround is a must for watching movies. Quad is OK but the center channel makes a big difference. Going past 5.1 has limited returns. 

I would get it now and start seeing how other people are using it and learning to use it in your DAW. I've been working in surround for a few years and still often make mistakes in terms of bussing, sends, plugins, etc. when handling it all in surround. 

I've found that even if you're willing to do it (and push for it), low-budget films often don't care about it at all. I usually offer to do it and directors will say not to, regardless of the potential for future releases using surround. In some cases they don't want me to do surround at all and only want the music upmixed at the dub. The only thing I say is that if I start it in stereo, I can't go back and change it later (unless there's new budget to rebuild the session and remix it). 

Keep in mind that changing your template to surround can be a big performance hit, since you're doubling the number of channels coming from each instrument, especially if you're not already using the "A" mics.


----------



## iobaaboi (Jun 22, 2017)

Thank you both for your very detailed responses! 

I am kind of straddling Logic and Cubase; I'm comfortable in both since Logic was my first DAW, but these days I prefer and mostly work in Cubase. 

That's my mindset too, Gerhard: I might as well start learning now. I am aware of the processing hit my template will take going to surround, so I am planning for full surround integration to coincide with a major machine upgrade (either Vader MP or the new iMac Pro). 

So TV work is usually delivered in stereo and then upmixed? I feel like a lot of the TV composers I've gotten into lately have fairly modest studios that pale in comparison to those of their film-working counterparts.


----------



## Gerhard Westphalen (Jun 22, 2017)

iobaaboi said:


> Thank you both for your very detailed responses!
> 
> I am kind of straddling Logic and Cubase, I am comfortable in both since Logic was my first DAW but these days I prefer and mostly work in Cubase.
> 
> ...



I've always found surround routing in Logic very obscure and inflexible, but my Logic knowledge is limited, so perhaps I'm missing something. Mostly things like having plugins on only certain channels of a track, or having an independent surround panner for each send, which Cubase does. I'm not even sure if PT is as flexible. From what I've heard Reaper is, but I haven't used it.

I've only worked on film stuff, so my experiences are all surrounding that. I don't know what goes on for TV. Netflix is all surround and I imagine Amazon is as well. It seems like a lot of the lower-budget stuff focuses on people watching online rather than in theaters, so they only care about stereo. Personally I think they should do it in surround and just make sure it folds down nicely, even if they don't think it'll ever play in a theater.

I haven't noticed a performance hit just from the extra routing channels (both coming in from VEP and just mixing in Cubase). For me it's just been the extra RAM for the additional mic positions and running more plugins like reverbs (although you can run hundreds of Altiverbs on a modern machine).


----------



## JaikumarS (Jun 22, 2017)

charlieclouser said:


> For me, television is stereo and film are 5.1 - and it's been this way for 14 years or so. I am kind of sorry that I ever told any producers that I could / would do surround since the logistics are so much more of a hassle than just plain old stereo - although it does sound pretty cool to hear some of my immersive drone-scape cues happening in surround in a big theater.
> 
> When I deliver 8 stems, the difference between 16 playback tracks (for stereo) and 48 (for 5.1) can mean a whole lot more upload time, disc space, etc. And, since I'm in Logic, the logistics of outputting surround stems is not nearly as streamlined as it is for users of Cubase - but I set up my routing scheme 14 years ago, before Cubase had many of the surround features that it now has, and I've got it down cold so I don't mind. If you're in Logic, it can be a bit of a hassle and you are locked out of using the joystick-like surround panners to output to stem outputs (for now.... hint hint!). If you're in Cubase the logistic framework for outputting surround stems is already there and works well. So which DAW you're using will determine how much pain you are likely to experience.



Thank you, Mr. Clouser, for sharing the info. Just curious to know how many stems you would deliver to the dub stage when scoring a film in 5.1? And may I know what goes to the center and LFE?


----------



## charlieclouser (Jun 23, 2017)

JaikumarS said:


> Thank you Mr.Clouser for sharing the info. Just curious to know how many stems would you deliver to the dubstage while scoring the film in 5.1? May I know what goes to the center and LFE?.



Back when I started, my Logic machine was connected to my ProTools machine via 3x ADAT connections, so 24 channels of audio. That works out to three 5.1 stems plus a 5.1 composite mix. So from 2003 until 2014 or so I was delivering only three stems per cue - drums, keys, orch. This was often not enough, so for some of the more dense cues I'd have to do two passes so that I could have drums, perc, metals, keys, orch A, orch B - and it was very clumsy to do two passes on each cue. But I delivered about fifteen features in this format.

When I moved from the silver Mac Pro towers to the new Mac Pro cylinders, I switched to the MOTU AVB 112d interface on the Logic machine and HD Native Thunderbolt with an Avid MADI interface on the ProTools machine. So now I have 64 channels going across. My current template is to print 48 tracks per cue, which works out to seven 5.1 stems plus a 5.1 composite mix. Drums, perc, metals, keys, strings, brass, wildcard. 

Part of the reason I was limited to only using 48 out of the 64 channels that the MADI format provides was because before v10.3, Logic was limited to 64 busses, and my output matrix of stem sub masters relies on busses, as do my per-stem send effects. Now that Logic supports 256 busses I can expand my output matrix and per-stem send effects to give me nine 5.1 stems plus a 5.1 composite mix, with four channels left over.

As to the center and Lfe channels, I always leave the center channels empty. (I could delete these from my print template on the ProTools machine and open up more channels.) I always ask the mixers on the dub stage if they miss having my score in the center and they always say, "No man, that's perfect. I won't tell if you don't!" If I was trying to accurately reproduce true 5.1 acoustic recordings, or feature some solo instruments in the center, the way I do it would not be ideal - but my scores are very much hybrid. My school of thought is that if you have enough stems, you don't need to put a solo cello in the center channel of an orchestra stem - you can just put the solo cello on its own stem and the mixers can rotate it into the center if needed. The mixers on the dub stage have (so far) been quite happy that I am leaving the center channel empty since they can have it all to themselves for dialog and sfx - but I do realize this is not ideal or "correct". 

But the main reason I leave the center empty is because I am using Logic, and its surround implementation leaves much to be desired. At the moment, although all internal signal paths can be 5.1, the surround panners in Logic can only address a single set of 5.1 outputs, which are configured in Preferences > Audio > I/O Assignments. This means that you can't use surround panners to address multiple sets of surround stem sub-masters. (This may change soon though - keep your fingers crossed!) So. Because of this limitation, I must build each stem's sub-masters out of two stereo Aux objects (front L+R and rear L+R) and two mono Aux objects (center and Lfe). As a result, any instrument or audio track is set to stereo (not surround) and is routed to the bus that corresponds to the desired stem's Front L+R sub master. If I want to route that signal to that stem's Center, Lfe, or Rear L+R pair, I must use a send from the individual channel to the desired bus. So that means that I can't do true L-C-R panning - and simulating it and getting an accurate balance between L-C-R can be tricky when trying to do it with sends.

In short, it's a super-hassle with Logic at the moment. I have spent some time with Clemens and Jan-Hikkert from the Logic team recently, showing them in great detail why this is a problem - and as usual, Clemens thought up a solution in about fifteen seconds, but it will probably take a little longer than that to actually implement it as a feature in Logic. Stay tuned.

As to Lfe, I do sometimes send individual elements within a stem to that stem's Lfe channel, by using a send on the individual element's channel strip. This works fine, but I only do it rarely. Things that might hit the Lfe are kick drum, synth bass, sub booms, etc. - so only a few of the stems would have stuff on that channel. Strings / brass / choir stems? Nothing on the Lfe. Many of my scores have an almost industrial feel, with lots of hammering programmed drums and hard synth bass, so that works well. 

But.

I am contemplating switching from true 5.1 stems to simple Quad stems. Front L/R + Rear L/R. (I believe this is how JunkieXL does it) This would obviously open up lots of channels so that I could do more stems, giving me 15 stems plus a composite mix in 64 channels. With the stems so split up, it would increase the likelihood that an element that the mixers want to put in the center or Lfe would be all by itself on its own stem. So instead of me sending just the kick drum from a complex drum stem to that stem's Lfe channel, I could put the kick on its own stem and the mixers could route it there without also putting any other drums in the Lfe. Which approach is more hassle has yet to be decided! 
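The stem-budget arithmetic running through this post (24 ADAT channels, a 48-track print, quad vs. 5.1) can be sketched in a few lines. This is a hypothetical helper of my own, not anyone's actual session math:

```python
# How many stems of a given width fit into a fixed channel count,
# alongside one composite mix of the same width.
def stems_that_fit(total_channels: int, channels_per_stem: int) -> int:
    return total_channels // channels_per_stem - 1

print(stems_that_fit(24, 6))  # 3  -> three 5.1 stems + 5.1 mix over 3x ADAT
print(stems_that_fit(48, 6))  # 7  -> seven 5.1 stems + mix in a 48-track print
print(stems_that_fit(60, 6))  # 9  -> nine 5.1 stems + mix, 4 MADI channels spare
print(stems_that_fit(64, 4))  # 15 -> fifteen quad stems + quad mix fill all 64
```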

My favorite way of working is with just three stems in 5.1. Drums, keys, orch - simple. I like it when any one or two stems still sounds like a semi-complete piece of music, and fewer stems means fewer per-stem send effects, fewer mastering limiters on each stem's sub-master, fewer files, quicker uploads, etc. But that can really limit the flexibility that the mixers (and music editor) have on the dub stage. 

But at no point have the mixers ever told me that they wished I had given them more stems with more separation, so me switching from three to seven stems (or maybe fifteen?) isn't because I or they think it's needed - it's just me wanting to push forward to the limits of what my rigs can do. Folly? Perhaps. Only time will tell.

Somewhere on this forum there are some detailed posts that I made with screenshots and more detailed explanations of my output routing matrix. Maybe someone bookmarked them and can link to them? I can't find them at the moment.


----------



## JaikumarS (Jun 23, 2017)

Thank you so much for taking the time to explain it so beautifully. I am on Cubase 9 and using a separate machine for PT 10.3 (for printing and the video).
Would you recommend having a clock?
I'll definitely search for the other threads regarding this topic in this forum.

Found some of the threads below -

* http://www.vi-control.net/forum/viewtop ... t=surround

* http://www.vi-control.net/forum/viewtop ... t=surround

* http://www.vi-control.net/forum/viewtop ... t=surround


----------



## chrisr (Jun 23, 2017)

This is a very timely thread for me as I'm also considering moving to surround in some flavour or other. My work is overwhelmingly for kids TV (stereo) but I've also now had a couple of cinema releases (for the same kids title) and will have more in the coming years I hope.

My stems for the movies have been delivered in stereo: 12 stems plus a mixed master, which were then upmixed at the dubbing stage using the Nugen upmixer, resulting in a pleasing "blurry" surroundishness that was widened and narrowed at various times to be more or less immersive across various scenes.

It might be that if my future surround mixes are poor or flawed (quite likely at this point, I guess...) the simple stereo-upmix approach would actually yield better results, so I'd like to understand any surround underscoring conventions as much as I can before I set out, so that I'm not just trying to be clever whilst actually delivering something worse than I currently do(!!).

When I do make the move to surround - and I think I will at some point - I'll be following some of Charlie's previous advice and getting hold of a Blu-ray player with analogue outs (there are several on the market) so that I can really start to analyse some movie scores and get an idea of what's actually happening in other people's mixes. I'm aware that there are some Blu-rays with surround bonus features and will be picking them up also.

So, I'm really interested to hear what mixing approaches people are taking to more traditional scoring in surround, so any discussion/advice here would be really welcome. I could see a strong argument for starting with a very minimal approach of staying with quite dry stereo elements placed into a surround reverb like the Phoenix Surround, rather than starting to feed ambient mics from different sources/rooms to the surrounds, for example. On the other hand it might be that this approach is too 'roomy' and that a less natural/hyper-real approach is more engaging to listen to.

Until I take the plunge and get a surround monitoring setup in my studio I don't suppose I'll be in much of a position to adopt an approach - but the thoughts and experiences of others regarding surround mixing philosophies for traditional musical elements (specifically in underscore) would be of very great interest to me, if anyone has any advice to offer or can point me in the direction of any good resources? Typically I find that much of the material out there about surround is aimed at dealing with mixing Dx/sfx, rather than discussing aesthetic approaches to mixing the underscore itself.


----------



## chrisr (Jun 23, 2017)

JaikumarS said:


> Thank you so much for taking time and explaining it beautifully. I am on Cubase 9 and using a separate machine for PT10.3 (for printing and the Video).
> Would you recommend having a clock?
> I'll definitely search for the other threads regarding this topic in this forum.
> 
> ...



Oh, thanks for finding these too!! Will take a good look through it all this evening 

*Edit* - OK I couldn't wait and had a look already - there's some great info there - just what I was looking for thanks to all who contributed to those old threads!


----------



## Scoremixer (Jun 23, 2017)

iobaaboi said:


> I am curious to here some thoughts and experiences regarding making the jump to surround monitoring. I've been contemplating doing so myself for a while, mostly to be able to pan the "A" mics of my Spitfire libraries to the surrounds for a fully immersive experience.
> 
> I am currently a student and do hope to one day work professional in the media scoring industry, so I know surround is where I will eventually want to end up. Is it better to start working in 5.1 earlier since the workflow/template will change accordingly?



If you’re not currently working on things that require delivery in 5.1, then the question of whether to make the jump or not depends on what you want to get out of it. If your focus is just on composing, then stay away from 5.1- it’s a technical distraction and comes with additional processing, workflow and delivery burdens. 

If you want to develop technical skills though, it could be an interesting (if pricey) detour- you’ll definitely learn some things, and it’s the kind of experience that could come in useful if you ever find yourself assisting a more established composer.

There aren’t that many composers (even amongst the A-list) that use 5.1 setups and templates as part of their day-to-day workflow. For the more established guys, 5.1 comes from live recordings and a dedicated 5.1 music mix- almost everything I’ve ever worked on has started life as a stereo tracklay from the composer’s rig.

Here follow some random thoughts about mixing in 5.1... Live orchestra normally gets the Decca tree mics hard-routed L-C-R, and the room surround mics go Ls and Rs. Overdubs tend to be recorded with a similar room-mic philosophy, so there's always some native content for each speaker channel, even if the bulk of the sound comes from close mics panned to the main left and right speakers. Aside from Tree C and some reverb there's not much that makes it to the centre channel- certainly anything vocal, loud, bright or direct is a bad idea. Ambience, low-freq and pad stuff is less likely to get nixed at the dub. 

There are a few ways to generate surround content from stereo prelays. Steady state, non-attention grabby things like pads often get pulled back a bit just with the standard surround panner, so they come out of the front and surround speakers (something like a 66/33 front-back balance could be a good starting point). Then, obviously things like surround reverbs, delays with different settings panned front and back, micropitch spreaders etc can all work well on more synthy and steady state content. Upmixers like those made by Waves or Nugen can sometimes work a bit better to surroundify real elements where needed. Things to avoid in the surrounds are lots of hi freq or direct percussive content. 
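As a rough illustration of that 66/33 front-back starting point, here is one way a constant-power front/back balance could be computed. This is my own sketch, not the algorithm of any particular DAW's surround panner:

```python
import math

# Constant-power front/back balance: back = 0.0 is all-front,
# back = 1.0 is all-rear. back = 0.33 lands near the 66/33 split above.
def front_back_gains(back: float) -> tuple[float, float]:
    """Return (front_gain, rear_gain); front^2 + rear^2 == 1 for any back."""
    theta = back * math.pi / 2
    return math.cos(theta), math.sin(theta)

front, rear = front_back_gains(0.33)
print(f"front {front:.3f}, rear {rear:.3f}")  # front stays dominant
```

The equal-power law keeps overall energy constant as an element is pulled back, which is why pads can sit "around" the listener without jumping out in level.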

Sub really depends. As a rule of thumb a little percussion low end and other transient effects can work. Sending sub feeds through subharmonic synthesisers like the Waves LoAir or AVID Prosub generates bass content that’s less correlated to the original signal- that means fewer surprises when the 5.1 is downmixed to stereo, or when the dubbing mixer kills your sub channel entirely. Tend to avoid steady-state, pitched things (obviously caveats apply, sometimes you need a load of energy going to the sub, a la Gravity).
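The downmix point is worth making concrete: a common ITU-style 5.1-to-stereo fold-down simply discards the LFE, which is why content that exists only there can vanish. Coefficients vary by spec and deliverable; the -3 dB values below are just the usual textbook starting point:

```python
# ITU-style 5.1 -> stereo fold-down (per sample). Note the LFE term is
# simply dropped - anything living only in that channel disappears.
A = 0.7071  # -3 dB, the common textbook coefficient

def fold_down(L, R, C, LFE, Ls, Rs):
    Lo = L + A * C + A * Ls
    Ro = R + A * C + A * Rs
    return Lo, Ro

# A signal present only in the LFE contributes nothing to the stereo mix:
print(fold_down(0, 0, 0, 1.0, 0, 0))  # (0.0, 0.0)
```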

If in any doubt, conservative is better than extravagant in 5.1- dubbers would prefer to just get well balanced, clean, usefully separated stems rather than something they’ll have to squash and mute and mangle to make work. Very often 5.1 music mixes are further surroundified at the dub- there’s definitely a trend for pulling what would be the front L & R channels back and into the surrounds a little bit in order to open up the front for dial+fx. Bear in mind the guy mixing is also often the guy who’s spent months working on the sound effects (or is closely aligned with the effects sound team), particularly in TV. That doesn’t bode well in the battle between your carefully crafted LFE channel and his carefully crafted Oscar-contender explosion. 

To contradict what Charlie said, the trend these days is for more and more stems- 12-24 stems in 5.1 or 7.1 (most films these days are actually dubbed in 7.1, and music mixing is gradually migrating there…), plus additional Dolby Atmos components if required- that’s the norm for delivery on films. The more stems you can provide, the more flexibility the dub has and the less likely it is they’ll have to do something grisly to make a cue work against dialogue/fx. 

If you want to have a play in surround and maximise the usefulness of your Spitfire mic splits to generate some natural surround content, then working in quad would definitely be a good compromise.


----------



## iobaaboi (Jun 23, 2017)

Thank you to all that have taken the time to contribute to this thread, it has already been very educational and informative. I'm glad others in similar situations to me have been able to benefit as well. 

Scoremixer's post certainly gives me a lot to think about. It does sound like a fair number of technical headaches, and I am nowhere near having a legit ProTools printing rig. 

Quad sounds like it may be the way to go for now, I would love to hear how my Spitfire libraries sound in a 3D space but don't want to unnecessarily distract myself from working on my composing. 

Any other personal insight anybody would like to share about making the switch would still be appreciated!


----------



## dgburns (Jun 24, 2017)

charlieclouser said:


> I am contemplating switching from true 5.1 stems to simple Quad stems.



Yup, I went quad and set up Logic to be quad in the prefs. It's a little lighter on system resources too.


----------



## JaikumarS (Jun 24, 2017)

charlieclouser said:


> Back when I started, my Logic machine was connected to my ProTools machine via 3x ADAT connections, so 24 channels of audio. That works out to three 5.1 stems plus a 5.1 composite mix. So from 2003 until 2014 or so I was delivering only three stems per cue - drums, keys, orch. This was often not enough, so for some of the more dense cues I'd have to do two passes so that I could have drums, perc, metals, keys, orch A, orch B - and it was very clumsy to do two passes on each cue. But I delivered about fifteen features in this format.
> 
> When I moved from the silver Mac Pro towers to the new Mac Pro cylinders, I switched to the MOTU AVB 112d interface on the Logic machine and HD Native Thunderbolt with an Avid MADI interface on the ProTools machine. So now I have 64 channels going across. My current template is to print 48 tracks per cue, which works out to seven 5.1 stems plus a 5.1 composite mix. Drums, perc, metals, keys, strings, brass, wildcard.
> 
> ...


Mr. Clouser - could you please share how you keep the composing rig locked sample-/frame-accurate to the video on your PT machine? I am experiencing slight latency while hosting video on PT and slaving it to Cubase, connected via Ethernet MTC.


----------



## charlieclouser (Jun 24, 2017)

JaikumarS said:


> Mr.Clouser - Could you please share how you have the composing rig locked sample / frame accurate to your video in PT machine. I am experiencing slight latency while hosting video on PT and slaving it to Cubase connected via Ethernet - MTC.



I don't host video on my ProTools machine - I use a *third* computer for that. My video machine is a Mac Mini (i7, 8gb RAM, 512 SSD) which is running VideoSlave software. This machine has an HDMI output that goes to the big tv on the wall, and the audio comes out of the headphone jack on the back into my Logic rig where I can monitor it via the CueMix software for the MOTU audio interfaces so it doesn't actually come up on any Auxes or whatever inside Logic. I send MTC to that machine from Logic over Network MIDI using Apple's CoreMIDI Network Session function. 

To check and adjust sync I put the movie inside Logic AND on the VideoSlave machine, then I compare the two while they're running. VideoSlave has some adjustments that can be made to tweak sync, but to be honest I don't think I had to adjust those settings at all. In any case, once this is adjusted I don't need to check or adjust it every time - these days I just drop the movie into VideoSlave and go. I do a quick check to verify that the audible 2-pop in the movie matches up with an audio beep that I put into Logic at the same timecode point. Boom, done.

I've run video on a third computer for many, many years, since the days when I was using G4 and G5 computers for Logic. Back then, having the video inside Logic took a serious toll on the CPU, so I would use an old G4 Mac Mini running Virtual VTR software, which is similar to VideoSlave. Now that the computers are so fast this isn't such an issue, but I still like the workflow of having the video on a third computer. This means I can leave my ProTools machine turned off through the whole composing process until it's time to print. The VideoSlave machine just hangs off of the Logic machine and just playing video is no stress at all for it, so it runs smoothly and locks up in less than a second.


----------



## JaikumarS (Jun 24, 2017)

charlieclouser said:


> I don't host video on my ProTools machine - I use a *third* computer for that. My video machine is a Mac Mini (i7, 8gb RAM, 512 SSD) which is running VideoSlave software. This machine has an HDMI output that goes to the big tv on the wall, and the audio comes out of the headphone jack on the back into my Logic rig where I can monitor it via the CueMix software for the MOTU audio interfaces so it doesn't actually come up on any Auxes or whatever inside Logic. I send MTC to that machine from Logic over Network MIDI using Apple's CoreMIDI Network Session function.
> 
> To check and adjust sync I put the movie inside Logic AND on the VideoSlave machine, then I compare the two while they're running. VideoSlave has some adjustments that can be made to tweak sync, but to be honest I don't think I had to adjust those settings at all. In any case, once this is adjusted I don't need to check or adjust it every time - these days I just drop the movie into VideoSlave and go. I do a quick check to verify that the audible 2-pop in the movie matches up with an audio beep that I put into Logic at the same timecode point. Boom, done.
> 
> I've run video on a third computer for many, many years, since the days when I was using G4 and G5 computers for Logic. Back then, having the video inside Logic took a serious toll on the CPU, so I would use an old G4 Mac Mini running Virtual VTR software, which is similar to VideoSlave. Now that the computers are so fast this isn't such an issue, but I still like the workflow of having the video on a third computer. This means I can leave my ProTools machine turned off through the whole composing process until it's time to print. The VideoSlave machine just hangs off of the Logic machine and just playing video is no stress at all for it, so it runs smoothly and locks up in less than a second.



Thank you, Mr. Clouser, for writing back. I'll look into VideoSlave 3.


----------



## dgburns (Jun 25, 2017)

charlieclouser said:


> I don't host video on my ProTools machine - I use a *third* computer for that. My video machine is a Mac Mini (i7, 8gb RAM, 512 SSD) which is running VideoSlave software. This machine has an HDMI output that goes to the big tv on the wall, and the audio comes out of the headphone jack on the back into my Logic rig where I can monitor it via the CueMix software for the MOTU audio interfaces so it doesn't actually come up on any Auxes or whatever inside Logic. I send MTC to that machine from Logic over Network MIDI using Apple's CoreMIDI Network Session function.
> 
> To check and adjust sync I put the movie inside Logic AND on the VideoSlave machine, then I compare the two while they're running. VideoSlave has some adjustments that can be made to tweak sync, but to be honest I don't think I had to adjust those settings at all. In any case, once this is adjusted I don't need to check or adjust it every time - these days I just drop the movie into VideoSlave and go. I do a quick check to verify that the audible 2-pop in the movie matches up with an audio beep that I put into Logic at the same timecode point. Boom, done.
> 
> I've run video on a third computer for many, many years, since the days when I was using G4 and G5 computers for Logic. Back then, having the video inside Logic took a serious toll on the CPU, so I would use an old G4 Mac Mini running Virtual VTR software, which is similar to VideoSlave. Now that the computers are so fast this isn't such an issue, but I still like the workflow of having the video on a third computer. This means I can leave my ProTools machine turned off through the whole composing process until it's time to print. The VideoSlave machine just hangs off of the Logic machine and just playing video is no stress at all for it, so it runs smoothly and locks up in less than a second.



Native QuickTime playback on a Mac, in Logic or ProTools, is gonna be late by a few quarter-frames, FYI. There are devices you can use to calibrate the precise latency the video is playing at - usually somewhere within a few quarter-frames. When you park your cursor at a specific place, the timecode location and picture will be exactly aligned (so spotting markers at specific places will not be affected by the playback latency). Don't trust the QuickTime to be in sync just playing out of the Mac. 
Late audio always looks better than early audio, especially with lip sync.
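To put that "few quarter frames" figure in perspective, a quick back-of-envelope conversion (my own illustration) at common frame rates:

```python
# One quarter frame, in milliseconds, at a given frame rate.
def quarter_frame_ms(fps: float) -> float:
    return 1000.0 / fps / 4

for fps in (23.976, 24.0, 25.0, 29.97):
    print(f"{fps:>6} fps: {quarter_frame_ms(fps):.2f} ms per quarter frame")
```

Roughly 10 ms per quarter frame at cinema rates - small, but a couple of them is enough to read as loose lip sync.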

my two cents.


----------



## colony nofi (Jun 25, 2017)

I'm probably going to end up saying everything that's been said to date.... but here goes anyway.

Quad is great. If something is going to end up on Netflix, TV, etc. - where most docos, TV drama and the like are going - then quad solves a LOT of problems very quickly for a re-recording mixer: both "where should this sound go" (that decision has already been made by the music mixer - or by you, if you're mixing your compositions yourself!) AND not getting in the way of the center dialog. 
Now, for live orchestral recordings, I too have used L-C-R from the tree, with all spot mics generally in the front stereo field, and various spaced arrays or far mics as the start of an enveloping surround space. This kind of arrangement is VERY possible with libraries like Spitfire, minus the C - but the C can easily be generated if someone requires it. However, I have seen the C turned down in final re-recording sessions more often than I've seen it need to be "created" from the fronts... exceptions, of course, are featured musical moments like titles, where there is little to no dialog, or docos without much narration. Then it can be quite lovely. Know your material - and speak to the dubbing stage / dubbing mixer at the earliest possible opportunity about what they want.
Don't put too much in the surrounds - but also don't be afraid of them. Once you are in a big (film) theatre, you can afford to be a little more adventurous without breaking the audience's suspension of disbelief. A wonderful musical "hug" is only a few interesting mix decisions away. Atmos music mixing is great fun. I'm amazed at how small changes - putting just a few small amounts of audio above the audience - can make things so incredibly immersive and interesting. Less is often more... but YMMV. 

Reverbs. This is the trickiest thing in the world when going from a nearfield surround mix to a cinema mix. Nearfield is often GREAT - but has some limitations when it comes to how things feel in a cinema. True surround reverbs are few and far between - but there is also nothing wrong with running multiple stereo reverbs for different channels, with slightly tweaked settings to make the room wrap around you. I've recently had fun trying a (real) Bricasti into Anymix Pro (using it to upmix) and had some wild results. Still not sure anything beats a true surround reverb though. These EAT your processing power, though.
Keep them on separate stems where possible (meaning - always!!!) Your dubbing engineer will love you for it - and be able to cover for any little mistakes / misjudgements very easily without even mentioning it to producers. These guys are your friends - and can make the whole final mix stage (of film / tv / whatever) much easier if they are on your team!

Stems = amazing - but don't go overboard. Most big projects now seem to be asking for 8 to 12 sets of stems (from stereo to quad, 5.1, 7.1 or Atmos!) and most of the time will want separate files. In that case, there is also usually an assistant around to conform everything into a session that is later imported into the mix session. Don't supply interleaved files unless you know 100% they are going to work. There are still problems moving interleaved files between different systems (how that can be in 2017, I don't know...)
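If you're handed an interleaved multichannel file and need separate mono files, the split is mechanical. Here's a minimal stdlib-only sketch; the channel names assume SMPTE 5.1 order (L R C LFE Ls Rs), which you'd adjust to your delivery spec:

```python
# Split an interleaved surround WAV into per-channel mono files - one
# workaround for the interleaved-file compatibility issues mentioned
# above. Uses only the stdlib wave module; channel order is assumed.

import wave

CHANNEL_NAMES = ["L", "R", "C", "LFE", "Ls", "Rs"]

def split_interleaved(path: str, out_prefix: str) -> list:
    """Write each channel of an interleaved WAV to its own mono file."""
    with wave.open(path, "rb") as src:
        n_ch = src.getnchannels()
        width = src.getsampwidth()
        rate = src.getframerate()
        data = src.readframes(src.getnframes())

    frame_size = n_ch * width  # bytes per interleaved frame
    outputs = []
    for ch in range(n_ch):
        name = CHANNEL_NAMES[ch] if ch < len(CHANNEL_NAMES) else str(ch)
        out_path = f"{out_prefix}_{name}.wav"
        # Pull this channel's bytes out of every interleaved frame.
        mono = b"".join(
            data[i + ch * width : i + (ch + 1) * width]
            for i in range(0, len(data), frame_size)
        )
        with wave.open(out_path, "wb") as dst:
            dst.setnchannels(1)
            dst.setsampwidth(width)
            dst.setframerate(rate)
            dst.writeframes(mono)
        outputs.append(out_path)
    return outputs
```

In practice you'd do this in your DAW's export dialog, but having a scripted fallback is handy when a delivery comes back in the wrong layout.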

Name your stems with the timecode of the start of the file. Don't rely on the metadata in BWAVs. This is also useful when films are reconformed! 
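As a sketch of that naming convention (the frame rate, separator, and filename scheme here are assumptions - match them to your project's spec), converting a start time into a non-drop timecode string is straightforward:

```python
# Put the start timecode in the stem filename itself, as suggested
# above, rather than relying on BWF metadata. Hypothetical naming
# scheme for illustration; non-drop-frame timecode assumed.

def start_timecode(start_seconds: float, fps: int = 24) -> str:
    """Format a start time as a non-drop HH-MM-SS-FF timecode string."""
    total_frames = round(start_seconds * fps)
    frames = total_frames % fps
    total_seconds = total_frames // fps
    h, rem = divmod(total_seconds, 3600)
    m, s = divmod(rem, 60)
    return f"{h:02d}-{m:02d}-{s:02d}-{frames:02d}"

def stem_filename(cue: str, stem: str, start_seconds: float,
                  fps: int = 24) -> str:
    """Build a stem filename carrying its own start timecode."""
    return f"{cue}_{stem}_{start_timecode(start_seconds, fps)}.wav"

# stem_filename("1m01", "STRINGS", 3661.5)
# -> "1m01_STRINGS_01-01-01-12.wav"
```

Hyphens rather than colons keep the timecode filesystem-safe, and a reconformed picture only changes the number in the name, not buried metadata.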

So - it's all fun - but it is also complicated. I personally find Nuendo much easier for surround than Pro Tools (really!) and Logic, but they can all do the job in their own way. The Anymix panner in Nuendo is worth its weight in gold, and Dolby Atmos "bed" tracks can be handled natively in Nuendo just fine!

For a composer - I would wholeheartedly suggest you delve in first with quad. Your room may need quite different acoustic treatment... a surround room by its nature is usually designed very differently from a stereo room, even for just nearfield listening. When it comes to surround, it is much easier to make a drier small nearfield room than one with some diffusion! This kinda sucks - but you also get used to it very quickly. I have designed a few rooms for surround, and never achieved the RT60 that was wanted - but everyone has liked the rooms after getting used to their sound over time. It takes a very different approach to diffusion (indeed, you may need to start with virtually none) as first reflections become far more complicated with more speakers around the place. And knowing what's going on accurately definitely helps for translation into larger spaces. I mix *slightly* drier in a dry nearfield room than perhaps I would in a more diffuse room. But that's all down to personal preference / the preferences of composers / dubbing mixers etc. 

My cylinder Mac Pro copes very well with massive sessions in stereo... but I need to be much more careful with resources working in surround. It more than doubles the drain on the machine.

Hope this has helped in a small way. Cheers!


----------



## Tiko (Jun 26, 2017)

Thanks for all the insight here! I compose & deliver in quad a lot and it definitely eats up processing power. My setup can handle stereo like nobody's business but when doing surround I have to be careful, I'm considering a slave to help with that.


----------



## Sekkle (Jun 29, 2017)

Hi guys,

Some really interesting insights in this thread!

Although it's not completely on topic and more about surround mixing techniques I thought I'd share a blog post I wrote a while back - Surround Sound Techniques for Mixing and Composing for the Screen

Since I put it together, I've tried a number of other approaches/techniques and learnt more by reading articles and forum threads (like this one). It seems to be a process that evolves with every project and can depend a lot on the dubbing mixer's requirements; however, I thought there might be a few things here that could be relevant/interesting.


----------



## chrisr (Jun 30, 2017)

Sekkleman said:


> I thought I'd share a blog post I wrote a while back - Surround Sound Techniques for Mixing and Composing for the Screen



Really great piece, thank you!


----------



## Sekkle (Jul 2, 2017)

chrisr said:


> Really great piece, thank you!



No worries! Great to hear you found it interesting


----------



## JaikumarS (Aug 6, 2017)

Scoremixer said:


> If you’re not currently working on things that require delivery in 5.1, then the question of whether to make the jump or not depends on what you want to get out of it. If your focus is just on composing, then stay away from 5.1- it’s a technical distraction and comes with additional processing, workflow and delivery burdens.
> 
> If you want to develop technical skills though, it could be an interesting (if pricey) detour- you’ll definitely learn some things, and it’s the kind of experience that could come in useful if you ever find yourself assisting a more established composer.
> 
> ...




@Scoremixer - Could you please share what goes into Lss, Lsr, Rss and Rsr in a 7.1 setup?

How do music, Foley, SFX and dialogue get mixed in a 7.1 surround setup?

Thank you

Regards,
-JS


----------



## vewilya (Aug 6, 2017)

Thanks guys for this. Interesting read! Cleared up a lot of questions I had.


----------



## Scoremixer (Aug 6, 2017)

JaikumarS said:


> @Scoremixer - Could you pls share what goes into Lss, Lsr, Rss and Rsr in a 7.1 Setup?
> 
> How does music,Foley, SFX and Dialogues gets mixed in a 7.1 Surround Setup?
> 
> ...



It's much the same as with 5.1, just with more speaker real estate to play with. 

I can't speak for the Dial + FX as that's not my area of expertise, but for music broadly the same rules as 5.1 apply.
7.1 obviously gives you more freedom to put stuff in the sides, and 7.1 speaker installations are generally better spec'd (with the Atmos spec supposedly requiring fully full-range surround speakers), so you can put more low end into them without fear of it disappearing in a cinema.

In practice, for orchestral recordings engineers will tend to put out more ambient room mic options that are subsequently mixed discretely into the side channels. Apart from that, just more of the same - 7.0 reverbs, delays, pitch spreaders, slightly bolder surround panning... generally nothing radical, unless the film calls for being radical.


----------



## JaikumarS (Aug 6, 2017)

Scoremixer said:


> It's much the same as with 5.1, just with more speaker real estate to play with.
> 
> I can't speak for the Dial + FX as that's not my area of expertise, but for music broadly the same rules as 5.1 apply.
> 7.1 obviously gives you more freedom to put stuff in the sides, and generally 7.1 speaker installations are better spec'd (and in Atmos spec supposedly totally full range surround speakers) so you can put more low end into them without fear of it disappearing in a cinema.
> ...



Thank you


----------



## synthetic (Aug 14, 2017)

One thing that hasn't been mentioned yet (except by Hans, many times) is that surround is more than stereo with an extra stereo reverb going to the rear channels. Try to find some surround-ready libraries (or make your own) for a bigger sound. The Spitfire HZ and Redux percussion libraries sound great this way. I have MIDI faders for the close, mid, and far mics for Spitfire libraries and the far (for percussion at least) goes to the rear. 

It's tough to find score recordings in surround to study. One of the only ones I know of is on the Inception Blu-ray (only the 2-disc Special Edition - I had to buy it twice). Either mute the front L/R channels or switch those monitors off and you can hear what's going to the surrounds and center (and what is not).


----------



## Gerhard Westphalen (Aug 18, 2017)

synthetic said:


> It's tough to find score recordings in surround to study. One of the only ones I know of is on the Inception Blu Ray (only the 2-disc Special Edition – I had to buy it twice.) Either mute the front L/R channels or switch those monitors off and you can hear what's going to the surrounds and center (and what is not.)



Although not as interesting as some of the work being done in soundtracks, I've found it helpful to listen to albums in surround. You can find hundreds of DSD albums in surround that you can play with certain media players. 

If anyone is interested in the Inception surround soundtrack, I have a very detailed mix analysis (what sort of signal is sent to what speakers for all elements in the mix) up on my website of the entire soundtrack as a Cubase session.


----------

