# Composing directly in 5.1 / 7.1 / Atmos... the 2021-2022 Thread!



## IvanP (Dec 31, 2021)

Hi! First of all, best wishes for a new year full of nice projects and free of pandemic-related issues!

May I ask how many of you are programming and composing for media directly in a 5.1 template? With Atmos and surround basically the new norm on TV platforms, and in 90% of the media projects I've done since last year, I'm thinking of finally switching to at least a full 5.1 sample-based template instead of stereo.

How many of you (besides Remote Control-based setups; EDIT: meaning they have been doing this for years now) are doing mockups and composing to picture in 5.1 (or 7.1, Atmos, etc.)? Considering that any surround mix will be done from scratch in Pro Tools later, I'm wondering if I'm just late to the party or if most of you still compose in a stereo environment, given that cue review for approval will still be done with a stereo-rendered .mov or similar.

Thank you and happy new year!

Ivan


----------



## Nils Neumann (Dec 31, 2021)

I've been working in 5.1 surround for about two years, and just recently upgraded to 7.1 and then swiftly to Atmos (well, only 7.1.2, actually).
It's great fun; it's really a joy to work in and to hear your compositions and productions in surround/immersive.
But mainly I do it for the joy of it. The reality is still that your work is more likely to be consumed in stereo or mono than anything else. If you are smart in setting up your template and wise with your placement decisions, the downmixes work in any format. Nobody has time to do separate stereo, 5.1, 7.1 and Atmos mixes, at least at my (low) level. For me it only slightly increased the time it takes to finish a cue, and I can justify that with the fun I have with it.
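To make the fold-down point concrete: a DAW's automatic 5.1-to-stereo downmix is essentially a fixed gain matrix. A minimal sketch, assuming the common ITU-style -3 dB coefficients (individual DAWs expose slightly different defaults, and the LFE is often simply dropped in consumer fold-downs):

```python
import math

def downmix_51_to_stereo(L, R, C, LFE, Ls, Rs, lfe_gain=0.0):
    """Fold one sample frame of 5.1 (L, R, C, LFE, Ls, Rs) down to Lo/Ro.

    Centre and surrounds get the usual -3 dB (~0.707) coefficients;
    lfe_gain defaults to 0.0, i.e. the LFE is discarded.
    """
    g = 1.0 / math.sqrt(2.0)  # ~0.707, i.e. -3 dB
    Lo = L + g * C + g * Ls + lfe_gain * LFE
    Ro = R + g * C + g * Rs + lfe_gain * LFE
    return Lo, Ro

# A centred element folds equally into both stereo channels,
# while hard L/R material passes through untouched - which is why
# sensible placement decisions survive the downmix.
```

This is why, if you place things wisely, the stereo fold-down "just works": every channel has a fixed, predictable destination in the stereo image.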

I did 2 TV movies this year, one was Stereo, one in 5.1. But the music was delivered in Stereo stems.

Also did a student film and was responsible for the sound/foley/music and final sound mix in 7.1.

So there seems to be some demand. But they don't pay extra for it; again, I'm not at a very high level in the industry and still studying at the Conservatorium, so take everything with a grain of salt.
In my experience, sound design needs surround more than music does.

I often do sound as a side "hustle" to get the composition gig. I don't know if it would be that good an idea if you only compose and don't find any enjoyment in the enhanced sound.

From a business perspective, at my stage it was a big investment that doesn't pay off. I hope the experience and skill will pay off in the long term.

Another thing about the whole Atmos and immersive world is that it completely changes the rules of mixing in some ways. It's really exciting, as everyone is trying to figure out how to mix in Atmos. Beyond how to set up the Dolby Atmos renderer, there is almost no information on how to mix in this format. It's like the wild west again. I like that; stereo feels so boring to me now.

On the other hand, I had the director of the previously mentioned student film in my studio for the final mix. I switched between 7.1 and stereo. He couldn't hear the difference... so yeah... Consumers don't really care. And to be honest, the sound system for my projector was dirt cheap, and only one speaker has been working for over a year. I live with another composer/producer, and neither of us has cared to buy a replacement. That might be because all our money is in the studio.

Really excited to hear the experiences of other members of this forum.


----------



## IvanP (Dec 31, 2021)

Thank you for your nice answer, Nils! I totally agree with what you say... I always have everything mixed by an external mixer, so until now it didn't really make a difference whether I composed in stereo or surround.

But after two projects I did for platforms changed their delivery to Atmos (which is an incredibly difficult format to mix in, btw...), and after having seen a few setups at Remote Control with full 5.1 for mockups and composing (or the recent "home studio Atmos upgrades" of some AAA composers), I was wondering if I was missing something by sticking with stereo.

Not that I would start mixing in my writing room (which is fairly well treated acoustically, but I'm just a terrible mixer and will always leave that to professionals in their much better rooms and studios)... but it's true that the few surround experiments I have done actually make a difference in how you perceive the space and character of a mockup (not to mention real 5.1 recordings, of course).

With clients actively coming into the studio before the pandemic, it would have made sense for the "wow" factor, but nowadays everybody works remotely and I only had a few clients visit this year... so much for demoing in surround, I guess.


----------



## Nils Neumann (Jan 1, 2022)

IvanP said:


> Thank you for your nice answer, Nils! I totally agree with what you say... I always have everything mixed by an external mixer, so until now it didn't really make a difference whether I composed in stereo or surround.
> 
> But after two projects I did for platforms changed their delivery to Atmos (which is an incredibly difficult format to mix in, btw...), and after having seen a few setups at Remote Control with full 5.1 for mockups and composing (or the recent "home studio Atmos upgrades" of some AAA composers), I was wondering if I was missing something by sticking with stereo.
> 
> ...


The lack of any other comments also kind of answers the question.


----------



## CSS_SCC (Jan 1, 2022)

I am trying to dip into Dolby Atmos myself, and I find it, as previously said, the wild west. There is a very steep learning curve, and you have to make sense of all the technologies that need to come together. But I think the real issue here is the initial investment.

First, there is the barrier to entry: if you want to deliver any of your projects according to the official specs, you need to either buy a Mac as a second system or buy an extremely expensive PC from a Dolby-authorised dealer (the PC system itself, without the software, is about £10k). See one example of an authorised solution: https://www.rspeaudio.com/Dolby-Atmos-RMU-Mac-Rackmount-Dante-p/dolby-atmos-rmu-mac-dante-rack.htm

If you look closely, most of their demos are in Avid Pro Tools Ultimate ($80/month, or £2,200 for a permanent licence). I am running Nuendo 11 and am waiting to see what Nuendo 12 brings to the table before buying a (most likely second-hand) Mac and the Dolby Atmos Production Suite. The Dolby Atmos Mastering Suite is about $1k.

A good monitoring solution is already quite expensive in itself. Just going from a 2.0/2.1 setup to, ideally, 7.1.4 for monitoring is a huge step up. Dolby recommends, as a minimum, 5.1.4 from the same line of speakers.

Just my example: before starting, I had a pair of Focal Trio 6 Be and a Focal sub (very nice for stereo monitoring) and, as a spare set on my second system, a pair of Focal Shape Twins. I have spoken with a few people who have more experience than me in surround/Atmos and, ideally, I would have to buy another 8 x Focal Solo 6 Be (£6,500) just to have the same sound. As I have managed to find an ex-demo Focal Solo 6 Be and now have a (hybrid) 5.1 system, I will probably initially dish out another £1,100 for 4 x Focal Shape 40, and later another £520 for another pair. Not to mention stands, ceiling mounts and cables. Just the placement of those in my room is going to be a nightmare. As soon as I go to 5.1.4, I will have to upgrade my interface (or buy another one and connect via ADAT: Scarlett OctoPre, £370). That comes to another £2k (Thomann prices) as the cheap version for me.

If you want to start producing and monitoring content for games and other VR productions, you need something like Waves Nx Virtual Mix Room with the Waves Nx head tracker (£100), or the Waves 360° Ambisonics Tools (£320). Supposedly the latest Apple AirPods Pro have integrated head tracking, but I haven't yet seen them integrated with anything on the production side.

And this is before adding production/mixing plugins that support 5.1.4 or more channel configurations.

How many here have that kind of budget just to start off?

And this is just skimming the surface.


----------



## JohnG (Jan 1, 2022)

Well, I guess HZ wrote a while back that he likes to work in surround, no doubt he's working in Atmos by now (or whatever's next). Given how cool his scores sound, can't knock it.

I can see advantages, but there are a few drawbacks:

*1. Most people review tracks in stereo*; arguably therefore it makes sense to mix to the way the track will be auditioned. It's hard enough to guess what laptop speakers or ear buds or whatever they might use to review music we spend hours mixing and shaping. If you ever hear back a comment like, "I don't hear much bass," or "it doesn't sound that powerful," that could stem (get it? "stem?") from their use of tiny speakers to play the track back.

*2. Can we really get good enough at it so it's a plus, instead of potentially a negative?* It's hard to mix even in stereo, let alone surround. And that's leaving aside all the stems you have to print for delivery, LUFS and all that stuff -- I find it hard enough to write music without then losing even more sleep to learn and accurately hew to final delivery requirements for a dub stage.

*3. Does it include the live recordings or is it going to have to be remixed anyway?* Personally if I'm spending money (mine or theirs) to record an orchestra, I'm also spending money on an engineer who does this for a living. I do see it as part of my job to make my demos sound cool, but not part of my job to edit entrances and trim out noise from live-recorded tracks. Even engineers themselves have assistants who perform some of those tasks.

*4. Money.* It's not like you're going to set this up once and it'll never change. What about changes to standards like Atmos, or some other thing that comes along? OS upgrades and all that? Subscriptions, new "must-have" plugins or hardware? It could be pretty expensive to get it all up and running, but also expensive to keep it up to spec.

Don't get me wrong, it sounds fun and it's really entertaining to hear one's music in surround. But without a staff to deal with it and maintain it, I foresee more hassle than fun.


----------



## CSS_SCC (Jan 1, 2022)

Playing with the 5.1 setup I have now is certainly very enjoyable. The hassle of setting it up when I go to 7.1.4 is another story.

The cherry on the cake: Windows does not play nicely with Dolby Atmos, and the only way to listen to your master outside your DAW is via a Dolby Atmos A/V receiver connected over HDMI. That means your monitoring setup will have to have something like the SPL MC16 in between, which in itself is crazy:

Windows -> DAW -> Audio interface -> balanced XLR -> SPL MC16 -> balanced XLR -> 7.1.4 speaker setup
Windows -> software media player -> GPU -> HDMI -> Dolby Atmos A/V receiver -> SPL MC16 -> balanced XLR -> 7.1.4 speaker setup

Fun!

P.S. I'll let you look up the prices of a 7.1.4 Dolby Atmos receiver and an SPL MC16 yourselves.


----------



## CSS_SCC (Jan 1, 2022)

A 360° microphone for recording ambience is around £1k.
From what I understand, the rest can be recorded with close mics and room mics as usual, and you can then mix and match as necessary. For additional realism you can create your own IRs.


----------



## wunderflo (Jan 1, 2022)

I'm also about to experiment with it, just for fun and out of curiosity, using only headphones (since I really enjoy listening to the binaural headphone Atmos versions of songs on Tidal, and also for the aforementioned wild-west feel). I do believe it's quite a revolution and is here to stay, because most people consume music on headphones, and Atmos can greatly enhance that experience. For some people the standard HRTF doesn't seem to do the trick, though. I'm sure there will soon be headphones that scan your ear or whatever and automatically create an individual HRTF for you.

For those who have already tried to produce directly in this format (I haven't dared to try it myself yet):

How much does it increase the CPU and RAM footprint?
At least in Nuendo you are required to use a buffer size of 512, which introduces more latency than I'm used to. How do you deal with that?
Do you prefer to use mono or stereo sources that you then pan and position within the three-dimensional space, or do you prefer to make the source itself three-dimensional by panning multiple mics across the space (if available)?
Do you use your normal stereo mixing plugins (EQ, compressor, etc.) to treat the stereo/mono source before positioning it within the three dimensions, or do you only treat the sound after it has been positioned, using only Atmos/surround-compatible plugins?

I feel it makes me want to commit the individual sources (instruments) to mono and treat those mono sources first, before I place them all around me, at least in a busy arrangement. Would that be a very bad idea or just a bad idea? :D Conversely, if I only used a piano, for example, I'd prefer to have many different mic signals and pan those around me.
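The "commit to mono, then place it around me" idea boils down to computing one gain per speaker. A toy constant-power panner for a quad layout, purely illustrative (real surround/Atmos panners add distance, spread and elevation handling):

```python
import math

def pan_quad(x, y):
    """Constant-power gains (FL, FR, RL, RR) for a mono source in a quad field.

    x: -1 (hard left) .. +1 (hard right)
    y: -1 (fully rear) .. +1 (fully front)
    Implemented as two nested equal-power crossfades, so the summed
    power of the four gains is 1 regardless of position.
    """
    tx = (x + 1.0) / 2.0 * math.pi / 2.0  # left/right crossfade angle
    ty = (1.0 - y) / 2.0 * math.pi / 2.0  # front/rear crossfade angle
    fl = math.cos(tx) * math.cos(ty)
    fr = math.sin(tx) * math.cos(ty)
    rl = math.cos(tx) * math.sin(ty)
    rr = math.sin(tx) * math.sin(ty)
    return fl, fr, rl, rr

# Front centre puts equal energy in FL/FR and nothing in the rears;
# as y moves toward -1, the same energy migrates to RL/RR.
```

Panning a pre-treated mono source this way keeps its level stable as it moves, which is one reason committing to mono before spatialising is a workable approach.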

Need to test it. I have no idea what I'm talking about... exciting times!


----------



## CSS_SCC (Jan 2, 2022)

A few more bits, but as I said, I am still trying to figure it out, so take it with a big grain of salt. For the moment, since I don't do anything close to an "interactive experience", my approach is to treat everything like a multichannel (surround) mix and "sprinkle" things on top that are positioned in 3D space. That will probably become more precise once I move to a 5.1.4 setup.

But this is really for my own entertainment, I have never published anything, I don't make money out of it so my incentives are very different from somebody that needs to make a living.

Now, with regard to headphones, I have tried the demos of Dolby for headphones, DTS:X, Windows Sonic and Steinberg's Immerse with VST AmbiDecoder, and I am yet to be fully convinced. That may be because I'm in the bad-HRTF category, or because I've been spoiled by my monitoring setup, my headphones (Focal Clear Mg Pro) and my previous experience in a few places that had truly high-end audio setups with proper soundproofing. The difference between listening to even an upmix from stereo to 5.1 on my current set of speakers and on the Focal Clear Mg Pro is notable. But that might just be the open-back headphones...

For the moment my best experience of spatial audio has been a demo of a Jean-Michel Jarre limited-edition release on a €250k system. I would really like to get my hands on that disc, but I don't even know its name. I was working in IT support at the time, and they were preparing a demo of the sound system for some big names. I got about 15 minutes before I was hurried out.


----------



## Mishabou (Jan 2, 2022)

I've been working in Atmos for the past couple of years and before that, forever in surround.

I do mostly music and sound design for art installations and all my gigs are done in one of the immersive audio formats. As for the odd documentary and indie film gigs, it's minimum 5.1. I haven't done a stereo gig in years.


----------



## Trevor Meier (Jan 2, 2022)

Mishabou said:


> I've been working in Atmos for the past couple of years and before that, forever in surround.
> 
> I do mostly music and sound design for art installations and all my gigs are done in one of the immersive audio formats. As for the odd documentary and indie film gigs, it's minimum 5.1. I haven't done a stereo gig in years.


What’s your preferred setup when you’ve got various possible project formats? Do you work in Atmos and downmix? Or in Ambisonics and then downmix to Atmos?

Any preferred tools? I’ve been playing with the IEM Ambisonics suite inside MetaPlugin as a spatialization tool, but MetaPlugin’s limitations mean 4th-order Ambisonics is the max at the moment.


----------



## IvanP (Jan 2, 2022)

Thanks a lot for all your answers and input! As for Atmos/Ambisonics on headphones, I haven't been impressed myself. It's merely a simulation of what true Atmos is, but I imagine there's an advantage over stereo when listening to, for example, a movie on an iPad with proper Atmos-ready headphones.

The thing is, until very recently, all of my sample-based scores for TV had been delivered in stereo, since I've never been asked otherwise. Recorded scores, obviously, had proper budgets, and there 5.1 / 7.1 / Atmos deliveries have been the norm.

As for the RC studios I've had the chance to visit a few times, they were all streaming samples in surround... I always thought it was a way to impress clients (foolish of me; I never dared to ask...), but maybe it was common practice in order to print stems directly, with proper 5.1 files mixed inside the DAW, for sample-based scores on projects that, for budget or delivery-time reasons, didn't have the need or possibility of doing the mix from scratch in PT.

Going the 7.1 or Atmos route is definitely more complicated in terms of setup and mixing, but I found that using samples in 5.1 is not that hard to do and, mixing-wise, it seems almost like an advantage, as the soundstage appears wider and, especially, clearer than working in 2.1.

And if stereo is the final delivery nonetheless, the DAW will take care of the automatic downmix if needed.

That's why I was wondering how many people are/were using at least 5.1, outside the guys at Remote C.


----------



## Stephen Limbaugh (Jan 2, 2022)

My _writing_ template is in 7.1.2 now, as my sample libraries are all multimic except for some solo stuff.

I deliver 5.1, and charge extra for an ATMOS mix. Demos are sent in stereo.

Also, everyone with AirPods Pro, 3rd-gen AirPods, AirPods Max and an iPhone/iPad/Apple TV now has the ability to stream immersive audio/ATMOS mixes. Home theater systems everywhere are being upgraded to support ATMOS.

There’s this lingering myth about theaters in po-dunk midwestern towns having blown LFEs, dead channels, and no ATMOS upgrades that simply isn’t true anymore. The 90s were a long time ago, and it’s just not the case that the theaters outside of LA suck.

Go to Moore Oklahoma, Cape Girardeau Missouri, Mentor Ohio, Lincoln Nebraska… the theaters there are WAY nicer than the shit in LA. And you can bring booze INTO the theater.

I get that other pros like Meyerson or Sands will strongly advise against aggressive panning to the surrounds, or dedicating some sounds to the LFE only. No offense to either of them, but I do not think they have seen a movie in a theater outside of the coasts in a long time. 😉

The infrastructure is there for composers to fully commit to immersive on projects big and small. Composers make sure their instruments are in tune even though most people can’t hear a few cents sharp or flat… so composers should approach immersive with the same dedication to that aspect of the craft.


----------



## IvanP (Jan 2, 2022)

Stephen Limbaugh said:


> My _writing_ template is in 7.1.2 now, as my sample libraries are all multimic except for some solo stuff.


Thanks! Want to share how you route your mics in a surround environment? Do you send the surround mics directly to the surround output channels, or do you load/play them all at once across the full 5.1 output to get the fullest surround image, and then use a surround panner to add a little more spice to the front, rear, etc.?

Thanks!


----------



## Stephen Limbaugh (Jan 2, 2022)

IvanP said:


> Thanks! Want to share how you route your mics in a surround environment? Do you send the surround mics directly to the surround output channels, or do you load/play them all at once across the full 5.1 output to get the fullest surround image, and then use a surround panner to add a little more spice to the front, rear, etc.?
> 
> Thanks!


Routed as objects, actually 😁… and generally positioned in front of the listener, except for the high surrounds, which are in the top back corners.

VSL has a number of vids on their YouTube channel about routing and managing their mic positions.

Also, FabFilter just updated their limiter and Pro-Q 3 to handle ATMOS channel configurations.


----------



## dgburns (Jan 2, 2022)

In my opinion, hats off to Apple for adding immersive sound to Logic. It will be interesting to see how it gets used (more likely, abused).

I think it will be difficult to convince many low- to mid-budget productions to go Atmos for score. Atmos for the final audio mix, yes, but not for the score, much less for licensed music.

I still think quad is a good compromise for score, for the time being…


----------



## quickbrownf0x (Jan 2, 2022)

I'm writing in 4.1, does that count? I figured it's probably worth the investment and DAW resources if that's where we're headed, even though right now most people will listen to your work in stereo and your stems will get tweaked, remixed at the dub stage anyway. 

Apart from that it's just loads of fun to mess about with.


----------



## Nils Neumann (Jan 2, 2022)

quickbrownf0x said:


> I'm writing in 4.1, does that count? I figured it's probably worth the investment and DAW resources if that's where we're headed, even though right now most people will listen to your work in stereo and your stems will get tweaked, remixed at the dub stage anyway.
> 
> Apart from that it's just loads of fun to mess about with.


Yep, I think 4.1 is the most cost-effective way for a composer to dip into that world and deliver for dubbing stages.


----------



## JohnG (Jan 2, 2022)

Mishabou said:


> I haven't done a stereo gig in years.


I expect anyone doing any kind of paid work has to deliver in at least 5.1, if not 7.1, these days, and that's been true for what, 10 years? 15?

But that's not the same as working in surround day in, day out. I know you know that, @Mishabou 

Personally, I don't want to be the person setting up and maintaining all that. Every client seems to have different delivery specs, and I don't want to be the one fiddling with them all. I have an engineer do that, even if it's mostly or completely in the box.


----------



## Scoremixer (Jan 2, 2022)

Stephen Limbaugh said:


> I get that other pros like Meyerson or Sands will strongly advise against aggressive panning to the surrounds, or dedicating some sounds to the LFE only. No offense to either of them, but I do not think they have seen a movie in a theater outside of the coasts in a long time. 😉


Context is everything. Are you making music for music's sake, or to picture/media/library/games? If you're publishing direct to iTunes, then great, have at it, be as bold as you want; there are no aesthetic rules. If you're supplying stuff that others have to work on, then you have to be more conservative with the layout and the tech overhead.

A dubbing mixer has to figure out how to budget 128 Atmos channels across a whole production, and will not take kindly to half of them being used for music cues... In fact, before it even gets to that stage, the music editor would probably already have gone on strike trying to checkerboard two sets of full Atmos tracks, conform object automation data, etc.
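The back-of-envelope arithmetic behind that budget, assuming the commonly cited limit of 128 simultaneous Atmos signals and a 10-channel 7.1.2 bed (the per-set music object counts below are hypothetical, just to show how fast the budget disappears):

```python
# Commonly cited Dolby Atmos figures: 128 simultaneous audio signals,
# of which a single 7.1.2 bed consumes 10, leaving the rest for objects.
TOTAL_SIGNALS = 128
BED_CHANNELS = 10                      # one shared 7.1.2 bed
objects_available = TOTAL_SIGNALS - BED_CHANNELS

# Hypothetical: two checkerboarded music sets of 32 objects each.
music_objects = 2 * 32
remaining = objects_available - music_objects

print(objects_available)  # 118 objects total
print(remaining)          # 54 left for dialogue, FX and Foley combined
```

Even modest object counts per music stem eat roughly half the available pool, which is exactly why dub stages push back.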

The tech might be at your fingertips (and I'd strongly encourage people to experiment and get a feel for it, because it is v cool) but that doesn't mean it's necessarily appropriate for every scenario. 

If the OP normally gets his stuff mixed by an external mixer, then I'd say quad is actually a great intermediate format to work with: you get a degree of immersion while keeping the tech overhead relatively straightforward; it's de facto compatible with a lot of sample libraries and gives your mixer something more than plain stereo to work with, without an overwhelming number of channels. It was certainly a popular way of working for Remote Control-based composers.


----------



## JohnG (Jan 2, 2022)

Stephen Limbaugh said:


> I get that other pros like Meyerson or Sands will strongly advise against aggressive panning to the surrounds, or dedicating some sounds to the LFE only. No offense to either of them, but I do not think they have seen a movie in a theater outside of the coasts in a long time.


Well, I am not sure it's only in Los Angeles or New York (east/west coast USA) that the sound systems may be sub-par or, put more accurately, vary from the official specifications.

My own experience suggests there's meaningful uncertainty in theatrical sound about how carefully everything is calibrated and how well it's maintained. If you measure it, there can sometimes be a 6 dB deviation from spec in the surrounds, for example. And home systems, while drastically better than before, are also not necessarily well adjusted for the listening space.

Certainly, more and more people have decent emulation or actual Dolby setups at home, but I am very happy to leave that to Netflix or whomever to work out with the engineer.

*Nein Danke*

I prefer to hand it off to someone who is constantly abreast of the latest specs and technology. It's a preference thing -- I simply don't want to fuss with it and don't really want to guess whether what I'm hearing is going to match 100% what they do on the dub stage. Accordingly, I leave it to an engineer who focuses on all of that. 

I just looked up the "official" specs, and they are quite detailed. So detailed that it makes me wonder how rigorously they actually get implemented world-wide. With 80% of box office outside the US and a huge upswing in China and elsewhere, it's certainly a topic of interest.

Here are the specs I found in a quick search:

https://professional.dolby.com/siteassets/cinema-products---documents/dolby-atmos-specifications.pdf


----------



## Stephen Limbaugh (Jan 2, 2022)

@Scoremixer and @JohnG yes… but I’m an idealist you see 😉🤠


----------



## JohnG (Jan 2, 2022)

Stephen Limbaugh said:


> @Scoremixer and @JohnG yes… but I’m an idealist you see 😉🤠


I like ambition! And audacity, Stephen. They make the world go round.

Still, I thought @Scoremixer offered helpful perspective for the practical world of music editors and dub stages.

Certainly a lot of food for thought in the thread.


----------



## Gerhard Westphalen (Jan 2, 2022)

A lot of composers I work with (whom you might call "A-listers") still only work and deliver in stereo, which the scoring mixer then takes up to surround.

IMO, Atmos is now much more useful to composers simply because it's possible to release in Atmos. In terms of delivering a score, I don't think it's worth it for composers to go past 5.1, since the scoring mixer or the dub stage can easily take it up to Atmos. As a composer, trying to deliver in Atmos is just a headache and a source of many potential problems. If I were a composer, I'd leave it to someone else to handle and to make sure it all works properly in a theatre.

If you're releasing your score as a soundtrack album, then it's definitely worth going up to Atmos, even if you're just working on a short or a YouTube series that's stereo only. You can remix into Atmos and release the album in Atmos. Just make sure you hire an Atmos mastering engineer (like me) to make sure it works on all release formats. In this case you could even do it just on headphones (as long as you're hiring a mastering engineer with a full speaker system).

I hadn't really put much thought into this before, but it does seem like a great option for composers: up to 5.1 for the main project deliverables, then a separate Atmos mix done on headphones for the album release.


----------



## charlieclouser (Jan 2, 2022)

I've been mixing and delivering 5.1 stems forever, but I recently delivered my first score that was to be mixed in Atmos. In pre-delivery discussions with the re-recording engineers on the dub stage, they expressed a strong preference for me to simply deliver stereo stems which they would then distribute into the immersive field. Since each stem would now take only 2 channels instead of my usual 6, this let me spread things out across more stems, giving the mixers more flexibility to spray things around the room, and apply immersive reverbs or panning to elements that were more separated than before.

It went well.

In an off-the-record sidebar with one of the mixers, he basically told me that if I had delivered 5.1 or quad stems they probably would have just deleted everything but the front L+R pair from each stem and made do with that. But this was a quick mix for a cable feature, not a AAA movie mix, so this situation might not be typical when projects are bigger and schedules are more generous.

So I'm glad that I delivered more+narrower stems than my usual fewer+wider package.


----------



## Manaberry (Jan 3, 2022)

I haven't started mixing in Atmos yet (I have all the tools for it, just not the time), but I was thinking of continuing to compose in stereo, mix in stereo, and then properly spread to Atmos afterward. I feel like that's not a bad idea after reading your post, @charlieclouser! Thanks for sharing your experience.

The sad part is I can only monitor in binaural. So, as @Gerhard Westphalen said, I must send it to an Atmos-ready mixing engineer (I've got a few on my contact list, thank god).
By the way, Gerhard, do you know how I can monitor Atmos music (from Tidal, for instance) for reference? I feel like it's almost impossible to find those files or a proper way to monitor them (especially when you only have the binaural renderer). Thanks!


----------



## Rctec (Jan 3, 2022)

IvanP said:


> Hi! First of all, best wishes for a new year full of nice projects and free of pandemic-related issues!
> 
> May I ask how many of you are programming and composing for media directly in a 5.1 template? With Atmos and surround basically the new norm on TV platforms, and in 90% of the media projects I've done since last year, I'm thinking of finally switching to at least a full 5.1 sample-based template instead of stereo.
> 
> ...


… I was just going to tell you of the joys of writing in surround, how much more space you have to let your imagination fly, how your productions will be refined and more effective … when I saw “(besides RCP based setups)”... and I felt sad to be excluded for doing something for the last thirty years. :-(
It's back to mono for me, mate!
-Hz-


----------



## IvanP (Jan 3, 2022)

Rctec said:


> … I was just going to tell you of the joys of writing in surround, how much more space you have to let your imagination fly, how your productions will be refined and more effective … when I saw “(besides RCP based setups)”... and I felt sad to be excluded for doing something for the last thirty years. :-(
> It's back to mono for me, mate!
> -Hz-


Sorry Hans... lost in translation, then! I meant the opposite: I was referring to the fact that I've seen you guys working in surround for years now, and clearly enjoying it! That's the point of my thread, essentially!
When I was lucky enough to visit a few RC studios, I didn't dare ask why you were doing mockups in surround, but it seems I can't escape the question now, with surround and Atmos everywhere!

Happy New Year, btw, and thanks a lot for your input! We would all definitely love to read about the benefits of doing mockups in quad/surround, etc.


----------



## charlieclouser (Jan 3, 2022)

Manaberry said:


> I haven't started mixing in Atmos yet (I have all the tools for it, just not the time), but I was thinking of continuing to compose in stereo, mix in stereo, and then properly spread to Atmos afterward. I feel like that's not a bad idea after reading your post, @charlieclouser! Thanks for sharing your experience.


Well, my recent switch to delivering stereo stems for an Atmos mix was kind of a one-off, and the decision to go that way was due primarily to the fact that it was a cable feature (for the Epix movie channel) and they needed to mix a 90-minute horror film, packed with crazy fx and wall-to-wall score, in only three days on the stage. So it was sort of a case of "pick your battles" more than a new default way of working.

I'll probably stay in 5.1 mode as my default, and moving forward I'll decide on a case-by-case basis how to deal with immersive formats - and I'll deliver in whatever width the mixers want to see.


----------



## IvanP (Jan 3, 2022)

charlieclouser said:


> I've been mixing and delivering 5.1 stems forever, but I recently delivered my first score that was to be mixed in Atmos. In pre-delivery discussions with the re-recording engineers on the dub stage, they expressed a strong preference for me to simply deliver stereo stems which they would then distribute into the immersive field. Since each stem would now take only 2 channels instead of my usual 6, this let me spread things out across more stems, giving the mixers more flexibility to spray things around the room, and apply immersive reverbs or panning to elements that were more separated than before.
> 
> It went well.
> 
> ...


Thanks a lot @charlieclouser ! Are you talking about the objects, or the whole music being delivered in stereo stems?

We had this conversation between my mixer and the dubbing mixer for a final Atmos delivery on my last movie. We decided to deliver the music in 7.2.4 and then give stereo stems for the objects only, for the object slots they gave us permission to use (as the rest were going to be used for Foley and sound design).

Don't you think that by giving stereo stems for everything, the music could end up being misrepresented if placed elsewhere or differently by the dub mixer? Not to mention losing the dedicated surround reverb placement and all... since dubbing mixers will mostly focus on Foley, dialogue and sound design, and have little to no time to deal with music.
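For scale, here is a rough tally of that bed-plus-objects delivery style. Illustrative only: it assumes a standard 10-channel 7.1.2 bed and the theatrical limit of 128 Atmos inputs; the actual channel counts and slot allocation in the post may differ.

```python
# Illustrative channel budget for a music delivery of one 7.1.2 bed plus
# some stereo object stems. A theatrical Atmos session has 128 inputs in
# total, commonly a 10-channel 7.1.2 bed plus up to 118 objects shared
# with dialogue, FX and Foley. The number of music object slots is assumed.
BED_7_1_2 = 10
TOTAL_ATMOS_INPUTS = 128

def atmos_channel_budget(n_stereo_object_stems):
    """Return (inputs used by music, inputs left for everything else)."""
    used = BED_7_1_2 + 2 * n_stereo_object_stems
    return used, TOTAL_ATMOS_INPUTS - used

# Say music is granted 8 stereo object slots:
print(atmos_channel_budget(8))  # (26, 102)
```

So even a generous music allocation leaves plenty of object slots for the rest of the dub, which is presumably why mixers ration them the way Ivan describes.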


JohnG said:


> I prefer to hand it off to someone who is constantly abreast of the latest specs and technology. It's a preference thing -- I simply don't want to fuss with it and don't really want to guess whether what I'm hearing is going to match 100% what they do on the dub stage. Accordingly, I leave it to an engineer who focuses on all of that.
> 
> Just looked up the "official" specs, and they are quite detailed. So detailed, it makes me wonder how rigorously those specs actually get implemented world-wide. With 80% of box office outside the US and a huge upswing in China and elsewhere, it's certainly a topic that is of interest.


Absolutely! And some platforms have even stricter limitations in terms of dynamics and final delivery than theatrical Dolby... I will always use dedicated people for this.


----------



## AR (Jan 3, 2022)

hey guys, gonna chime in, too.

Made the switch to 7.1.2 last year. Before that I was working in Quad+LFE. Then my studio moved to a new location and I thought to myself: why not go future-proof? So getting a few more speakers (especially mounting those on the ceiling :S ) wasn't enough. Cubase (unfortunately) had to be replaced; I went with Nuendo (since I grew up with Steinberg). My template (which has been in its building phase since June '21) has to be routed in a new way, and it takes a whole lot more RAM and CPU. Right now I'm finishing the Orchestral Tools template (or "Teldex template", as I call it). The main PC handles just the disabled template (consisting of 3 different orchestras from the following rooms: Teldex, Lyndhurst and Zlin, plus a production suite with Scoring Synths, LA Modern Percussion and stuff that isn't really room-dependent, like all my hardware synths and so on). The DAW is connected to 5 PCs that are running VEPro audio only! (They handle the incoming 7.1 audio, or LCR+Surrounds+Sides.) These stubborn slaves process it with Cinematic Rooms, Gullfoss, Decapitator, Kramer Tape and the like to a near-95% mix, and then it goes back to Nuendo, where it's upmixed with stuff like Halo, a little bit of Symphony 3D and some Feather Hall to a 7.1.2 mix. I had my room treated by Hofa-Akustik for this new surround setup. All in all it was a big step up (financially, and in terms of sound). And like HZ just said, it's so much joy. I can confirm that. Plus, an extra hint... Everybody on Gearslutz says "oh well, as soon as you open up the surround can, you'll get in trouble with room setup etc.". I tell you what: as soon as you open Pandora's box, it will be so much more forgiving on your mixes (as long as you keep some rules in mind; watch Alan Meyerson's videos on MWTM for that). And I say the following with some caution... surround will make your "inner Stravinsky" a better orchestrator.


----------



## IvanP (Jan 3, 2022)

AR said:


> hey guys, gonna chime in, too.
> 
> Made the switch to 7.1.2 last year. Before that I was working in Quad+Lfe. Then my studio moved to a new location and I thought to myself: Why not go future-proof?
> ...


Wow! Amazing setup! 

Do you take your mixes out of the room and have them done by a dedicated mixer in another studio? 
If so, do you print 7.1.2 stems for mixing (effects and reverb stems on separate tracks, I guess)? Or do you give all the stereo / mono / whatever files from scratch, in order to let the mixer have all the freedom? 

Thanks!


----------



## Rctec (Jan 3, 2022)

IvanP said:


> Sorry Hans...lost in translation then! I meant the opposite: I was referring that I've seen you guys working in Surround for years now and see you definitely enjoying it! That's the point of my thread, essentially!
> When I was lucky to get to visit a few RC studios a few times I didn't dare ask why you were mockuping in Surround), but it seems I can't escape the question now with surround and Atmos getting everywhere now!
> 
> Happy new Year Btw and thanks a lot for you input! We would all definitely love reading about the benefits of doing mockups in Quad / Surround, etc


Ok! Got it! … first the bad reasons:

A piece of music can easily fall apart in ATMOS. What I mean by that (and it comes from experience) is that since you can now separate all the elements and distance them from each other, it often plays havoc with the harmonic cohesion of a piece. (Plus, of course, the loudness of each musical element is affected by the proximity of whichever speaker you're sitting nearest to.)
Simply put, if your harmony is up above the listener and the tune is separated into the front speakers, the tune becomes some sort of harmonic orphan, since your ears and your brain cannot 'time-align' the different points of musical source. So you have to really know what you're doing as the creator and composer, and not leave it to the dubbing engineer, who will either err on the side of caution and narrow everything, or go on a wild space trip. Not his fault. He's not the composer. You are.
It's really quite astonishing what happens to music when you rip the sections apart without planning it. There are reasons for how an orchestra becomes a single source of sound hitting you, with only minimal time delay between the sections. And remember - their spacing is left to right and front to back. Usually. Gustav Mahler was very deliberate when he moved the brass into the balcony in his second symphony, for example. But he was as much a producer and sound engineer as he was a composer and conductor. Just look at the thousands of footnotes trying to be detailed about dynamics in his scores…
There are technical problems that still need solving to go from ATMOS to Apple's Spatial Audio.
We did the "Dune" soundtrack CDs in Dolby ATMOS - before Apple finished their software for Spatial Audio. When we finally heard it, it was sort of a disaster, but the good people at Apple engineering wrote some code for us to fix it. 
Here is one “How To” document on how to deal with the formats:


If You Are Mixing Dolby Atmos For Apple Music - Read This Now | Production Expert
These are the early days of immersive music streaming, and there are many questions about Spatial Audio coming from both consumers and professionals. Nathaniel Reichman doesn't claim to have all the answers, but he has already mixed and mastered several albums in Spatial Audio and has learned from t
www.pro-tools-expert.com

Now the good reasons:

I've always believed that audio should be a 360 experience. When, all those years ago, we built our proprietary sampler, it was very much out of the belief that we should record - and therefore reproduce - our samples in full surround.
It literally opened up a sonic world twice as big as stereo, which is obvious. Yes, it made mixing over a thousand tracks on - for example (because I remember the moaning) - "The Dark Knight" a bit of a challenge for Alan and the music editors. 
But it let me work in a much wider sonic world, always hearing when things were too far apart, too much in the rears (it can get really distracting fast…) or not harmonically cohesive any more.
But sometimes you want that, to startle. There are whole sections in that score where only the subwoofers are playing. And it's a real shock when the main speakers cut out…

So, yes, it's good for having fun with. But beyond this, you can get a far truer representation of an immersive world. And I love having an audience Inside - not Outside - of the music…


----------



## IvanP (Jan 3, 2022)

Rctec said:


> Ok! Got it! … first the bad reasons:
> 
> a piece of music can easily fall apart in ATMOS. What I mean by that (and it comes from experience) is that since you can now separate all the elements and distance them from each other, it often plays havoc with the harmonic cohesion of a piece. (Plus, of course, the loudness of each musical element is effected by the proximity of which speaker your sitting nearest to).
> Simply put, if your harmony is up above the listener and the tune is separated into the front speakers, the tune becomes some sort of harmonic orphan, since your ears and your brain can not ‘time-align’ the different points of musical source. So you have to really know what your doing as the creator and composer and not leave it to the dubbing engineer who will either err on the side of caution and narrow everything or go on a wild space trip. Not his fault. He’s not the composer. You are.
> ...


Thank you Hans, Charlie, John and everybody! All these answers are Gems! Thank you, all, for your time!

Is it fair to assume that, since for most projects everybody will be using a dedicated music mixer, composing in quad, 5.1 or Atmos is essential for the creative aspect it brings, but you are still having the mixes done from zero (with our stereo or surround premixes as reference, I guess) - meaning that everything is still printed into PT as mono or stereo files?


----------



## Nils Neumann (Jan 3, 2022)

Rctec said:


> Ok! Got it! … first the bad reasons:
> 
> a piece of music can easily fall apart in ATMOS. What I mean by that (and it comes from experience) is that since you can now separate all the elements and distance them from each other, it often plays havoc with the harmonic cohesion of a piece. (Plus, of course, the loudness of each musical element is effected by the proximity of which speaker your sitting nearest to).
> Simply put, if your harmony is up above the listener and the tune is separated into the front speakers, the tune becomes some sort of harmonic orphan, since your ears and your brain can not ‘time-align’ the different points of musical source. So you have to really know what your doing as the creator and composer and not leave it to the dubbing engineer who will either err on the side of caution and narrow everything or go on a wild space trip. Not his fault. He’s not the composer. You are.
> ...


So are you still using 5.1 in your studio or did you upgrade to Atmos?

I remember your comments on 5.1 inspired me to upgrade to surround.


----------



## AR (Jan 3, 2022)

IvanP said:


> Wow! Amazing setup!
> 
> Do you take your mixes out of the room and do it with a dedicated mixer in another studio?
> If so, do you print 7.1.2 stems for mixing? (effects and reverbs stems on separate tracks, I guess?) or you give all the stereo / mono / whatever files from from scratch in order to let the mixer have all the freedom?
> ...


I actually did have a dedicated mixing room, where my colleague and I sat down in the past and discussed what could be filtered in the lows and highs for some sections. But since I'm working with the same samples (recorded in the same room), we started converting the mixing room into a recording room, went back to the main room (or "cave", as my wife calls it) and started mixing in the box. The good thing about having samples coming from the same room is that you have balanced loudness and the same room build-up. That helps a lot during mixing. Also, for soloists coming here to record their stuff, we have a dedicated Cubase machine with MIR Pro that "beams" the musician directly into Teldex Studios or Zlin hall. Funny stuff. We had a violinist here (she also recorded some stuff with her Guarneri violin at Teldex for Orchestral Tools), and I gave her the MIR Teldex room on her headphones; she immediately started reacting to the room space, instead of having a dry room where it feels uninspiring. Pity I don't have Lyndhurst hall as a multi impulse response yet )))). Which brings me back to your question: these processed mixes go to the dubbing stage as they are. I'm careful with using experimental stuff. I don't use metallic stuff for the surrounds. My head is always in conflict with the Dolby X-Curve and raising the highs. I also think about things like: when you have a dedicated bass drum LFE track and a taiko LFE track playing at the same time, do I need both LFE tracks on? Stuff like that. Also, how much level is going to the sides? To me, the side speakers are good for giving the audience a "live concert feeling", but I always keep in mind that even then the orchestra would still sit on the stage, so your "main information" comes from the fronts. Route to the center? Yes, but only contrabasses, low winds or boom perc. Of course, rules are meant to be broken if we are talking about a movie like "Gravity", where immersive gets raised to the next level.


----------



## IvanP (Jan 3, 2022)

AR said:


> I actually did have a dedicated mixing room, where my colleague and I sat down in the past and discussed what could be filtered in the lows and highs for some sections. But since I'm working with the same samples (recorded in the same room) we started to convert the mixing room into a recording room and went back to the main room (or "cave" as my wife calls it) and started mixing in the box.
> ...


Amazing @AR! Thanks a lot for your insight and sharing your workflow!


----------



## Trevor Meier (Jan 3, 2022)

AR said:


> And I gave here the MIR Teldex Room on her headphones and she immediately started reacting to the room space, instead of having a dry room where it feels uninspiring. Pitty, I don't have Lyndhurst hall as a multi impulse response, yet )))).


This is a great tip, I'm going to try this in the next tracking session.


AR said:


> These processed mixes go do the dubbing stage as they are.


Do you send fully mixed stems (reverb and all) to the dub? Or mix and verb separately?


AR said:


> I'm careful with using experimental stuff. I don't use metallic stuff for the surrounds. My head is always in conflict with Dolby X-Curve and raising the highs. I also think about stuff like, when you're having a dedicated Bass Drum LFE track and a Taiko LFE track and they're playing at the same time. Do I need both LFE tracks on? Stuff like that. Also, how much level is going to the sides? To me, the side speaker are good to give the audience an "live concert feeling", but I always keep in mind that if it is so, the orchestra would still sit on the stage, so your "main information" comes from the fronts. Route to the center? Yes, but only contrabasses, low winds or boom perc. Of course rules are meant to be broken if we are talking about a movie like "Gravity" where immersive gets raised to next level.


Is there an example of your work that is, in your mind, a successful blend of the available tech & techniques, post-dub? These are great tips for someone just wading into this creative space. I'm building up a reference library to keep my head in check as I try new things.

Thanks again @Rctec @charlieclouser @AR and everyone for the tips. It's fun to wade into these new possibilities, and great to be able to share tips & pitfalls as we go. Love VI-C for this.


----------



## AR (Jan 3, 2022)

Trevor Meier said:


> This is a great tip, I'm going to try this in the next tracking session.
> 
> Do you send fully mixed stems (reverb and all) to the dub? Or mix- verb?
> 
> ...


I always keep the original samples & recordings separate from the reverb. Reverb tracks and low instruments are the most edited tracks at the dubbing stage, btw. That means a lot of data for you. So, for example, the brass section consists of Hi + Mid + Lo and a matching reverb stem for each: 6 stems of Dolby Atmos. A lot of data, which regularly consists of empty audio. But to keep the director happy and stay friends with the dubbing mixer, you've got to deliver it that way. Sometimes composers even separate brass sections into long Hi/Mid/Lo and short Hi/Mid/Lo brass. That's even crazier on CPU, data space and data prep. Others go with just 1 stem for the whole brass section. Hi & Lo brass would be a good starting point and is not so heavy on CPU.
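For a sense of scale, a back-of-envelope sketch of what six such stems add up to on disk. All figures are assumed for illustration (48 kHz, 24-bit, a 3-minute cue, stems printed as 10-channel 7.1.2 beds); real deliveries vary.

```python
# Rough, illustrative estimate of the footprint of the stem scheme above:
# Hi/Mid/Lo brass plus a matching reverb stem each = 6 stems, each printed
# as a 7.1.2 bed (10 channels). Sample rate, bit depth and cue length are
# assumed values, not taken from the post.

def stem_gigabytes(n_stems, channels, seconds, sample_rate=48_000, bytes_per_sample=3):
    """Uncompressed size in GB for a set of multichannel stems (24-bit = 3 bytes)."""
    total_bytes = n_stems * channels * seconds * sample_rate * bytes_per_sample
    return total_bytes / 1e9

# A 3-minute cue, 6 brass stems, 10 channels each:
print(round(stem_gigabytes(6, 10, 180), 2))  # 1.56 (GB, for brass alone)
```

Multiply that by every section (and double it for the long/short split mentioned above), and the "lot of data" comment becomes very concrete.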


----------



## AR (Jan 3, 2022)

This, for example, is my VePro Brass rig. Not so much processing going on here, but every stem has its own Cinematic Rooms or Exponential Audio. My goal was to be able to hear a near-finished mix while composing. My main DAW is able to play back a whole composition of several minutes with lots of instruments (200+) that have expression-mapped articulations. What my computer isn't able to do is handle all the mixed stems. So Junkie XL gave me that idea, and I took it a little bit further and offloaded all the processing onto the VePro computers. (I think that many composers who work in stereo don't have that problem.)
It takes a bunch of computers for some processing. And here I wouldn't recommend going with just one 3950X as a VePro slave to handle your whole orchestra. The problem is network latency. Don't get fooled by the IP address shown on the upper taskbar in the picture below; this is just the communication port for the onboard LAN card to work with Remote Desktop. VePro has its own dedicated 10G network. And although I connected every computer via 10G, I limited the incoming channels to 1G. But that's another story, and I'm happy to share my trial and error there, too. So, as you can see, surround sucks up a lot of CPU. You can try staying ITB by using a lot of DSP cards, but you will limit yourself to a system that will run out of CPU sooner or later. And then you will have the problem that all your PCI slots are occupied. Of course you can rebuild the whole setup and later on put the UAD cards into a different computer. Give it a little thought first: these UAD systems are very expensive compared to a slave computer with some nice surround plugins. I also tried a system with used RME cards (bought on eBay) in each slave. Boy, that was still expensive - and great, and low latency. Unfortunately I ran out of outputs, hahaaaaa. So that was a shot in my own foot.
PM me if you've got further questions about this and that. I'm happy to help out.
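To put some rough numbers on the network side of a rig like this: uncompressed multichannel audio is surprisingly light on bandwidth compared to the link speeds mentioned above. A sketch, assuming 48 kHz / 32-bit float streams (protocol overhead and latency behaviour are a separate matter, as the post notes):

```python
# Back-of-envelope payload bandwidth for streaming uncompressed surround
# audio between machines. Assumed: 48 kHz sample rate, 32-bit float
# samples. This counts audio payload only, not network/protocol overhead.

def stream_mbit_per_sec(channels, sample_rate=48_000, bits_per_sample=32):
    """Payload bandwidth in Mbit/s for one uncompressed audio stream."""
    return channels * sample_rate * bits_per_sample / 1e6

per_71_stem = stream_mbit_per_sec(8)      # one 7.1 (8-channel) stem
print(per_71_stem)                         # 12.288 Mbit/s
print(int(1000 // per_71_stem))            # ~81 such stems fit in a 1 Gbit link
```

Which suggests the practical ceiling in these setups is usually CPU and buffer latency rather than raw link bandwidth, matching AR's experience.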


----------



## Living Fossil (Jan 3, 2022)

Rctec said:


> Gustav Mahler was very deliberate when he moved the brass into the balcony in his second symphony, for example.


The problem with these offstage instruments (which were not unusual, e.g. in operas, quite a while before Mahler) is that they usually work much better in the composer's imagination than in actual performances. I remember when, as a student, I went to a concert with high expectations of how the offstage instruments would sound, after having seen them when studying the score (iirc it was a Mahler symphony, probably No. 6), and it was quite disappointing, almost cringeworthy. I thought a lot about that (and similar experiences), and to this day I don't see a solution which could make this effect work better in concerts. It always has this taste of "yeah, the intention behind it is clear, but still..." 

Nevertheless, imho there is one piece of music where the offstage instruments work extremely well; it's in the _Dies irae_ of Berlioz' _"Grande messe des morts" _(better known as "Requiem") and there are four sections involved (one from each geographic direction; it's symbolic for the last judgement).

(the passage happens around 15 seconds after the start of the video)


On a more general note (regarding surround concepts for music), one issue is that the human hardware (i.e. the ears) _is_ stereo. The reason why 3D setups still have a different impact relies only on the interaction of the brain with small movements of the head.
And then, in addition, there is the fact that the brain, if it's really trying to analyze music for harmonic and contrapuntal aspects, has the tendency to narrow the heard image down to mono (that's basically what "focussing" literally means).
These aspects also sum up my personal view on the topic: I think spatial audio is a great thing where movement in space is an essential part of the music (i.e. in sound installations or Virtual Reality projects). 
And in film it really works well - like in the Berlioz example - when it's about putting the viewer in the middle of the action.
If you see a hurricane in a movie and suddenly you're in its middle, that's a great experience.
When watching two people in a romantic comedy, I'd rather keep some distance...


----------



## quickbrownf0x (Jan 3, 2022)

AR said:


> This for example is my VePro Brass rig. Not so much processing going on here. But every stem has it's own Cinematic Room or Exponential Audio. My goal was being able to hear a near finished mix while composing.
> ...


Good to find I'm not alone in this, apparently. I ran into many of the same issues. Even now, with 3 slaves and a brand-new, maxed-out DAW PC on a 10 Gb network, I still need to switch to a buffer size of 1024 with my current Quad.1 template, or Cubase will start to shit its pants. I was wondering how others are handling this. I'm moving from Focusrite to UAD this week, so maybe that'll help _somewhat_.


----------



## JohnG (Jan 3, 2022)

Living Fossil said:


> there are four sections involved (one from each geographic direction; it's symbolic for the last judgement)


That Berlioz! Four sections; and how many timpani players is that?? Five? Six? Eight???

Enjoy the extravagance, and what a venue, with Dudamel conducting. Too bad he [Berlioz] wasn't alive to see it.


----------



## Living Fossil (Jan 3, 2022)

JohnG said:


> That Berlioz! Four sections; and how many timpani players is that?? Five? Six? Eight???


According to the score it's ten players for 8 pairs (16 timpani in total)...


----------



## Mishabou (Jan 3, 2022)

Trevor Meier said:


> What’s your preferred setup when you’ve got various possible project formats? Do you work in Atmos and downmix? Or in Ambisonics and then downmix to Atmos?
> 
> Any preferred tools? I’ve been playing with the IEM Ambisonics suite inside MetaPlugin as a spatialization tool, but MetaPlugin’s limitations mean 4th-order Ambisonics is the max at the moment


With my template, I can work/deliver in any format. I usually work in Atmos and downmix.
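Not Mishabou's actual chain, but for anyone curious what "downmix" can mean in the simplest case, here is a minimal sketch of one common fold-down convention: the ITU-R BS.775 5.1-to-stereo coefficients (centre and surrounds folded in at -3 dB, LFE usually dropped). An Atmos renderer's own downmix is considerably more involved.

```python
import math

# Minimal sketch of a standard ITU-R BS.775 5.1 -> stereo fold-down.
# Illustrative only; it operates on one sample frame at a time and
# ignores the LFE channel, as many broadcast downmixes do.

K = 1 / math.sqrt(2)  # ~0.707, i.e. -3 dB

def downmix_51_to_stereo(l, r, c, lfe, ls, rs):
    """Fold one frame of 5.1 (L, R, C, LFE, Ls, Rs) to stereo (Lo, Ro)."""
    lo = l + K * c + K * ls
    ro = r + K * c + K * rs
    return lo, ro

# A centre-only signal lands equally (and 3 dB down) in both channels:
print(downmix_51_to_stereo(0.0, 0.0, 1.0, 0.0, 0.0, 0.0))
```

Working this way while composing (auditioning the fold-down, not just the full bed) is one answer to the "does it translate to stereo?" question raised above.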


----------



## Nils Neumann (Jan 3, 2022)

Mishabou said:


> With my template, i can work/deliver in any format. I usually work in Atmos and downmix.


Would you mind sharing some thoughts about your downmix process? Especially what do you do in the composing/mixing stage so the mix translates well to all formats in the end?


----------



## Mishabou (Jan 3, 2022)

Nils Neumann said:


> Would you mind sharing some thoughts about your downmix process? Especially what do you do in the composing/mixing stage so the mix translates well to all formats in the end?


I will answer your question as soon as I can.

I'm in the midst of setting up a second studio at my house, as I can't access my main studio, located in a commercial building, thanks to Covid. Quite a challenge trying to meet Atmos specs in a less-than-ideal small space.


----------



## CSS_SCC (Jan 3, 2022)

Living Fossil said:


> The problem with these offstage instruments is (which where not unusual e.g. in operas quite a while before Mahler) that they usually work much better in the composer's imagination than in the actual performances. I remember when as a student i went to a concert in high expectations of how the off stage instruments would sound after having seen them when studying the score (iirc it was a Mahler symphony, probably Nr. 6) and it was quite disappointing, almost cringe. I though a lot about that (and similar experiences) and until today, i don't see a solution which could make this effect work better in concerts. It always has this taste of "yeah, the intention behind it is clear, but still..."
> 
> Nevertheless, imho there is one piece of music where the offstage instruments work extremely well; it's in the _Dies irae_ of Berlioz' _"Grande messe des morts" _(better known as "Requiem") and there are four sections involved (one from each geographic direction; it's symbolic for the last judgement).
> 
> ...



Just a side note, here is a better recording:


----------



## Stephen Limbaugh (Jan 3, 2022)

re @Rctec and the “pulled apart” music.

I ran a bunch of experiments a couple years ago at the ATMOS room above The Wiltern off Wilshire (can’t remember the name of the studio).

Using woodwind instruments, since they are the family most unique in their individual colors and require more careful balancing than other families, I reconstructed chords directly out of Rimsky-Korsakov's orchestration book.

I then placed those instruments around the room, at equal distances, and the chords maintained their cohesiveness, which was cool. Even shifting from woodwinds "on the stage" to the four corners of the room yielded results that could absolutely provide aesthetic value when used correctly.

Changing the distances is what fumbles the balance of the chords, due to volume/dynamics, and when movement is introduced there's a very special phenomenon that takes place, which is a trade secret I can't give up yet (though you might know it, since you work in ATMOS all the time, haha).

Do you have the same experience in your room regarding spaced out instruments that are still the same distance?


----------



## charlieclouser (Jan 3, 2022)

IvanP said:


> Thanks a lot @charlieclouser ! Are you talking about the objets or the whole music being delivered in Stereo Stems?


I didn't intend for my stereo stems to be thought of as objects, but because the score was fairly minimalistic, with basically no real orchestra or attempts at simulation, each stereo stem could be dealt with as an object more or less. Most stems would have so few elements that this approach worked. And like I said, it was a super quick-n-dirty mix date. 

Usually when I deliver in 5.1 or Quad there's not even any "legitimate" surround reverbs or imaging - it's all special fx, like tracking an instrument four times for quad (just like you'd double rhythm guitars in stereo, but with four instead of two), setting up four delays so scary ambiences ping-pong around in quad, or making jump scares that start in the front and splash to the back (or the opposite, which is a fun effect). It's things like that which will probably make me continue to mix and deliver stems in at least a Quad configuration, and force the dubbing mixers to figure out where to put the rear pair that corresponds to each front pair.

Since I mix my own scores as I go, instead of handing off to a proper score mixer, this is the most practical solution.... for now.
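The "four delays chasing an ambience around the room" trick can be sketched as a simple tap scheduler. This is a hypothetical illustration of the idea, not charlieclouser's actual setup; the rotation order and feedback amount are assumptions:

```python
def quad_ping_pong(n_echoes, delay_step, feedback=0.6,
                   rotation=("FL", "FR", "RR", "RL")):
    """Schedule the echoes of a single hit so each repeat lands on the
    next speaker in a quad rotation, decaying by the feedback amount
    each time. Returns (time_seconds, channel, gain) tuples."""
    taps = []
    gain = 1.0
    for i in range(n_echoes):
        taps.append((i * delay_step, rotation[i % len(rotation)], gain))
        gain *= feedback
    return taps

# Eight echoes of a hit, an eighth of a second apart, circling the room.
taps = quad_ping_pong(8, 0.125)
```

In a DAW this is just four mono delays, each feeding the next and each panned hard to one corner; the sketch only makes the tap arithmetic explicit.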


----------



## IvanP (Jan 4, 2022)

charlieclouser said:


> I didn't intend for my stereo stems to be thought of as objects, but because the score was fairly minimalistic, with basically no real orchestra or attempts at simulation, each stereo stem could be dealt with as an object more or less. Most stems would have so few elements that this approach worked. And like I said, it was a super quick-n-dirty mix date.
> 
> Usually when I deliver in 5.1 or Quad there's not even any "legitimate" surround reverbs or imaging - it's all special fx, like tracking an instrument four times for quad (just like you'd double rhythm guitars in stereo, but with four instead of two), setting up four delays so scary ambiences ping-pong around in quad, or making jump scares that start in the front and splash to the back (or the opposite, which is a fun effect). It's things like that which will probably make me continue to mix and deliver stems in at least a Quad configuration, and force the dubbing mixers to figure out where to put the rear pair that corresponds to each front pair.
> 
> Since I mix my own scores as I go, instead of handing off to a proper score mixer, this is the most practical solution.... for now.


Thank you! What an amazing creative mind you have! Thanks a lot for sharing!


----------



## colony nofi (Jan 4, 2022)

I can chime in coming to all this from a different angle.
I'd say more than 50% of my work is for immersive audio.
But not cinema. Think opening ceremonies / world expo pavilions / lots of museum work / dance theatre / experiential theatre / zoos (!) etc etc.

So for 15+ years I've been working in object based audio paradigms - at the start I didn't even realise that's what I was doing... it was REALLY super janky back then.

And more and more it's work where the speaker system is different from what's being composed on, due to the complexities involved. I'm just starting on a 1000m² space with 64 to 72 speaker channels, with full score + sound design. In this case it will be written in stereo (well, actually 4.1, but only the synths will make the final score, and often I throw them back to mono/stereo). The orchestra will be recorded with spots, a surround tree and a bunch of ambience mics. The mix will be taken down to stereo stems that will then be placed around the room. Indeed, in this way the music WRITING is extremely complex, as it's designed to give different emotive reactions in different parts of the room - almost completely different pieces of music, but harmonically the same (and same tempo map), so spill is like morphing.

The actual composition pipeline is closer to an A-list game using different mixes to up the action, except going much, much further in the breadth of change between mixes.


We are also going to explore wave field synthesis in three of the sections of the room... and that's a whole other ball game. (Side note: there have been recent explorations/tests of wave field synthesis using smaller arrays than originally thought necessary, with some incredible results. Think 7 hung line arrays at the front of a room. It's a version of immersive audio that has to be experienced to be believed. Not 3D, but often supplemented with surround/height arrays.)

Where was I? There is very little point composing in an Atmos room - aside from the BIG one, which is that it is fun and can be extremely inspiring. They're technically hard to set up and take a tonne of tech to get right. I would encourage folk to get an excellent 5.1 system, and even to start in 4.1. HZ mentioned earlier some of the issues with music in 3D/immersive formats - and these issues become bigger when rooms (cinemas) get bigger. Whereas sound design, in my experience, translates extremely well from room to room in Atmos, music has many more ways for things to go wrong. Now, that's not to say that it's not incredibly fun - but it's hard. There are so many tech pitfalls that getting an emotionally satisfying result in your own room is one thing, but having it work in the cinema is another (not to mention the difficulty for the dubbing engineer of handling your Atmos stems!)

I remember the first time I heard Ben Frost perform in an 8.8 "in the round" scenario and being astounded by the musical possibilities. I'm sure he spent a heap of time re-working his songs to work in that environment. And I was equally blown away by Cardiff/Miller's reworking of Tallis's 16th-century work for 40 voices and 40 speakers. (It worked especially well in the Tanks at the Tate Modern!)

I'm very much on the side of delivering 5.1 stems to a mix theatre. I feel there is much more likelihood of your music sounding right in the end. Unless you have an amazing music mix engineer experienced in Atmos... and even then (!!!). Now, it's not stupid to mix in Atmos, export 5.1 for the mix stage and keep the Atmos masters for the final music release...

Now - imagine a space where things were mixed immersively on the fly, using audio that is triggered in real time by interaction. We're currently building an immersive audio server that will run outside of Unreal to enable just this - into any of the big 4 immersive renderers (Nexo, Iosono, Spat Revolution and Atmos). Step 2 will be finding someone to build it out for big 3D audio theatre shows - that's beyond the scope of our project, but it'll happen soon enough.

It's a WILD world out there.....


----------



## Stephen Limbaugh (Jan 4, 2022)

colony nofi said:


> wave field synthesis in three of the sections of the room... and that's another whole ball game.


Can you expand on this? What is wave field synthesis?


----------



## Dietz (Jan 4, 2022)

charlieclouser said:


> they probably would have just deleted everything but the front L+R pair from each stem


Would you have been willing to tolerate such meddling?


----------



## Dietz (Jan 4, 2022)

Stephen Limbaugh said:


> Can you expand on this? What is wave field synthesis?


In short: The kind of "surround sound" we were envisioning as children. 

-> https://en.wikipedia.org/wiki/Wave_field_synthesis

... that's the decisive sentence: _"Contrary to traditional spatialization techniques such as stereo or surround sound, the localization of virtual sources in WFS does not depend on or change with the listener's position."_
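The principle behind that sentence is simple, even if real systems aren't: every speaker in a dense array re-emits the source signal with a delay and gain derived from its distance to the virtual source, so the array reconstructs the source's actual wavefront for every listening position at once. Here is a toy sketch of that idea (a crude 2.5D-style approximation with illustrative geometry, nothing like a production WFS driving function):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly room temperature

def wfs_driving(source_xy, speaker_positions):
    """Toy WFS rendering for a virtual point source behind a linear
    speaker array: each speaker gets a delay equal to its acoustic
    distance from the virtual source, and a gain falling off with
    sqrt(distance). Returns a (delay_seconds, gain) pair per speaker."""
    sx, sy = source_xy
    out = []
    for (x, y) in speaker_positions:
        d = math.hypot(x - sx, y - sy)
        out.append((d / SPEED_OF_SOUND, 1.0 / math.sqrt(max(d, 1e-6))))
    return out

# Eight speakers 0.25 m apart along y = 0, and a virtual source
# centred 2 m *behind* the array.
speakers = [(i * 0.25, 0.0) for i in range(8)]
source = (7 * 0.25 / 2, -2.0)
params = wfs_driving(source, speakers)
```

The delays are shortest at the speakers nearest the virtual source and grow symmetrically outwards, which is exactly the curved wavefront a listener localizes, regardless of where they stand.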


----------



## Stephen Limbaugh (Jan 4, 2022)

Dietz said:


> In short: The kind of "surround sound" we were envisioning as children.
> 
> -> https://en.wikipedia.org/wiki/Wave_field_synthesis
> 
> ... that's the decisive sentence: _"Contrary to traditional spatialization techniques such as stereo or surround sound, the localization of virtual sources in WFS does not depend on or change with the listener's position."_


….if I understand correctly, a person seated far left or far right in a large theater would have the same listening experience? Basically?


----------



## Dietz (Jan 4, 2022)

Stephen Limbaugh said:


> ….if I understand correctly, a person seated far left or far right in a large theater would have the same listening experience? Basically?


It depends. On the one hand, there can be sources that seem to stay in their position within the defined soundstage - you can even walk around them. On the other hand, there are sources in "infinite" distance: Sound derived from these sources will always seem to follow the listener and reach them from the same side (... as a simple comparison: Sunbeams seem to come in in parallel, not angled).

... I had the opportunity to work in a scientific setup created by IOSONO (a spin-off from the Fraunhofer Institute in Ilmenau) about 15 years ago. They developed the first (somewhat) commercial version of WFS, and we did some tests there with an early pre-release version of Vienna MIR.


----------



## Nils Neumann (Jan 4, 2022)

Dietz said:


> In short: The kind of "surround sound" we were envisioning as children.
> 
> -> https://en.wikipedia.org/wiki/Wave_field_synthesis
> 
> ... that's the decisive sentence: _"Contrary to traditional spatialization techniques such as stereo or surround sound, the localization of virtual sources in WFS does not depend on or change with the listener's position."_


This is amazing, I guess the speaker lobby will push for this eventually😂

But seriously interesting!


----------



## gsilbers (Jan 4, 2022)

charlieclouser said:


> I've been mixing and delivering 5.1 stems forever, but I recently delivered my first score that was to be mixed in Atmos. In pre-delivery discussions with the re-recording engineers on the dub stage, they expressed a strong preference for me to simply deliver stereo stems which they would then distribute into the immersive field. Since each stem would now take only 2 channels instead of my usual 6, this let me spread things out across more stems, giving the mixers more flexibility to spray things around the room, and apply immersive reverbs or panning to elements that were more separated than before.
> 
> It went well.
> 
> ...




So you think more and more TV/films will be delivered in Atmos, and therefore no more 5.1/surround deliveries?

I remember QCing the American Horror Story masters for Fox back in the day and being amazed that the main title music had such cool stuff going on in the back surrounds. With Atmos, how would you do that sort of stuff now if you have to deliver stereo stems?


----------



## IvanP (Jan 4, 2022)

gsilbers said:


> So you think more and more TV/films will be delivered in Atmos, and therefore no more 5.1/surround deliveries?


All of my 2021 Netflix originals so far have requested Atmos as final delivery. In fact a project I started in 2020 for them changed from initial 5.1 delivery to Atmos during 2021. 

For Prime Video (Amazon), only 5.1 so far but they remixed 3-4 tracks of my Soundtrack in Atmos to be released as something exclusive for premium Prime music users, so it's a matter of time until everything will be delivered in Atmos. 

For the time being, it's a leap from 5.1 where they understand it's an extra, but who knows if it will be something compulsory in the next 5 years hehe...


----------



## gsilbers (Jan 4, 2022)

IvanP said:


> All of my 2021 Netflix originals so far have requested Atmos as final delivery. In fact a project I started in 2020 for them changed from initial 5.1 delivery to Atmos during 2021.
> 
> For Prime Video (Amazon), only 5.1 so far but they remixed 3-4 tracks of my Soundtrack in Atmos to be released as something exclusive for premium Prime music users, so it's a matter of time until everything will be delivered in Atmos.
> 
> For the time being, it's a leap from 5.1 where they understand it's an extra, but who knows if it will be something compulsory in the next 5 years hehe...



Interesting. But is the final Atmos delivery for the dub stage, where the re-recording engineers just add it to their Atmos mix, or is it a standalone delivery for Netflix/Amazon to use for music-only purposes?

Do you also deliver music in normal 5.1/stereo as well as Atmos for Netflix?


----------



## IvanP (Jan 4, 2022)

gsilbers said:


> Interesting. But is the final Atmos delivery for the dub stage, where the re-recording engineers just add it to their Atmos mix, or is it a standalone delivery for Netflix/Amazon to use for music-only purposes?
> 
> Do you also deliver music in normal 5.1/stereo as well as Atmos for Netflix?


From what I've heard it's final dub in Atmos now, but depending on budget the score can be in Atmos as well, or they can accept 5.1.

On each project they have a list of deliveries for the score, and it may indeed include everything (even PT mix sessions, stereo and surround mixes + stems, etc.). It always depends on the platform and the project (whether it's an original, an exclusive, etc.).


----------



## gsilbers (Jan 4, 2022)

dgburns said:


> In my opinion- Hats off to Apple for adding immersive sound to Logic. Will be interesting to see how this gets used ( more likely abused )
> 
> I think it will be difficult to convince many lo to mid budget productions to go Atmos for score. Atmos for the final audio sound yes, but not score, much less licensed music.
> 
> I still think quad is a good compromise for score, for the time being…



I'm a little puzzled by Logic's Atmos implementation, since it's basically like $300 (just buying Logic Pro)... while I see every studio out there with Dolby, Pro Tools and some extremely expensive setups using Dante, MTRX, etc.
My friends mix movies and shows for Netflix etc., and hearing what the equipment cost almost gave me a heart attack.

Like... couldn't you just do your score or mix your film like normal, then import it into Logic, set up a few objects etc. and export it, and that's your final Atmos delivery... on the cheap?

Logic's 5.1 surround implementation is pretty bad, but for this sort of "trick" - using it as an export stage for Atmos without having to deal with stem bounces (which is one of the big drawbacks of Logic's surround issues), just to export a main Atmos mix, maybe using some stereo stems etc. - it might work?

That's my thought at least... and I still wonder about the discrepancy between the Pro Tools guys, aka real engineers, and this sort of "toy" that's Logic with Atmos. Like, is it the same spec output that would work across Netflix specs vs. Amazon etc.?
Does it make sense? Maybe I'm not seeing something (or not seeing a lot of things, lol).
Do your mix like always, export stems, import into Logic, do some of the "Atmos" stuff - what goes to which speakers/objects, what moves around to certain spots - and export, done. All that's needed is enough speakers and an interface with enough outs (maybe ADAT-expandable etc.).
All studios still have to deliver 5.1 and stereo no matter what, now along with an Atmos file. And Atmos is not really good for folddowns, from what I understand.


----------



## gsilbers (Jan 4, 2022)

gsilbers said:


> I'm a little puzzled by Logic's Atmos implementation, since it's basically like $300 (just buying Logic Pro)... while I see every studio out there with Dolby, Pro Tools and some extremely expensive setups using Dante, MTRX, etc.
> My friends mix movies and shows for Netflix etc., and hearing what the equipment cost almost gave me a heart attack.
> 
> Like... couldn't you just do your score or mix your film like normal, then import it into Logic, set up a few objects etc. and export it, and that's your final Atmos delivery... on the cheap?
> ...



I guess I'll have to watch a tutorial about it.



I just saw the Netflix specs, and the only thing that's throwing me off so far is the delivery of an IMF... but I think they mean IAB.
And the Dolby renderer does have folddown options.

Hopefully Pro Tools can come up with an easier and more cost-effective way to create Atmos mixes.


----------



## charlieclouser (Jan 4, 2022)

Dietz said:


> Would you have been willing to tolerate such meddling?


Of course. I am very much non-precious when it comes to what happens to my deliveries downstream. It's all just raw material for the producers, director, and mixer to mold and shape as they need to get the results they want. Being precious about whether your rear channels are loud enough on the stage is about the same as being precious about whether the dialog and fx are drowning out the score. One or the other is bound to happen pretty often.

I usually tell the mixers / producers / director some variation of this: "I don't care if you discard three of the seven stems, pitch-shift one of them down an octave, and play the other two in reverse. Good luck, have fun, let me know if you need anything more." This usually results in a sigh of relief from mixers who have dealt with too many too-precious composers.

Knowing that liberties are likely to be taken, I don't put anything in the rears that I can't live without, so if they are too quiet or just not there, no harm no foul. And I give detailed notes like, "This cue has the jump scares swooping from the rears to the fronts, and if you just play the fronts then it may sound like they're too short and starting too late, so if you're not using the rears then maybe fold the rears into the fronts if it's sounding weird." - or - "This cue has an extra-unsettling array of those atonal string effects quadruple-tracked in surround, so if there's one spot in the whole score where the rears should be good and loud, this is it."

Another thing that I do which is maybe not typical (or even advisable!) is NOT to allocate elements into the stems by any hard and fast rules like, brass low, brass high, etc. This is partly because I'm almost never working with a conventional layout of orchestral elements - it's more likely to be stuff like: StemA = two sub-bass booms and low hits on jump scares, StemB = just a synth pulse, StemC = evil low synth drones, StemD = distorted midrange textures, StemE = guitar feedback echoes, StemF = icy high strings sustains and high string shrieks on jump scares, StemG = that weird dissonant choir at the start and the dissonant woodwinds in the middle.

So I try to make each stem as close to a complete a "piece of music" as possible, while still trying to have as little overlapping of elements within each stem, and grouping things in sonic categories. So, thin-high-sustainy stuff doesn't go on the same stem as sub-bass booms, but if there's room to split the layers of that thin-high-sustainy stuff across 2 or 3 stems that are otherwise empty at that spot, then I will. If all the high-long strings are on the same stem then they can't do anything with it except turn it up or down. But if the atonal layer is on StemE and the sul-pont sustains are on StemF and the high synth that doubles it is on StemG then they can sculpt things more easily if needed.

This approach might not work for folks who are doing more conventional orchestral layouts, but for my weird smorgasbord of hybrid elements it works pretty well. Because the director usually isn't going to say, "Let's lower the sul-pont strings and raise the behind-the-bridge tremolos", they're going to say, "Can we turn up all that high chaotic screeching? I love that stuff." So for me, obeying conventional "rules" about what sounds go on which stems would be more limiting, both for me and for them. If I did it that way, most of the stems would be empty a lot of the time, with too many elements crammed onto the few stems that they were "supposed to" be on.

Anyway, if my deliveries solve the problem as-is, then... great. But if they don't, what am I gonna do? Sit there on the stage and complain that the explosions are drowning out my precious symphony? That's no way to make friends. And I've had plenty of situations where the music editor has taken snippets or zingers from one cue and laid them across a different cue to solve some problem. Huge relief to have them solve the problem on the stage, on the day, rather than have me scramble to slap a band-aid on the thing. On my last project the music editor took 30 seconds of a pulse+drums bed from one cue, and used good old Serato Pitch-N-Time to speed it up from 110bpm to 122bpm and pitch it down three semitones. Sounded fine, problem solved, note addressed, scratch that one off the list.

Of course, on some tv series I've done I was just delivering stereo mixes, so if it didn't work as-is then the only solution was to go looking for an ALT of the cue. But when I deliver stems I let 'em go wild on the stage if they want. Fine by me.


----------



## charlieclouser (Jan 4, 2022)

gsilbers said:


> So you think more and more TV/films will be delivered in Atmos, and therefore no more 5.1/surround deliveries?
> 
> I remember QCing the American Horror Story masters for Fox back in the day and being amazed that the main title music had such cool stuff going on in the back surrounds. With Atmos, how would you do that sort of stuff now if you have to deliver stereo stems?


American Horror Story theme was a fun one for me. It's nice to do something super-minimal like that where you can actually hear in between the elements and appreciate weird stuff going on in the rears.

But, I haven't solved the objects-vs-stems issue yet. The difference between delivering "objects" as stereo pairs vs delivering 5.1 stems that need to sit as-is involves some tricky decisions. It's really about what the mixers prefer given the schedule they'll be working against. That's why this last quickie score was stereo stems only. I knew they'd be absolutely racing through the mix, and the mixers told me that they'd be able to do more in the immersive field if my stems were simple stereo pairs, and the minimal aspect of the score meant that this approach could work. Since each stem usually only contained a few elements that could function as a unit, they could throw a whole stereo stem to the tops / rears without accidentally throwing some other elements with it. 

But I prefer to, and will continue to, deliver as 5.1 stems whenever practical (practical for the mixers, not practical for me). After my next big upgrade I may widen each of my stems to 7.1 or wider, but if I do I'd want to figure out a way to simultaneously print stereo objects. It would be a dream to be able to, in a single bounce pass, output eight 7.1 stems AND 32 stereo objects to print as 64 channels of surround stems and 64 channels of stereo objects. That way I'd be covered no matter how the mixers wanted to deal with things.

My Logic bussing + stem sub-masters array is going to look insane!
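For what it's worth, the channel arithmetic behind that dream bounce pass checks out: eight 7.1 stems is 64 channels, and 32 stereo objects is another 64, so 128 outputs in a single pass. A quick sketch of laying out such a bounce (a hypothetical helper, not Logic's actual bussing):

```python
def bounce_layout(n_stems, stem_width, n_objects, object_width=2):
    """Assign contiguous output-channel ranges to surround stems
    followed by stereo objects, for a single bounce pass. Returns
    ({name: (first_channel, last_channel)}, total_channel_count)."""
    layout = {}
    ch = 1
    for i in range(n_stems):
        layout[f"stem{i + 1}"] = (ch, ch + stem_width - 1)
        ch += stem_width
    for i in range(n_objects):
        layout[f"obj{i + 1}"] = (ch, ch + object_width - 1)
        ch += object_width
    return layout, ch - 1

# Eight 7.1 stems (8 channels each) plus 32 stereo objects.
layout, total = bounce_layout(8, 8, 32)
```

With these numbers the stems occupy channels 1-64 and the objects 65-128, which is the kind of map a dub stage would need alongside the audio anyway.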


----------



## Trevor Meier (Jan 4, 2022)

charlieclouser said:


> Another thing that I do which is maybe not typical (or even advisable!) is NOT to allocate elements into the stems by any hard and fast rules like, brass low, brass high, etc. This is partly because I'm almost never working with a conventional layout of orchestral elements - it's more likely to be stuff like: StemA = two sub-bass booms and low hits on jump scares, StemB = just a synth pulse, StemC = evil low synth drones, StemD = distorted midrange textures, StemE = guitar feedback echoes, StemF = icy high strings sustains and high string shrieks on jump scares, StemG = that weird dissonant choir at the start and the dissonant woodwinds in the middle.


With this unconventional stem strategy, do you have to plan ahead to fit the whole film’s sonic palette into six stems? Or do you switch up how you use the stems for each cue or reel?


----------



## colony nofi (Jan 4, 2022)

Nils Neumann said:


> This is amazing, I guess the speaker lobby will push for this eventually😂
> 
> But seriously interesting!


Yeah - and universities are doing more and more research into it.

I've been involved on the periphery, consulting for a uni that is putting in a 24.4 system that can be used for anything from ambisonic research to WFS (wave field synthesis) to n-based panning (what I've been doing mostly). There's increasing academic work being done solving the many problems of WFS (which started with the enormous number of speakers required to create the immersive environment) - it's as if the research from 15-20 years ago has been picked up again in the last 5 years, after a few not-insignificant breakthroughs were made. The fact that concert-sized systems with only 5 to 7 line arrays have been deployed, tested - and used - is amazing in my book. The big guys are ALL looking into it. Expect the next Radiohead-type show to be doing this in 2 years' time.

While the 384-speaker system in Germany (is that the one you played with, @Dietz?) is awesome, it is not the final solution. There need to be ways of generating the wave field without resorting to speakers in very close proximity to each other. Recent developments are encouraging in this regard, which has again piqued my interest, and may well work for smaller-scale prime-time installs, with caveats.

Iosono is unfortunately no longer being actively developed by Barco (Yamaha) - but another spatial audio company has grabbed the IP and is still doing interesting things with it. It's a little outdated in some of its approaches (which happens when so many R&D dollars are invested before many of the pitfalls of the systems are ironed out). They were true pioneers, but perhaps ahead of their time. The new system by Nexo (again, Yamaha) looks incredibly interesting, although they are tying it in with virtual acoustics in a single integrated system, which is less good for the types of things I use it for. I had a chat with the developers a few weeks ago, and they didn't make any noises regarding implementations for WFS.

More and more, the Spat Revolution approach seems the best for museums / installations / immersive theatre etc., as it is the most open to different approaches and ways of doing things. However, it requires more technical knowledge to use, and is not as stable as the others. (Spat for MaxMSP is an awesome way to explore, but Spat Revolution is a no-brainer for actual installs when it is stable - and there have been times when that hasn't been true!)

Right now, for installs, there is a BIG problem with driving any of these technologies in real time. It is relatively simple (in the scheme of things) to create mixes that play back into the system. However, anything that changes with input (which is increasingly important for museums, theatre, etc.) is bloody complex. Unreal is an awesome way forward, but has no "knowledge" of object-based audio outputs that can be rendered externally. And its internal rendering engine, while incredible in some instances, has severe limitations in others. There is work afoot to turn it into an audio media server in its own right, but our current approach is to create an external piece of software to do the heavy lifting, allowing anything that can talk a bit of OSC - or even XML over IP - to drive an immersive environment.
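As a flavour of how lightweight "talking a bit of OSC" is on the wire: an OSC 1.0 message is just a null-padded address string, a type-tag string, and big-endian arguments. A minimal stdlib-only encoder (the `/object/3/pos` address is a made-up example, not any particular renderer's namespace):

```python
import struct

def osc_message(address, *args):
    """Encode a minimal OSC 1.0 message (address pattern, type-tag
    string, arguments) into bytes ready to send over UDP. Supports
    int ('i'), float ('f') and str ('s') arguments."""
    def pad(b):
        # OSC strings are null-terminated and padded to a 4-byte boundary.
        return b + b"\x00" * (4 - len(b) % 4)

    tags, payload = ",", b""
    for a in args:
        if isinstance(a, bool):
            raise TypeError("bool not supported in this sketch")
        elif isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)
        elif isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)
        elif isinstance(a, str):
            tags += "s"
            payload += pad(a.encode("ascii"))
        else:
            raise TypeError(f"unsupported OSC argument: {a!r}")
    return pad(address.encode("ascii")) + pad(tags.encode("ascii")) + payload

# Hypothetical control message: move object 3 to azimuth 90 deg, distance 2 m.
msg = osc_message("/object/3/pos", 90.0, 2.0)
```

Sending `msg` is then one `socket.sendto()` call, which is exactly why OSC over IP is such a convenient lowest common denominator between game engines, media servers and renderers.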


----------



## charlieclouser (Jan 4, 2022)

Trevor Meier said:


> With this unconventional stem strategy, do you have to plan ahead to fit the whole film’s sonic palette into six stems? Or do you switch up how you use the stems for each cue or reel?


Well, I do switch things up a bit from cue to cue. Like, if cue A has nothing but drums, a pulse, and one stem's worth of high stuff, then I'll spread everything around so if they need to thin out the drums they can. Then, if cue B has no drums at all but has eight stems' worth of dissonant high layers, they'll get spread out a bit as well.

But I generally lay things out in a logical fashion - starting with drums / low things and moving to strings / high things - so it's not just totally random where things will appear. And there's usually a piano on StemD, so that serves as a center point of sorts. 

With the ever-changing nature of elements in my cues, the only way to obey the rules more stringently would be to use a LOT more stems, many of which would be empty a lot of the time. Which I may do on my next re-configure.


----------



## AR (Jan 4, 2022)

quickbrownf0x said:


> Good to find myself not alone in this, apparently. Ran into many of the same issues. Even now, with 3 slaves and a brand new maxed out DAW pc on a 10gb network I still need to switch to a buffer size of 1024 with my current Quad.1 template, or Cubase will start to shit its pants. Was wondering how others are handling this. I'm moving from Focusrite to UAD this week, so maybe that'll help _somewhat_.


I hope so, for your sake. That's why I only use RME PCIe cards (although I have some MOTUs lying around here). Did you try working with 512 buffers and putting x4 buffers on the VE Pros?


----------



## jononotbono (Jan 4, 2022)

I was just working at a pretty big studio that specialises in immersive audio (Auro-3D, Atmos, Sony 360 and everything in between) and got to hear the original Elton John "Rocket Man" stems remixed in Atmos, and the whole of Miles Davis's Kind of Blue album remixed in Atmos. A very interesting experience for sure. Can't wait to have an immersive setup one day, but man, you gotta have a decent-sized room. Also, Nuendo 11 and its multi-panner are killer.


----------



## Gerhard Westphalen (Jan 4, 2022)

Manaberry said:


> By the way, Gerhard, do you know how I can monitor atmos music (from Tidal for instance) for reference? I feel likes it's almost impossible to find those file or a proper way to monitor them (especially when you only have the binaural renderer). Thanks!


Depends. 

If all you want to do is monitor binaural Atmos, then it's easy. Buy an Apple TV and a receiver with a headphone out (just make sure it supports Atmos on the headphone out). You could also just use an Apple device and AirPods. Apple Music uses Spatial Audio, which sounds pretty horrendous, but I think Tidal might now let you stream the proper binaural.

If you want to play Atmos content out through your speakers, then you need an Apple TV plus a decoder with whatever I/O you need to interface with your rig. If you can only do analog ins, then it's easy: you just get any cheap receiver with preamp outs, or you can go up to something nicer like the Emotiva decoders. The new unit they're releasing is only $1,000. If you have digital I/O, and so can bypass the extra stages of conversion, there are units like the ones from Arvus and the JBL Synthesis. Those are more focused towards Dante. Right now I'm using a cheap receiver going into some PreSonus converters while I wait for my Arvus unit to ship, at which point it'll go AES into my system.


----------



## quickbrownf0x (Jan 4, 2022)

AR said:


> I hope so for you. That's why I only use RME pcie cards (although I have some Motu's lying around here). Did you try to work with 512 buffers and put x4 buffers on the VePros?


Cheers, good point. Let me check and get back to you.


----------



## quickbrownf0x (Jan 4, 2022)

jononotbono said:


> I was just working at a pretty big studio that specialises in immersive audio (Auro-3D, Atmos, Sony 360 and everything in between) and got to hear the original Elton John "Rocket Man" stems remixed in Atmos, and the whole of Miles Davis's Kind of Blue album remixed in Atmos. A very interesting experience for sure. Can't wait to have an immersive setup one day, but man, you gotta have a decent-sized room. Also, Nuendo 11 and its multi-panner are killer.


Funny how you mention _Rocket Man_. Just finished listening to Andrew Scheps and friends talk about mixing in Atmos. At one point Steve Genewick (half-jokingly) says the best thing you can do to sort of get your listener/artist prepared for Atmos is to have him/her listen to _Rocket Man_ first, before you play them their own work (unmixed from stereo), or they'll think it's going to sound shitty.

You just got Rocket-Manned, mate.


----------



## jononotbono (Jan 4, 2022)

quickbrownf0x said:


> Funny how you mention _Rocket Man_. Just finished listening to Andrew Scheps and friends talk about mixing in Atmos. At one point Steve Genewick (half-jokingly) says the best thing you can do to sort of get your listener/artist prepared for Atmos is to have him/her listen to _Rocket Man_ first, before you play them their own work (unmixed from stereo), or they'll think it's going to sound shitty.
> 
> You just got Rocket-Manned, mate.


And now you can start to understand why he mentioned Rocket Man. It was a demo mix made to privately showcase Atmos music (not Atmos cinema) a long time ago. The devil is in the details. Above I said “an interesting experience for sure”. Hardly me getting “Rocket-Manned”. I could tell you what Elton thought of the mix, but that would be telling. 😂 

Oh, and I listened to it on PMC monitors costing seven figures, just in case there was any doubt that the quality of the listening environment jeopardised my first experience of said mix.


----------



## quickbrownf0x (Jan 4, 2022)

jononotbono said:


> And now you can start to understand why he mentioned Rocket Man. It was demo mix to privately showcase Atmos music (not Atmos cinema) a long time ago. The devil is in the details. Above I said “an interesting experience for sure”. Hardly me getting “Rocket Manned”. I could tell you what Elton thought of the mix but that would be telling. 😂
> 
> Oh, and I listened to it on PMC monitors costing over 7 figures just in case there was any doubt that the quality of the listening environment jeopardised my first experience of said mix.


😂 Nice.


----------



## charlieclouser (Jan 5, 2022)

jononotbono said:


> I was just working at a pretty big studio that specialises in Immersive Audio (Auro3D, Atmos, Sony360 and everything in between) and got to hear original Elton John Rocket Man Stems remixed in Atmos and the whole album Kind of Blue Album (Miles Davis) remixed in Atmos. A very interesting experience for sure. Can't wait to one day have an immersive set up but man, you gotta have a decent sized room. Also, Nuendo 11 and its multi panner is killer


I bet the Rocket Man mix you heard is the one done by Greg Penny? He's an old friend who produced a bunch of great records by k.d. lang, Elton, and many others, and now he's booked two years out just remixing classic albums for immersive at his setup in Ojai. I paid him a visit and he played me his immersive mix of Glen Campbell's "Wichita Lineman" and it'll bring tears to your eyes. Amazing. 

He's still using an ancient Dynaudio AIR setup (just like me!) with a LOT of AIR 6's hanging from trusses, but he's got so much work backed up that he's got his son Felix set up in another room with all Dynaudio Core speakers. Both rigs sound amazing, and it absolutely sold me on using height channels!


----------



## jononotbono (Jan 5, 2022)

charlieclouser said:


> I bet the Rocket Man mix you heard is the one done by Greg Penny? He's an old friend who produced a bunch of great records by k.d. lang, Elton, and many others, and now he's booked two years out just remixing classic albums for immersive at his setup in Ojai. I paid him a visit and he played me his immersive mix of Glen Campbell's "Wichita Lineman" and it'll bring tears to your eyes. Amazing.
> 
> He's still using an ancient Dynaudio AIR setup (just like me!) with a LOT of AIR 6's hanging from trusses, but he's got so much work backed up that he's got his son Felix set up in another room with all Dynaudio Core speakers. Both rigs sound amazing, and it absolutely sold me on using height channels!


Oh man, yeah, the height channels are amazing. It's incredible how much space there is. True full range. I loved hearing the difference without them. The studio I was just working at has 26 PMC speakers in a 9.1.6 setup. It's quite a thing. There's also a one-of-a-kind analogue mixing console designed by Paul Wolff that allows audio to be panned into any speaker without software (there's also no pan law on the panners). Where a typical console has its EQ section, this one has a panner section. A custom-designed frame holds all the upper-plane and surround speakers, and the room has been designed to instantly switch between Atmos, Auro3D, Sony360 and everything surround and below, which is why there are so many speakers: they are all in their exact positions for each format. Everything has been calibrated by Dolby.
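
Side note for anyone wondering what "pan law" actually means here: a conventional panner attenuates the centre position (typically by 3 dB) so perceived loudness stays constant as a source travels between two speakers. This is just an illustrative constant-power sketch in Python — the function name is mine, and it has nothing to do with the internals of Wolff's console:

```python
import math

def constant_power_pan(sample: float, pos: float) -> tuple[float, float]:
    """Pan a mono sample between two speakers with a -3 dB (constant-power) law.

    pos: 0.0 = hard left, 0.5 = centre, 1.0 = hard right.
    At centre each side gets cos(pi/4) ~= 0.707 (-3 dB), so the summed
    acoustic power stays constant as the source moves across the field.
    """
    theta = pos * math.pi / 2          # map 0..1 onto a quarter circle
    left = sample * math.cos(theta)    # gain falls as source moves right
    right = sample * math.sin(theta)   # gain rises as source moves right
    return left, right
```

At `pos = 0.5` both gains are about 0.707, i.e. -3 dB per side; a panner with "no pan law" would instead leave both sides at unity in the centre, which sums noticeably louder there.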

It's basically ruined audio for me completely, in the sense that I no longer lust after speakers. I realise I will never be able to afford some PMC XBDA QB1s ($200k each... they have 5 of them: 2 x left, 1 centre, 2 x right, although when buying at this end "family" discounts are likely 😂), so I now just pretend what I have is good. Except for my LCD-X headphones. They are truly great.

The best thing I experienced was Sony360. It was literally the best thing I have experienced sonically. I had an amazing time with someone called Gus Skinas: he stuck some tiny microphones in my ears to take a measurement/snapshot of my inner ears, measured the room, and then played a mix of a track with massively exaggerated panning (you know, bass guitar in the far rears, drum kit hard right, stuff the human ear isn't really used to) just so everyone could hear where everything was. If you're not used to listening to music like this it's definitely an interesting experience to start with, and some people hate it. The test was to listen to the track through this monitoring environment, and then Gus handed me some headphones: there was literally no difference between the monitors and the headphones. And when you remove the headphones, because you're in complete disbelief that a pair of shitty headphones could sound identical (not a little bit like it, not slightly different, but EXACTLY the same), there's no audio coming from the monitors, and you suddenly realise the audio was only ever coming from the headphones.

Here's a public screenshot from Paul Wolff, who actually did this test a year after I did (he wasn't at the studio I was working at when I experienced this, although he was there when I helped him put his console together), and his thoughts on it. Honestly, it's so exciting, people are gonna lose their fucking minds. Well, people that work on headphones anyway.


I share his excitement because it was incredible. Anyway, I digress. It feels strange talking about any of this stuff as it's all been so hush-hush since I took a job at that studio. Lonely old world. 😂 I can only imagine how lonely HZ is. He's probably got this listening environment in his toilet, let alone what's in his lab. 😂 

As for the mix of Rocket Man, I'd have to check with the engineers. At the time it was very much a "Jono, get the fuck over here and listen to this. And this." situation, followed by half a day of guinea-pigging someone (me) who had never experienced immersive audio like that before, and I just forgot to even ask. I'm not surprised that Greg Penny has been booked up for the next 2 years. Covid has literally held Atmos back so badly. I knew Apple were going to launch Atmos on all their devices (which is incredibly clever) about a year before it happened, and Apple were also going to release a subscription service, "Atmos Music" (songs/classic tracks, not Atmos for cinema), which has now finally been launched but is free (obviously I am only saying stuff that is all public knowledge now, as I don't want to be hunted down with a pack of dogs 😂). And for good reason. Atmos will be a total failure if the public don't get behind it, and the only way the public are going to get behind it is if they can at least try it. And the public are not gonna buy anything just to try it, so headphones and a free service (on tech they already own without realising it was Atmos-capable) was a smart move. 

There are countless engineers out there remixing so much music in Atmos, but Covid slowed everything down. In this time Dolby also licensed Atmos to literally everyone, which makes it even more accessible for people getting on board. That included releasing "in the box" solutions for Atmos mixing with Pro Tools etc., giving everyone in small studios and home setups (composers, sound designers and so on) the ability to learn how to work in Atmos... thus, once again, eventually putting bigger studios out of business. 

It's an exciting time for sure. I dream of having an immersive setup one day!


----------



## wunderflo (Jan 5, 2022)

I know everything has already become way more accessible, but since there are so many industry insiders in this thread, would anyone mind helping me understand why, for example, the Dolby Atmos Production Suite isn't available on Windows, why there's no real Atmos binaural in Nuendo, and why I can't simply listen to Atmos (binaural) through the Tidal app on my Windows PC? Will this change anytime soon? Are there technological hurdles, or is this more about business deals?


----------



## charlieclouser (Jan 5, 2022)

jononotbono said:


> Oh man, yeah, the height channels are amazing. Its incredible how much space there is. True full range. I loved hearing the difference without them. The studio I was just working at has 26 PMC speakers and is a 9.1.6 set up. Its quite a thing. There's also a one of a kind analogue mixing console designed by Paul Wolff that allows audio to be panned into any speaker without software (there's also no pan law with the panners). Where a typical EQ section is in a mixing console, its actually a Panner section. Custom designed frame to hold all the upper plane and surround speakers. Its also been designed to instantly switch between Atmos, Auro3D, Sony360 and everything Surround and below which is why there are so many speakers. They are all in their exact positions for each format. Everything has been calibrated by Dolby.
> 
> Its basically all completely ruined audio for me, in the sense that I no longer lust after speakers. I realise I will never be able to afford some PMC XBDA QB1s ($200k each... They have 5 (2 x Left 1 Centre 2 x Right) although when buying at this end "family" discounts are likely 😂) so I now just pretend what I have is good. Except for my LCDX headphones. They are truly great.


I think I know what studio and console that is....  The "FIX" console, right? Very cool thing.


jononotbono said:


> The best thing I experienced was Sony360. It was literally the best thing I have experienced sonically. I had an amazing time with someone called Gus Skinas and he stuck some tiny microphones in my ears to take a measurement/snapshot of my inner ears and he measured the room and he basically played a mix of a track that was massively exaggerated with panning (you know, Bass Guitar in far rears, drum kit hard right, stuff that the human ear isn't really used to) just so everyone could hear where everything was (if you're not used to listening to music like this its definitely an interesting experience to start with and some people hate it). The test was to listen to the track through this monitoring environment and then Gus handed me some headphones and there is literally no difference between the monitors and the headphones. And when you remove your headphones, because you are in complete disbelief that a pair of shitty headphones could sound identical (not a little bit like, or something is slightly different) but EXACTLY the same, there is no audio coming from the monitors and you suddenly realise the audio is just the headphones.


I want that Sony thing. Is it a zillion dollars?


jononotbono said:


> As for the mix of Rocket Man, I'd have to check with the engineers. At the time it was very much a "Jono, get the fuck over here and listen to this. And this. And then half a day of guinea pigging someone (me) that had never experienced immersive audio like that before and I just forgot to even ask. I'm not surprised that Greg Penny has been booked up for the next 2 years. Covid has literally held back Atmos so badly. I knew Apple were going to launch Atmos on all their devices (which is incredibly clever) about a year before it happened and Apple were also going to release a subscription service "Atmos Music" (songs/classic tracks - not Atmos for Cinema) which has now finally been launched but its free (obviously I am only saying stuff that is all public knowledge now as I don't want to be hunted down with a pack of dogs 😂). And for good reason. Atmos will be a total failure if the public don't get behind it and the only way the public are going to get behind it is if they can at least try it. And the public are not gonna be buying anything just to try it, so Headphones and a free service (on the tech they already own but didn't realise they already had Atmos capability) was a smart move.


Greg started out just remixing Elton's catalog for him, but when the labels heard the results they basically opened the vaults and said, "Start at ABBA and let us know when you get to ZZ Top". So the entire Sony and Capitol back catalogs are in his inbox right now! They've been grinding on it for three years and there's no end in sight. He loves it too; it's just him and his son in two huge rooms in an industrial park in Ojai. No artists knocking over mic stands, no label bigwigs dropping by. Sounds like heaven actually.... Interestingly, their rooms are almost completely untreated. Both are just huge cubes of truss work with speakers and absorptive panels hanging from them, but outside the cubes it's just a huge concrete warehouse. All Dynaudio speakers. 

I heard the PMC Atmos demo at NAMM and I did not like the speakers (too "clanky"), but that was in one of those temporary listening rooms they throw up at the trade shows, so I can't really blame PMC. Their big skinny stacks that Tommy Lee has sound god-like, but at $80k per stereo pair they'd better! The Genelec Atmos demo at NAMM, though, was in free field and sounded beautiful, using all "The Ones" speakers: 8351Bs across the LCR, 8341s for all the immersive channels, and a pair of 7380 subs. Wow, do they sound great. That demo is what's got me looking in the direction of The Ones.


----------



## charlieclouser (Jan 5, 2022)

wunderflo said:


> I know everything already became way more accessible, but since there are so many industry insiders in this thread, would anyone mind to please help me to understand why for example the Dolby Atmos Production Suite isn't available on Windows, why there's no real Atmos binaural in Nuendo or why I can't simply listen to Atmos (binaural) through the Tidal app on my Windows PC? Will this change anytime soon? Are there technological hurdles or is this rather related to some business deals?


I don't have any real information on that, but it would not surprise me a bit if Apple strong-armed their way into some deal with Dolby that prevents (or delays) Microsoft from being able to license the tech.

If I was Tim Cook and sitting on $200 billion in cash reserves, that's what I'd do... at least if I couldn't just buy Dolby outright. The fact that Logic has what appears to be the Dolby Atmos renderer built-in... FOR FREE.... is bonkers.


----------



## gsilbers (Jan 5, 2022)

jononotbono said:


> I was just working at a pretty big studio that specialises in Immersive Audio (Auro3D, Atmos, Sony360 and everything in between) and got to hear original Elton John Rocket Man Stems remixed in Atmos and the whole album Kind of Blue Album (Miles Davis) remixed in Atmos. A very interesting experience for sure. Can't wait to one day have an immersive set up but man, you gotta have a decent sized room. Also, Nuendo 11 and its multi panner is killer


I heard about that. They used the same studio where they recorded those albums, placing speakers to re-amp and capture the room sound, so the Atmos mix feels like you're in the same room with them.


----------



## gsilbers (Jan 5, 2022)

wunderflo said:


> I know everything already became way more accessible, but since there are so many industry insiders in this thread, would anyone mind to please help me to understand why for example the Dolby Atmos Production Suite isn't available on Windows, why there's no real Atmos binaural in Nuendo or why I can't simply listen to Atmos (binaural) through the Tidal app on my Windows PC? Will this change anytime soon? Are there technological hurdles or is this rather related to some business deals?



Well, maybe it has something to do with the Dolby building and the Avid building being a few minutes from each other, both in California where Apple has a strong presence. Most, if not all, studios in Los Angeles only use Mac and Pro Tools for post.

I know the main talk right now is how far behind Pro Tools is on easy Atmos integration after Logic Pro launched their version, so it seems like a big expectation for the next Pro Tools update.

It's still the early days of Atmos and everyone is confused. Apple wants to push it to beat Spotify among the music services, and distributors want to offer a better movie experience in the home-entertainment world.
And Apple doesn't even call it Atmos, pushing it for their headphones instead. I've heard mixed reviews of the quality of these Atmos mixes. And I just learned there's a Sony spec!

I know of studios spending a gazillion updating more rooms for Atmos, and it's a big gamble if the specs change, or you turn out not to need that much hardware, or Netflix or another big player decides to drop Atmos altogether.

I think everyone is trying to figure it out now. There are also back-end licenses all these companies have to pay to use the tech, not only to implement it into their software: the gear vendors need to support the deliverables, like IMF and ProRes, on their AmberFin rigs or Digital Rapids systems, which might be expensive, or too new, or without enough demand except from a select few. Imagine being the developer of those apps and suddenly Dolby changes something big because of Apple or whatever, or the competition comes out with a new and easier way to stream, or who knows. A lot of invisible webs we don't see right now. Which sucks.


----------



## jononotbono (Jan 6, 2022)

charlieclouser said:


> I think I know what studio and console that is....  The "FIX" console, right? Very cool thing.


It is indeed. Here are some photos from various stages of helping to put this studio together. The console arrived in parts and we assembled it in the studio.



charlieclouser said:


> I want that Sony thing. Is it a zillion dollars?


I have no idea about costs on that, man. All I know is it was amazing. 
Also, I forgot to add: although Sony360 is a headphone format, this magical ear-measuring software lets people mix on their immersive speaker setups and have it translate perfectly to cans. I'm just saying that in case anyone reads any of this thinking it's a format you must only use headphones with.


----------



## alcorey (Jan 6, 2022)

Here's a peek at Greg's studio in Ojai


----------



## jononotbono (Jan 6, 2022)

alcorey said:


> Here's a peek at Greg's studio in Ojai


Like it. Looks like he’s jacked into the Matrix


----------



## CyberPunk (Jan 14, 2022)

Nils Neumann said:


> I‘m about 2 years in 5.1 Surround. And just recently upgraded to 7.1 and then swiftly to Atmos, actually only 7.1.2.
> It’s great fun, it’s really a joy to work in and hear your composition and productions in surround/immersive.
> But mainly I do it for the joy of it. Reality is still, that your work is most likely consumed in Stereo or Mono than anything else. If you are smart with setting up your template and you are wise with your placement decisions the downmixes work in any format. Nobody has time to do a separate Stereo, 5.1, 7.1, Atmos mix. At least at my (low) level. For me it just slightly increased the time it takes to finish a cue, I can justify it with the fun I have with it.
> 
> ...


What is really crazy and groundbreaking about Atmos, in the case of Logic Pro for example, is that you can actually mix Atmos on your headphones using the binaural simulation, so there's no need to purchase the two-million-dollar speakers. You can mix your beds and objects with headphones, and when you bounce your ADM BWF file, the wav file keeps the properties of the Atmos mix (11 channels). Pretty dope, and all of this in Logic Pro with the latest (free) update.

So in other words, whether you have AirPods Pro (set to Atmos spatialization) or a multi-channel 7.1.4-equipped theater, your mix from Logic using Dolby Atmos will automatically get spatialized over that network of speakers.
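
For anyone wanting to sanity-check what that bounce produces: an ADM BWF is still an ordinary RIFF/WAVE file, just multichannel with the ADM metadata stored in extra chunks, so you can read the channel count straight out of the `fmt ` chunk. A rough stdlib-only sketch (the function name is my own invention; real Logic bounces use a WAVE_FORMAT_EXTENSIBLE header, but the channel-count field sits in the same place):

```python
import struct

def wav_channel_count(data: bytes) -> int:
    """Return the channel count from a RIFF/WAVE file's 'fmt ' chunk.

    Works for plain PCM and WAVE_FORMAT_EXTENSIBLE headers alike, since
    nChannels is always at byte offset 2 of the fmt payload.
    """
    assert data[0:4] == b"RIFF" and data[8:12] == b"WAVE", "not a WAVE file"
    pos = 12  # first sub-chunk starts right after the RIFF/WAVE header
    while pos + 8 <= len(data):
        chunk_id = data[pos:pos + 4]
        (size,) = struct.unpack("<I", data[pos + 4:pos + 8])
        if chunk_id == b"fmt ":
            (channels,) = struct.unpack("<H", data[pos + 10:pos + 12])
            return channels
        pos += 8 + size + (size & 1)  # RIFF chunks are word-aligned
    raise ValueError("no 'fmt ' chunk found")
```

Point it at the bounced .wav (`wav_channel_count(open("mix.wav", "rb").read())`) and you can confirm how many bed/object channels actually made it into the file.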


----------



## babylonwaves (Jan 15, 2022)

CyberPunk said:


> So in other words wether you have Airpods pro (set with Atmos Spatialization), or multi-channel 7.1.4 equipped theater, your mix from Logic using Dolby Atmos will automatically get spatialized over that network of speakers.


@CyberPunk
you can use any headphones for that. AirPods (Max and Pro) give you dynamic head tracking, though Logic does not support that with its binaural audio output. 
In any case, mixing immersive music is something I'd recommend every Logic 10.7 user experiment with. If nothing else, it's great fun.


----------



## Zanshin (Jan 15, 2022)

The Dolby Atmos Music Panner is Mac only, does that make Atmos on PC a non-starter? Or is there another way I am missing?


----------



## CyberPunk (Jan 15, 2022)

babylonwaves said:


> @CyberPunk
> you can use any headphone for that. AirPods (max and pro) give you dynamic head tracking though - but Logic does not support this with its binaural audio output.
> In any case, mixing immersive music is something I can recommend for every logic 10.7 user to experiment with. if nothing else, it's great fun.


Yeah,


----------



## KEM (Jan 15, 2022)

charlieclouser said:


> Anyway, if my deliveries solve the problem as-is, then... great. But if they don't, what am I gonna do? Sit there on the stage and complain that the explosions are drowning out my precious symphony? That's no way to make friends. And I've had plenty of situations where the music editor has taken snippets or zingers from one cue and laid them across a different cue to solve some problem. Huge relief to have them solve the problem on the stage, on the day, rather than have me scramble to slap a band-aid on the thing. On my last project the music editor took 30 seconds of a pulse+drums bed from one cue, and used good old Serato Pitch-N-Time to speed it up from 110bpm to 122pbm and pitch it down three semitones. Sounded fine, problem solved, note addressed, scratch that one off the list.



Ever since I started doing this whole film scoring thing I've been able to tell when the music editors are putting the cues together, and it seems like they actually do a ton of work. Usually it's something like a slowed-down/sped-up theme, or crossfading between different cues used in other parts of the film. It's an interesting thing to take note of, because we usually give all the credit to the composers, but the music editors seem to be the ones doing most of the picture work, which to me is like 80% of the battle.


----------



## KEM (Jan 15, 2022)

My biggest question, which I still can't find a solid answer to: what interface are you all using for more than stereo? Or are you using a sound card instead? That's been my biggest concern.


----------

