Composing directly in 5.1 / 7.1 / Atmos... the 2021-2022 Thread!

So you think more and more TV/films will be delivered in Atmos, and therefore no more 5.1/surround deliveries?
All of my 2021 Netflix originals so far have requested Atmos as the final delivery. In fact, a project I started for them in 2020 changed from an initial 5.1 delivery to Atmos during 2021.

For Prime Video (Amazon), only 5.1 so far, but they remixed 3-4 tracks of my soundtrack in Atmos to be released as an exclusive for premium Prime Music users, so it's only a matter of time until everything is delivered in Atmos.

For the time being, it's a leap from 5.1 that they understand is an extra, but who knows if it will be compulsory in the next 5 years hehe...
 

Interesting. But is the final Atmos delivery for the dub stage, where the re-recording engineers just add it to their Atmos mix, or is it a standalone delivery for Netflix/Amazon to use for music-only purposes?

Do you also deliver music in normal 5.1/stereo as well as Atmos for Netflix?
 
From what I've heard it's the final dub in Atmos now, but depending on budget the score can be in Atmos as well, or they can accept 5.1.

On each project they have a list of deliverables for the score, which may indeed include everything (even Pro Tools mix sessions, stereo and surround mixes + stems, etc.). It always depends on the platform and the project (whether it's an original, an exclusive, etc.).
 
In my opinion, hats off to Apple for adding immersive sound to Logic. It will be interesting to see how this gets used (more likely abused).

I think it will be difficult to convince many low- to mid-budget productions to go Atmos for score. Atmos for the final audio, yes, but not for score, much less licensed music.

I still think quad is a good compromise for score, for the time being…

I'm a little puzzled by Logic's Atmos implementation, since it's basically like $300 (just buying Logic Pro)... while I see every studio out there with Dolby, Pro Tools and some extremely expensive setups using Dante, MTRX, etc. A friend of mine mixes movies and shows for Netflix etc., and hearing how expensive the equipment was almost gave me a heart attack.

Like... couldn't you just do your score or mix your film as normal, then import it into Logic, set up a few objects etc., export it, and that's your final Atmos delivery... on the cheap?

Logic's 5.1 surround implementation is pretty bad, but for this sort of "trick", using it as an export path for Atmos without having to deal with stem bounces (which is one of the big drawbacks of Logic's surround issues)... just to export a main Atmos mix... maybe using some stereo stems etc. might work?

That's my thought at least... and I still wonder about the discrepancy between the Pro Tools guys, aka the real engineers, and this sort of "toy" that is Logic with Atmos. Like... is it the same spec of output that would work across Netflix specs vs. Amazon etc.?
Does it make sense? Maybe I'm not seeing something (or not seeing a lot of things lol).
Do your mix like always, export stems, import into Logic, do some of the "Atmos" stuff like deciding what goes to which speakers/objects and what moves around in certain spots, export, done. All you'd need is enough speakers and an interface with enough outs (maybe ADAT-expandable etc.).
All studios still have to deliver 5.1 and stereo no matter what, now alongside an Atmos file. And Atmos is not really good for fold-downs, from what I understand.
 

I guess I'll have to watch a tutorial about it.



I just saw the Netflix specs, and the only thing throwing me off so far is the delivery of an IMF... but I think they mean IAB.
And the Dolby renderer does have fold-down options.
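
For anyone wondering what a fold-down actually does, here's a rough sketch of a standard Lo/Ro 5.1-to-stereo fold-down using the common ITU-style -3 dB coefficients (an illustration only; real encoders carry downmix metadata that can change these values, which is part of why Atmos fold-down behaviour is harder to predict):

```python
import numpy as np

# Illustrative Lo/Ro 5.1-to-stereo fold-down with the common
# ITU-style -3 dB (0.707) center and surround coefficients.
def fold_down_51(l, r, c, lfe, ls, rs, center_mix=0.707, surround_mix=0.707):
    lo = l + center_mix * c + surround_mix * ls
    ro = r + center_mix * c + surround_mix * rs
    return np.stack([lo, ro])  # LFE is typically discarded in a Lo/Ro downmix
```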

Hopefully Pro Tools can come up with an easier and more cost-effective way to create Atmos mixes.
 
Would you have been willing to tolerate such meddling?
Of course. I am very much non-precious when it comes to what happens to my deliveries downstream. It's all just raw material for the producers, director, and mixer to mold and shape as they need to get the results they want. Being precious about whether your rear channels are loud enough on the stage is about the same as being precious about whether the dialog and fx are drowning out the score. One or the other is bound to happen pretty often.

I usually tell the mixers / producers / director some variation of this: "I don't care if you discard three of the seven stems, pitch-shift one of them down an octave, and play the other two in reverse. Good luck, have fun, let me know if you need anything more." This usually results in a sigh of relief from mixers who have dealt with too many too-precious composers.

Knowing that liberties are likely to be taken, I don't put anything in the rears that I can't live without, so if they are too quiet or just not there, no harm no foul. And I give detailed notes like, "This cue has the jump scares swooping from the rears to the fronts, and if you just play the fronts then it may sound like they're too short and starting too late, so if you're not using the rears then maybe fold the rears into the fronts if it's sounding weird." - or - "This cue has an extra-unsettling array of those atonal string effects quadruple-tracked in surround, so if there's one spot in the whole score where the rears should be good and loud, this is it."

Another thing that I do which is maybe not typical (or even advisable!) is NOT to allocate elements into the stems by any hard and fast rules like, brass low, brass high, etc. This is partly because I'm almost never working with a conventional layout of orchestral elements - it's more likely to be stuff like: StemA = two sub-bass booms and low hits on jump scares, StemB = just a synth pulse, StemC = evil low synth drones, StemD = distorted midrange textures, StemE = guitar feedback echoes, StemF = icy high strings sustains and high string shrieks on jump scares, StemG = that weird dissonant choir at the start and the dissonant woodwinds in the middle.

So I try to make each stem as close to a complete "piece of music" as possible, while still trying to have as little overlapping of elements within each stem as I can, grouping things into sonic categories. So, thin-high-sustainy stuff doesn't go on the same stem as sub-bass booms, but if there's room to split the layers of that thin-high-sustainy stuff across 2 or 3 stems that are otherwise empty at that spot, then I will. If all the high-long strings are on the same stem then they can't do anything with it except turn it up or down. But if the atonal layer is on StemE and the sul-pont sustains are on StemF and the high synth that doubles it is on StemG then they can sculpt things more easily if needed.

This approach might not work for folks who are doing more conventional orchestral layouts, but for my weird smorgasbord of hybrid elements it works pretty well. Because the director usually isn't going to say, "Let's lower the sul-pont strings and raise the behind-the-bridge tremolos", they're going to say, "Can we turn up all that high chaotic screeching? I love that stuff." So for me, obeying conventional "rules" about what sounds go on which stems would be more limiting, both for me and for them. If I did it that way, most of the stems would be empty a lot of the time, with too many elements crammed onto the few stems that they were "supposed to" be on.

Anyway, if my deliveries solve the problem as-is, then... great. But if they don't, what am I gonna do? Sit there on the stage and complain that the explosions are drowning out my precious symphony? That's no way to make friends. And I've had plenty of situations where the music editor has taken snippets or zingers from one cue and laid them across a different cue to solve some problem. Huge relief to have them solve the problem on the stage, on the day, rather than have me scramble to slap a band-aid on the thing. On my last project the music editor took 30 seconds of a pulse+drums bed from one cue, and used good old Serato Pitch 'n Time to speed it up from 110 bpm to 122 bpm and pitch it down three semitones. Sounded fine, problem solved, note addressed, scratch that one off the list.

Of course, on some TV series I've done I was just delivering stereo mixes, so if it didn't work as-is then the only solution was to go looking for an ALT of the cue. But when I deliver stems I let 'em go wild on the stage if they want. Fine by me.
 
So you think more and more TV/films will be delivered in Atmos, and therefore no more 5.1/surround deliveries?

I remember QCing the American Horror Story masters for Fox back in the day and was amazed that the main title music had such cool stuff going on in the back surrounds. With Atmos, how would you do that sort of stuff now if you have to deliver stereo stems?
The American Horror Story theme was a fun one for me. It's nice to do something super-minimal like that, where you can actually hear in between the elements and appreciate the weird stuff going on in the rears.

But, I haven't solved the objects-vs-stems issue yet. The difference between delivering "objects" as stereo pairs vs delivering 5.1 stems that need to sit as-is involves some tricky decisions. It's really about what the mixers prefer given the schedule they'll be working against. That's why this last quickie score was stereo stems only. I knew they'd be absolutely racing through the mix, and the mixers told me that they'd be able to do more in the immersive field if my stems were simple stereo pairs, and the minimal aspect of the score meant that this approach could work. Since each stem usually only contained a few elements that could function as a unit, they could throw a whole stereo stem to the tops / rears without accidentally throwing some other elements with it.

But I prefer to, and will continue to, deliver as 5.1 stems whenever practical (practical for the mixers, not practical for me). After my next big upgrade I may widen each of my stems to 7.1 or wider, but if I do I'd want to figure out a way to simultaneously print stereo objects. It would be a dream to be able to, in a single bounce pass, output eight 7.1 stems AND 32 stereo objects to print as 64 channels of surround stems and 64 channels of stereo objects. That way I'd be covered no matter how the mixers wanted to deal with things.
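
To make the channel math behind that wish concrete, here's the bounce layout it implies (a sketch; the StemA/Obj01 naming is just illustrative):

```python
# Illustrative channel map for the bounce pass described above:
# eight 7.1 stems (8 channels each) plus 32 stereo objects (2 channels each).
layout = {f"Stem{chr(65 + i)}": 8 for i in range(8)}      # StemA..StemH as 7.1
layout.update({f"Obj{i + 1:02d}": 2 for i in range(32)})  # Obj01..Obj32 as stereo pairs

surround_ch = sum(v for k, v in layout.items() if k.startswith("Stem"))  # 64
object_ch = sum(v for k, v in layout.items() if k.startswith("Obj"))     # 64
print(surround_ch, object_ch, surround_ch + object_ch)  # 64 64 128 outputs
```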

My Logic bussing + stem sub-masters array is going to look insane!
 
Another thing that I do which is maybe not typical (or even advisable!) is NOT to allocate elements into the stems by any hard and fast rules like, brass low, brass high, etc. This is partly because I'm almost never working with a conventional layout of orchestral elements - it's more likely to be stuff like: StemA = two sub-bass booms and low hits on jump scares, StemB = just a synth pulse, StemC = evil low synth drones, StemD = distorted midrange textures, StemE = guitar feedback echoes, StemF = icy high strings sustains and high string shrieks on jump scares, StemG = that weird dissonant choir at the start and the dissonant woodwinds in the middle.
With this unconventional stem strategy, do you have to plan ahead to fit the whole film’s sonic palette into six stems? Or do you switch up how you use the stems for each cue or reel?
 
This is amazing, I guess the speaker lobby will push for this eventually 😂

But seriously, interesting!
Yeah - and universities are doing more and more research into it.

I've been involved on the periphery, consulting for a uni that is putting in a 24.4 system that can be used for anything from ambisonic research to WFS (wave field synthesis) to n-based panning (what I've been doing mostly). There's increasing academic work being done solving the many problems of WFS (which started with the enormous number of speakers required to create the immersive environment) - it's like the research from 15-20 years ago has been picked up again in the last 5 years after a few not-insignificant breakthroughs were made. The fact that concert-sized systems with only 5 to 7 line arrays have been deployed, tested, and used is amazing in my book. The big guys are ALL looking into it. Expect the next Radiohead-type show to be doing this in 2 years' time.
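
For anyone unfamiliar with the panning side of this, here is a minimal sketch of pairwise constant-power panning on a horizontal speaker ring, one simple building block of the kind of n-based panning mentioned above (a toy version; real systems such as VBAP generalise the idea to arbitrary 3D speaker layouts):

```python
import math

# Constant-power panning between the two adjacent speakers of an
# n-speaker horizontal ring. Returns one gain per speaker.
def pan_ring(azimuth_deg: float, n_speakers: int) -> list[float]:
    spacing = 360 / n_speakers
    pos = (azimuth_deg % 360) / spacing
    lo = int(pos) % n_speakers        # nearer speaker of the active pair
    hi = (lo + 1) % n_speakers        # its neighbour
    frac = pos - int(pos)             # position between the pair, 0..1
    gains = [0.0] * n_speakers
    gains[lo] = math.cos(frac * math.pi / 2)  # sin/cos crossfade keeps
    gains[hi] = math.sin(frac * math.pi / 2)  # summed power constant
    return gains

print(pan_ring(22.5, 8))  # source halfway between speakers 0 and 1
```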

While the 384-speaker system in Germany (is that the one you played with, @Dietz?) is awesome, it is not the final solution. There need to be ways of generating the wave field without resorting to speakers in very close proximity to each other. Recent developments are encouraging in this regard, which has again piqued my interest; it may well work for smaller-scale prime-time installs, with caveats.

Iosono is unfortunately no longer being actively developed by Barco (Yamaha), but another spatial audio company has grabbed the IP and is still doing interesting things with it. It's a little outdated in some of its approaches (which happens when so many R&D dollars are invested before many of the pitfalls of the system are ironed out). They were true pioneers, but perhaps ahead of their time. The new system by Nexo (again, Yamaha) looks incredibly interesting, although they are tying it in with virtual acoustics in a single integrated system, which is less good for the types of things I use it for. I had a chat with the developers a few weeks ago, and they didn't make any noises regarding implementations for WFS.

More and more, the Spat Revolution approach seems the best for museums / installations / immersive theatre etc., as it is the most open to different approaches / ways of doing things. However, it requires more technical knowledge to use, and is not as stable as others. (Spat for Max/MSP is an awesome way to explore, but Spat Revolution is a no-brainer for actual installs when it is stable - and there have been times when that hasn't been true!)

Right now, for installs, there is a BIG problem with driving any of these technologies in real time. It is relatively simple (in the scheme of things) to create mixes that play back into the system. However, anything that changes with input (which is increasingly important for museums, theatre, etc.) is bloody complex. Unreal is an awesome way forward, but has no "knowledge" of object-based audio outputs that can be rendered externally, and its internal rendering engine, while incredible in some instances, has severe limitations in others. There is work afoot to turn it into an audio media server in its own right, but our current approach is to create an external piece of software to do the heavy lifting, allowing anything that can talk a bit of OSC, or even XML over IP, to drive an immersive environment.
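
As a toy illustration of that last point, "talking a bit of OSC" to an external renderer can be as simple as the sketch below (using the python-osc library; the IP, port, and address pattern are hypothetical, since every renderer, Spat Revolution included, defines its own address space):

```python
# Toy example of driving an external spatial renderer over OSC.
# Assumes the renderer listens on UDP port 9000 and accepts an
# "/object/<id>/xyz" address pattern -- both are made up for illustration.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("192.168.1.50", 9000)           # renderer's IP / port
client.send_message("/object/1/xyz", [0.5, -0.25, 1.0])  # move object 1
```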
 
With this unconventional stem strategy, do you have to plan ahead to fit the whole film’s sonic palette into six stems? Or do you switch up how you use the stems for each cue or reel?
Well, I do switch things up a bit from cue to cue. Like, if cue A has nothing but drums, a pulse, and one stem's worth of high stuff, then I'll spread everything around so that if they need to thin out the drums they can. Then, if cue B has no drums at all but has eight stems' worth of dissonant high layers, they'll get spread out a bit as well.

But I generally organize things in a logical fashion, starting with drums / low things and moving to strings / high things, so it's not just totally random where things will appear. And there's usually a piano on StemD, so that serves as a center point of sorts.

With the ever-changing nature of elements in my cues, the only way to obey the rules more stringently would be to use a LOT more stems, many of which would be empty a lot of the time. Which I may do on my next re-configure.
 
Good to find I'm not alone in this, apparently; I ran into many of the same issues. Even now, with 3 slaves and a brand-new maxed-out DAW PC on a 10Gb network, I still need to switch to a buffer size of 1024 with my current Quad.1 template, or Cubase will start to shit its pants. I was wondering how others are handling this. I'm moving from Focusrite to UAD this week, so maybe that'll help somewhat.
I hope so, for your sake. That's why I only use RME PCIe cards (although I have some MOTUs lying around here). Did you try working with 512 buffers and putting x4 buffers on the VE Pros?
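
For context, here are the back-of-envelope latency numbers behind those settings, assuming a 48 kHz session (and assuming, as I understand it, that VE Pro's buffer multiplier adds that many extra host buffers of latency on the slave returns):

```python
# Back-of-envelope buffer latency, assuming a 48 kHz session.
SR = 48_000  # sample rate in Hz

def buffer_ms(samples: int) -> float:
    return samples / SR * 1000

print(buffer_ms(1024))     # ~21.3 ms for a 1024-sample host buffer
print(buffer_ms(512))      # ~10.7 ms for a 512-sample host buffer
# VE Pro's "x4 buffers" trades roughly four extra host buffers of
# latency for stability: 4 * 10.7 ms ~= 42.7 ms on the slave returns.
print(4 * buffer_ms(512))
```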
 
I was just working at a pretty big studio that specialises in immersive audio (Auro-3D, Atmos, Sony 360 and everything in between) and got to hear the original Elton John "Rocket Man" stems remixed in Atmos, and the whole of Miles Davis's Kind of Blue remixed in Atmos. A very interesting experience for sure. Can't wait to one day have an immersive setup, but man, you gotta have a decent-sized room. Also, Nuendo 11 and its multi-panner is killer ;)
 
By the way, Gerhard, do you know how I can monitor Atmos music (from Tidal, for instance) for reference? I feel like it's almost impossible to find those files or a proper way to monitor them (especially when you only have the binaural renderer). Thanks!
Depends.

If all you want to do is monitor binaural Atmos, then it's easy: buy an Apple TV and a receiver with a headphone out (just make sure it supports Atmos on the headphone out). You could also just use an Apple device and AirPods. Apple Music uses spatial audio, which sounds pretty horrendous, but I think Tidal might now let you stream the proper binaural.

If you want to play Atmos content out through your speakers, then you need an Apple TV plus a decoder with whatever I/O you need to interface with your rig. If you can only do analog ins, it's easy: you just get any cheap receiver with preamp outs, or go up to something nicer like the Emotiva decoders (the new unit they're releasing is only $1000). If you have digital I/O and can bypass the extra stages of conversion, there are units like the ones from Arvus and the JBL Synthesis; those are more focused towards Dante. Right now I'm using a cheap receiver going into some Presonus converters while I wait for my Arvus unit to ship, at which point it'll be going AES into my system.
 
I hope so, for your sake. That's why I only use RME PCIe cards (although I have some MOTUs lying around here). Did you try working with 512 buffers and putting x4 buffers on the VE Pros?
Cheers, good point. Let me check and get back to you.
 
I was just working at a pretty big studio that specialises in immersive audio (Auro-3D, Atmos, Sony 360 and everything in between) and got to hear the original Elton John "Rocket Man" stems remixed in Atmos, and the whole of Miles Davis's Kind of Blue remixed in Atmos. A very interesting experience for sure. Can't wait to one day have an immersive setup, but man, you gotta have a decent-sized room. Also, Nuendo 11 and its multi-panner is killer ;)
Funny that you mention Rocket Man. I just finished listening to Andrew Scheps and friends talking about mixing in Atmos. At one point Steve Genewick (half-jokingly) says the best thing you can do to get your listener/artist prepared for Atmos is to have them listen to Rocket Man first, before you play them their own work (unmixed from stereo), or they'll think it's going to sound shitty.

You just got Rocket-Manned, mate. :rofl:
 
And now you can start to understand why he mentioned Rocket Man. It was a demo mix made to privately showcase Atmos music (not Atmos cinema) a long time ago. The devil is in the details. Above I said "an interesting experience for sure". Hardly me getting "Rocket-Manned". I could tell you what Elton thought of the mix, but that would be telling. 😂

Oh, and I listened to it on PMC monitors costing over 7 figures, just in case there was any doubt that the quality of the listening environment jeopardised my first experience of said mix. ;)
 
😂 Nice. :thumbsup:
 
I was just working at a pretty big studio that specialises in immersive audio (Auro-3D, Atmos, Sony 360 and everything in between) and got to hear the original Elton John "Rocket Man" stems remixed in Atmos, and the whole of Miles Davis's Kind of Blue remixed in Atmos. A very interesting experience for sure. Can't wait to one day have an immersive setup, but man, you gotta have a decent-sized room. Also, Nuendo 11 and its multi-panner is killer ;)
I bet the Rocket Man mix you heard is the one done by Greg Penny. He's an old friend who produced a bunch of great records by k.d. lang, Elton, and many others, and now he's booked two years out just remixing classic albums for immersive at his setup in Ojai. I paid him a visit and he played me his immersive mix of Glen Campbell's "Wichita Lineman", and it'll bring tears to your eyes. Amazing.

He's still using an ancient Dynaudio AIR setup (just like me!) with a LOT of AIR 6s hanging from trusses, but he's got so much work backed up that he's got his son Felix set up in another room with all Dynaudio Core speakers. Both rigs sound amazing, and it absolutely sold me on using height channels!
 
Oh man, yeah, the height channels are amazing. It's incredible how much space there is. True full range. I loved hearing the difference with and without them. The studio I was just working at has 26 PMC speakers in a 9.1.6 setup. It's quite a thing. There's also a one-of-a-kind analogue mixing console designed by Paul Wolff that allows audio to be panned into any speaker without software (there's also no pan law on the panners); where a typical console has its EQ section, this one has a panner section. There's a custom-designed frame to hold all the upper-plane and surround speakers. It's also been designed to instantly switch between Atmos, Auro-3D, Sony 360 and everything surround and below, which is why there are so many speakers: they are all in their exact positions for each format. Everything has been calibrated by Dolby.

It's basically all completely ruined audio for me, in the sense that I no longer lust after speakers. I realise I will never be able to afford some PMC XBDA QB1s ($200k each... they have 5: 2 x left, 1 x centre, 2 x right, although when buying at this end "family" discounts are likely 😂), so I now just pretend what I have is good. Except for my LCD-X headphones. They are truly great.

The best thing I experienced was Sony 360. It was literally the best thing I have experienced sonically. I had an amazing time with someone called Gus Skinas: he stuck some tiny microphones in my ears to take a measurement/snapshot of my inner ears, measured the room, and then played a mix of a track with massively exaggerated panning (you know, bass guitar in the far rears, drum kit hard right, stuff the human ear isn't really used to) just so everyone could hear where everything was. (If you're not used to listening to music like this, it's definitely an interesting experience to start with, and some people hate it.) The test was to listen to the track through this monitoring environment, and then Gus handed me some headphones: there was literally no difference between the monitors and the headphones. You take the headphones off in complete disbelief that a pair of shitty headphones could sound identical (not a little bit alike, not slightly different, but EXACTLY the same), and there's no audio coming from the monitors; you suddenly realise the audio was just the headphones the whole time.
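
For anyone curious what those in-ear measurements feed into, here's a minimal sketch of the general principle of personalised binaural rendering: convolving a mono source with a measured head-related impulse response (HRIR) pair. This is just an illustration; Sony's actual 360 Reality Audio pipeline is proprietary, and the names here are hypothetical.

```python
import numpy as np
from scipy.signal import fftconvolve

# Minimal personalised-binaural sketch: hrir_l / hrir_r stand in for
# the kind of in-ear impulse-response measurement described above.
def render_binaural(mono: np.ndarray, hrir_l: np.ndarray, hrir_r: np.ndarray) -> np.ndarray:
    left = fftconvolve(mono, hrir_l)    # what the left eardrum would hear
    right = fftconvolve(mono, hrir_r)   # what the right eardrum would hear
    return np.stack([left, right], axis=-1)  # shape: (samples, 2) stereo out
```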

Here's a public screenshot from Paul Wolff, who actually did this test a year after I did (he wasn't at the studio I was working at when I experienced this, although he was when I helped him put his console together), with his thoughts on it. Honestly, it's so exciting; people are gonna lose their fucking minds. Well, people that work on headphones, anyway.

[Screenshot: The Wolff.png]

I share his excitement because it was incredible. Anyway, I digress. It feels strange talking about any of this stuff, as it's all been so hush-hush since I took a job at that studio. Lonely old world. 😂 I can only imagine how lonely HZ is. He's probably got this listening environment in his toilet, let alone what's in his lab. 😂

As for the mix of Rocket Man, I'd have to check with the engineers. At the time it was very much a "Jono, get the fuck over here and listen to this. And this." And then half a day of guinea-pigging someone (me) who had never experienced immersive audio like that before, and I just forgot to even ask. I'm not surprised that Greg Penny is booked up for the next 2 years. Covid has held Atmos back so badly. I knew Apple were going to launch Atmos on all their devices (which is incredibly clever) about a year before it happened, and that Apple were also going to release a subscription service, "Atmos Music" (songs/classic tracks, not Atmos for cinema), which has now finally launched, but it's free (obviously I am only saying stuff that is public knowledge now, as I don't want to be hunted down by a pack of dogs 😂). And for good reason: Atmos will be a total failure if the public don't get behind it, and the only way the public are going to get behind it is if they can at least try it. And the public are not gonna buy anything just to try it, so headphones and a free service (on tech they already owned without realising it was Atmos-capable) was a smart move.

There are countless engineers out there remixing so much music in Atmos, but Covid slowed everything down. In this time, Dolby also licensed Atmos to literally everyone, which makes it even more accessible for people getting on board with it. That included releasing "in the box" solutions for Atmos mixing with Pro Tools etc., giving everyone in small studios and home setups (composers, sound designers, etc.) the ability to learn how to work in Atmos... thus, once again, eventually putting bigger studios out of business.

It's an exciting time for sure. I dream of having an immersive setup one day!
 
I know everything has already become way more accessible, but since there are so many industry insiders in this thread, would anyone mind helping me understand why, for example, the Dolby Atmos Production Suite isn't available on Windows, why there's no real Atmos binaural in Nuendo, or why I can't simply listen to Atmos (binaural) through the Tidal app on my Windows PC? Will this change anytime soon? Are there technological hurdles, or is this rather about business deals?
 