
Composing directly in 5.1/ 7.1 / Atmos... the 2021-2022 Thread!

This, for example, is my VePro brass rig. Not much processing going on here, but every stem has its own Cinematic Rooms or Exponential Audio instance. My goal was to be able to hear a near-finished mix while composing. My main DAW can play back a whole composition of several minutes with lots of instruments (200+) that have expression-mapped articulations. What my computer can't handle is all the mixed stems. Junkie XL gave me that idea, and I took it a little further and offloaded all the processing to the VePro computers. (I think many composers who work in stereo don't have that problem.)
It takes a bunch of computers for some of this processing, and I wouldn't recommend going with just one 3950X as a VePro slave to handle your whole orchestra. The problem is network latency. Don't be fooled by the IP address shown on the upper taskbar in the picture below; that's just the port the onboard LAN card uses for Remote Desktop. VePro has its own dedicated 10G network. And although I connected every computer via 10G, I limited the incoming channels to 1G. But that's another story, and I'm happy to share my trial and error there.

So, as you can see, surround sucks up a lot of CPU. You can try staying ITB by using a lot of DSP cards, but you will limit yourself to a system that will run out of CPU sooner or later, and then you'll have the problem that all your PCIe slots are occupied. Of course you can rebuild the whole setup and later put the UAD cards into a different computer, but give it a little thought first: these UAD systems are very expensive compared to a slave computer with some nice surround plugins. I also tried a system with used RME cards (bought on eBay) in each slave. Boy, that was still expensive, and great, and low latency. Unfortunately I ran out of outputs, hahaaaaa. So that was a shot in my own foot.
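For anyone wondering how much audio a link like that actually carries, here's a rough back-of-the-envelope sketch. The stem counts and formats are my own illustrative numbers, not the rig above; it assumes uncompressed 32-bit float audio at 48 kHz and ignores protocol overhead:

```python
# Back-of-the-envelope network bandwidth for streaming surround stems.
# Assumptions (illustrative, not from the rig above): 48 kHz sample rate,
# 32-bit float samples, raw payload only (no protocol overhead).

def stream_mbps(channels, sample_rate=48_000, bits=32):
    """Raw audio payload in megabits per second."""
    return channels * sample_rate * bits / 1e6

stereo_stems = stream_mbps(16 * 2)    # 16 stereo stems
surround_stems = stream_mbps(16 * 6)  # the same 16 stems in 5.1

print(f"16 stereo stems: {stereo_stems:.0f} Mbit/s")
print(f"16 x 5.1 stems:  {surround_stems:.0f} Mbit/s")
```

On paper even 16 full 5.1 stems fit inside a 1G link; the 10G hardware presumably buys margin against jitter, overhead, and everything else sharing the wire.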
PM me if you got further questions about this and that. I'm happy to help out.
 

Attachments

  • 20220103_144942.jpg
Gustav Mahler was very deliberate when he moved the brass into the balcony in his second symphony, for example.
The problem with these offstage instruments (which were not unusual, e.g. in operas, quite a while before Mahler) is that they usually work much better in the composer's imagination than in actual performances. I remember going to a concert as a student with high expectations of how the offstage instruments would sound, having seen them while studying the score (IIRC it was a Mahler symphony, probably No. 6), and it was quite disappointing, almost cringeworthy. I thought a lot about that (and similar experiences), and to this day I don't see a solution that could make this effect work better in concerts. It always has this taste of "yeah, the intention behind it is clear, but still..."

Nevertheless, IMHO there is one piece of music where the offstage instruments work extremely well: the Dies irae of Berlioz's "Grande messe des morts" (better known as the "Requiem"), where four sections are involved, one from each compass direction; it's symbolic of the Last Judgement.

(the passage happens around 15 seconds after the start of the video)



Berlioz_Req_DI.png


On a more general note (regarding surround concepts for music), one issue is that the human hardware (i.e. the ears) is stereo. The reason 3D setups still have a different impact relies only on the brain's interaction with small movements of the head.
And then, in addition, there is the fact that the brain, if it's really trying to analyze music for harmonic and contrapuntal aspects, has the tendency to narrow the heard image down even to mono (that's basically what "focusing" literally means).
These aspects also sum up my personal view on the topic: I think spatial audio is a great thing where movement in space is an essential part of the music (e.g. in sound installations or virtual reality projects).
And in film it really works well – like in the Berlioz example – when it's about putting the viewer in the middle of the action.
If you see a hurricane in a movie and suddenly you're in its middle, that's a great experience.
When watching two people in a romantic comedy, I'd rather keep some distance... ;)
 
Good to find I'm apparently not alone in this. I ran into many of the same issues. Even now, with 3 slaves and a brand new maxed-out DAW PC on a 10G network, I still need to switch to a buffer size of 1024 with my current Quad.1 template, or Cubase will start to shit its pants. I was wondering how others are handling this. I'm moving from Focusrite to UAD this week, so maybe that'll help somewhat.
 
What’s your preferred setup when you’ve got various possible project formats? Do you work in Atmos and downmix? Or in Ambisonics and then downmix to Atmos?

Any preferred tools? I've been playing with the IEM Ambisonics suite inside MetaPlugin as a spatialization tool, but MetaPlugin's limitations mean 4th-order Ambisonics is the max at the moment.
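As a side note on why the order cap bites: a full-sphere (periphonic) Ambisonics signal of order N needs (N+1)² channels, so channel counts grow quadratically. This is the standard formula, nothing specific to MetaPlugin:

```python
# Channel count for full-sphere (periphonic) Ambisonics of order N.
# Standard formula: (N + 1)^2 spherical-harmonic components.

def ambisonic_channels(order: int) -> int:
    return (order + 1) ** 2

for n in range(1, 8):
    print(f"order {n}: {ambisonic_channels(n)} channels")
# 1st order -> 4 channels; 4th order -> 25; 7th order -> 64
```

So stepping from 4th to, say, 7th order more than doubles the channel count a plugin chain has to pass through, which is presumably where host/wrapper limits start to matter.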
With my template, I can work/deliver in any format. I usually work in Atmos and downmix.
 
Would you mind sharing some thoughts about your downmix process? Especially what do you do in the composing/mixing stage so the mix translates well to all formats in the end?
 
I will answer your question as soon as I can.

I'm in the midst of setting up a second studio at my house, as I can't access my main studio, located in a commercial building, thanks to Covid. It's quite a challenge trying to meet Atmos specs in a less-than-ideal small space.
 

Just a side note, here is a better recording:
 
re @Rctec and the “pulled apart” music.

I ran a bunch of experiments a couple years ago at the ATMOS room above The Wiltern off Wilshire (can’t remember the name of the studio).

Using woodwind instruments, since they are the family most distinctive in their individual colors and require more careful balancing than other families, I reconstructed chords directly out of Rimsky-Korsakov's orchestration book.

I then placed those instruments around the room at equal distances, and the chords maintained their cohesiveness, which was cool. Even shifting from woodwinds "on the stage" to the four corners of the room yielded results that could absolutely provide aesthetic value when used correctly.

Changing the distances is what fumbles the balance of the chords due to volume/dynamics, and when movement is introduced, there’s a very special phenomenon that takes place which is a trade secret I can’t give up yet (though you might know it since you work in ATMOS all the time haha).

Do you have the same experience in your room regarding spaced out instruments that are still the same distance?
 
Thanks a lot @charlieclouser! Are you talking about the objects or the whole music being delivered in stereo stems?
I didn't intend for my stereo stems to be thought of as objects, but because the score was fairly minimalistic, with basically no real orchestra or attempts at simulation, each stereo stem could be dealt with as an object more or less. Most stems would have so few elements that this approach worked. And like I said, it was a super quick-n-dirty mix date.

Usually when I deliver in 5.1 or Quad there's not even any "legitimate" surround reverbs or imaging - it's all special fx, like tracking an instrument four times for quad (just like you'd double rhythm guitars in stereo but with four instead of two), having four delays to scary ambiences to ping-pong around in quad, or making jump scares that start in the front and splash to the back (or the opposite, which is a fun effect). It's things like that which will probably make me continue to mix and deliver stems in at least a Quad configuration, and force the dubbing mixers to figure out where to put the rear pair that corresponds to each front pair.
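To illustrate that quad ping-pong idea, here's a toy offline sketch of the routing, not anyone's actual plugin chain. Pure Python, sample-based; the channel order (FL, FR, RR, RL), tap count, and feedback amount are all placeholder assumptions:

```python
# Toy sketch of a quad "ping-pong" delay: successive echoes rotate around
# the four speakers (FL -> FR -> RR -> RL), each one quieter than the last.
# Offline and sample-based; no plugin API is assumed.

def quad_ping_pong(mono, delay_samples, feedback=0.5, taps=8):
    """Return 4 channel buffers; echo k lands on channel k % 4."""
    length = len(mono) + delay_samples * taps
    out = [[0.0] * length for _ in range(4)]
    gain = 1.0
    for k in range(taps):
        ch = k % 4                        # rotate FL, FR, RR, RL
        offset = delay_samples * (k + 1)  # each echo arrives later
        for i, s in enumerate(mono):
            out[ch][offset + i] += s * gain
        gain *= feedback                  # each echo quieter
    return out

# Feed a one-sample impulse: the echoes appear on rotating channels.
chans = quad_ping_pong([1.0], delay_samples=10, taps=4)
```

The same structure, run front-to-back instead of round-robin, gives the "start in the front and splash to the back" jump-scare effect.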

Since I mix my own scores as I go, instead of handing off to a proper score mixer, this is the most practical solution.... for now.
 
Thank you! What an amazing creative mind you have :) Thanks a lot for sharing!
 
I can chime in coming to all this from a different angle.
I'd say more than 50% of my work is for immersive audio.
But not cinema. Think opening ceremonies / world expo pavilions / lots of museum work / dance theatre / experiential theatre / zoos (!) etc etc.

So for 15+ years I've been working in object based audio paradigms - at the start I didn't even realise that's what I was doing... it was REALLY super janky back then.

And more and more, it's work where the speaker system is different from what's being composed on, due to complexities. I'm just starting on a 1000 m² space with 64 to 72 speaker channels, with a full score plus sound design. In this case it will be written in stereo (well, actually 4.1, but only the synths will make the final score, and often I throw them back to mono/stereo). The orchestra will be recorded with spots, a surround tree, and a bunch of ambience mics. The mix will be handled down to stereo stems that will then be placed around the room. Indeed, in this way the music WRITING is extremely complex, as it's designed to give different emotive reactions in different parts of the room – almost completely different pieces of music, but harmonically the same (and the same tempo map), so spill is like morphing.

The actual composition pipeline is closer to an A-list game using different mixes to up the action, except going much, much further in the breadth of change between mixes.


We are also going to explore wave field synthesis in three of the sections of the room... and that's another whole ball game. (Side note : There's been recent explorations / tests of wave field synthesis using smaller arrays than originally thought necessary - with some incredible results. Think 7 x hung line arrays at the front of a room. It's a version of immersive audio that has to be experienced to be believed. Not 3D but often supplemented with surrounds / height arrays).
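For anyone curious, at its very simplest wave field synthesis boils down to each speaker in the array re-emitting the source signal with the physical delay (and roughly 1/√r attenuation) it would have at that speaker's position, so the array reconstructs the wavefront. A heavily simplified 2D sketch of that idea, nothing like a production WFS renderer:

```python
# Minimal delay-and-gain sketch of the wave field synthesis idea:
# each speaker re-emits the virtual source's signal with the travel delay
# and approximate 1/sqrt(r) attenuation for its own position.
# Simplified 2D model; real WFS driving functions are more involved.
import math

C = 343.0  # speed of sound in air, m/s

def wfs_drive(source_xy, speaker_xs, speaker_y=0.0):
    """Per-speaker (delay_seconds, gain) for a virtual point source."""
    sx, sy = source_xy
    drives = []
    for x in speaker_xs:
        r = math.hypot(x - sx, speaker_y - sy)   # speaker-to-source distance
        delay = r / C
        gain = 1.0 / math.sqrt(max(r, 0.1))      # clamp to avoid blow-up
        drives.append((delay, gain))
    return drives

# 8 speakers spaced 0.5 m apart, virtual source 2 m behind the array
speakers = [i * 0.5 for i in range(8)]
drives = wfs_drive((1.75, -2.0), speakers)
```

Because the delays encode the true geometry rather than level panning, the apparent source position holds up as the listener moves, which is exactly the property discussed further down the thread.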

Where was I? :). There is very little point composing in an Atmos room – aside from the BIG one, which is that it is fun and can be extremely inspiring. They're technically hard to set up and take a tonne of tech to get right. I would encourage folk to get an excellent 5.1 system, or even start in 4.1. HZ mentioned earlier some of the issues with music in 3D/immersive formats – and these issues become bigger when rooms (cinemas) get bigger. Whereas sound design, in my experience, translates extremely well from room to room in Atmos, music has many more ways for things to go wrong. Now, that's not to say it's not incredibly fun – but it's hard. There are just so many tech pitfalls that getting an emotionally satisfying result in your own room is one thing, but having it work in the cinema is another (not to mention the difficulty for the dubbing engineer in handling your Atmos stems!)

I remember the first time I heard Ben Frost perform in an 8.8 "in the round" scenario and was astounded by the musical possibilities. I'm sure he spent a heap of time re-working his songs to work in that environment. And I was equally blown away by Cardiff/Miller's reworking of Tallis's 16th-century work for 40 voices with 40 speakers. (It worked especially well in the Tanks at Tate Modern!)

I very much am on the side of delivering 5.1 stems to a mix theatre. I feel there is much more likelihood of your music sounding right in the end, unless you have an amazing music mix engineer experienced in Atmos... and even then (!!!). Now, it's not stupid to mix in Atmos, export 5.1 for the mix stage, and keep the Atmos masters for the final music release...

Now – imagine a space where things are mixed immersively on the fly (in real time) using audio that is triggered in real time from interaction. We're currently building an immersive audio server that will run outside of Unreal to enable just this – into any of the big 4 immersive renderers (Nexo, Iosono, Spat Revolution and Atmos). Step 2 will be finding someone to build it out for big 3D audio theatre shows – that's beyond the scope of our project, but it'll happen soon enough.

It's a WILD world out there.....
 
In short: the kind of "surround sound" we envisioned as children. :)

-> https://en.wikipedia.org/wiki/Wave_field_synthesis

... that's the decisive sentence: "Contrary to traditional spatialization techniques such as stereo or surround sound, the localization of virtual sources in WFS does not depend on or change with the listener's position."
….if I understand correctly, a person seated far left or far right in a large theater would have the same listening experience? Basically?
 
It depends. On the one hand, there can be sources that seem to stay in their position within the defined soundstage – you can even walk around them. On the other hand, there are sources at "infinite" distance: sound derived from these sources will always seem to follow the listener and reach them from the same side (as a simple comparison: sunbeams seem to arrive in parallel, not angled).

... I had the opportunity to work in a scientific setup created by IOSONO (a spin-off from the Fraunhofer Institute in Ilmenau) about 15 years ago. They developed the first (somewhat) commercial version of WFS, and we did some tests with an early pre-release version of Vienna MIR there. :)
 
I've been mixing and delivering 5.1 stems forever, but I recently delivered my first score that was to be mixed in Atmos. In pre-delivery discussions with the re-recording engineers on the dub stage, they expressed a strong preference for me to simply deliver stereo stems which they would then distribute into the immersive field. Since each stem would now take only 2 channels instead of my usual 6, this let me spread things out across more stems, giving the mixers more flexibility to spray things around the room, and apply immersive reverbs or panning to elements that were more separated than before.

It went well.

In an off-the-record sidebar with one of the mixers, he basically told me that if I had delivered 5.1 or quad stems they probably would have just deleted everything but the front L+R pair from each stem and made do with that. But this was a quick mix for a cable feature, not an AAA movie mix, so this situation might not be typical when projects are bigger and schedules are more generous.

So I'm glad that I delivered more+narrower stems than my usual fewer+wider package.


So do you think more and more TV/films will be delivered in Atmos, and therefore no more 5.1/surround deliveries?

I remember QCing the American Horror Story masters for Fox back in the day and was amazed that the main title music had such cool stuff going on in the back surrounds. With Atmos, how would you do that sort of stuff now if you have to deliver stereo stems?
 