rohandelivera
I haven’t seen much chat here on using SWAM or Sample Modeling instruments in sections, and thought I’d pipe in with my process for anyone who might find it interesting.
Over the last couple of months I’ve evolved my orchestral template away from sampled instruments to modeled instruments by Sample Modeling and Audio Modeling. So for example, instead of a sectional French horn library I have 4 instances of the Sample Modeling Horn, and 10 instances of the SWAM Cello make up my cello section.
Modeled instruments in sections tend to be problematic. Phasing is the biggie. Multiple modeled instruments playing the same musical material tend to produce pretty uniform spectra, particularly if you give them non-complex controller input. If you track two or more instances of any Sample Modeling or SWAM instrument in unison with just an expression fader, a modulation controller, and a keyboard as your MIDI input, you WILL phase. The spectra of both instruments will be too similar.
So how come this doesn’t happen in real life? Because identical twins playing the same tune on identical instruments aren’t really playing the same thing. The sound of two real instruments in ensemble is defined by many more control vectors, on top of which you have the physics of the room they’re in. We have far fewer vectors to play with, and we’ve got to milk those for everything they’re worth.
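To see why identical input phases, here’s a toy sketch (not part of my template, just an illustration): model two “players” as pure sine waves. Identical and half a cycle out of phase, they cancel forever; detune one by a few cents and the permanent null turns into slow, natural beating.

```python
import math

SR = 48000  # sample rate in Hz

def mix_two(freq_a, freq_b, phase_b, seconds=1.0):
    """Sum two sine-wave 'players' and return the lowest and highest
    short-term RMS level of the mix, measured in 10 ms windows."""
    n = int(SR * seconds)
    win = SR // 100
    lo, hi, acc = float("inf"), float("-inf"), 0.0
    for i in range(n):
        t = i / SR
        s = (math.sin(2 * math.pi * freq_a * t)
             + math.sin(2 * math.pi * freq_b * t + phase_b))
        acc += s * s
        if (i + 1) % win == 0:
            rms = math.sqrt(acc / win)
            lo, hi, acc = min(lo, rms), max(hi, rms), 0.0
    return lo, hi

# Two identical players half a cycle apart: near-total cancellation, forever.
unison = mix_two(440.0, 440.0, math.pi)

# Detune player B by ~8 cents: the null becomes slow beating instead.
beat = mix_two(440.0, 440.0 * 2 ** (8 / 1200), math.pi)
```

Real instruments are far richer than sines, of course, but the principle holds: the more similar the spectra and the control input, the deeper and more static the cancellation.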
Here’s an example of a sectional string unison played on SWAM strings with no phasing. This is an excerpt from my mockup of John Williams’s Escapades.
[embedded video]
I’ve got 16 SWAM first violins, 14 second violins, 10 violas, 10 cellos, and 7 basses playing the same unison note (in octaves). Please listen up to the crescendo and come back.
For starters, none of my violins, violas, cellos, and basses are really the same instrument. Audio Modeling has kindly included several modeled body types named after their favorite vacation spots, and each of these bodies produces its own unique resonances.
Each player in my section is assigned one of the Cremona, Firenze, or Venezia bodies. Also, no one is bang in tune except for the concertmaster. Everyone else is a couple of cents flat or sharp.
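A few cents of offset is easy to bake in as a per-track pitch-bend offset. Here’s a rough sketch; the ±2-semitone bend range is an assumption, so check your own instrument’s setting:

```python
def cents_to_pitchbend(cents, bend_range=200.0):
    """Map a detune in cents to a 14-bit MIDI pitch-bend value, assuming
    the instrument's bend range is set to +-2 semitones (200 cents)."""
    value = 8192 + round(cents / bend_range * 8192)  # 8192 = center
    return max(0, min(16383, value))

# Concertmaster dead on; everyone else a few cents off.
tunings = {"desk_1": 0, "desk_2": +3, "desk_3": -4, "desk_4": +6}
bends = {desk: cents_to_pitchbend(c) for desk, c in tunings.items()}
```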
Next, none of these instruments are playing exactly the same thing (even if it’s the same note). Each instrument is getting different controller input altering bow pressure, bow position, bow speed, pitch (bend), vibrato speed, and vibrato depth, to name a few. Yes, they’re all tracked separately. This sounds like a lot of work, you say, but if you’ve got each part in your head or are reading from a score it’s not that big of a deal. Really!
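If you’d rather rough out per-player controller variation programmatically than by hand, one way to sketch it is to derive each player’s curve from a master curve with a small random lag and per-point jitter. This is an illustration, not how I track my parts:

```python
import random

def humanize_curve(master, seed, max_lag_steps=3, jitter=4):
    """Derive one player's CC curve from a master curve: a fixed random
    lag (in curve steps) plus small per-point jitter, clamped to 0-127."""
    rng = random.Random(seed)
    lag = rng.randint(0, max_lag_steps)  # this player trails the beat a bit
    out = []
    for i in range(len(master)):
        val = master[max(0, i - lag)] + rng.randint(-jitter, jitter)
        out.append(max(0, min(127, val)))
    return out

master = [int(20 + i * 1.5) for i in range(60)]  # a simple crescendo ramp
section = {f"violin_{n}": humanize_curve(master, seed=n) for n in range(1, 5)}
```

Each seed gives a distinct curve, so no two players ride the crescendo identically.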
An orchestral section is several players, each with different backgrounds and personalities but with a common goal. The conductor is their focal point. Her role is to homogenize the players: the more similarly they perform, the tighter (and better) the ensemble will sound. This is where the real and virtual worlds differ.
We need to strive for the exact opposite. Given that we have fewer parameters to control, it’s easy for us to sound the same when tracking different instances of the same instrument. We need to work at making the players in our virtual modeled ensemble sound as different as possible. Our safety net is that the default state is a strong enough starting point that messing things up isn’t going to change things radically.
Control - and lots of it.
Modeled engines need a lot of input. Most composers today are quite adept at that two-finger expression + modulation fader ride. That’s about enough for samples, but with modeled instruments you need to get a lot more MIDI data into your instrument. You’re definitely going to need something more than a keyboard and two faders. You’ve probably noticed a lot of SWAM marketing material in conjunction with GeoShreds, Seaboards, Linnstruments, and other alt-controllers. There’s a reason for that.
You’re going to need to be able to generate 5 to 10 MIDI CCs simultaneously.
My go-to controllers are a €120 TEControl breath controller and a $70 Leap Motion controller. Both of these let me pump about 10 CCs’ worth of MIDI data into a SWAM / Sam Mod instance in real time.
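The idea behind these controllers is simple: every axis of movement becomes another CC lane. A minimal sketch of that kind of mapping (the CC assignments here are illustrative, not SWAM defaults):

```python
def hand_to_ccs(x, y, z, pinch):
    """Map normalized hand-tracking axes (0.0-1.0) to several CC values
    at once. The CC assignments are illustrative, not SWAM defaults."""
    def to_cc(v):
        return max(0, min(127, int(v * 127)))
    return {
        11: to_cc(y),      # hand height     -> expression
        2:  to_cc(z),      # hand depth      -> bow pressure
        34: to_cc(x),      # hand left/right -> bow position
        21: to_cc(pinch),  # pinch amount    -> vibrato depth
    }

# One hand pose becomes four simultaneous CC messages.
msgs = hand_to_ccs(x=0.5, y=0.8, z=0.3, pinch=0.0)
```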
Here’s a clip from the John Williams mockup again, in which I’ve included the Leap visualizer following my hand, a MIDI monitor, and the controller data stream in Logic.
[embedded video]
This is for the violin part in the background. You can see the MIDI monitor flooding with data coming off my hand.
When performing a unison it’s critical that each virtual instrumentalist varies its tone. You need to know what all those knobs do, and how they contribute in a musical context to a realistic musical performance. A Sam Mod / SWAM instrument is not going to hand this to you on a plate like a sampled instrument does. Most importantly, each instance of a modeled instrument in an ensemble needs to be unique.
It’s actually pretty easy to sound different each time. The differences in MIDI CC data coming off a Leap controller, or the different pressures off a breath controller, make it hard to sound uniform from track to track.
Like in a real orchestra, I also vary my performance depending on the chair position in my ensemble. So, for instance, an eighth-stand violin would play a little more out of tune and not quite as in time as a first-stand player.
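You can formalize that chair-position scaling if you like. The numbers below are guesses for illustration, not measurements of a real section:

```python
import random

def stand_humanization(stand, rng):
    """Per-stand randomization ranges: back desks drift more in time (ms)
    and tuning (cents). The scaling numbers are guesses, not measurements."""
    max_late = 5 + 6 * (stand - 1)      # 1st stand +-5 ms, 8th stand +-47 ms
    max_detune = 2 + 1.5 * (stand - 1)  # 1st stand +-2 cents
    return (rng.uniform(-max_late, max_late),
            rng.uniform(-max_detune, max_detune))

rng = random.Random(1)
offsets = {stand: stand_humanization(stand, rng) for stand in range(1, 9)}
```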
The other advantage of multi-tracking a section is that you immediately avoid one of the key failings of most sampled instrumental libraries: homogeneous gating.
Every note in a sampled section starts at the same time. Play a run or a fast non-staccato passage on any sampled instrumental section and this becomes pretty evident.
Orchestras tend to have a herd mentality: the second stand reacts to the first stand, the lower stands react to the higher stands, the whole section follows the section principal and the concertmaster, and everyone follows the conductor. The end result is that no one is 100% in time, and the musical line smears slightly. That smearing is a major characteristic of an ensemble sound.
Here’s an example:
The big string crescendo that rises through the brass and into the woodwinds before the vibraphone solo.
And for comparison, here’s the same passage performed by the man himself.
[embedded video]
Please wait through the ad break.
But sometimes you just don’t have the time
Multi-tracking a complex and fast-moving line is sometimes not ideal. Sometimes you’ve got to copy and paste. If I am copying and pasting, it’s only the note data. Never the controller information.
To do this I perform a line in a single instrument as per normal, then go back and strip all the CC data out of it so it’s just the notes. This is my copy source, which gets pasted into all the subsequent tracks.
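In MIDI terms the copy source is just the note events with everything else filtered out. A sketch with a made-up event format standing in for a DAW region:

```python
def strip_ccs(events):
    """Keep only note events from a performed take, dropping CC and
    pitch-bend data, so the copy source is bare notes."""
    return [e for e in events if e["type"] == "note"]

# A made-up event format standing in for a recorded region.
take = [
    {"type": "note", "start": 0, "pitch": 60},
    {"type": "cc", "start": 5, "cc": 11, "value": 90},
    {"type": "note", "start": 480, "pitch": 62},
    {"type": "pitchbend", "start": 10, "value": 8192},
]
copy_source = strip_ccs(take)
```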
I’m careful as to when I use this technique. Slow, easy passages tend just to get multi-tracked. Fast passages with a lot of accidentals that would need some rehearsal to get right are the most likely candidates.
After I paste note data onto a track, I select all the copied regions and randomize each note’s start position and duration by a couple of frames with a MIDI transform. If you want to be really clever, increase these values the further down the section you go.
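The randomize step looks something like this in code, with the depth increasing down the section. The tick values are arbitrary stand-ins for “a couple of frames”:

```python
import random

TICKS_PER_FRAME = 20  # arbitrary stand-in for "a couple of frames"

def smear_region(notes, desk, rng):
    """Randomize start and duration of copied (start, duration, pitch)
    notes; 'desk' scales the depth so back desks smear more."""
    depth = TICKS_PER_FRAME * (1 + desk)
    out = []
    for start, dur, pitch in notes:
        start += rng.randint(-depth, depth)
        dur += rng.randint(-depth, depth)
        out.append((max(0, start), max(1, dur), pitch))
    return out

rng = random.Random(7)
source = [(0, 480, 60), (480, 480, 62), (960, 960, 64)]  # stripped copy source
section = [smear_region(source, desk, rng) for desk in range(1, 6)]
```

The pitches stay put; only the timing smears, which is exactly the herd-mentality effect described above.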
You’ve now got your notes down; now you need to go back to each region and perform all the controller data again. It’s still multi-tracking, but a whole lot faster. I solo each track as I overlay the controller data so I hear each individual performance.
The previous example (the big cresc) used copy and paste.
Putting it all together. The Mix.
SWAM and Sample Modeling instruments don’t have any ambience or moisture content whatsoever. This is a good thing, because to make an ensemble you need to stage by hand.
This is a big part of what makes an ensemble sound different to a bunch of disparate soloists. Previously my channel strip went something like this: Instrument > ER send (short reverb) > Direction Mixer > Hall (long reverb). I positioned my instrument on the Z axis by varying the wet/dry ratio on the ER, and moved it left and right with the direction mixer. This was pretty crude but reasonably effective. I now use Parallax Audio’s excellent Virtual Sound Stage 2.
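For the curious, the old pan + wet/dry approach boils down to something like this constant-power sketch. The depth-to-wet mapping is my own guess, not what VSS2 does internally:

```python
import math

def stage_position(pan, depth):
    """Crude stand-in for the old pan + wet/dry staging: constant-power
    L/R gains from pan (-1.0..1.0) and an ER wet amount from depth
    (0.0 front .. 1.0 back). The depth-to-wet curve is a guess."""
    theta = (pan + 1) * math.pi / 4       # map -1..1 onto 0..pi/2
    left, right = math.cos(theta), math.sin(theta)
    er_wet = 0.2 + 0.6 * depth            # farther back -> wetter ERs
    return left, right, er_wet

front_center = stage_position(pan=0.0, depth=0.2)
hard_left_back = stage_position(pan=-1.0, depth=1.0)
```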
Each section has its own VSS2 instance. Big sections like the strings are divided into sub-groups. Each sub-group has 4 to 6 players feeding a VSS2.
VSS2 adds a different early reflection to each section and positions each source beautifully on a virtual stage. Each of my sections has a depth that panning and an ER reverb instance cannot achieve by themselves. You could have an instance of VSS2 per instrument, but that would unnecessarily bog your machine down; more importantly, the differences between close positions are way too subtle, and you run the risk of re-homogenizing each instrumental instance.
Just to make sure, I have a Direction Mixer on each instrument. This lets me bias each instrument’s stereo position before it hits VSS2.
So that’s it - that’s how I make sections with modeled instruments. I hope this helps. Please leave any questions in the comments below.
Thank you for reading. Here’s my whole John Williams mockup from the top.
This took a weekend, both for punching it in and for the music video.