# My process for building realistic instrumental sections with SWAM and Sample Modeling instruments.



## rohandelivera (Mar 21, 2018)

I haven’t seen much chat here on using SWAM or Sample Modeling instruments in sections, so I thought I’d pipe in with my process for anyone who might find it interesting.

Over the last couple of months I’ve evolved my orchestral template away from sampled instruments to modeled instruments by Sample Modeling and Audio Modeling. So, for example, instead of a sectional French horn library I have 4 instances of a Sample Modeling horn, and 10 instances of the SWAM Cello make up my cello section.

Modeled instruments in sections tend to be problematic, and phasing is the biggie. Multiple modeled instruments playing the same musical material tend to produce pretty uniform spectra, particularly if you give them non-complex controller input. If you track two or more instances of any Sample Modeling or SWAM instrument in unison with just an expression controller, a modulation controller, and a keyboard as your MIDI input, *you WILL phase*. The spectra of both instruments will be too similar.

So how come this doesn’t happen in real life? Because identical twins playing the same tune on identical instruments aren’t really playing the same thing. The sound of two real instruments in ensemble is defined by many more control vectors, on top of which you have the physics of the room they’re in. We have far fewer vectors to play with, and we’ve got to milk them for everything they’re worth.

Here’s an example of a sectional string unison played on SWAM strings with *no phasing*. This is an excerpt from my mockup of John Williams’s Escapades.

*[embedded audio example]*

I’ve got 16 SWAM first violins, 14 second violins, 10 violas, 10 cellos, and 7 basses playing the same unison note (in octaves). Please listen up to the crescendo and come back.

For starters, none of my violins, violas, cellos, and basses are really the same instrument. Audio Modeling has kindly included several modeled body types named after their favorite vacation spots, and each of these bodies makes its own unique resonances.

Each player in my section uses one of the Cremona, Firenze, or Venezia bodies. Also, no one is bang in tune except the concertmaster. Everyone else is a couple of cents flat or sharp.
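This per-player detuning is easy to sketch in code. A minimal Python sketch (`section_detune` is a hypothetical helper; the assumption is that the cent offsets then get applied via each instance's fine-tune or pitch-bend parameter):

```python
import random

def section_detune(num_players, max_cents=4, seed=None):
    """Assign each player a small fixed tuning offset in cents.

    Chair 1 (the concertmaster) stays at 0; everyone else ends up
    a few cents flat or sharp."""
    rng = random.Random(seed)
    offsets = [0.0]  # concertmaster plays in tune
    offsets += [rng.uniform(-max_cents, max_cents)
                for _ in range(num_players - 1)]
    return offsets

# 16 first violins: chair 1 in tune, the rest a couple of cents off
offsets = section_detune(16, seed=1)
```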

Next, none of these instruments is playing exactly the same thing (even if it’s the one note). Each instrument gets different controller input altering bow pressure, bow position, bow speed, pitch (bend), vibrato speed, and vibrato depth, to name a few. Yes, they’re all tracked separately. This sounds like a lot of work, you say, but if you’ve got each part in your head or are reading from a score, it’s not that big of a deal. Really!

An orchestral section is several players, each with different backgrounds and personalities but with a common goal. The conductor is their focal point. Her role is to homogenize the players: the more similarly they perform, the tighter (and better) the ensemble will sound. This is where the real and virtual worlds differ.

We need to strive for the exact opposite. Given that we have fewer parameters to control, it’s easy for us to sound the same when tracking different instances of the same instrument. We need to work at making the players in our virtual modeled ensemble sound as different as possible. Our anchor is that the default state is a strong enough starting point that messing things up isn’t going to change things radically.

**Control - and lots of it.**

Modeled engines need a lot of input. Most composers today are quite adept at that two-finger expression-plus-modulation fader ride. That’s about enough for samples, but with modeled instruments you need to get a lot more MIDI data into your instrument. You’re definitely going to need something more than a keyboard and two faders. You’ve probably noticed a lot of SWAM marketing material in conjunction with GeoShreds, Seaboards, Linnstruments, and other alt-controllers. There’s a reason for that.

You’re going to need to be able to generate 5 to 10 MIDI CCs simultaneously.

My go-to controllers are a €120 TEControl breath controller and a $70 Leap Motion VR controller. Both of these let me pump about 10 CCs’ worth of MIDI data into a SWAM / Sample Modeling instance in real time.

Here’s a clip from the John Williams mockup again, in which I’ve included the Leap visualizer following my hand, a MIDI monitor, and the controller data stream in Logic.

*[embedded video example]*

This is for the violin part in the background. You can see the MIDI monitor flooding with data coming off my hand.

While performing a unison, it’s critical that each virtual instrumentalist varies its tone. You need to know what all those knobs do and how they contribute, in a musical context, to a realistic musical performance. A Sample Modeling / SWAM instrument is not going to hand this to you on a plate like a sampled instrument does. Most importantly, each instance of a modeled instrument in an ensemble needs to be unique.

It’s actually pretty easy to sound different each time. The differences in MIDI CC data coming off a Leap controller, or the different pressures off a breath controller, would make it hard to sound uniform on every track.

As in a real orchestra, I also vary my performance depending on the chair position in my ensemble. So, for instance, an eighth-stand violin would play a little more out of tune and not quite as in time as a first-stand player.

The other advantage of multi-tracking a section is that you immediately avoid one of the key failings of most sampled instrumental libraries—homogeneous gating.

Every note in a sampled section starts at the same time. Play a run or a fast non-staccato passage on any sampled instrumental section and this becomes pretty evident.

Orchestras tend to have a herd mentality: the second-stand player reacts to the first-stand player, the lower stands react to the higher stands, the whole section follows the section principal and the concertmaster, and everyone follows the conductor. The end result is that no one is 100% in time, which produces a slight smearing of the musical line that is a major characteristic of an ensemble sound.

Here’s an example:



The big string crescendo that rises through the brass and into the woodwinds before the vibraphone solo.

And for comparison here’s the same passage performed by the man himself

*[embedded video example]*

Please wait for the ad break.

**But sometimes you just don’t have the time**

Multi-tracking a complex and fast-moving line is sometimes not ideal. Sometimes you’ve got to copy and paste. If I am copying and pasting, it’s only the note data, never the controller information.

To do this I perform the line on a single instrument as per normal, then go back and strip all the CC data out of it so it’s just the notes. This is my copy source, which gets pasted into all the subsequent tracks.

I’m careful as to when I use this technique. Slow, easy passages just get multi-tracked. Fast passages with a lot of accidentals that would need some rehearsal to get right are the most likely candidates.

After I paste note data onto a track, I select all the copied regions and randomize each note’s start position and duration by a couple of frames with a MIDI transform. If you want to be really clever, increase these values the further down the section you go.
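The randomize step can be sketched in Python as a stand-in for the DAW's MIDI transform (the 40 ms frame length and the per-desk scaling factor are assumptions):

```python
import random

FRAME_MS = 40  # assume one "frame" ~ 40 ms; adjust to your DAW's notion of a frame

def humanize(notes, desk=1, frames=2, seed=None):
    """Randomize note start and duration, like a MIDI-transform pass.

    `notes` is a list of (start_sec, dur_sec) tuples. The jitter window
    grows slightly with desk number, so back desks smear a little more
    than the front ones."""
    rng = random.Random(seed)
    window = frames * FRAME_MS / 1000 * (1 + 0.25 * (desk - 1))
    out = []
    for start, dur in notes:
        out.append((max(0.0, start + rng.uniform(-window, window)),
                    max(0.01, dur + rng.uniform(-window, window))))
    return out

# same pasted phrase, humanized differently for the eighth desk
phrase = [(0.0, 0.5), (0.5, 0.5), (1.0, 0.25)]
smeared = humanize(phrase, desk=8, seed=3)
```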

You’ve now got your notes down; now you need to go back to each region and perform all the controller data again. It’s still multi-tracking, but a whole lot faster. I solo each track as I overlay the controller data so I can hear each individual performance.

The previous example (the big crescendo) used copy and paste.

**Putting it all together: the mix.**

SWAM and Sample Modeling instruments don’t have any ambience or moisture content whatsoever. This is a good thing, because to make an ensemble you need to stage by hand.

This is a big part of what makes an ensemble sound different from a bunch of disparate soloists. Previously my channel strip went something like this: Instrument > ER send (short reverb) > Direction Mixer > Hall (long reverb). I positioned my instrument on the Z axis by varying the wet/dry ratio on the ER, and moved it left and right with the Direction Mixer. This was pretty crude but reasonably effective. I now use Parallax Audio’s excellent Virtual Sound Stage 2.

Each section has its own VSS2 instance. Big sections like the strings are divided into sub-groups; each subgroup of 4 to 6 players gets a VSS2.

VSS2 adds a different early reflection to each section and positions each source beautifully on a virtual stage. Each of my sections has a depth that panning and an ER reverb instance cannot achieve by themselves. You could have an instance of VSS2 per instrument, but that would unnecessarily bog your machine down; more importantly, the differences between close positions are way too subtle, and you run the risk of re-homogenizing each instrumental instance.

Just to make sure, I have a Direction Mixer on each instrument. This lets me bias each instrument’s stereo position before it hits VSS2.

So that’s it - that’s how I make sections with modeled instruments. I hope this helps. Please leave any questions in the comments below.

Thank you for reading. Here’s my whole John Williams mockup from the top.



This took a weekend, both for punching it in and for the music video.


----------



## LHall (Mar 21, 2018)

Excellent post and work Rohan. Very similar to the way I work as well, except I use 15 tracks of LASS for my strings - each played independently. Plus I sometimes add a little AM, Strad, or Chris Hein violin to get a little extra "1st chair". Thanks for sharing!


----------



## stevenson-again (Mar 21, 2018)

Very interesting post and the mock-ups are just incredible. One point though I am not entirely in agreement with is this:



> Orchestras tend to have a herd mentality: the second-stand player reacts to the first-stand player, the lower stands react to the higher stands, the whole section follows the section principal and the concertmaster, and everyone follows the conductor. The end result is that no one is 100% in time, which produces a slight smearing of the musical line that is a major characteristic of an ensemble sound.



My main problem with sampled orchestras is that they are rhythmically not as tight as real players, in direct contradiction to your point above. The problem is that in a sampled session the players have no musical context in which to time their playing, so the attacks vary, making them sound rocky and a bit arrhythmic. I often have to bounce out and tighten the audio, but it rarely works. Some libs are worse than others.

Actually, what happens is that a good band locks onto a groove or feel that they all experience together, so rather than a cascading effect like you describe, you get a coherent performance in a way that is almost impossible to achieve with samples - except by accident.

Which isn't to say that you aren't 100% correct that those individual variations in timbre and performance create the ensemble effect. It's amazing what you have done with these modelled instruments, and your hand modulator is incredible! Truly fantastic work and an extremely interesting and meaty post... I note that you are also a "Rohan". Correct spelling too, I'm glad to see.


----------



## zadillo (Mar 21, 2018)

This is amazingly informative, thanks for writing it up


----------



## Saxer (Mar 21, 2018)

Great work! Thanks for sharing!


----------



## garylionelli (Mar 21, 2018)

Great! Thanks for sharing. I'd love to hear the strings by themselves in a different context, maybe playing lyrical legato, say quarter and half notes.


----------



## robgb (Mar 21, 2018)

I've found that it helps to layer in a legato cello ensemble track (I use 8Dio) with the audio modeling cellos. Gives it a richer, more realistic sound.


----------



## muziksculp (Mar 21, 2018)

Hi @rohandelivera ,

Thanks for posting this. Very useful, and interesting info. 

IMHO there are a lot of advantages that modeled instruments offer when it comes to expressiveness and realism, provided we can control them and transmit the various MIDI CC data in a practical, natural, and logical manner that translates into expressive and musical phrases.

I feel the new generation of MIDI controllers and physically modeled instruments have a bright future. They go hand-in-hand, and we are just witnessing the start of this new generation of music production tools. Exciting times ahead!

Cheers,
Muziksculp


----------



## burp182 (Mar 21, 2018)

A WEEKEND?

I hate you.....

I saw the video of the full performance with JW conducting and Dan Higgins playing the Alto part and was struck with what an SOB putting this thing together with live players must have been. How on Earth did you enter all the parts individually, add all the expression data, mix AND do that lovely video in a weekend? By the end of the second day, I might have had a performance of the Alto part I was satisfied with. And kudos to, and our deepest thoughts and prayers go out to, your poor computer! That's some workload to do in real time.

Congratulations on an impressive piece of work. And have I mentioned I hate you......?
Wow.


----------



## d.healey (Mar 21, 2018)

> I’ve evolved my orchestral template away from sampled instruments to modeled instruments by Sample Modeling and Audio Modeling.


The sample modeling instruments are sample based btw, not sure about the SWAM ones. Excellent work though!


----------



## robgb (Mar 21, 2018)

d.healey said:


> The sample modeling instruments are sample based btw, not sure about the SWAM ones. Excellent work though!


I could be wrong about this, but I believe the SWAM strings are modeled and the SWAM saxes/woodwinds are sample based.


----------



## Shubus (Mar 21, 2018)

This is one of the most interesting AND USEFUL posts I've ever read, and the results are outstanding. I've printed this out for FREQUENT reference. I was never sure of the utility of the Leap Motion controller... but now the fog is lifting, thanks to this great post.


----------



## NoamL (Mar 21, 2018)

Would you be open to posting just your string ensemble audio stem + the MIDI? I really honestly believe you could get a better result using traditional string libraries, and you'd have less work to do! Would like to try...

I think the SWAM instruments are very nice for the winds in this mockup, especially the alto and bs cl... but the strings... I'm still a skeptic


----------



## Erik (Mar 23, 2018)

burp182 said:


> And kudos to, and our deepest thoughts and prayers go out to, your poor computer!



Very very funny!

Aside from this, thanks for sharing your ideas here, Rohan. BTW, what are the specs of your PC? I am quite interested. Did you do a lot of freezing?


----------



## Vardaro (Mar 23, 2018)

Another rhythmic aspect is the minute sounds which build up to the sounds we seem to hear: if they are included in the samples, the notes sound late; if they are absent, it sounds synthetic. In the final mix, we may want to shift the track content slightly to the left.


----------



## rohandelivera (Mar 23, 2018)

stevenson-again said:


> My main problem with sampled orchestras is that they are rhythmically not as tight as real players


You're absolutely right. They have no way of knowing how much bow to use for a sampled 'short', so they tend to average it out. Hence you never get it as short as you need. With the modeled section I bow as fast as I want, with single, double, and triple tonguing on a breath controller. The SWAM engine is quite good at making a bow stroke from a short, sharp burst of CC11 expression data. Louré, jeté: no problem.

stevenson-again said:


> The problem is that in a sampled session, the players have no musical context in which to time their playing. So the attacks vary without any musical context making them sound rocky and a bit arhythmic. ... Actually what happens is that a good band lock onto a groove or feel that they all experience together, so rather than a cascading effect like you describe, you get a coherent performance in a way that is almost impossible to achieve with samples - except by accident.


My point was mainly with respect to runs. That blurred effect, without totally flubbing the run, is only possible with a bunch of single instruments as opposed to a large sampled section. But I agree that the feeling of playing in ensemble does contribute. I find with my approach I have to be careful not to sound like myself: I can pre-empt my timings, so I don't react to myself, as it were.

stevenson-again said:


> .....I note that you are also a "Rohan". Correct spelling too I'm glad to see.


We are kind of unique.


----------



## rohandelivera (Mar 23, 2018)

garylionelli said:


> Great! Thanks for sharing. I'd love to hear the strings by themselves in a different context, maybe playing lyrical legato, say quarter and half notes.



Here's a mockup with the strings alone


----------



## rohandelivera (Mar 23, 2018)

robgb said:


> I've found that it helps to layer in a legato cello ensemble track (I use 8Dio) with the audio modeling cellos. Gives it a richer, more realistic sound.



The one thing I didn't add to my whole spiel (and it's an important one) is that I do massive EQ on every modeled section to get the tone I want. Once this is sorted, you don't really need the sampled backup, I would think.

The best approach would be to match eq with your favorite sample library or orchestral recording to get the color you want from your section.


----------



## rohandelivera (Mar 23, 2018)

burp182 said:


> A WEEKEND?
> 
> I hate you.....
> 
> ...



Thank you, and I hate you too.


----------



## rohandelivera (Mar 23, 2018)

Erik said:


> Very very funny!
> 
> Aside from this, thanks for sharing your ideas here Rohan. BTW what about the specs of your PC? In fact I am quite interested, did you do a lot of freezing?




Only on the strings. My machine can pull the rest by itself. It's a 2010 "cheese-grater" Westmere 12-core Mac Pro with 64 GB of RAM. The system runs off an M.2 drive in one of the PCIe slots.


----------



## Straight2Vinyl (Mar 27, 2018)

Damn...This is the best tutorial I've read in a while. Thank you kindly Rohan. Truly awesome work. The SWAM Engine guys should hire you for this stuff.


----------



## QLee (Mar 28, 2018)

Catch me if you can! My favourite this is. (Just finished a SW marathon.)


----------



## antcarrier (Mar 30, 2018)

This is great! Thanks 
It prompted me to order a leap motion controller, haha!
I have been using SWAM strings for a while, layered with Dimension strings, which works very well. I'm certainly interested in trying this method, though.
Is it difficult setting up the leap motion with swam strings?
Would you be willing to share a basic preset? 

Cheers,
Jon


----------



## lelepar (Mar 31, 2018)

Thanks to @rohandelivera for this useful tutorial.

As for the suitable controllers to control SWAM, I think you should have a look at this:
http://www.swamengine.com/2018/03/swam-violin-pen2bow/

What do you think?

BTW: we have recently released an update of all our SWAM products. Login to the Audio Modeling Customer Portal (https://my.audiomodeling.com) to get it.

Best,
Emanuele


----------



## antcarrier (Mar 31, 2018)

My leap motion controller arrived - it was easy to set up and start playing more expressively. I am using it with a Linnstrument, which it plays well with.
It makes dynamic control fast and seamless, and is great for legato playing. Great purchase for $30 used!
The biggest disappointment for me is using it in 'bowing' mode in SWAM instruments. It seems impossible to get anything other than pp dynamics when trying to bow sustained notes. I have found this to be the case with other expression controllers as well, so I think it is more an issue with the way the SWAM bowing algorithm works than with the controllers. In order to get even moderate volume, you have to bow extremely fast - so currently it only seems suitable for tremolo playing.
The controller is excellent for more 'conductor' style expression, though


----------



## lelepar (Apr 1, 2018)

antcarrier said:


> My leap motion controller arrived - it was easy to set up and start playing more expressively. I am using it with a Linnstrument, which it plays well with.
> It makes dynamic control fast and seamless, and is great for legato playing. Great purchase for $30 used!
> The biggest disappointment for me is using it in 'bowing' mode in swam instruments. It seems impossible to get anything other than pp dynamics when trying to bow sustained notes. I have found this to be the case with other expression controllers as well, so I think it is more an issue with the way the swam bowing algorithm works than with the controllers. In order to get even moderate volume, you have to bow extremely fast - so currently it only seems suitable for tremolo playing.
> The controller is excellent for more 'conductor' style expression, though



The problem is the limited resolution of current MIDI. The "bowing" gesture computes the derivative of the input expression (i.e. the position of the bow), which has just 128 values (0 to 127). With such limited resolution, it is like having a bow 5 cm long or even less.
We could increase the sensitivity, i.e. like having a longer bow, but then the lowest dynamics sound so bad! It is like having a saw-bow!
There is no solution with standard MIDI, using a single 7-bit data byte. We are still working on the next major release, which will support high-resolution MIDI (i.e. 14-bit: 16384 values).
The problem is that the majority of controllers out there do not support it.
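For the curious, standard MIDI 1.0 does define a 14-bit controller scheme: CCs 0-31 carry the most significant 7 bits and CCs 32-63 carry the least significant 7 bits. A minimal sketch of how a high-resolution value gets split into that pair of messages:

```python
def cc14_to_messages(channel, cc, value14):
    """Split a 14-bit controller value (0-16383) into the standard
    MIDI 1.0 CC pair: MSB on `cc` (0-31), LSB on `cc + 32`."""
    assert 0 <= value14 <= 16383 and 0 <= cc <= 31
    msb, lsb = value14 >> 7, value14 & 0x7F
    status = 0xB0 | (channel & 0x0F)  # Control Change status byte
    return [(status, cc, msb), (status, cc + 32, lsb)]

# 16384 steps instead of 128 -> a much "longer bow" on CC11
print(cc14_to_messages(0, 11, 8192))  # -> [(176, 11, 64), (176, 43, 0)]
```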

That's why I pointed to Pen2Bow: finally a controller that exploits the "Bipolar" bowing gesture and overcomes the limited length of the "Bowing" gesture.

Best,
Emanuele


----------



## jonnybutter (Apr 1, 2018)

rohandelivera said:


> I haven’t seen much chat here on using SWAM or Sample Modeling instruments in sections, and thought i’d pipe in with my process for anyone who might find it interesting.



Late to this. Thanks for sharing with us! Super interesting. You got the strings to sound so good, at pp especially! I don't have any SWAM strings, but I do have the saxes (+ SM brass). It's hard to _consistently_ get the attack right on the saxes (I need a lot more practice!). It's not so easy to keep the attack from sticking out in an ugly way. Your BC input curve has to be right, but even then it's sometimes like driving a car with overly sensitive power steering, or piloting a helicopter. I wonder if it's similar on hard string attacks?


----------



## garylionelli (Apr 1, 2018)

rohandelivera said:


> Here's a mockup with the strings alone



Thank you for doing that!!! It is utterly amazing. I don't think I've ever heard anything more impressive from something that isn't real strings. I'm going to have to try this. I already have the SWAM instruments.


----------



## antcarrier (Apr 1, 2018)

lelepar said:


> The problem is the limited resolution of current MIDI. The "bowing" gesture computes the derivative of the input expression (i.e. the position of the bow), which has just 128 values (0 to 127). With so limited resolution, it is like having a bow 5 cm long or even less.
> We could increase the sensitivity, i.e. like having a longer bow, but the lowest dynamics sound so bad! It is like having a saw-bow!
> There is no solution with the current standard MIDI, using a single 7-bit byte. We are still working on the next major release which will support high-resolution MIDI (i.e. 14-bit: 16384 values).
> The problem is that the majority of controllers out there does not support it.
> ...



Thanks for the information, Emanuele, that makes sense. I think that Gecko sends 14-bit MIDI, so I'm looking forward to your next release. Keep up the good work!

Pen2bow does look cool. I don't think I would use an iPad for anything else though!


----------



## Esteban. (Apr 18, 2018)

Thanks for this @rohandelivera! I found your Tchaikovsky video a couple of months ago when I found out about SWAM instruments, and you're one of the very few people sharing how they use these instruments together to recreate an orchestra.

I'm thinking of diving into doing the same. I'll probably do some automation to humanize MIDI parameters (like velocity, length, and CCs) in order to avoid recording a lot of parts over and over again. I use Reaper, so creating a button to execute a couple of scripts in a certain order should do the trick; a little tweaking to the code could change the amount of "humanization" applied to each copy of the original MIDI recording.

I'm even thinking of having multiple buttons to apply different amounts of MIDI humanization to the different subsections of the orchestra (e.g. apply 2% to 1st violins and 3.5% to 2nd violins). Have you tried something like this? I'd love to develop this idea even further.
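That per-subsection idea can be prototyped outside Reaper in a few lines (the section names and percentages here are hypothetical; a real ReaScript version would read the CC events from the selected take):

```python
import random

# hypothetical per-subsection humanization amounts, as in the post
SECTION_AMOUNT = {"violins_1": 0.02, "violins_2": 0.035}

def humanize_cc(values, amount, seed=None):
    """Scale each 7-bit CC value by a random factor within +/- `amount`,
    clamped back into the 0-127 range."""
    rng = random.Random(seed)
    return [min(127, max(0, round(v * (1 + rng.uniform(-amount, amount)))))
            for v in values]

lane = [60, 64, 70, 78, 88]
second_violins = humanize_cc(lane, SECTION_AMOUNT["violins_2"], seed=7)
```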


----------



## pmcrockett (Apr 19, 2018)

Esteban. said:


> Thanks for this @rohandelivera! I found your Tchaikovsky's video a couple of months ago when I found out about SWAM instruments and you're one of the very few people sharing how they use these instruments with each other to recreate an orchestra.
> 
> I'm thinking on diving into doing the same, probably will do some automation to humanize MIDI parameters (like velocity, length, CC's) in order to avoid recording a lot of parts over and over again. I use Reaper so creating a button to execute a couple of scripts in a certain order should do the trick, a little tweaking to the code could change the amount of "humanization" applied to each copy of the original MIDI recording.
> 
> I'm even thinking on having multiple buttons to apply different amounts of MIDI humanization to the different subsections of the orchestra (ex: apply 2% to 1st violins and 3.5% to 2nd violins). Have you tried something like this? Would love to develop this idea even further.


The script-based approach to this is definitely something I've thought about, too, though I haven't had time to actually experiment with it yet. A couple of the thoughts I've had, in no particular order, that might be of interest to you:

You need to be able to generate CC data that resembles an original input but is different enough from it to produce a distinct performance. This means you should probably be looking for ways to abstract important pieces of general info about notes' CC data. For example, with expression data, you'd be looking at things such as average level, level at the start of the note, level at the end of the note, amount of change between the levels at the start and end, min/max values, location of the fastest change in level, etc. Knowing characteristics like that about a note should allow you to tweak those characteristics and recombine them to get a performance that both works for the musical context but also differs from the original. It would also be helpful to have a script to generate data about how multiple performances compare in terms of these characteristics, which would help you find the ideal randomization ranges for these characteristics by looking at how real performances of the same material differ.
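The kind of abstraction described above can be sketched as a function that reduces a note's CC lane to a handful of characteristics (plain Python; the feature set follows the list in the paragraph):

```python
def cc_profile(cc_values):
    """Abstract an expression lane into per-note characteristics:
    start/end level, mean, min/max, net change, and where the fastest
    change happens (index of the largest step between samples)."""
    diffs = [b - a for a, b in zip(cc_values, cc_values[1:])]
    fastest = max(range(len(diffs)), key=lambda i: abs(diffs[i]))
    return {
        "start": cc_values[0],
        "end": cc_values[-1],
        "mean": sum(cc_values) / len(cc_values),
        "min": min(cc_values),
        "max": max(cc_values),
        "net_change": cc_values[-1] - cc_values[0],
        "fastest_change_at": fastest,
    }

# a swell that rises fastest in the middle of the note
swell = [20, 24, 40, 72, 90, 96, 95]
profile = cc_profile(swell)
```

Tweaking and recombining these characteristics, rather than the raw CC stream, is what would let a script generate a related-but-distinct performance.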

You should also be able to abstract a general playing style with regard to how the original follows the tempo -- for example, on loud, short notes, does it generally anticipate the beat? Does this differ from long, quiet notes? How does the average grid adherence change throughout the piece? Like with the CCs, if you can abstract this sort of data from the performance, you should be able to tweak it then reconstruct it all in a way that still makes musical sense.
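The grid-adherence side can be abstracted just as simply; a sketch (the grid spacing is an assumption, and negative offsets mean the note anticipates the beat):

```python
def grid_adherence(note_starts, grid=0.25):
    """Offset of each note start from the nearest grid line, in seconds.

    `note_starts` are start times in seconds; `grid` is the grid spacing
    (0.25 s = sixteenths at 60 BPM). Negative = ahead of the beat."""
    return [s - round(s / grid) * grid for s in note_starts]

# slightly early on beat two, slightly late on beat three
offsets = grid_adherence([0.02, 0.24, 0.51], grid=0.25)
```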

If you're doing large sections, it might be useful to set the scripts up to accept multiple tracks as inputs then record a couple of takes instead of just one so the scripts have a bit more variety to work with. I expect that even just having a script that generates an average of multiple performances would go a long way in speeding up the whole process even if it didn't do any actual humanization.

If you wanted to get really into it, it might be possible to come up with a quasi-machine learning system that can be trained -- it generates a bunch of possible versions, then you mark the ones that work best, and the system remembers the parameters used in those versions and tries to apply them to other similar situations.

Again, the key to all of this is that you need to be able to abstract meaningful data about the performance rather than just randomizing arbitrary parameters, and I think the way to go about determining what is meaningful in this context is to ask _what am I hearing when I listen critically to a performance?_ and not _what parameters does MIDI make it easy to adjust?_


----------



## Esteban. (Apr 25, 2018)

pmcrockett said:


> The script-based approach to this is definitely something I've thought about, too, though I haven't had time to actually experiment with it yet. A couple of the thoughts I've had, in no particular order, that might be of interest to you:
> 
> You need to be able to generate CC data that resembles an original input but is different enough from it to produce a distinct performance. This means you should probably be looking for ways to abstract important pieces of general info about notes' CC data. For example, with expression data, you'd be looking at things such as average level, level at the start of the note, level at the end of the note, amount of change between the levels at the start and end, min/max values, location of the fastest change in level, etc. Knowing characteristics like that about a note should allow you to tweak those characteristics and recombine them to get a performance that both works for the musical context but also differs from the original. It would also be helpful to have a script to generate data about how multiple performances compare in terms of these characteristics, which would help you find the ideal randomization ranges for these characteristics by looking at how real performances of the same material differ.
> 
> ...



I think your thoughts on the matter are well intentioned, but I personally differ, as I think they could produce results different from the ones that, at least in my case, I'm after.

What you describe is a common scenario of data analysis and machine learning, often used in software development to mimic certain behaviors to the point of accurate replication. That's very useful if what you want is to define a "performance" of your own and then create an automation system to print that same "performance style" onto each one of your orchestral productions. On its own that sounds like a cool idea, to be honest. But if you think about it, an instrument performer should realize that you never have a fixed "performance style" in anything you do unless you spend a vast amount of time playing in similar scenarios over and over again (your performance when practicing an instrument probably won't be the same in a live situation, because of multiple variables). That's why people often debate which live performance by the same musician they like the most (or even which they disliked the most).

I think what I'm trying to say is that, from what I've experienced, people make mistakes in their own way, but under pressure they probably won't make similar mistakes at the same ratio or "randomization range", as you put it.

Coming back to the script topic: I actually tried this script the other day and it seems to be just what I was looking for. I tested it and even duplicated it with a different percentage just to see the changes, and it works great.



pmcrockett said:


> rather than just randomizing arbitrary parameters





pmcrockett said:


> what is meaningful in this context is to ask what am I hearing when I listen critically to a performance? and not what parameters does MIDI make it easy to adjust?



That's actually the part I need to test now. My idea was not to randomize arbitrary parameters (MIDI doesn't make any parameter easier to adjust than another; your DAW does, and in Reaper they all seem easily editable) but specific ones, which means checking out all the parameters the guys at Audio Modeling have made available on their instruments. I would like to create a dedicated version of the script so that instead of humanizing the selected CC values, it humanizes all the values on specific CCs. Just to save some more time.

I wish I could not only theorize about this topic but also experiment and simply say whether it works, but until customs in my country and my local post office shake hands and decide to release the breath controller I was supposed to receive over a month ago, I can't put my money where my mouth is. For now, I find the exchange of ideas on this very appealing and would love to hear more like them.


Edit: Forgot to add the script link, my bad.


----------



## pmcrockett (Apr 25, 2018)

Esteban. said:


> I think your thoughts on the matter are well intentioned, but I personally differ: I think they could produce different results from the ones that, at least in my case, I'm after.
> 
> What you describe is a common data-analysis and machine-learning scenario, often used in software development to mimic certain behaviors to the point of accurate replication. That's very useful if what you want is to define a "performance" of your own and then create an automation system to print that same "performance style" onto each of your orchestral productions. On its own that sounds like a cool idea, to be honest. But if you think about it, an instrument performer would probably tell you that you never really have a fixed "performance style" unless you've spent a vast amount of time playing in similar scenarios over and over again; your performance when practicing an instrument probably won't match a live situation, because of multiple variables. That's why people often debate which live performance by the same musician they liked the most (or even which they disliked the most).
> 
> ...



To me, it looks like the results from that script are likely to be either too similar to the input (if using a low randomization percentage) or too spiky and disjointed to be usable (if using a higher percentage). I'd definitely be interested in hearing anything you come up with using it, though. It may well be that the approach I'm thinking of unnecessarily overcomplicates things.

I think we're talking about two different things when we say _parameters_, and that's probably my fault for choosing a word that already has a pretty clearly defined meaning in terms of virtual instruments, which is the way you're using it. What I'm getting at when talking about parameters -- characteristics might be a better word -- is that the MIDI spec broadly represents notes by defining certain characteristics about them. Start time, pitch, end time, velocity, and a collection of CC data points being the most important.

But the MIDI spec has no inherent representation of things like attack time. If you're using an instrument whose level is primarily controlled by, say, CC11, then understanding the attack characteristics of a MIDI note requires abstracting the early part of a note's CC11 data and interpreting its effect on the instrument. So if we want to randomize the start point of a note, which is directly characterized by the MIDI spec, we look up the start point of the note in the MIDI data and randomize a single number. But if we want to randomize the level or length of the attack portion of a note, we first must decide what collection of MIDI data represents the attack, then come up with an algorithm able to locate that data, then come up with an algorithm that can process the data as a collection rather than as individual numbers.

But if the MIDI spec used ADSR envelopes attached to notes rather than individual CC data points, modifying a note's attack characteristics would simply be a matter of editing a couple of easy-to-find numbers, just as editing the note start is. That's what I'm getting at when I talk about the things that MIDI makes it easy to adjust vs. the things about notes that we actually hear.

I guess that's why I'm skeptical about the usefulness of the above script -- it randomizes the CC points as individual things, but we don't _hear _the CC points as individual things; we hear broader note characteristics that are built out of the CC points, and I expect that randomizing the characteristics that we hear will give better results than randomizing the individual data points that those characteristics are made from. It's like how changing an image of a face by randomizing the shapes of its features will work better than changing it by randomizing the colors of individual pixels.
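To make the "characteristics, not data points" idea concrete, here is a rough Python sketch of the abstraction step (the (time, value) representation of a CC lane and the fixed attack window are assumptions of mine, not a description of any existing tool):

```python
def describe_cc(points, attack_window=0.1):
    """Abstract a note's CC lane into note-level characteristics:
    start/end level, min/max, mean, and a crude attack description
    taken from the first `attack_window` fraction of the note.

    points: list of (time, value) pairs sorted by time.
    """
    times = [t for t, _ in points]
    values = [v for _, v in points]
    t0, t1 = times[0], times[-1]
    cutoff = t0 + (t1 - t0) * attack_window
    attack = [v for t, v in points if t <= cutoff] or [values[0]]
    return {
        "start": values[0],
        "end": values[-1],
        "min": min(values),
        "max": max(values),
        "mean": sum(values) / len(values),
        "attack_peak": max(attack),
    }

# Example: a note that swells quickly and then settles.
note = [(0.0, 10), (0.05, 90), (0.2, 70), (1.0, 60)]
traits = describe_cc(note)  # traits can now be tweaked and recombined
```

A companion script comparing these traits across several real takes of the same phrase would then suggest realistic randomization ranges, as described above.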


----------



## Esteban. (Apr 26, 2018)

pmcrockett said:


> To me, it looks like the results from that script are likely to be either too similar to the input (if using a low randomization percentage) or too spiky and disjointed to be usable (if using a higher percentage). I'd definitely be interested in hearing anything you come up with using it, though. It may well be that the approach I'm thinking of unnecessarily overcomplicates things.
> 
> ...



Yes, I think I'm getting a better understanding of what you're getting at, and sure, we should be aiming to "humanize" what we hear, which is kind of tricky, as we're not trying to humanize audio but MIDI CC values.

Correct me if I'm wrong, but it seems like what you want to do is control the transients with the algorithm you're describing, is that right? If so, I'm completely in the dark, as I can't really tell how the Audio Modeling instruments shape their audio output from the MIDI CC input they receive from the user. That's again because I haven't had the chance to play with their instruments yet, which is unfortunate.

From what I've seen in some online tutorials, it seems like the transients are mainly controlled by velocity, and in some Sample Modeling instruments there were some keyswitches to provide a bit of flexibility, but that was about it. Other than that, I guess you could use external help like compressors, but were you thinking of something else besides that? Or were you mainly pointing out that besides humanizing MIDI CC values we should also aim to modify each transient differently, as part of giving each instrument line even more variety?


----------



## rohandelivera (Apr 29, 2018)

Hello. Yes, that's how my Escapades mock-up happened.

I copied and pasted the MIDI note data, which I randomised using Logic's MIDI Transform, and then re-performed all the controller data for every track. Much faster than playing everything multiple times.





Esteban. said:


> Thanks for this @rohandelivera! I found your Tchaikovsky's video a couple of months ago when I found out about SWAM instruments and you're one of the very few people sharing how they use these instruments with each other to recreate an orchestra.
> 
> I'm thinking on diving into doing the same, probably will do some automation to humanize MIDI parameters (like velocity, length, CC's) in order to avoid recording a lot of parts over and over again. I use Reaper so creating a button to execute a couple of scripts in a certain order should do the trick, a little tweaking to the code could change the amount of "humanization" applied to each copy of the original MIDI recording.
> 
> I'm even thinking on having multiple buttons to apply different amounts of MIDI humanization to the different subsections of the orchestra (ex: apply 2% to 1st violins and 3.5% to 2nd violins). Have you tried something like this? Would love to develop this idea even further.


----------



## pmcrockett (Apr 30, 2018)

Esteban. said:


> Yes, I think I'm getting a better understanding of what you're getting at, and sure, we should be aiming to "humanize" what we hear, which is kind of tricky, as we're not trying to humanize audio but MIDI CC values.
> 
> ...


Mostly I'm using the attack characteristics of a note as an example of the sort of thing that a script that auto-generates additional takes of a performance ought to focus on. Attack isn't uniquely important in a broader context, but it's a good example of something that substantially influences how you perceive a note yet is difficult to modify via script without first building some fairly robust MIDI parsing tools.

The reason, I think, that no one has yet come up with a script to generate alternate takes is that it's going to be difficult to do it well. Not impossible -- I'm confident it can be done -- just difficult.
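One plausible shape for such a script, sketched in Python purely under my own assumptions (this is not pmcrockett's design): perturb whole-curve characteristics such as overall level and timing, so each generated take stays smooth instead of getting spiky the way independent per-point noise does.

```python
import random

def alternate_take(points, rng=None, level_spread=0.05, time_spread=0.02):
    """Generate an alternate take of a CC lane by perturbing whole-curve
    characteristics (overall level and overall timing) instead of adding
    independent noise to every point, so the curve keeps its shape."""
    rng = rng or random.Random()
    level_scale = 1.0 + rng.uniform(-level_spread, level_spread)
    time_shift = rng.uniform(-time_spread, time_spread)
    return [(t + time_shift, max(0, min(127, round(v * level_scale))))
            for t, v in points]

take = alternate_take([(0.0, 40), (0.5, 90), (1.0, 70)],
                      rng=random.Random(42))
```

A fuller version would perturb the abstracted attack and release segments separately; this sketch only shows why curve-level perturbation preserves the gesture where point-level randomization destroys it.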


----------



## Erick - BVA (Apr 30, 2018)

stevenson-again said:


> Very interesting post and the mock-ups are just incredible. One point though I am not entirely in agreement with is this:
> 
> 
> 
> ...


Surprised this post hasn't gotten more love. I think you're spot on.


----------



## stevenson-again (May 4, 2018)

Sibelius19 said:


> Surprised this post hasn't gotten more love. I think you're spot on.



Shucks....thanks!


----------



## gregh (May 25, 2018)

The way to humanise a synthetic performance is to permute the data from an existing real performance rather than randomise using an arbitrary distribution. I've been using that method for years and it gives better results. If you want to use a random distribution, equal is probably the worst yet seems to be what everyone uses. I imagine that is because the people writing the code are coders rather than psychologists and musicians
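gregh's permutation idea might look something like this as a Python sketch (a toy illustration under my own assumptions, not his actual method): measure timing deviations from a real take, then shuffle them onto the quantized line, so the deviations keep an empirically realistic distribution instead of a uniform one.

```python
import random

def permute_deviations(quantized_times, real_deviations, seed=None):
    """Humanize quantized note times by drawing from the empirical pool
    of a real performance's timing deviations (shuffled), rather than
    from a uniform random distribution."""
    rng = random.Random(seed)
    devs = list(real_deviations)
    rng.shuffle(devs)
    # Recycle the pool if there are more notes than measured deviations.
    return [t + devs[i % len(devs)] for i, t in enumerate(quantized_times)]

# Deviations (in beats) measured from a real take:
measured = [0.012, -0.008, 0.021, -0.015, 0.004]
line = [0.0, 0.5, 1.0, 1.5, 2.0]
humanized = permute_deviations(line, measured, seed=7)
```

The point of the permutation is that the output deviations have exactly the distribution of the real ones, not a uniform spread around the grid.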


----------



## Straight2Vinyl (Jun 14, 2018)

Hi Rohan, are you using the TEC Breath Controller, or the Breath and Bite Controller? Any thoughts on whether it's worthwhile shelling out the extra cash for the breath and bite model? I'll probably pick up a Leap Motion controller first since it's inexpensive, but I wouldn't mind grabbing a TEC controller later on.


----------



## rohandelivera (Jun 14, 2018)

rohandelivera said:


> Hello. Yes, that's how my Escapades mock-up happened.
> 
> I copied and pasted the MIDI note data, which I randomised using Logic's MIDI Transform, and then re-performed all the controller data for every track. Much faster than playing everything multiple times.



Just a re-reply since I'm back here. I'm now tracking just a few instances and applying a MIDI transform that varies the controller and note data ever so slightly.

I’m also working on scripting the execution of all these transforms. 

I’ll do a video with more specifics soon.


----------



## rohandelivera (Jun 14, 2018)

Straight2Vinyl said:


> Hi Rohan, are you using the TEC Breath Controller, or the Breath and Bite Controller? Any thoughts on whether it's worthwhile shelling out the extra cash for the breath and bite model? I'll probably pick up a Leap Motion controller first since it's inexpensive, but I wouldn't mind grabbing a TEC controller later on.



Yes, the original. I'd say go for the extra control; you never know, you could map it to something useful. The BC is my CC11 expression controller; the Leap does everything else.

Doing CC11 with a Leap never worked for me. MIDI expression is the most complex input and is best done with a BC.

If you're staggering your purchases, I'd even say get a BC first.


----------



## LHall (Jun 15, 2018)

gregh said:


> The way to humanise a synthetic performance is to permute the data from an existing real performance rather than randomise using an arbitrary distribution. I've been using that method for years and it gives better results. If you want to use a random distribution, equal is probably the worst yet seems to be what everyone uses. I imagine that is because the people writing the code are coders rather than psychologists and musicians


I'll have to get out my dictionary.
Personally, I just play it a bunch of times till it sounds good. 
LOL


----------



## Straight2Vinyl (Jun 15, 2018)

rohandelivera said:


> Yes, the original. I'd say go for the extra control; you never know, you could map it to something useful. The BC is my CC11 expression controller; the Leap does everything else.
> 
> Doing CC11 with a Leap never worked for me. MIDI expression is the most complex input and is best done with a BC.
> 
> If you're staggering your purchases, I'd even say get a BC first.


The extra controls, bite and tilt, obviously can't hurt. Big difference in cost between the two, though. I'm no professional, so budget is always something to take into consideration. Thanks for the reply, though.


----------



## rdieters (Jul 1, 2018)

Straight2Vinyl said:


> The extra controls, bite and tilt, obviously can't hurt. Big difference in cost between the two, though. I'm no professional, so budget is always something to take into consideration. Thanks for the reply, though.



I think if you possibly can, even if you have to save and wait a bit longer, go for the bite/tilt model. It really is a great device and very versatile. I suck at making videos but there are some seriously good ones


----------



## Straight2Vinyl (Jul 5, 2018)

rdieters said:


> I think if you possibly can, even if you have to save and wait a bit longer, go for the bite/tilt model. It really is a great device and very versatile. I suck at making videos but there are some seriously good ones



I'll see if I can scrape together the funds for the bite/tilt model. 
I just picked up a used Leap Motion. Any suggestions you'd be willing to share for setup? I'm trying to figure out how to map pitch control to hand rotation. Also want to see how the heck side motion can be used to control the bipolar mode. Any help would be greatly appreciated.


----------



## pmcrockett (Jul 9, 2018)

Straight2Vinyl said:


> I'll see if I can scrape together the funds for the bite/tilt model.
> I just picked up a used Leap Motion. Any suggestions you'd be willing to share for setup? I'm trying to figure out how to map pitch control to hand rotation. Also want to see how the heck side motion can be used to control the bipolar mode. Any help would be greatly appreciated.


I just got a Leap Motion and have spent a bit of time this afternoon playing with it. I'm using GECO to produce the MIDI output, which requires that you install the older V2 of the Leap software rather than the newer Orion Beta. For the SWAM violin, the best results I've found so far are mapping left/right to expression (with bow gesture in expression mode and expression curve set to Ln 0.2) and up/down to bow pressure with bow pressure clamped between 25 and 116 in the SWAM MIDI config window.
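For what it's worth, the clamp described above is just a linear rescale of the full CC range into a narrower window, and the expression curve reshapes the response. A small Python sketch (the `expression_curve` exponent is a generic stand-in for SWAM's "Ln 0.2" setting, whose exact formula I don't know):

```python
def clamp_cc(value, out_lo=25, out_hi=116):
    """Linearly rescale a full-range CC value (0-127) into a narrower
    window, like clamping bow pressure between 25 and 116 in the SWAM
    MIDI configuration."""
    return round(out_lo + (value / 127) * (out_hi - out_lo))

def expression_curve(value, exponent=0.2):
    """Generic response curve: an exponent < 1 lifts the low end of the
    range, in the spirit of a logarithmic expression curve. This is NOT
    SWAM's actual 'Ln 0.2' formula, just an illustration of the idea."""
    return round(127 * (value / 127) ** exponent)
```

Clamping this way means even a fully lowered hand never drives bow pressure below 25 or above 116, which keeps the instrument out of its ugliest extremes.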

An insight that I've only just now had after playing the SWAM violin set up as described above is that it seems like you generally want to keep expression and bow pressure in the lower halves of their ranges and only go into the upper halves for momentary spikes, mostly at the starts and ends of notes. This helps avoid the strained, strident sound that I've heard in so many demos of the SWAM strings. There's a particular gesture, almost like a quick grab, that moves you through the upper expression and pressure ranges down to an appropriate sustain range and gives you a fantastically natural note attack that sounds more like a proper violin than anything else I've been able to achieve to this point.

I've gotten mixed results with bow gesture in bowing mode, but I could see it possibly working well with a little practice. The biggest problem is that the CC range doesn't really give you much room to sustain long notes. I guess rebowing is a real technique, though, and vibrato seems to make rebowing less noticeable. Definitely would take some practice.

I have a breath controller, too, but I haven't tried it in conjunction with the Leap Motion yet. I've used it for expression on the SWAM strings in the past, but I can't see continuing to do that anymore, because controlling expression in conjunction with pressure by hand motion feels much more natural and responsive to me, at least for a violin.


----------



## DANIELE (Aug 6, 2018)

Rohan, you do wonderful things with the Audio Modeling libraries. I love them too, and I'm studying how to use them at their best.


----------



## pipedr (Aug 6, 2018)

I wonder if anyone would like to post examples of controller data for different types of string articulations, e.g. staccato, marcato, sfz, etc. (and audio of the results).

In Rohan's interpretation of the Tchaikovsky piece (awesome!--so expressive), I saw just a little clip in the legato passages showing Mod (I think this is vibrato intensity), pitchbend (is this just to prevent phasing with other instruments in unison?), vibrato speed, expression, bow pressure, bow position (interesting that it changes within each note--I thought this would be used only for special articulations like sul pont or sul tasto), and Port (portamento speed, I suppose).

I suppose these are parameters that must be continuously in motion for a realistic performance. Fascinating to see in operation, but hard to wrap my head around all the continuously changing parameters, as opposed to the expression, note-on velocity, and few keyswitches that seem to be all that's necessary for the Sample Modeling brass.


----------



## DANIELE (Aug 7, 2018)

pipedr said:


> I wonder if anyone would like to post examples of controller data for different types of string articulations, e.g. staccato, marcato, sfz, etc. (and audio of the results).
> 
> ...



Well, if you want realistic strings you have to control many things, just like a string player does. I felt lost at the beginning, too many controls...but after some time I loved it; you can control nearly everything, and that is a very good thing. I personally tend to follow these steps:

1) Composition: I don't worry about color and expression (in terms of controls).
2) Control part 1: I program the most important controls, expression and vibrato.
3) Control part 2: when my track is at a very advanced stage, I take what I thought about in step 1) and polish everything by adding more controls and details.

Working like this is useful because you don't lose yourself in too many activities at once.


----------



## Bruhelius (Sep 2, 2018)

I thought I would note here that Logic Pro X has a bunch of MIDI plugins that can introduce randomness and modulation into incoming MIDI CC data. This way you could create ensembles whose members differ in CC dynamics, including pitch variations, all starting from one source MIDI clip (i.e. the first violin chair). Together with Transform post-processing applied to note positions, these things might end up not phasing. I have yet to try it all out. Also, one way to introduce some offset in the pitch of each instrument is to modulate the pitch at a low frequency in every ensemble member.
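The low-frequency pitch idea at the end can be sketched like this (a toy Python illustration; the rates, depths, and per-chair phases are my own choices, and 8192 is the standard MIDI pitch-bend neutral value):

```python
import math

def pitch_drift(duration, rate_hz, depth, phase, step=0.05, center=8192):
    """Generate a slow sinusoidal pitch-bend stream (centered on the
    14-bit neutral value 8192) to gently detune one ensemble member.
    Keep depth small (tens of units) so it reads as drift, not vibrato."""
    n = round(duration / step) + 1
    return [(round(i * step, 3),
             int(round(center + depth *
                       math.sin(2 * math.pi * rate_hz * i * step + phase))))
            for i in range(n)]

# A different rate and phase per chair, so the drifts never line up:
chair1 = pitch_drift(2.0, rate_hz=0.13, depth=40, phase=0.0)
chair2 = pitch_drift(2.0, rate_hz=0.19, depth=40, phase=1.7)
```

Because each chair's drift is incommensurate with the others', their spectra never stay aligned long enough to phase.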

LPX also has some neat JavaScript scripting features (the Scripter MIDI plugin) that I think were mentioned earlier in this thread. Has anyone made any progress with that, by any chance, and is willing to share?


----------



## mrazz (Apr 10, 2019)

Very good work! You and I think in a similar fashion. All of my woodwinds and solo strings are individual Audio Modeling instruments, and the same goes for my brass sections, which are made up of individual Sample Modeling instruments. My percussion comes from a hodgepodge of libraries, and my string sections are Hollywood Strings. Except for the string sections, every instrument, including the solo strings, gets its own instance of Virtual Sound Stage for positioning. After reading your article I may try a mix with fewer instances of Virtual Sound Stage. I would be very excited to see a video in which you show your Virtual Sound Stage and mixer setup. I also use the TEControl breath controller and about seven or eight sliders' worth of CC data!


----------



## mrazz (Apr 10, 2019)

I'm also very interested in how you interface the Leap Motion with the Audio Modeling and Sample Modeling instruments. Is there a piece of software that translates hand motion into CC control? I went to their website and it seemed to be PC-only. Can you elaborate on this?


----------



## dflood (Apr 10, 2019)

mrazz said:


> I'm also very interested in how you interface the Leap Motion with the Audio Modeling and Sample Modeling instruments. Is there a piece of software that translates hand motion into CC control? I went to their website and it seemed to be PC-only. Can you elaborate on this?



This article explains how to use the Geco MIDI interface with a Leap Motion controller. I have used this setup on a Mac. It should work fine with SWAM instruments.

http://blog.leapmotion.com/playing-a-virtual-violin-with-serenade-geco-midi/


----------



## DANIELE (Apr 11, 2019)

mrazz said:


> Very good work! You and I think in a similar fashion. All of my woodwinds and solo strings are individual Audio Modeling instruments, *and the same goes for my brass sections, which are made up of individual Sample Modeling instruments*. My percussion comes from a hodgepodge of libraries, and my string sections are Hollywood Strings. Except for the string sections, every instrument, including the solo strings, gets its own instance of Virtual Sound Stage for positioning. After reading your article I may try a mix with fewer instances of Virtual Sound Stage. I would be very excited to see a video in which you show your Virtual Sound Stage and mixer setup. I also use the TEControl breath controller and about seven or eight sliders' worth of CC data!



Also give Aaron Venture's Infinite Brass a try. Great library!


----------



## rohandelivera (Apr 13, 2019)

mrazz said:


> I'm also very interested in how you interface the Leap Motion with the Audio Modeling and Sample Modeling instruments. Is there a piece of software that translates hand motion into CC control? I went to their website and it seemed to be PC-only. Can you elaborate on this?



The Leap works with Mac. I use Geco MIDI to translate gestures into MIDI CC.


----------



## DANIELE (May 10, 2019)

lelepar said:


> The problem is the limited resolution of current MIDI. The "bowing" gesture computes the derivative of the input expression (i.e. the position of the bow), which has just 128 values (0 to 127). With such limited resolution, it is like having a bow 5 cm long or even less.
> We could increase the sensitivity, i.e. like having a longer bow, but then the lowest dynamics sound so bad! It is like having a saw-bow!
> There is no solution with the current standard MIDI, using a single 7-bit byte. *We are still working on the next major release, which will support high-resolution MIDI (i.e. 14-bit: 16384 values)*.
> The problem is that the majority of controllers out there do not support it.
> ...



Any news on this?


----------



## lelepar (Jun 4, 2019)

DANIELE said:


> Any news on this?



SWAM v3 is still under development. Please check our roadmap on our Community https://community.audiomodeling.com

Best,
Emanuele


----------



## Rilla (Apr 13, 2020)

I would love to see your techniques applied to the new Sample Modeling Strings.


----------



## Markrs (Jun 21, 2021)

Wow, this is very impressive. A great reminder of what physically modelled instruments can do!


----------



## muziksculp (Jun 21, 2021)

Looking forward to SWAM Woodwinds V3


----------



## Miklós Vigh (Jul 10, 2021)

gregh said:


> The way to humanise a synthetic performance is to permute the data from an existing real performance rather than randomise using an arbitrary distribution. I've been using that method for years and it gives better results. If you want to use a random distribution, equal is probably the worst yet seems to be what everyone uses. I imagine that is because the people writing the code are coders rather than psychologists and musicians


@gregh Sounds interesting - how and what would you capture as data from an existing real performance?


----------



## lelepar (Jul 11, 2021)

gregh said:


> The way to humanise a synthetic performance is to permute the data from an existing real performance rather than randomise using an arbitrary distribution. I've been using that method for years and it gives better results. If you want to use a random distribution, equal is probably the worst yet seems to be what everyone uses. I imagine that is because the people writing the code are coders rather than psychologists and musicians


At Audio Modeling all developers are also musicians. We are not psychologists, but we have some knowledge of psychoacoustics, and we spend a lot of time analyzing and synthesizing random distributions that match real behavior as closely as possible. Extrapolating data from a real performance is a good approach, but it can only be applied to the virtual counterpart of that performance. What do you do when you want to bring your own expressivity to that performance, or to an original one?


----------



## NoamL (Jul 11, 2021)

Miklós Vigh said:


> @gregh Sounds interesting - how and what would you capture as data from an existing real performance?


Can't speak for Greg, but the method I personally use is free-playing a piano sketch of my idea, SMPTE-locking the notes in place, and then devising a tempo track that follows what I played. Then I can orchestrate against that tempo track: all the MIDI instrument performances can be quantized, but because of the fluid tempo everything will sound like a real performance.
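The "devise a tempo track that follows what I played" step boils down to turning inter-beat gaps into tempo values; a toy Python sketch (how the performed beat timestamps are captured -- tapped in, detected, or read off the SMPTE-locked notes -- is assumed here):

```python
def tempo_map(beat_times):
    """Turn performed beat timestamps (in seconds) into (beat_index, bpm)
    pairs: each inter-beat gap becomes one tempo value, so quantized MIDI
    played against this map inherits the rubato of the original take."""
    return [(i, 60.0 / (b - a))
            for i, (a, b) in enumerate(zip(beat_times, beat_times[1:]))]

# A performance easing from ~120 BPM down to ~100 BPM:
played = [0.0, 0.5, 1.02, 1.57, 2.17]
rubato = tempo_map(played)
```

Hard-quantized notes played through a map like this land exactly where the original performance breathed, which is why the result reads as human rather than as a sloppy robot.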

What Greg said about an "equal random distribution" being bad is spot on, in my view!

What *"humanization"* really does is take each quantized note and budge it a random distance from its 'correct' quantized position. This is nothing like how humans actually play music, because each note is still 'tethered' to (or 'equally randomly distributed around') the underlying quantized position, which listeners can easily infer.

In effect -

quantized music = robot

quantized music + humanization = deliberately sloppy robot

quantize + tempo map = human

I explained it better in this video:


----------



## osterdamus (Jul 11, 2021)

rohandelivera said:


> I haven’t seen much chat here on using SWAM or Sample Modeling instruments in sections, and thought i’d pipe in with my process for anyone who might find it interesting.
> 
> Over the last couple of months I’ve evolved my orchestral template away from sampled instruments to modeled instruments by Sample Modeling and Audio Modeling. So for example, instead of a sectional French Horn library I have 4 instances of a Sample Modeled Horn; and 10 instances of a SWAM Cello would be my Cello section.
> 
> ...



Great post, thank you for sharing. A little late to the party, but would you mind looping me in on what Direction Mixer is in this context?


----------



## Miklós Vigh (Jul 12, 2021)

NoamL said:


> Can't speak for Greg, but the method I personally use is free-playing a piano sketch of my idea, SMPTE-locking the notes in place, and then devising a tempo track that follows what I played. Then I can orchestrate against that tempo track: all the MIDI instrument performances can be quantized, but because of the fluid tempo everything will sound like a real performance.
> 
> What Greg said about "equal random distribution" being bad is spot on! in my view.
> 
> ...



Much appreciated, Noam. Thank you.


----------



## youngpokie (Jul 12, 2021)

There was a thread recently on whether a convincing string quartet library exists. I wrote that one didn't, but since then I've started looking into Audio Modeling and am getting seriously excited.

Reading the manual for SWAM Violin, it seems like it's possible to (re)create a large number of articulations, including many for the genuine virtuoso style of playing. That's really incredible despite the extra work. And I'm slowly starting to understand how to potentially do that with specific combinations of variables like bow pressure, bow start/end types, attack %. Just that already covers so many...

But I am not sure how to deal with articulations that require a specific section of the bow (top or bottom). For example, a common technique in classical music is to play colle as the first note in a martele run, producing a highly recognizable "click" effect. But a colle (and petit detache, lance, etc.) requires the bow to be at the frog to do it right.

Is there a way to specify the bow segment in SWAM violin? Or can it be simulated in some other way? Grateful if anyone could comment. @lelepar? Many thanks


----------



## Rilla (Jul 12, 2021)

NoamL said:


> Can't speak for Greg but the method I personally use is free-playing a piano sketch of my idea, smpte locking the notes in place, and then devising a tempo track that follows what I played. Then I can orchestrate against that tempo track - all the MIDI instrument performances can be quantized but because of the fluid tempo everything will sound like a real performance.
> 
> What Greg said about "equal random distribution" being bad is spot on! in my view.
> 
> ...



This is great!!


----------



## the_pro (May 2, 2022)

Is this possible without the Leap Motion Controller? It is sold for more than double the price you stated in my country, which is very expensive for me.


----------



## lelepar (May 2, 2022)

the_pro said:


> Is this possible without the Leap Motion Controller? It is sold for more than double of the price you stated in my country which is very expensive for me.


There are MANY ways to control SWAM instruments in real time. For example, one of our users has developed a personal skill to move three sliders independently, each one with a different finger.
Another example is to combine an expression pedal (or even two) with a Mod Wheel and AfterTouch.
You need to find your own way, the one that transforms your intention into the right gesture.


----------



## Kevin63101 (May 2, 2022)

I just got the TEControl breath controller (basic version) last week for Swam and other vsts.

Very easy to customise settings. A couple of questions:

1. Although I like it for expressive legato phrasing and CC-based vibrato, I had hoped to do tongued breaths for staccatos. Doesn't seem effective yet. Is there something I'm missing?

2. Wish I could send multiple CCs with different curves and different min/max MIDI values from the breath-only version. Is there a workaround to achieve this in real time?


----------



## Trash Panda (May 2, 2022)

the_pro said:


> Is this possible without the Leap Motion Controller? It is sold for more than double of the price you stated in my country which is very expensive for me.


Maybe check out MusiKraken on the Play/App Store. It's pretty fantastic as an alternative to Leap Motion and only requires a smart phone or tablet.


----------



## timbit2006 (May 2, 2022)

Trash Panda said:


> Maybe check out MusiKraken on the Play/App Store. It's pretty fantastic as an alternative to Leap Motion and only requires a smart phone or tablet.


Thanks for mentioning this, I didn't realize it was on Android now.


----------



## lelepar (May 2, 2022)

Kevin63101 said:


> I just got the TEControl breath controller (basic version) last week for Swam and other vsts.
> 
> Very easy to customise settings. A couple inquiries:
> 
> ...


1. If you mean with SWAM v3: have you selected the "Breath Controller Default" MIDI Preset? Are you keeping the buffer size very low, e.g. 64 or 128?

2. On SWAM v3 instruments you can assign the same MIDI CC to several parameters, defining a completely different remapping CURVE for each one.
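
For anyone wondering what "same CC, different curve per parameter" buys you, here's an illustrative sketch in plain Python (not SWAM's actual internals): one incoming breath CC drives two parameters through different response curves.

```python
def remap(cc_value, curve):
    """Map a 0-127 CC value through a normalized response curve.
    curve: a function from [0, 1] to [0, 1]."""
    x = max(0, min(127, cc_value)) / 127.0
    return round(curve(x) * 127)

# One breath CC, two parameters, two curves (curve shapes are illustrative):
breath = 64
expression = remap(breath, lambda x: x)       # linear response
vibrato    = remap(breath, lambda x: x ** 2)  # eases in, ramps up late
```

At half breath, the linear parameter is already at half travel while the squared one is only around a quarter, which is how one gesture can drive several parameters without them all moving in lockstep.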


----------



## Windbag (May 3, 2022)

Kevin63101 said:


> 1. Although I like it for expressive legato phrasing and cc based vibrato, I had hoped to do tonguing breath for staccatos. Doesn't seem effective yet. Is there something I'm missing?


You should be able to double-tongue... the TEControl is faster than any other breath controller I've used. I hadn't noticed this parameter before the v3 versions, but double-check that your breath control mode is set to FAST:






...and actually have a look at both the Wind Controller Release mode (which I believe ensures the instrument is silent with no breath input - otherwise it may sit at a minimum expression level that still makes noise) and at whether Attack Control is looking at Expression (breath) rather than note velocity. Those will give you full dynamic control via the TEControl, rather than splitting it between breath control and keyboard input.

As for the other thing: if you happen to be in Logic, there's a bit you can do with MIDI modifiers applied to the track, including re-assigning the CC number and scaling the value (-200% to 200%). That should let you send breath CC data anywhere else, in addition to or instead of letting it through as CC2. You should be able to handle the rest with the curves built into SWAM.

[Edit #13 or whatever] One other thing I'll toss in, since the tonguing question seems pretty specific to winds: do yourself a big favor and map formant to bite control on your BBC2. It takes some getting used to, but it's an incredibly expressive way to simulate mouth pressure or embouchure shape, and in my experience it's so effective at getting the reeds to sound natural that I consider it essential.


----------



## muziksculp (May 3, 2022)

Hi @lelepar ,

I would love to watch more in-depth SWAM video tutorials with good narration and explanation of the ways to make these instruments expressive. There is one video that was released a little while ago, but it has no narration and some annoying music in the background.

For example, more about the SWAM Solo Strings playing modes (Expression, Bowing, Bipolar) and when to use each for specific types of performances, plus explanations of some of the controllable parameters, e.g. Dynamic Transitions, Bow Lift, Bow Start, etc.

Thanks,
Muziksculp


----------



## fadiese (May 24, 2022)

_That is it_! @rohandelivera, this is exactly what I've been wanting to do: play an orchestra musically, rather than build a Frankenstein monster out of pieced-together samples. Your post is very inspiring to me, thank you for sharing your techniques and showing the expressive possibilities of the SWAM orchestra.


----------



## fadiese (May 24, 2022)

gregh said:


> The way to humanise a synthetic performance is to permute the data from an existing real performance rather than randomise using an arbitrary distribution.


Hi @gregh, what do you mean by permute the data?

The solution I envision for a *variation generator* is to combine two (or more) MIDI performances of the same music by blending the gestures of each note, including its timing, in varying amounts for each virtual musician. The amount each source performance contributes changes over time, so the result imitates no single musician in the group but always sits between the extremes of what the others play. The new performance preserves the musicality of the gestures and the intentions of the composer, yet guarantees small variations relative to the recorded MIDI performances.

I can explain the details of the algorithm if requested.
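
A rough sketch of the idea in code (illustrative Python; the note representation and names are simplified assumptions, not the full algorithm): two matched performances are blended note by note, with the blend weight drifting over time.

```python
def blend_performances(perf_a, perf_b, weight_at):
    """Blend two MIDI performances of the same passage, note by note.
    Each performance is a list of (onset_sec, duration_sec, velocity)
    tuples, assumed already matched note-for-note. weight_at(i) returns
    the blend weight for note i: 0.0 -> all perf_a, 1.0 -> all perf_b.
    Letting the weight drift over time yields a performance that copies
    neither source but always sits between them."""
    blended = []
    for i, ((t1, d1, v1), (t2, d2, v2)) in enumerate(zip(perf_a, perf_b)):
        w = weight_at(i)
        blended.append((
            (1 - w) * t1 + w * t2,         # onset sits between the two takes
            (1 - w) * d1 + w * d2,         # so does duration...
            round((1 - w) * v1 + w * v2),  # ...and velocity
        ))
    return blended

# Example: the weight drifts slowly from take A toward take B over the phrase.
take_a = [(0.00, 0.50, 80), (1.00, 0.50, 90)]
take_b = [(0.10, 0.40, 100), (1.05, 0.60, 70)]
virtual_player = blend_performances(take_a, take_b, lambda i: 0.25 + 0.1 * i)
```

Give each virtual musician in a section its own weight function and you get as many distinct-but-plausible performances as you have players, all bounded by what was actually played.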


----------

