Expression & Dynamics Performance

Nate Johnson

Senior Member
So I finally picked up a simple controller (Korg Nanokontrol2) with faders to control expression, dynamics, etc. Seems like that's the approach everyone takes to add more realism to the performance of orchestral library patches. For reference, I'm using Spitfire's Bernard Herrmann Orchestra.

I notice most people seem to have their fingers on both the expression and dynamics faders at the same time while recording in a keyboard part. In practicing this technique, I'm having a hard time figuring out how much to move each one. I understand that expression just alters the volume within the patch, while dynamics crossfades between different sample layers (soft, medium, loud).
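To make that distinction concrete, here's a toy model in Python; a minimal sketch, assuming a hypothetical three-layer patch and linear curves, not how any particular sampler actually works:

```python
# Toy model: CC11 (expression) only scales output gain, while CC1 (dynamics)
# crossfades between adjacent dynamic layers. Illustration only.

LAYERS = ["soft", "medium", "loud"]  # hypothetical dynamic layers

def layer_blend(cc1: int) -> list:
    """Map CC1 (0-127) onto a crossfade between two adjacent layers."""
    pos = cc1 / 127 * (len(LAYERS) - 1)   # 0.0 .. 2.0 across three layers
    low = min(int(pos), len(LAYERS) - 2)
    frac = pos - low                      # how far into the crossfade we are
    return [(LAYERS[low], round(1 - frac, 2)), (LAYERS[low + 1], round(frac, 2))]

def output_gain(cc11: int) -> float:
    """Map CC11 (0-127) to a simple linear gain; real patches taper differently."""
    return cc11 / 127

# A swell: dynamics moves the timbre, expression moves the level.
for cc1, cc11 in [(20, 40), (64, 80), (110, 120)]:
    print(f"CC1={cc1:3} -> layers {layer_blend(cc1)}, CC11={cc11:3} -> gain {output_gain(cc11):.2f}")
```

Riding only CC11 changes loudness but keeps the soft layer's timbre; riding only CC1 changes timbre, but the level change is limited to whatever loudness difference was baked into the recordings.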

Question #1 - Do you move expression and dynamics at the same time? I mean if a line is swelling and fading, wouldn't overall volume and dynamic timbral layer increase/decrease in sync?

So far my performance of all of this input is a little clunky. I find myself going back into the automation lanes (LPX) and cleaning up my mess. And not understanding question #1 keeps me scratching my head as to why I'm messing with both parameters instead of, say, just dynamics.

Question #2 - Is the ultimate way to recreate a realistic human performance to play it all in at once, faders and keys together, or can I create enough realism by recording fader automation, or drawing in automation, AFTER the keyboard performance?
 
Both questions I’m asking myself too, so I'm interested in people’s responses. My initial thought is that it depends a lot on the library, i.e. some may have enough volume change built into the dynamic layers (there I’d only use modulation, not expression), whereas others (including my Spitfire stuff) often seem to need additional ‘help’ from expression on top of modulation. Of course it also depends on the musical line itself, and how extreme or subtle the dynamic change needs to be. But I'm very interested in what more experienced users think...
 
To me personally, it seems a bit of an athletic aspiration to be able to perform all passages in one go, riding all the faders, wheels, etc. If you're one of the real Olympic keyboarder guys who can do it anyway, sure, why not. You've got people like Jordan Rudess, who made a career out of dazzling keyboard athleticism. But I'm not sure it's all that important to chase that kind of ability if you're not that kind of performer in the first place.

Then there are guys like Mike Verta, who is a seasoned, knowledgeable performer and improviser, and the whole way he thinks relies a lot on those abilities. It's just far more natural for him to perform the stuff on the go.

It depends on the library and the material too, of course. If you're just laying down some nice string chords or some other underscore, it's not that difficult to ride CC1 and CC11 while playing. But if you're doing a woodwind solo performance or something, I'd personally not bother. Those passages deserve special attention to detail, so why not go into the piano roll, take some time, and make sure it's really great.

In any case, no, I don't believe the ultimate way of recreating a realistic human performance is intrinsically tied to playing it all in at once. Some people swear by it, but on the other end of the spectrum there are people who literally click in every single note of a performance and draw all the curves in by hand, and get really awesome results. There are many methods.

I very much doubt that all the greatest MIDI works are so good specifically because someone played everything in one pass. After all, sample libraries aren't exactly synth patches. There's a reason all these controls, keyswitches, parameters, etc. are there. Tweaking is expected.
 
Ohhh dang it - Just watched the first part of Mike’s tutorial.

The takeaway from his point of view is ‘practice the full performance method.’

I also noticed that he seems to focus only on dynamics, instead of volume and dynamics. I started messing around doing it that way, and at least it's easier.
 
I pretty much only use Spitfire VIs, so this applies to their GUI and instruments, but it isn't hard to translate to others.

I link dynamics and expression to my breath controller, but limit the amount of expression variance in Kontakt to 25%-100%, so that it adds some extra range on top of the recorded dynamics without dropping all the way to silence at the bottom end. I've been thinking about making that range even narrower, to see if it still feels as responsive as I'd like but sounds a little more natural.
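As a sketch of what that range limiting amounts to (the 25% floor is roughly CC value 32; this remap is an illustration in Python, not Kontakt's actual implementation):

```python
# Rescale an incoming controller (0-127) into a narrowed expression window,
# mirroring the 25%-100% range described above. Values are illustrative.

EXPR_FLOOR = 32   # ~25% of 127: never fades fully to silence
EXPR_CEIL = 127   # 100%: top of the range unchanged

def limited_expression(breath_cc: int) -> int:
    """Map full-range breath input onto the narrowed CC11 range."""
    return round(EXPR_FLOOR + (breath_cc / 127) * (EXPR_CEIL - EXPR_FLOOR))

assert limited_expression(0) == 32     # bottom of breath range -> floor, not silence
assert limited_expression(127) == 127  # top of range passes through
print(limited_expression(64))          # midpoint lands around 80
```

Narrowing the window further just means raising EXPR_FLOOR.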

I save the modwheel for vibrato, and put the handful of other controls I might want to vary mid-performance on other sliders, and then go to town with it. I think I'd rip my hair out drawing in CC curves. It all has to happen as part of the performance for me.
 
I try to avoid controlling Expression; after all, the instruments being emulated only have dynamics, not "expression" or "volume". The sound of a sampled string ensemble being faded down with expression or volume is different from the sound of a library where the volume reduction happens only by crossfading into recordings of the instruments played at a lower dynamic. But some libraries need both controllers, of course, due to a lack of dynamic layers.
 
Me too @Vik. It feels like such a cheat using expression rather than a real diminuendo, especially for strings, but as you say it's necessary at times. We all know that expression riding can help at the end of phrases and cheat a crescendo/diminuendo, but I'd love more and more dynamic layers to play with from the likes of OT and SFA.
 
Ideally it would be great never to have to "cheat" with instrument volume. But most VIs would require greater dynamic resolution, and sometimes more dynamic layers bring their own set of problems as well.

In the end, it's all "cheating" anyway with samples and VIs, isn't it? We're using these weird recorded and sliced-up snapshots of performances that are being wrangled by scripts and whatnot into some kind of malleable sound source. It's not a real instrument and I can't treat it as such, so basically anything is fair game, as long as it helps the performance.

So yeah, generally I too try to stick to the most "natural" approach possible and get as much done as I can with just dynamics, without processing, plugins, etc., but still - I'd run the sound through a toaster and mix baby monkey samples underneath if it somehow ended up sounding better and more believable. :)
 
Interesting discussion. So if you only use CC1 to move between the dynamic layers, where do you set CC11 (expression) and CC7 (volume)? Yes, it depends, but do you generally want to set both of them to 127 to give CC1 the widest range? If not, where do you set them? I find that in Cubase, if you set a CC7 controller lane to 127, the corresponding volume slider in Kontakt goes to 0. If you still need a volume boost, you can always use volume automation in Cubase or the volume fader in VEPro, or leave CC7 unassigned and manually move the volume slider up in Kontakt. There are just way too many ways to change the volume of a given instrument; no wonder all of this is confusing. Yes, do whatever sounds best, but there has to be a rule of thumb to use as a starting point. Thoughts??
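One candidate rule of thumb, sketched below: write explicit CC7/CC11 values at the very start of the part so nothing depends on an unknown plugin default. This example uses the mido library, and the chosen values (CC7=100, CC11=127) are placeholders, not a recommendation:

```python
# Write explicit CC7/CC11 starting values at the top of a MIDI file so the
# instrument never relies on an unknown plugin default. Requires mido
# (pip install mido); the CC values here are placeholders.
import mido

mid = mido.MidiFile()                     # default 480 ticks per beat
track = mido.MidiTrack()
mid.tracks.append(track)

track.append(mido.Message('control_change', control=7, value=100, time=0))   # CC7 volume
track.append(mido.Message('control_change', control=11, value=127, time=0))  # CC11 expression
track.append(mido.Message('note_on', note=60, velocity=90, time=0))
track.append(mido.Message('note_off', note=60, velocity=0, time=480))        # one beat later

mid.save('init_cc.mid')
```

The same idea works inside any DAW: put a CC7 and a CC11 event at bar 1 of every instrument track, and everything downstream is at least deterministic.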
 
From my perspective, to have a performance that breathes, you need a handle on both tone colour and volume independently. How you get there is very library-dependent, but invariably tone colour and volume are somehow linked, which complicates matters: you're trying to emulate players who have spent lifetimes practicing ways of separating tone colour from volume for the sake of expression (the musical term, not the MIDI one).

After about 7 years of trying (as an amateur. I would hate to try to make a living working like this), this is the workflow that works best for me:

First of all, my keyboard skills aren't that great, and at age 70 it's a choice between honing my keyboard playing or writing before I die. I chose the latter. Regardless, about half my stuff is 'drawn' in the piano roll and half is played in. When I play stuff in, I ignore dynamics and concentrate on the tempo variations that make it a musically shaped clip.

Next comes tone colour, and that control is library-dependent and, as I said, almost always tied to volume. It could be controlled by velocity, by CC2, or, as in the case of the Chris Hein libraries, by CC11. In any case, I listen only for tone colour and ignore the volume changes. Also, like a good player would, I use this colouration for nuance only. Yes, there are times I want to greatly brighten the brass over the course of a big crescendo, for instance, and so go through many layers of colouration, but a general passage usually comes out to one to three layers.
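A sketch of how that library-dependent mapping might be organized in code; only the Chris Hein entry (CC11) comes from the description above, and the other entries are hypothetical placeholders:

```python
# Lookup table for which control drives tone colour in each library.
# Only the Chris Hein entry reflects the post; the others are made up.

TIMBRE_CONTROL = {
    "chris_hein": ("cc", 11),        # from the post: CC11 drives colour
    "library_a": ("cc", 2),          # hypothetical: breath controller CC2
    "library_b": ("velocity", None), # hypothetical: velocity-switched layers
}

def timbre_message(library: str, amount: int):
    """Describe the event needed to set tone colour for a given library."""
    kind, cc = TIMBRE_CONTROL[library]
    if kind == "cc":
        return ("control_change", cc, amount)
    return ("note_velocity", None, amount)  # applied to the note itself

print(timbre_message("chris_hein", 96))  # ('control_change', 11, 96)
```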

Next is volume which, again depending on the library, could be CC7 or automation of the volume slider on the VST UI. This is where the shape of the passage is actually added. Very often I have to use very sharp drops or rises (like 90 degrees) in places to compensate for the volume rises and falls baked into the colouration.
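A rough illustration of that kind of volume lane: a smooth ramp with a near-vertical corrective drop where a colouration step jumps in too loud. All tick and CC values below are made up for the example:

```python
# Build a CC7 lane as (tick, value) breakpoints: a two-beat crescendo, then a
# sharp "90-degree" drop to compensate for a too-loud layer switch, then the
# swell resumes. 480 ticks per beat; every number is illustrative.

def ramp(start_tick, end_tick, start_val, end_val, steps=8):
    """Linear CC ramp expressed as a list of breakpoints."""
    return [(round(start_tick + (end_tick - start_tick) * i / steps),
             round(start_val + (end_val - start_val) * i / steps))
            for i in range(steps + 1)]

curve = ramp(0, 960, 60, 100)          # smooth two-beat crescendo
curve += [(961, 84)]                   # near-vertical drop after the layer switch
curve += ramp(961, 1920, 84, 110)[1:]  # resume the swell from the corrected level

for tick, value in curve:
    print(tick, value)
```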

While not included in the OP, the next step is articulations. In the libraries I like to use, these are programmable via keyswitch or CC#. Very often the volume curve needs tweaking after this.
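As a sketch of programming an articulation change via keyswitch, again using mido; note 24 (C0) as the keyswitch is a placeholder, since the actual key varies per library:

```python
# Insert a keyswitch note just before a phrase so the articulation change
# lands in time. Requires mido; the keyswitch note number is a placeholder.
import mido

mid = mido.MidiFile()
track = mido.MidiTrack()
mid.tracks.append(track)

track.append(mido.Message('note_on', note=24, velocity=1, time=0))    # keyswitch down
track.append(mido.Message('note_off', note=24, velocity=0, time=30))  # brief hold
track.append(mido.Message('note_on', note=67, velocity=95, time=0))   # phrase note
track.append(mido.Message('note_off', note=67, velocity=0, time=480))

mid.save('keyswitched_phrase.mid')
```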

Finally I add any automations concerning vibrato, slides, portamento, etc.

For a 16-bar violin solo, for instance, all of that takes about an hour.

And then the next day I listen to it and want to change everything :P
 
as an amateur. I would hate to try to make a living working like this

Hey, I'm supposed to be a professional (in the sense that I make my living from this), and yet my days are full of those "try"s. ;) It's all learning, whether you do it for a living or not! :)
 
I do wonder if, given the relatively tiny timbral variation that even "deep sampling" gives us, it would be better to use the dynamics slider to set the overall color of a passage and then leave it be until a major dynamic change, while relying on expression for... well, expression, within each note and phrase.

Maybe I'll experiment with that. I don't expect to be satisfied with it, but sometimes it feels exaggerated to move between as little as two dynamic/timbral snapshots as much as I do.
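If anyone wants to try that, here's a minimal sketch of the idea: one fixed CC1 value for the whole passage, with a CC11 arc shaping each phrase. The half-sine arc and all the values are illustrative assumptions, not a recipe:

```python
# Hold CC1 (dynamics) at one timbral snapshot for the passage and shape each
# phrase with a rise-and-fall CC11 (expression) arc. All values illustrative.
import math

PASSAGE_CC1 = 88  # a single colour for the whole passage

def phrase_expression_arc(length_ticks, points=16, low=70, high=115):
    """A half-sine CC11 arc across one phrase, as (tick, value) breakpoints."""
    return [(round(length_ticks * i / points),
             round(low + (high - low) * math.sin(math.pi * i / points)))
            for i in range(points + 1)]

print("CC1 for passage:", PASSAGE_CC1)
for tick, cc11 in phrase_expression_arc(1920):  # a one-bar phrase at 480 ticks/beat
    print(tick, cc11)
```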
 
So if you only use CC1 to move between the dynamic layers, where do you set CC11 (expression) and CC7 (volume)?
I try not to touch them at all, which can be confusing, since there seem to be two different default levels for CC7 in Kontakt. There has to be enough headroom left to boost something if needed, but I'd never set CC7 to 127. All this also varies from library to library.

The main problem for me isn't boosting; it's how to lower dynamics without it sounding as if someone is just turning down the volume with a knob.
 
Most of the time, I see the Kontakt default volume slider set to either 0 or -6.0 dB depending on the library. If in Cubase I set up a CC7 lane with a value of 127, it pushes the Kontakt volume slider to 0, and if I set a CC7 value of 101 in Cubase, the Kontakt volume slider drops to -6.0. So if you do not set a CC7 value in a lane in Cubase, are these the corresponding default values in Kontakt?
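For what it's worth, those two readings (127 to 0.0 dB, 101 to -6.0 dB) happen to fit a cubic amplitude taper, dB = 60 * log10(cc/127). That formula is only fitted to these two data points, an assumption rather than documented Kontakt behavior:

```python
# Check the two readings above against an assumed cubic taper:
# dB = 60 * log10(cc / 127), equivalently gain = (cc / 127) ** 3.
import math

def cc7_to_db(cc: int) -> float:
    return 60 * math.log10(cc / 127)

print(round(cc7_to_db(127), 1))  # 0.0, matches the first reading
print(round(cc7_to_db(101), 1))  # -6.0, matches the second reading
print(round(cc7_to_db(64), 1))   # about -17.9 under this assumed curve
```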

And if you don't set up a CC11 value in a lane in Cubase, what's the default CC11 value in Kontakt? For Spitfire it appears to be 64: their instruments usually have an expression slider, so if you don't set a CC11 value it defaults to 64. But I don't see an expression slider in, for example, Berlin Woods. I'm probably overthinking all of this; it's really more art than science, so use your ear and do whatever you think sounds right. My poor little pea brain hurts from thinking about all of this.
 