Nate Johnson
Senior Member
So I finally picked up a simple controller (a Korg nanoKONTROL2) with faders to program expression, dynamics, etc. That seems to be the approach everyone takes to add more realism to performances of orchestral library patches. I'm using Spitfire's Bernard Herrmann Orchestra.
I notice most people seem to keep fingers on both the expression and dynamics faders at the same time while recording a keyboard part. In practicing this technique, I'm having a hard time figuring out how much to move each one. My understanding is that expression just alters the overall volume within the patch, while dynamics crossfades between the different sample layers (soft, medium, loud).
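Just to make that distinction concrete for myself, here's a rough sketch of how the two curves could relate during a swell. This assumes the common Spitfire-style mapping of CC1 = dynamics (layer crossfade) and CC11 = expression (patch volume); the specific CC numbers and the scaling factors are my own illustrative assumptions, not anything from the library docs.

```python
# Sketch: linked dynamics (CC1) and expression (CC11) values for a swell,
# assuming CC1 crossfades sample layers and CC11 scales patch volume.
# The 0.5 depth and the 64 floor below are arbitrary illustrative choices.

def swell(position):
    """Return (dynamics, expression) CC values (0-127) for one point in a
    swell, where position runs 0.0 -> 1.0 across a crescendo.

    Dynamics carries the main shape (the timbral change); expression
    follows it at reduced depth, so volume and timbre rise and fall in
    sync without the volume swing being as extreme as the timbral one.
    """
    dynamics = min(round(position * 127), 127)
    # Expression tracks dynamics but only over the upper half of its
    # range, so the patch never drops to silence mid-phrase.
    expression = min(round(64 + position * 0.5 * 127), 127)
    return dynamics, expression

# Sample a crescendo at a few points in the phrase:
for pos in (0.0, 0.5, 1.0):
    print(pos, swell(pos))
```

In other words, the two faders *can* be slaved to one gesture like this, which is part of why I'm wondering whether riding them independently actually buys anything.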
Question #1 - Do you move expression and dynamics at the same time? I mean, if a line is swelling and fading, wouldn't the overall volume and the timbral layer increase and decrease in sync?
So far my performance of all this input is a little clunky. I find myself going back into the automation lanes (LPX) to clean up my mess. And not understanding question #1 leaves me scratching my head as to why I'm riding both parameters instead of, say, just dynamics.
Question #2 - Is the best way to get a realistic human performance to play everything in at once, faders and keys together? Or can I create enough realism by recording fader automation, or drawing in automation, AFTER the keyboard performance?