Discussion in 'Newbie Questions' started by Coriolis, Apr 26, 2019.
I’m working on it
Keyswitches are pure hell, and I never got why this system is used. It's extremely chaotic (each library uses a different system), and making your own for each library is crazy.
What I like is velocity-controlled articulations like in Cinesamples. For shorts, this is the best thing ever: 10000x better, faster, easier. For all other stuff, separate tracks in groups are the win.
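A rough sketch of the velocity-switching idea in principle (the ranges and articulation names below are purely illustrative, not Cinesamples' actual mapping): the note's velocity, rather than a keyswitch, selects which short articulation plays.

```python
# Illustrative velocity layers for short articulations.
# These split points are assumptions, not any library's real mapping.
VELOCITY_LAYERS = [
    (0, 42, "spiccato"),
    (43, 84, "staccato"),
    (85, 127, "marcato"),
]

def articulation_for(velocity):
    """Pick the short articulation triggered by a given MIDI velocity."""
    for low, high, name in VELOCITY_LAYERS:
        if low <= velocity <= high:
            return name
    raise ValueError("velocity out of MIDI range 0-127")

print(articulation_for(30))   # spiccato
print(articulation_for(100))  # marcato
```

The appeal for shorts is that the control data lives in the notes you already play; nothing extra ends up in the piano roll.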
For me this kind of instrument is the future. I think keyswitching is becoming an old way of using libraries; the future is completely dynamic, realistic libraries, with no keyswitches and no limit on articulations.
It’s already a combination of both.
For you (or anyone else who uses one track per articulation), how on earth do you ever manage to write anything? I switch back and forth between articulations a lot, and I can’t see myself jumping back and forth between different midi tracks every couple of notes, and if you want to see how different articulation combinations would sound for a phrase...forget about it!
I dunno. I'm using Reaper. Maybe other DAWs have an easier way of moving back and forth between different MIDI tracks.
There are situations where something more intelligent can determine for you which articulation might be right for the line you're playing. But there are also stylistic decisions: using con sordino or not; col legno vs. Bartók pizz vs. pizz.; or deciding to have a fraction of the notes in a chord play trems while the rest play normal longs. In other words, the more micro-editing you do, the more important keyswitches become.
I'm biased, I know.
I think there will always be a place for composers to explicitly specify, one way or another, what articulation to apply to a certain note or phrase. Whether that is handled with keyswitches, channelizing, or perhaps some kind of universal articulation ID system someday is yet to be determined. For now both methods are out there. At least with keyswitching libraries you still have the option to put separate instances on different channels and work that way if you like, as opposed to PLAY, which is mostly channel-based, where you have no choice but to use that method.
Articulation management systems can blur the distinction: you use articulation IDs, CC-based indicators, expression maps, PC messages, etc. to indicate what you want to happen, and the articulation management solution will either channelize or send keyswitches for you, so you don't have to worry about it.
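As a hedged illustration of what such a layer does (all names, note numbers, and channel assignments here are made up for the sketch, not any real product's API): the manager takes a library-agnostic articulation ID and either re-channelizes the note or prepends a keyswitch, depending on what the target library expects.

```python
# Two hypothetical translation strategies an articulation manager
# might apply per library. Values are illustrative only.
KEYSWITCH = "keyswitch"
CHANNELIZE = "channelize"

LIBRARY_MAPS = {
    "channel_based_lib": {
        "mode": CHANNELIZE,
        # articulation -> MIDI channel holding that patch
        "articulations": {"sustain": 1, "staccato": 2, "pizzicato": 3},
    },
    "keyswitch_lib": {
        "mode": KEYSWITCH,
        # articulation -> keyswitch note number (C0 = 24 in this sketch)
        "articulations": {"sustain": 24, "staccato": 25, "pizzicato": 26},
    },
}

def translate(library, articulation, note, velocity):
    """Return the MIDI-ish events to emit for one tagged note."""
    lib = LIBRARY_MAPS[library]
    target = lib["articulations"][articulation]
    if lib["mode"] == CHANNELIZE:
        # Route the note itself to the articulation's channel.
        return [("note_on", target, note, velocity)]
    # Otherwise: send the keyswitch first, then the note, on channel 1.
    return [("note_on", 1, target, 1), ("note_on", 1, note, velocity)]

print(translate("channel_based_lib", "staccato", 60, 100))
print(translate("keyswitch_lib", "pizzicato", 60, 100))
```

The point is that the composer only ever tags notes with the articulation ID; the per-library mess stays inside the translation table.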
I like articulation switching that messes up notation/the sheet music as little as possible. If a glissando requires one note to overlap another to activate, you need to correct that for the sheet music. Keyswitch notes tend to be easier to strip out of your sheet music. I hope future DAWs make it easier to convert your DAW file into proper sheet music. Separate channels for articulations is a step in the wrong direction, imho, because of this.
With channel-based switching you can still have a single source MIDI track that will translate to notation just fine, as long as you're using an articulation management system to channelize during playback.
The overlapping legato note problem is real and I personally think the right solution for that is to also have an articulation management system that overlaps the needed transitions on the fly during playback, based on other indicators such as articulation ID. Then notation can be proper easily.
Hmm, I need to look into this some more...
Note that as of right now I’m not aware of any articulation management system that will overlap transitions on the fly. But maybe someday.
Physis K4 rules for articulations. Even beats custom iPad systems.
I use 4 x banks of buttons x 9 on the fly in real-time.
I’d hate it if somebody decided to fix something that already works.
Chris Hein is my favorite developer for making articulations a walk in the park.
The Harmonica is sweet but the Horns are brilliant.
Especially hot keys where I can hit Section shakes then default back to my choice of sustains or swells.
I can’t even imagine using sampled instruments without KS.
Assign them to notes, pedals, anything you want, even a SysEx string for sequential triggering.
Reaper: that may be the problem. I have tried almost all the DAWs on the market and, sorry, Cubase is the only usable, user-friendly MIDI editor with a good-looking piano roll. Logic, Reaper, FL Studio: help!!! You don't have to exit the piano roll, just select the MIDI track in the drop-down menu. Once you've written a couple of bars, you just click the notes you already have there and the channel is selected automatically. But when you write a phrase using keyswitches, how do you know which keyswitch you are currently on? When you need to replay a passage, you sometimes can't, as a different keyswitch is triggered... it's so annoying.
Prefer expression maps, as it's easier to experiment on the fly. Earlier in this thread, someone made a point about it taking too long to set up expression maps. The thing is, you only need to do it once per library and that's it. Regarding pre-delays, and whether shorter artics get affected by the larger pre-delays needed for legatos etc.: if that is a problem for a specific library, either drag notes forward instead of using pre-delay, or separate the artics onto different channels afterwards.
With Studio One I used to prefer separate tracks, but since then I've gradually switched to keyswitches. Now that I've found a workaround for keyswitches, it's a lot easier to see why they can be good for creativity.
I use keyswitches to control expression maps, which send keyswitches (or MIDI CC). Expression maps are just a layer, to achieve compatibility between different libraries.
I would like to hear a bit more about this. Are you putting a keyswitch (say, C0) in your piano roll just like you would without expression maps? And will that somehow tell the expression map to change it into something else that can be used universally?
I suspect I'm doing the same thing as @Tfis, which is the opposite of what you just asked. I define a MIDI note range to control the expression map itself. So it's possible to define a consistent set of keyswitches for all libraries, no matter whether they use keyswitching or not. And the expression maps will show up in the MIDI editor in their own expression map lane; there won't be extra notes in the piano roll.
For example, for EWQL Hollywood Brass, I've loaded different articulations and combi patches on different MIDI channels. Then the expression map is defined so that C0 will send notes to MIDI channel 1, C#0 will send to MIDI channel 2, etc.
For other libraries that do have keyswitching features, you can define a different expression map that receives C0-C1 and maps that to send the right signal to the library for keyswitching. So if the articulation you want requires sending C7 plus CC1 at value 127 to the library, you can program the expression map to send that when you press C0.
So that way you can make C0-C1 (or whatever range you want) have the same articulations for every library.
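A minimal sketch of that uniform-range idea (note numbers, channel assignments, and the data layout are assumptions for illustration, not Cubase's or EWQL's actual formats): the same incoming control note resolves to a different per-library action, so C0 always means the same articulation to you.

```python
# One consistent control range, two different per-library outcomes.
C0 = 24  # MIDI note number assumed for C0 (octave numbering varies by DAW)

EXPRESSION_MAPS = {
    # EWQL-style setup: articulations sit on separate MIDI channels,
    # so the map routes notes to a channel.
    "hollywood_brass": {C0 + 0: ("channel", 1),   # sustain patch
                        C0 + 1: ("channel", 2)},  # staccato patch
    # Keyswitching library: the map forwards a keyswitch note
    # (and optionally a CC) to the instrument instead.
    "ks_library": {C0 + 0: ("events", [("note", 96, 127), ("cc", 1, 127)]),
                   C0 + 1: ("events", [("note", 97, 127)])},
}

def resolve(library, control_note):
    """Look up what the expression map should do for a control note."""
    return EXPRESSION_MAPS[library][control_note]

print(resolve("hollywood_brass", C0))
print(resolve("ks_library", C0 + 1))
```

Either way, the composer's side of the mapping (C0, C#0, ...) never changes between libraries; only the right-hand side of the table does.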
I would definitely agree that once you find a way to automate or simplify the keyswitch method, it becomes far easier to understand and use with positive feelings toward the system. Some libraries are dabbling now in more advanced programming to avoid using keyswitches, and this is great too, but again becomes a problem when every library you have requires different playing methods to get through the articulation types. Automating keyswitches for me has definitely taken a lot of the stress out of the whole thing.
Articulation IDs / Expression Maps
Articulation IDs are embedded in the note data, so there's no way of compensating for legato delay without the program time-travelling into the future. However, it's possible in Cubase to write a macro that looks at the articulation IDs after the fact and moves the notes to compensate.
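The effect of such an after-the-fact pass can be sketched like this (the tick offset and the note representation are assumptions for illustration; Cubase's Logical Editor operates on its own data model, not this one): notes carrying a legato articulation ID get shifted earlier by the library's transition delay.

```python
# Assumed pre-delay for legato transitions, in ticks (illustrative).
LEGATO_PREDELAY_TICKS = 60

def compensate(notes):
    """notes: list of dicts with 'start' (ticks) and 'articulation'.
    Returns a new list with legato notes pulled earlier in time."""
    out = []
    for n in notes:
        shifted = dict(n)
        if n["articulation"] == "legato":
            # Never shift before the start of the project.
            shifted["start"] = max(0, n["start"] - LEGATO_PREDELAY_TICKS)
        out.append(shifted)
    return out

part = [{"start": 480, "articulation": "legato"},
        {"start": 960, "articulation": "staccato"}]
print(compensate(part))
```

Because the pass runs over already-written notes, it sidesteps the "time travel" problem: the playback engine never has to look ahead, the offline pass does it instead.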