# Keyswitching vs Separate Tracks vs UACC etc...



## jononotbono (Aug 20, 2017)

I'm just curious about different workflows regarding using Sample Libraries. At the minute my approach is to have articulations on separate tracks, but I have been reading about Spitfire's UACC and am quite intrigued about using it with an iPad. Is it faster to patch together different Articulations using UACC (I'm aware that it means using fewer MIDI tracks in Cubase, which could be a good thing) compared to using Keyswitches or separate tracks? I'm wondering what everyone's pros and cons of working in these different ways are.

I don't really like Keyswitches because I can never remember them all between different Libraries by different Devs, and the thought of using an iPad with all the Articulations labelled is quite appealing. I noticed Spitfire have a TouchOSC template on their site, but I realise you can also use Lemur and hopefully Metagrid. Do Cubase Expression Maps have to be used with UACC, or can it be used by itself (which would be handy, as you could then use Sample Libraries in different DAWs with the same iPad Template, etc.)?

What is everyone finding to be the best way of improving workflow and cutting down on MIDI tweaking time when piecing together performances with different Articulations? Am I right in thinking that you could have literally every single Articulation of, for example, Spitfire Chamber Strings Violins 1 accessible from one MIDI channel and an iPad Template? If so, this sounds rather good!


----------



## KerrySmith (Aug 20, 2017)

I pretty much gave up on Cubase (sticking with Pro Tools), but as long as you can send data on CC 32 (_technically MIDI Bank Select LSB_) you can make UACC work. You just have to look up the individual UACC number for each articulation on each instrument. I did all articulations for Albion I, V and SCS in Metagrid and it took a few hours, but it works fairly well. You just program the button to send a MIDI message. For example, for Albion I - Strings Long, my Action Queue is: *CC32, value 1, channel 1*. Then you just change the value to correspond to each articulation (e.g. Spiccato for that Instrument is *CC32, value 42, channel 1*).

The only reliable way I found to do the programming was to click on each articulation and note which UACC it was keyed to (_displays in lower left of Spitfire UI, if you have "Locked to UACC" checked for the instrument_) before I made each button. As I said in another thread, I've found their UACC assignments for different instruments (_based on how their UACC master template is laid out_) not exactly intuitive compared to how I would have assigned them. But you just look each one up and it goes pretty quickly. 

In practice, I wouldn't have ALL articulations on one MIDI track/Kontakt Instrument instance. I still run a couple of instances for some Instruments. For instance, for Strings, I'll usually do 3 tracks: "_Longs_", "_Shorts_", and "_FX/Others_", enabling only the relevant articulations for each, but being able to switch between those (say, all of the Longs) using my UACC-triggering buttons in Metagrid. That way I can keep the effects and output levels separate for the "main food groups".

Honestly, I use the Metagrid buttons mostly for previewing which sound I want to use more than anything. When I know I want a "Tundra Long String Sound", the exact articulation name doesn't always pop to mind, as I don't use that library regularly enough, and it's nice to just click through the buttons on the iPad.

But yes, Cubase should also be able to record the UACC messages from Metagrid, and they should show up on the controller lane for CC 32.
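For anyone curious what's actually going over the wire, here's a minimal Python sketch of the raw MIDI bytes such a button sends. The articulation values are the ones from the example above; check your own library's UACC chart for the rest:

```python
# A UACC articulation switch is just a MIDI Control Change on controller 32.
# Raw Control Change message: status byte 0xB0 + (channel - 1), then the
# controller number, then the value.

UACC_CONTROLLER = 32

def uacc_message(channel, articulation_value):
    """Build the 3-byte MIDI message a Metagrid-style button would send.

    channel: 1-16 (as shown in the Metagrid UI)
    articulation_value: the UACC number for the articulation (0-127)
    """
    if not 1 <= channel <= 16:
        raise ValueError("MIDI channel must be 1-16")
    if not 0 <= articulation_value <= 127:
        raise ValueError("UACC value must be 0-127")
    status = 0xB0 | (channel - 1)  # Control Change on this channel
    return bytes([status, UACC_CONTROLLER, articulation_value])

# "Strings Long" button from the example above: CC32, value 1, channel 1
print(uacc_message(1, 1).hex())   # b02001
# "Spiccato" button: CC32, value 42, channel 1
print(uacc_message(1, 42).hex())  # b0202a
```

Any app or controller that can emit those three bytes can drive UACC; the button label is purely cosmetic.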


----------



## jononotbono (Aug 20, 2017)

Thanks for your help (and in the other threads; I thought I'd start my own so I don't hijack everyone else's)



KerrySmith said:


> You just program the button to send a MIDI message. For example, for Albion I - Strings Long, my Action Queue is: *CC32, value 1, channel 1*. Then you just change the value to correspond to each articulation (e.g. Spiccato for that Instrument is *CC32, value 42, channel 1*).



Interesting, I have just tried this and it isn't working for me.


----------



## jononotbono (Aug 20, 2017)

I've just been playing about with the TouchOSC Spitfire Template Layout and UACC certainly works. I need to figure out how to get it working in Metagrid and then decide whether I like the workflow. I'm wondering about something, though... when you use UACC, how do you mix the levels of individual Articulations? Take a Col Legno patch, for example: they are usually too loud compared to most other arts, and I'm curious how people balance them.

I do like the idea of being able to just select each Art via an iPad screen but I'm so used to separate tracks that I'm not sure yet.


----------



## KerrySmith (Aug 20, 2017)

jononotbono said:


> Thanks for your help (and in the other threads; I thought I'd start my own so I don't hijack everyone else's)
> 
> 
> 
> Interesting, I have just tried this and it isn't working for me.


I'm not sure how it works in Cubase, but in Pro Tools I had to make sure that the METAGRID - MIDI input was enabled (as opposed to Metagrid - Cubase or Metagrid - Studio One) before it would accept MIDI messages.


----------



## KerrySmith (Aug 20, 2017)

jononotbono said:


> I've just been playing about with the TouchOSC Spitfire Template Layout and UACC certainly works. I need to figure out how to get it working in Metagrid and then decide whether I like the workflow. I'm wondering about something, though... when you use UACC, how do you mix the levels of individual Articulations? Take a Col Legno patch, for example: they are usually too loud compared to most other arts, and I'm curious how people balance them.
> 
> I do like the idea of being able to just select each Art via an iPad screen but I'm so used to separate tracks that I'm not sure yet.


For mixing, I'm just used to putting in volume automation (in whatever form) all of the time anyway, so it's not that big of a deal. Just something I assume I have to do. That's also partly why I have them split over a few tracks, to group together the main similarities. Col Legno would go on the "String FX" track.


----------



## benatural (Aug 20, 2017)

I use Keyswitches + expression maps for common articulations. Anything else gets its own track, especially special effects. Sometimes I'll do both, where I'll have a single self-contained track using expression maps, but I also make duplicate MIDI tracks that split everything out.

I should note that I don't use Instrument tracks in my template.


----------



## jononotbono (Aug 21, 2017)

KerrySmith said:


> For mixing, I'm just used to putting in volume automation (in whatever form) all of the time anyway, so it's not that big of a deal. Just something I assume I have to do. That's also partly why I have them split over a few tracks, to group together the main similarities. Col Legno would go on the "String FX" track.



Makes sense, but when an articulation gets loud its timbre usually gets brighter, so wouldn't automating Volume just turn up the volume and not change the timbre? Perhaps it's no big deal and dependent on how the arts were recorded. Also, I just figured out how to send Metagrid MIDI Messages. Thanks for your help!


----------



## jononotbono (Aug 21, 2017)

Sorry to ask more questions about UACC and Articulation switching but I have got Metagrid working. I press the Spiccato button and it changes the Articulation to Spiccato. That's all great. The problem I'm having is that none of the button presses are recording CC32 Data in the Key editor so it doesn't remember any of the articulations I press. I must be missing something here.


----------



## babylonwaves (Aug 21, 2017)

jononotbono said:


> Makes sense, but when an articulation gets loud its timbre usually gets brighter, so wouldn't automating Volume just turn up the volume and not change the timbre? Perhaps it's no big deal and dependent on how the arts were recorded. Also, I just figured out how to send Metagrid MIDI Messages. Thanks for your help!


usually CC1 adjusts the timbre (along with the volume, because the timbre changes with the volume) but if you use CC11 you'll just tackle the volume. all that is just a part of balancing your template. as for your original question, the mic mixer is stored per articulation, so if you want to reduce the volume of the Col Legnos in general you can do this there as well.


----------



## procreative (Aug 21, 2017)

babylonwaves said:


> the mic mixer is stored per articulation



I am sure this is not automatic, I think you have to click something in the GUI to make adjustments per articulation? Is this not something to do with the Cog?


----------



## babylonwaves (Aug 21, 2017)

procreative said:


> I am sure this is not automatic, I think you have to click something in the GUI to make adjustments per articulation? Is this not something to do with the Cog?


i'm not in front of it right now but this might help. you're right, apparently you need to enable this mode first and it is only available for newer libs.

https://www.syntheticorchestra.com/blog/?1


----------



## jononotbono (Aug 21, 2017)

Well, I've always wondered how people balance Arts when using Keyswitches, and now with UACC. Hopefully it can be automated. Any idea how to record the CC32 data onto a CC lane so it remembers the art selection on playback (in Cubase)?


----------



## StevenMcDonald (Aug 21, 2017)

This doesn't involve UACC, but I've always done one Kontakt instance per instrument, with sustains on MIDI ch. 1, staccato on 2, pizzicato on 3... depending on the instrument, of course.

I've used Mixcraft for years and it's easy to select some notes, right-click and change their channel to the appropriate articulation. I did it this way to keep all of one instrument on one track, because I used to export MIDI and make sheet music. I don't do sheet music any more, but I still work this way.

I've been trialing Cubase and it's possible to set it up this way as well. I think the only downside I've encountered is that CC1 only affects ch. 1 (at least in Mixcraft? I never looked into it). I usually don't need to load more than legato and shorts, so that hasn't affected me much. If I ever needed two CC1-controlled articulations on the same instrument, I would just duplicate the track to separate them. Easy!

Just something to consider! Also you can balance the volumes of your artics within Kontakt/PLAY/whatever you use.


----------



## jononotbono (Aug 21, 2017)

StevenMcDonald said:


> Also you can balance the volumes of your artics within Kontakt/PLAY/whatever you use.



Something I am still a little confused on is balancing volumes of Arts when they are coming out of the same output if that makes sense? I'm probably missing something obvious.


----------



## fixxer49 (Aug 21, 2017)

jononotbono said:


> Something I am still a little confused on is balancing volumes of Arts when they are coming out of the same output if that makes sense? I'm probably missing something obvious.


most VIs now provide a small volume control per articulation WITHIN any particular KS patch. sometimes you have to look under the hood of the GUI. Two examples off the top of my head are Symphobia and PLAY (EWQLSO).


----------



## StevenMcDonald (Aug 21, 2017)

jononotbono said:


> Something I am still a little confused on is balancing volumes of Arts when they are coming out of the same output if that makes sense? I'm probably missing something obvious.



Well, they are coming out of the same instance of Kontakt in my case, yes. But every Kontakt instrument has a volume control at the top right of its interface. I use that if I feel the shorts are too loud or whatever. I'm not making that up, right? I'm away from my PC, so I can't verify.


----------



## procreative (Aug 21, 2017)

Of course, if you're on Logic you can use the excellent SkiSwitcher Artz ID, which allows you to use one track, with notes encoded with articulation changes.

This means you can record multiple notes playing different articulations in one track.

It does not matter if the library uses Keyswitches, UACC, MIDI Channel etc. In fact I have some libraries that don't have keyswitches, and I have Multis set up that can be triggered as if they were Keyswitches.


----------



## jononotbono (Aug 21, 2017)

I have come to the conclusion I will stick with separate tracks. Still, it was good to explore and realise what options exist. Thanks for everyone's advice.


----------



## mc_deli (Aug 21, 2017)

procreative said:


> Of course, if you're on Logic you can use the excellent SkiSwitcher Artz ID, which allows you to use one track, with notes encoded with articulation changes.
> 
> This means you can record multiple notes playing different articulations in one track.
> 
> It does not matter if the library uses Keyswitches, UACC, MIDI Channel etc. In fact I have some libraries that don't have keyswitches, and I have Multis set up that can be triggered as if they were Keyswitches.


@babylonwaves is perhaps too modest to reply but it's interesting to note that he/she also has a cunning artic switching script plug in for Logic where you can see the artics and change them in the automation lane, which is nice...


----------



## rgames (Aug 21, 2017)

If you're in Cubase then expression maps can greatly improve your workflow, assuming you use enough different artics to make it worthwhile. They're one of two bits of music tech that have really made a difference in my workflow in the last 10 years - the other is SSDs.

They take a while to set up, but I'm still using ones from when they first came out 6-8 years ago, so it's a good investment of time.

Combined with Lemur or similar interface they're a huge help.

rgames


----------



## Alex Fraser (Aug 21, 2017)

I saw a Junkie XL vid recently. Whilst it might have been an older video (I was lost down a YouTube rabbit hole and had no sense of time), he had a simple yet clever way of approaching keyswitching.

For every track in his template, there appeared to be a corresponding list of available articulations written in the "notes" window of Cubase. Logic has a similar feature. When selecting a new track, the notes window updated with the available keyswitches for the current VI.

I guess you could take this approach with CC data: by creating a single preset on your controller (or pads) and then going through your VI libraries, setting up articulation switching to match the values on the controller, along with the relevant written articulation notes, it could be a simple yet effective setup.

Think I'll try this with my next template. Interesting to hear from anyone who's already taken this approach.


----------



## jononotbono (Aug 22, 2017)

rgames said:


> If you're in Cubase then expression maps can greatly improve your workflow, assuming you use enough different artics to make it worthwhile. They're one of two bits of music tech that have really made a difference in my workflow in the last 10 years - the other is SSDs.
> 
> They take a while to setup but I'm still using ones from when they first came out 6-8 years ago. So it's a good investment in time.
> 
> ...



I actually made expression maps for the 8Dio Adagio and Agitato Libraries a while ago to try them out, as Adagio has a notorious number of different Articulations, so I thought it would be wise. I'm still struggling, from a mix and control point of view, to see how you have control over the levels of each art when they all come out of the same stereo Output. And how do you control Reverb levels? I'm going to stick with separate tracks, but I do like the idea of Expression Maps and UACC on paper. Just not feeling it in practice. And yes, SSDs are a total game changer. There's no going back once you go down that path.


----------



## jononotbono (Aug 22, 2017)

Alex Fraser said:


> I saw a Junkie XL vid recently. Whilst it might have been an older movie (I was lost down a YouTube rabbit hole and had no sense of time) he had a simple yet clever way of approaching keyswitching.
> 
> For every track in his template, there appeared to be a corresponding list of available articulations written in the "notes" window of Cubase. Logic has a similar feature. When selecting a new track, the notes window updated with the available key switches for the current vi.



Yeah, I did notice this regarding the notepad and would love to know how he mixes the levels of the arts.


----------



## meradium (Aug 22, 2017)

jononotbono said:


> Makes sense but when an articulation gets loud it's timbre usually gets brighter so wouldn't just automating Volume just turn up the volume and not change the timbre? Perhaps it's no big deal and dependant on how the arts were recorded. Also, I just figured out how to send Metagrid Midi Messages. Thanks for your help!



@jononotbono does the MIDI message send in Metagrid finally work together with Cubase? I reported this bug many months ago: for whatever reason, Cubase did not receive any associated timecode with the sent MIDI messages and thus did not properly align them on the timeline... they all got stuck at time 0. This only occurred in Cubase and none of the other DAWs. My workaround was to pipe the MIDI messages through an instance of Pd, which seemed to correct the timing problem.

Since then I haven't tried again... Should give it a try to see if it was fixed in the meantime...


----------



## jononotbono (Aug 22, 2017)

meradium said:


> @jononotbono does the MIDI message send in Metagrid finally work together with Cubase? I reported this bug many months ago: for whatever reason, Cubase did not receive any associated timecode with the sent MIDI messages and thus did not properly align them on the timeline... they all got stuck at time 0. This only occurred in Cubase and none of the other DAWs. My workaround was to pipe the MIDI messages through an instance of Pd, which seemed to correct the timing problem.
> 
> Since then I haven't tried again... Should give it a try to see if it was fixed in the meantime...



Everything works except for data being recorded on CC32. When you press an MG button (assuming you have chosen the correct CC, value, and MIDI channel), the arts change, which is great, but none of the button presses get recorded onto the CC32 Controller Lane, and therefore when you play back whatever it is you recorded, none of the art changes occur. I haven't noticed any timing problems, but I'm now going to sack this idea off. Separate tracks for me.


----------



## Alex Fraser (Aug 22, 2017)

jononotbono said:


> Everything works except for data being recorded on CC32. When you press an MG button (assuming you have chosen the correct CC, value, and MIDI channel), the arts change, which is great, but none of the button presses get recorded onto the CC32 Controller Lane, and therefore when you play back whatever it is you recorded, none of the art changes occur. I haven't noticed any timing problems, but I'm now going to sack this idea off. Separate tracks for me.



Isn't it weird that even in 2017 we don't have an agreed way of switching articulations that isn't kludgy in some way, or that doesn't require a bunch of scripts, a couple of tablet apps and a sacrifice to the gods?


----------



## JPComposer (Aug 22, 2017)

I use Lemur to send program changes to my Cubase expression maps, which in turn send the CC32 values to the instrument track, and they record fine into the articulation lanes of the expression map and play back OK (most of the time). If you can send Program Changes from Metagrid, I don't see any reason why it shouldn't work. I use Win 10, though Mac may be different.


----------



## jononotbono (Aug 22, 2017)

Well, I'm glad it works, but what's the point in using UACC if you can just use Expression Maps? This is another thing that is confusing me, because isn't it just another layer of complexity to make this stuff work?


----------



## jononotbono (Aug 22, 2017)

Also, when you need to layer other articulations to fool the ear (for example, layering a String Spiccato on top of a Legato Art just to give it a sort of Sforzando), how is this even possible when arts all share the same MIDI track and Audio Output? Genuinely curious as to how someone works with Key Switches, UACC, and/or Expression Maps.


----------



## JPComposer (Aug 22, 2017)

jononotbono said:


> Well, I'm glad it works, but what's the point in using UACC if you can just use Expression Maps? This is another thing that is confusing me, because isn't it just another layer of complexity to make this stuff work?



This is the easiest way I found to get it working on one track. I couldn't get any response from just recording CC32 straight to a track. There was no particular reason to use UACC as opposed to keyswitches or program changes in the expression maps; it just seemed easier to set up, and obviously with anything non-Spitfire you have to use keyswitches. Layering is a limitation, but I believe you can do it by using and holding keyswitches; I haven't looked into this yet.


----------



## jamwerks (Aug 22, 2017)

I use one track per instrument (that can access all arts). With SF, UACC seems to be the best. The post above shows well how it's done. What you put in the description panel of the expression map setup is what you'll see in the articulation lane of the Piano Roll. To access the arts while playing (not recording), that's where the "Remote" assignments come into play. I have these mapped to buttons on my Zero SL.

With libraries like SCS with tons of arts, I take that a step further, also using the "groups" feature in the SF GUI. Let's say I have 5 different shorts. I assign each of them to the same group, which has a single output mapping covering all the arts included in the group (CC32 value 20). I call that slot "SH (5)"; with the "(5)" I know I have 5 to choose from. I choose between them via a CC fader (let's say CC99): I divide the CC99 range by 5 (127 ÷ 5) and each short gets assigned a corresponding range of CC99. In general I go from shortest to longest, softest to "raspiest", etc. I control CC99 via a hardware fader.

That way I can access all 25 arts of SCS (for example) with 7-8 slots in the articulation lane and varying values on another CC (99) lane.
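The CC99 range-splitting described above is simple arithmetic. Here's a hypothetical Python sketch of the mapping (the slot names are invented for illustration, not actual SCS patch names):

```python
def slot_for_cc(value, n_slots):
    """Map a CC value (0-127) to one of n_slots articulation slots.

    Splits the 0-127 range into n_slots roughly equal bands, e.g. with
    5 shorts, values 0-25 pick slot 0, 26-51 pick slot 1, and so on.
    """
    if not 0 <= value <= 127:
        raise ValueError("CC value must be 0-127")
    # min() guards the top of the range so value 127 stays in the last slot
    return min(value * n_slots // 128, n_slots - 1)

# Five hypothetical shorts spread across the CC99 fader, shortest first:
shorts = ["spicc", "stacc", "staccatissimo", "brushed", "dampened"]
print(shorts[slot_for_cc(0, 5)])    # spicc  (fader at the bottom)
print(shorts[slot_for_cc(64, 5)])   # staccatissimo  (fader in the middle)
print(shorts[slot_for_cc(127, 5)])  # dampened  (fader at the top)
```

The same helper works for any slot count, which is why a single hardware fader can cover groups of different sizes.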

With other libraries (OT), to do the same I have to use multiple "multis" loaded into K5 instrument banks.

All of my libraries are managed in the same way. I don't have to remember anything. The articulations lane tells me what's available. I think only about composing!

Of course all arts get the same reverb, but isn't that how it is in the real world?


----------



## JohnG (Aug 22, 2017)

jononotbono said:


> when you need to layer other articulations to fool the ear (for example, layering a String Spiccato on top of a Legato Art just to give it a sort of Sforzando), how is this even possible when arts all share the same MIDI track and Audio Output? Genuinely curious as to how someone works with Key Switches, UACC, and/or Expression Maps.



For most libraries, the easiest thing is to set up a separate instance of the instrument. Hollywood Strings has patches like this already put together (layering an attack, either marcato or spiccato, with some kind of legato or sustained strings), but that's the only one I know of.


----------



## babylonwaves (Aug 22, 2017)

jononotbono said:


> Well, I'm glad it works, but what's the point in using UACC if you can just use Expression Maps? This is another thing that is confusing me, because isn't it just another layer of complexity to make this stuff work?


UACC is nothing more than switching arts using a controller; that's not more complicated than using a program change or key switch. expression maps are a layer which helps you call up an articulation at the right point in time. that's something different.


----------



## yhomas (Aug 22, 2017)

I am just starting with VIs, but articulation switching only makes sense if the articulations will all receive the same processing chain during mixing. If stems are being made, obviously all the articulations need to go into the same stem.

I have read that it is desirable to individually adjust reverb levels for different articulations, such as shorts vs longs. I can imagine that at mix time it is convenient to be able to make global volume or other EQ/processing adjustments, or to change articulations after the fact.

Where articulation switching makes sense to me is wherever the switching is integrated into a human performance, like a portamento slide within a legato line.


----------



## Alex Fraser (Aug 22, 2017)

I don't think there's a "correct way" to do it. There are so many variables, workflow preferences and desired results.
I think the key is to create an environment where you have the flexibility you require without getting bogged down by the technical aspects. Nothing kills a creative vibe more than technical issues, or the "why is that happening?" scenario.


----------



## chimuelo (Aug 22, 2017)

If you can't physically change the articulation while you're recording, I can't see how some automation would address that better.
I have a Master MIDI Controller that really makes performing with articulations a breeze.
Recently added a Source Audio Reflex Pedal.
Add a Boss FS-6 dual switch and Reflex Expression to your set up and you should be able to concentrate on playing with less concerns about technical prowess.


----------



## rgames (Aug 22, 2017)

jononotbono said:


> how you have control over the levels of each art when they all come out of the same stereo Output?


In Kontakt you add a little script that sends CC1/7/11/whatever you want to all the patches in the multi. Then they're always the same when you switch. In other players (VI and Play) it happens automatically.
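To illustrate the idea (this is a plain-Python sketch of the mirroring logic, not actual Kontakt KSP code), the script essentially re-broadcasts each incoming CC to every patch in the multi, each listening on its own channel:

```python
# Sketch of "mirror one incoming CC to all patches in a multi".
# Each patch listens on its own MIDI channel; copying the controller
# message to every channel keeps all articulations' CC1/7/11 in sync.

def mirror_cc(status, controller, value, n_channels=16):
    """Given one incoming Control Change, return copies for all channels."""
    if status & 0xF0 != 0xB0:
        raise ValueError("not a Control Change message")
    return [bytes([0xB0 | ch, controller, value]) for ch in range(n_channels)]

# Incoming CC1 (mod wheel) value 90 arriving on channel 1:
out = mirror_cc(0xB0, 1, 90)
print(len(out))       # 16 copies, one per channel
print(out[0].hex())   # b0015a  (channel 1)
print(out[15].hex())  # bf015a  (channel 16)
```

The effect is that whichever articulation you switch to next has already "heard" the same mod wheel and expression data as the one you were just playing.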

The really great thing about expression maps is that you can think like a composer and not a technician. If I want staccato, I pick staccato. And if I drag that MIDI to a different instrument, it still knows what stacc is, even if that instrument uses a completely different artic-switching method (e.g. CC vs KS).

Once you set them up it's the same for all libraries from all developers. You just think about the music, not the technology.

The other great thing is that they have their own lanes in the MIDI editor. So you see "stacc" or "pizz" instead of some cryptic CC or KS that's different for every library and impossible to remember.
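Conceptually, an expression map is just a per-library lookup from the musical name you see in the lane to that library's own trigger. A hypothetical sketch (library names, keyswitch notes and CC values all invented for illustration):

```python
# Hypothetical per-library articulation maps: the composer-facing name
# ("stacc", "pizz") maps to whatever trigger that library expects.
EXPRESSION_MAPS = {
    "LibraryA": {"stacc": ("keyswitch", 24),   # a low keyswitch note
                 "pizz":  ("keyswitch", 26)},
    "LibraryB": {"stacc": ("cc", 32, 42),      # UACC-style CC32 value
                 "pizz":  ("cc", 32, 56)},
}

def trigger_for(library, articulation):
    """Resolve a musical articulation name to a library-specific trigger."""
    return EXPRESSION_MAPS[library][articulation]

# The same musical intent resolves differently per library:
print(trigger_for("LibraryA", "stacc"))  # ('keyswitch', 24)
print(trigger_for("LibraryB", "stacc"))  # ('cc', 32, 42)
```

Because the lookup is done per library, dragging the same "stacc" lane data onto another instrument just resolves through that instrument's table instead.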

rgames


----------



## procreative (Aug 22, 2017)

One important difference with UACC patches:

You can create a Multi, set them all to the same MIDI channel and they act as one giant keyswitch patch. For example I have all of the Tundra High Strings KS patches in one multi and can access all from one track.

It is actually possible to trigger multiple articulations at once. As I am in Logic, I can easily do this with the ArtzID script, as each note has an Articulation ID encoded into it, which gets translated to whatever trigger method a library uses. This can be a Note Keyswitch, a UACC Keyswitch, or a MIDI Channel (ideal for, say, HW Strings, as they don't come with many KS patches).

I use Composer Tools Pro that sends these Articulation IDs and for each library they are labelled with the Articulations.

Cubase Expression Maps are similar, I believe, and I suppose you could use this kind of setup with them.

It's all pros and cons. Separate tracks per articulation gives an instant overview but makes for huge track counts, and changing articulations per note or phrase is much trickier. On the other hand, KS approaches mean much smaller track counts and easier management when changing or editing phrases that use multiple articulations, but you need a system that helps you keep up with what's playing.

I tried a Komplete Kontrol to help me see where the keyswitches were, but it still did not help me remember which keys played which articulations. Originally I was very much a separate-tracks person, but when I started to experiment more with articulations I realised it made editing much more cumbersome.

Not sure why the obsession over balancing levels; a real orchestra plays many articulations and the engineer surely does not ride the volume throughout? Dynamics are the player's choice. Surely Velocity and the Modwheel should be where this is happening? And anyway, any volume adjustments can be done with Expression if required.


----------



## jononotbono (Aug 22, 2017)

procreative said:


> Separate tracks per articulation gives an instant overview but makes for huge track counts, and changing articulations per note or phrase is much trickier. On the other hand, KS approaches mean much smaller track counts and easier management when changing or editing phrases that use multiple articulations, but you need a system that helps you keep up with what's playing.



It's exactly why I have designed a template that is controlled using the Project Logical Editor in Cubase and Metagrid. Want to see just a specific track? One button press. Want to see only Strings? Only solo Strings? Just audio tracks? Just MIDI tracks with data? All possible with the touch of a single button. Navigating around thousands of tracks has never been simpler (once you learn how to do it, of course), and the thought of multiple tracks means nothing to me anymore. Coupled with a massive 4K screen to display everything vertically, it's really incredible how slick it is. Providing everything has been balanced (usually sub-mixed in VEPro)... which leads to...



procreative said:


> Not sure why the obsession over balancing levels; a real orchestra plays many articulations and the engineer surely does not ride the volume throughout? Dynamics are the player's choice. Surely Velocity and the Modwheel should be where this is happening? And anyway, any volume adjustments can be done with Expression if required.



The "obsession" isn't an obsession at all. It's the fact that a real orchestra sounds and plays nothing like a human programming music with a sample library (and this includes playing lines in with a controller keyboard). A real player can switch between playing techniques with emotion and dynamic range, seamlessly and fluidly (or not, if they so wish). An engineer doesn't have to ride the volume through a live performance of an Orchestra (although they certainly can, and if needed, certainly will) because all the players have spent their lifetimes (providing the orchestra is professional) practising and honing their musical craft to play what's been printed in the score. Dynamics are the composer's choice, and ultimately the conductor's; not the player's (in an Orchestral context).

A collection of sound snippets with some clever scripting is nothing like that. The samples aren't "aware" of each other, and usually (hopefully) the samples haven't been normalised, so you get, for example, Bartók Pizz Articulations sounding much louder than French Horn samples straight out of the box. If you want any resemblance to the real thing, these arts need to be balanced. That's easy to do with separate tracks, and the "obsession" you seem to think is there is just about getting all of the articulations to a typical level so they sound cohesive when patching together the different tracks (different arts) to create a "realistic" performance.

I get it. There are plenty of ways of working. Separate tracks is what I'm sticking to for now.


----------



## jononotbono (Aug 22, 2017)

rgames said:


> In Kontakt you add a little script that sends cc1/7/11/whatever you want to all patches in the multi. Then they're always the same when you switch. In other players (VI and Play) it happens automatically.
> 
> rgames



Sounds interesting.


----------



## meradium (Aug 22, 2017)

jononotbono said:


> Everything works except for Data being recorded on CC32. When you press a MG button (and assuming you have chosen the correct CC, value, and midi channel), the arts change which is great but then none of the button presses get recorded onto the CC32 Controller Lane and therefore when you playback whatever it is you recorded, none of the art changes occur. I haven't noticed anything about timing problems and now going to sack this idea off. Separate tracks for me.



Urgs... That's exactly what I was experiencing back then when I first tried it :( 

What a shame... 

What's the point of being able to change the audible articulation but not recording it... 

If you open the recorded track in the list editor you will see all CC messages are actually there, but they occur at - infinity. 

So something strange is going on with the incoming MIDI.


----------



## procreative (Aug 23, 2017)

jononotbono said:


> It's exactly why I have designed a template that is controlled using the Project Logical Editor in Cubase and Metagrid. Want to just see a specific track? One button press. Want to see only Strings? Only solo Strings? Just audio tracks? Just Midi tracks with data? All possible with the touch of a single button. Navigating around thousands of tracks never been more simple (of course, once you learn how to do that) and the thought of multiple tracks means nothing to me anymore. Coupled with a massive 4K screen to display verticality and it's really incredible how slick it is. Providing everything has been balanced (usually sub mixed in VEPro) which leads to...
> 
> The "Obsession" isn't an obsession at all. It's the fact a real orchestra sounds or plays nothing like a human programming music with a sample library (and this includes playing lines in with a Controller keyboard). A real player can switch between playing techniques with emotion and dynamic range seamlessly and fluidly (or not if they so wish). An engineer doesn't have to ride the volume through a live performance of an Orchestra (although they certainly can, and if needed, certainly will) because all the players have spent their lifetimes (providing the orchestra is professional) practising and honing their musical craft to play what's been printed to score. Dynamics are the composer's choice and ultimately the conductor's. Not the player's choice (in an Orchestral context).
> 
> ...



Hey, no worries, everyone chooses what suits them. But try writing a rhythmic phrase that uses Staccato/Portato/Marcato in it: you then have to move notes around between tracks, and if you need to edit the phrase it gets very tricky to manage.

But going back to your point about balancing, there are many ways to tackle this. Many Spitfire titles have per-articulation mic mixing, and then you have lots of control using CCs and velocity.

As you are using VEPro it's very easy to set up submixes even if the end goal is one track. UACC patches can be a multi of several single instruments and still be triggered by the CC as if they were one KS patch. Similarly, if MIDI channels are the trigger method, they can sit in a standard multi.

I do like the immediate overview single tracks give but the massive track count and inflexible phrase building make creating some of the more rhythmic and fluid lines much more hassle. After all when writing melodic lines who wants to be dragging notes between tracks of the same instrument?

But each to their own. If it suits you that's fine.


----------



## Shad0wLandsUK (Aug 23, 2017)

JohnG said:


> For most libraries, the easiest thing is to set up a separate instance of the instrument. Hollywood Strings has patches like this already put together (layer an attack with some kind of legato or sus strings -- either marcato or spicatto) but that's the only one I know of.


For all that is said against it...I cannot get enough of Hollywood Strings :D Such a huge sound, even with the Main mics alone


----------



## jamwerks (Aug 23, 2017)

procreative said:


> But try writing a rhythmic phrase that uses Staccato/Portato/Marcato in it, you then have to move notes around tracks and then if you need to edit the phrase it gets very tricky to manage...


Yes that seems awkward. And the problem with that (imo) is that it's so awkward that your creative brain quits imagining those lines that would be so difficult to carry out, without you even knowing about it. So to all you 1-art-per-track folks, be vigilant! Don't let your tools kill your creativity!


----------



## Shad0wLandsUK (Aug 23, 2017)

chimuelo said:


> If you can't physically change the articulation while your recording it I cant see how some automation would address that better.
> I have a Master MIDI Controller that really makes performing with articulations a breeze.
> Recently added a Source Audio Reflex Pedal.
> Add a Boss FS-6 dual switch and Reflex Expression to your set up and you should be able to concentrate on playing with less concerns about technical prowess.


Let me just rob a bank to get those two :/


----------



## jonathanwright (Aug 23, 2017)

rgames said:


> In Kontakt you add a little script that sends cc1/7/11/whatever you want to all patches in the multi. Then they're always the same when you switch. In other players (VI and Play) it happens automatically.
> 
> The really great thing about expression maps is that you can think like a composer and not a technician. If I want staccato, I pick staccato. And if I drag that MIDI to a different instrument it still knows what stacc is even if it uses a completely different artics switching method (e.g. CC vs KS).
> 
> ...



If you don't mind me asking, which method do you use for triggering articulations?

To try and keep my articulation setup very simple, I've given each one their own MIDI channel, controlled by my iPad and Expression Maps.

It's great in that I can drag a MIDI part from - say - Hollywood Strings to Spitfire Symphonic Strings and the same articulations (or their equivalents) will play.

The downside is that Cubase 'resets' the articulation to the first MIDI channel as soon as play or record is pressed. Do you know if there is a way around this?


----------



## tack (Aug 23, 2017)

babylonwaves said:


> UACC is nothing more than switching arts using a controller. that's not more complicated than using a program change or key switch.


UACC is more than just articulation switching via CC. More crucially, it's a specification to standardize on the _semantics_ of the specific CC values. For example, if an instrument supports UACC, then if you send it CC32/1 you know you'll get standard longs -- or whatever is the closest analogue to standard longs for that instrument. 20 is standard legato, 11 is tremolo/flutter, etc, etc.

At least this was the idea. No one else but Spitfire actually implemented it. And even Spitfire didn't implement it consistently. And subsequent libraries like Tundra, which didn't remotely conform to the articulations set out in the UACC specification, underline the gaps that exist with UACC. (I can forgive that, but Spitfire might have at least made an effort to stay consistent to UACC when there _was_ an analogue.)

But I like the basic intention of UACC and have standardized my own template around the specification through translation layers, mostly FlexRouter since I'm pretty much exclusively Kontakt. Except in my case I record Program Change events into the MIDI items, where the program numbers conform to the UACC specification, and then there is a translation from program change to whatever the underlying library wants to see -- _actual_ UACC in the case of Spitfire instruments (i.e. program change N is translated to CC32/N).

So like Jonathan, I _could_ theoretically drag a MIDI item from Spitfire Chamber Strings over to Cinematic Studio Strings and have it play, at least assuming I was using the articulations that intersect between the two products. If I'm using tremolo sul pont con sord, that's not going anywhere in CSS. Consequently, that capability isn't that important to me personally.
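The translation layer tack describes (record Program Change events whose numbers follow the UACC spec, then translate per library) can be sketched in a few lines of Python. This is purely illustrative, not FlexRouter's actual code: the keyswitch table is invented, and the UACC values used (1 = standard longs, 20 = legato, 42 = spiccato) are the ones quoted in this thread.

```python
# Hypothetical sketch of a UACC translation layer. MIDI messages are
# modelled as (status, data1, data2) tuples on channel 1.

# For a UACC-aware (Spitfire) instrument, Program Change n maps
# directly to CC32 with value n.
def translate_for_spitfire(program_number):
    """Translate a UACC program number into a CC32 Control Change."""
    CONTROL_CHANGE = 0xB0  # Control Change, channel 1
    CC32 = 32              # Bank Select LSB, repurposed by UACC
    return (CONTROL_CHANGE, CC32, program_number)

# For a non-UACC library, a per-library table maps UACC numbers to
# keyswitch notes. These note numbers are made up for illustration.
UACC_TO_KEYSWITCH = {
    1: 24,    # standard longs -> C0
    20: 25,   # legato         -> C#0
    42: 26,   # spiccato       -> D0
}

def translate_for_keyswitch_library(program_number):
    """Translate a UACC program number into a note-on keyswitch,
    or None when the library has no analogue for that articulation."""
    NOTE_ON = 0x90  # Note On, channel 1
    key = UACC_TO_KEYSWITCH.get(program_number)
    return (NOTE_ON, key, 127) if key is not None else None
```

The benefit of keeping the recorded data in UACC numbering is exactly what tack and Jonathan describe: the MIDI item stays library-agnostic, and only the translation table changes per target instrument.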


----------



## babylonwaves (Aug 23, 2017)

tack said:


> UACC is more than just articulation switching via CC.


you need to read my statement in context, it was about CCs vs other ways of switching (notes, PGs etc) - it wasn't about the standard itself, what it means and its potential. i was just answering a question.


----------



## jononotbono (Aug 23, 2017)

procreative said:


> Hey no worries everyone chooses what suits them. But try writing a rhythmic phrase that uses Staccato/Portato/Marcato in it, you then have to move notes around tracks and then if you need to edit the phrase it gets very tricky to manage.
> 
> But going back to your point about balancing, many ways to tackle this. Many Spitfire titles have per articulation Mic Mixing and then you have lots of control using CCs and Velocity.
> 
> ...



Well, actually, you mention a very good point regarding the fact you can load single Patches into a Kontakt multi using the VEPro faders to balance each art and still have UACC work. Damn, this may need another revisit but I would still have the problem of none of the button presses being recorded on CC32 Controller Lane and would have to make Expression maps as well which is something I was hoping not to have to do. Perhaps I might make a trial "Writing track" using UACC and see how I get on.

Something I should say here regarding your comment about dragging notes, well, I have been using some Macros and Cubase Key Commands, specifically Copy buttons (Mod, Exp etc) and PASTE AT ORIGIN. Literally two button presses and you can copy MIDI data without even having to be in the Key Editor and copy said data onto any track in the template just by being in the Project window. It is crazy fast for editing and layering. I'm starting to think a "hybrid" approach between separate tracks and UACC etc might be good. Best of both worlds etc

Loving so much how deep people have gone on the quest for the perfect approach. Bloody nerds haha!


----------



## Shad0wLandsUK (Aug 23, 2017)

jononotbono said:


> Well, actually, you mention a very good point regarding the fact you can load single Patches into a Kontakt multi using the VEPro faders to balance each art and still have UACC work. Damn, this may need another revisit but I would still have the problem of none of the button presses being recorded on CC32 Controller Lane and would have to make Expression maps as well which is something I was hoping not to have to do. Perhaps I might make a trial "Writing track" using UACC and see how I get on.
> 
> Something I should say here regarding your comment about dragging notes, well, I have been using some Macros and Cubase Key Commands, specifically Copy buttons (Mod, Exp etc) and PASTE AT ORIGIN. Literally two button presses and you can copy MIDI data without even having to be in the Key Editor and copy said data onto any track in the template just by being in the Project window. It is crazy fast for editing and layering. I'm starting to think a "hybrid" approach between separate tracks and UACC etc might be good. Best of both worlds etc
> 
> Loving so much how deep people have gone on the quest for the perfect approach. Bloody nerds haha!


Yes we are all nerds.. :/
Some more proud than others, in reference to myself


----------



## tack (Aug 23, 2017)

jononotbono said:


> Bloody nerds haha!


Yeah. That moment when you realize you've spent way more time nerding out than composing ...


----------



## jononotbono (Aug 23, 2017)

tack said:


> Yeah. That moment when you realize you've spent way more time nerding out than composing ...



Well, that's true, but once you know all this stuff there is no going back, and becoming at one with your music technology is like getting to know your guitar. I'm thinking about going all in come December and buying the SA Everything Bundle. My template deserves it.


----------



## Alex Fraser (Aug 23, 2017)

tack said:


> Yeah. That moment when you realize you've spent way more time nerding out than composing ...



That cuts a little close..


----------



## procreative (Aug 23, 2017)

jononotbono said:


> Something I should say here regarding your comment about dragging notes, well, I have been using some Macros and Cubase Key Commands, specifically Copy buttons (Mod, Exp etc) and PASTE AT ORIGIN. Literally two button presses and you can copy MIDI data without even having to be in the Key Editor and copy said data onto any track in the template just by being in the Project window. It is crazy fast for editing and layering.



For the example you quoted that sounds okay. But what if you want a phrase that plays for example:

Portato Staccato Staccato Portato Staccato Staccato

That's a pain to create: you play in your notes, then dividing them between articulations means moving notes, not copying them.

You might think "why would you want to do this?". Well, if you listen to certain string rhythmic phrases, the accented first notes are often longer bowed notes (technically played legato into a staccato, but none of the libraries can do these convincingly).

I was playing around after using one of my phrase libraries trying to replicate the phrases and what gives notes life seems to be subtle variations in both velocity and articulation choices.

Of course you can achieve these with velocity-based articulation switching, but it's very hard to play in with most controllers, as I found out with PS Swing which uses these.


----------



## NoamL (Aug 23, 2017)

IMO there are a lot of reasons to prefer one articulation per track.

1. You don't have to remember keyswitches; everything is set for you in a tag at the start of the session.
2. You can manage the volume and track delay for each articulation separately, which can be important to "fix" suboptimally programmed libraries.
3. Track stacks/folders are now available in all major DAWs which makes it easier to handle large track counts.
4. The ability to separately route String Shorts from String Longs in your mix, and bounce separate stems. A must-have for many contemporary/hybrid scoring situations IMO especially for assistant writers.
5. Don't have to fully load a library if you only like 1 or 2 articulations from it.

The only time I would consider fully loading a library as designed, with built in KS, is if it's a core workhorse library that you use every day on most-every project. In that situation, the "learn your instrument" principle applies.

UACC is unfortunately a well intentioned project that no other developer seems to have joined. Nor is it clear that other devs SHOULD join this model - I think the approach pioneered by C.A.P.S.U.L.E. is superior.

The irrationality of KS (developers cannot even decide on a consensus region of the keyboard, leading to many cumulative minutes in the course of producing a cue that are spent fumbling around the keys looking for the KS region because you don't have Kontakt open) is a good reason to ignore the feature. I suppose there's a better case for it if you have a multi screen setup where you somehow automatically load the relevant Kontakt screen every time you highlight an instrument track? Is that even automatable?


----------



## tack (Aug 23, 2017)

NoamL said:


> 5. Don't have to fully load a library if you only like 1 or 2 articulations from it.


Though not a benefit exclusive to the one-articulation-per-track approach. You can always rope in patches in a single Kontakt instance on a single track on demand and use an articulation switching mechanism of your choice (as discussed elsewhere on this thread, e.g. different MIDI channels, lock-to-UACC, etc.) By assigning different Kontakt outputs to these patches you could also route them differently, addressing #4 in your list as well.

Admittedly, the setup is going to be easier when you put articulations on separate tracks and disable what you're not using in your template. Even if you save out customized patches in Kontakt, you still have to futz with the MIDI channel and/or output channels whenever you load said customized patch.

Yet I couldn't imagine having to juggle so many tracks to switch between articulations. Consider this little flute line which bounces between articulations 10 times within the span of a few seconds, just because I found they yielded a lively performance. I'm pretty sure I wouldn't bother if I had to coordinate 5 different tracks for a little phrase.

Perhaps you have more patience, Noam.


----------



## rgames (Aug 23, 2017)

jonathanwright said:


> If you don't mind me asking, which method do you use for triggering articulations?


Depends on the library. Sometimes CC, sometimes KS, sometimes MIDI channel.

That's the beauty of expression maps. It doesn't matter. Once you set them up, stacc is stacc is stacc. You can drag-and-drop across libraries and everything works. To be honest, I have some expression maps that are so old that I can't even remember which technique they use. I love not having to remember.

Regarding the switch to the first artic - you just place an empty artic there. Then it stays wherever you left it. It is a bug - you shouldn't have to do that - but it's an easy workaround.

rgames


----------



## jononotbono (Aug 24, 2017)

procreative said:


> For the example you quoted that sounds okay. But what if you want a phrase that plays for example:
> 
> Portato Staccato Staccato Portato Staccato Staccato
> 
> ...



Well, you make some fair points, but copying notes means the timing of what you performed is still bang on. Then just move notes, delete others as applicable. It's no big deal to do that.



rgames said:


> Depends on the library. Sometimes CC, sometimes KS, sometimes MIDI channel.
> 
> That's the beauty of expression maps. It doesn't matter. Once you set them up, stacc is stacc is stacc. You can drag-and-drop across libraries and everything works. To be honest, I have some expression maps that are so old that I can't even remember which technique they use. I love not having to remember
> 
> ...



I might look back into making Expression Maps again if they are as wonderful as you make out. And you are completely correct about the bug. I remember doing that workaround with an empty art at the beginning of each map.

Basically, using expression maps, is it still possible to...

A) Route specific arts to specific outputs, for example shorts to one stem, longs to another, etc.

B) Sub-mix (balance) the arts so they sound like a cohesive performance and, if needed, re-sub-mix them to suit each piece

C) Use an iPad to select specific arts and have the data recorded (unlike with UACC), so that on playback all the art changes occur?


----------



## procreative (Aug 24, 2017)

Every DAW has its idiosyncrasies. Logic does not have Expression Maps, but thanks to a few 3rd party solutions it has something very close. But FlexRouter can also achieve similar goals of unifying the control mechanism.

These all allow you to effectively trigger the libraries the same way and if you wish to have each articulation routed to separate audio channels or submixed before entering the DAW you can also do that.

For example, I have Hollywood Strings set up as a multi of separate patches separated by MIDI channel. But my trigger method appears the same in my DAW: it's always CC32 values 0, 1, 2, etc.

In fact I just finished mapping many of my libraries in such a way that for instance most of my string libraries play Sustain on 1, Marcato Long on 2, Marcato Short on 3, Staccato on 4 etc and where no equivalent exists I skip a number.

So I can drag a line played on one library to another and have it play the same articulation (obviously only applies to the more bread and butter ones).
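The scheme described above boils down to a shared slot table. Here is a rough Python sketch of the idea; the slot names follow the numbering in this post, but the library names and which slots each one supports are invented for illustration:

```python
# Sketch of a unified CC32 numbering scheme: every string library
# answers to the same CC32 values, so a phrase's articulation data
# transfers between libraries unchanged.
SHARED_SLOTS = {
    1: "sustain",
    2: "marcato long",
    3: "marcato short",
    4: "staccato",
}

# Each (hypothetical) library declares which shared slots it can honour;
# where no equivalent articulation exists, the number is simply skipped.
LIBRARY_SLOTS = {
    "LibraryA": {1, 2, 3, 4},
    "LibraryB": {1, 4},  # no marcato patches: slots 2 and 3 skipped
}

def transferable(phrase_cc32_values, target_library):
    """Return the articulation names in a phrase that the target
    library can actually play after a drag-and-drop."""
    available = LIBRARY_SLOTS[target_library]
    return [SHARED_SLOTS[v] for v in phrase_cc32_values if v in available]
```

This is why the approach only covers the "bread and butter" articulations: a phrase using slot 2 or 3 plays fine on LibraryA but silently loses those switches on LibraryB.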

And using my iPad app, I don't have to remember which KS does what, as every button is labelled for every library.

It's taken me many nights to get to this point.

In another universe... I need to get a life and play more notes! This designing-a-setup thing can get very OCD.


----------



## Vik (Aug 24, 2017)

procreative said:


> Every DAW has its idiosyncracies. Logic does not have Expression Maps, but thanks to a few 3rd party solutions has something very close


Logic actually has a better solution IMO; it's just not fully implemented. Logic can attach Articulation ID tags to each and every note, so two notes on the same MIDI channel and on the exact same time position and track/region can have two different articulations.

That's a very "future-proof" solution, because it means full flexibility. You can, for instance, improvise a four-part chorale using a piano sound, and later assign each of the four voices to SATB or V1/V2/Va/Cello *and* keep the MIDI channels intact. This means you can select articulations with the articulation IDs and, e.g., the Kontakt preset with the MIDI channel, and this way write a string quartet where all four voices sit inside a piano (bass+treble) clef, with full control of which of your chosen violas is used for each of the articulations, and so on. Brilliant concept, implemented in Logic many years ago, and hardly developed since then.


----------



## procreative (Aug 24, 2017)

Vik said:


> Logic actually has a better solution IMO; it's just not fully implemented. Logic can attach Articulation ID tags to each and every note, so two notes on the same MIDI channel and on the exact same time position and track/region can have two different articulations.
> 
> That's a very future "proof" solution, because it means full flexibility. You can, for instance, improvise a four part chorale using a piano sound, and later assign each of the four voices to SATB or V1/V2/Va/Cello *and* keep the MIDI channels intact. This means that you can select articulations with the articulation IDs and eg Kontakt preset with the MIDI channel - and this way write a string quartet where all the four notes are inside a piano (bass+treble) clef - with full control of which of your chosen violas that shall be used for each of the articulations and so on. Brilliant concept, implemented in Logic many years ago - and hardly developed since then.



Indeed, and the built-in Articulation IDs are what ARTzID, a MIDI FX script by Peter Schwartz, utilises.


----------



## Vik (Aug 24, 2017)

procreative said:


> Indeed and the built in Articulation IDs are what ArtzID a Midi FX Script by Peter Schwartz utilises.


Sure. I have seen some discussions about ARTzID (e.g. in this thread: https://www.logicprohelp.com/forum/viewtopic.php?f=1&t=125203) and I don't know how ARTzID and Cubase Expression Maps compare in terms of the number of steps needed, user-friendliness and all that.


----------



## jonathanwright (Aug 31, 2017)

rgames said:


> Depends on the library. Sometimes CC, sometimes KS, sometimes MIDI channel.
> 
> That's the beauty of expression maps. It doesn't matter. Once you set them up, stacc is stacc is stacc. You can drag-and-drop across libraries and everything works. To be honest, I have some expression maps that are so old that I can't even remember which technique they use. I love not having to remember
> 
> ...



Apologies for the late reply, I've just returned from a trip away.

I've tried the 'empty artic' method (mainly using MIDI channels for KS) but I always appear to get a reset to the first channel no matter what.


----------



## A.G (Aug 31, 2017)

The best articulation systems (Cubase Expression Maps & AG Logic Maps) display the articulation names as text labels in the Piano Roll or in the Main Window, which is essential for composing easily. Both systems offer custom articulation-map ordering (you can group the maps as you wish, set colors, etc.).

The Cubase/AG map is a collection of articulation assignments (Note, Program, Controller or MIDI Channel switching, plus other extra settings) which is triggered via a single message inside a DAW editor or via an external MIDI device.
The AG Maps system works with the Apple MainStage application as well.

*Key Switching*. It works with all DAWs; however, the articulations do not chase when you move back and forth, and the key switches are destroyed when you transpose the track or the MIDI regions. The key switches are also printed in the score, which is another problem.

*Separate Tracks*. That old-fashioned method works as expected; however, it takes lots of system resources and time to select and record music using ultra-large templates.
Another problem is that some DAWs are limited to a given number of instrument tracks.

*UACC*. It is a regular CC32 system designed to work with all DAWs. It avoids the key-switching issues (see above); however, it does not display the articulation names in the DAW timeline.
*Note*: UACC can be programmed as Cubase/AG Maps, which are shown as text articulation names in the Cubase and Logic Piano Roll editors (Logic: Main Window multi-track articulation view).

*Articulation Remote/External Control*. The traditional Key Switching, Separate Tracks and the UACC use regular MIDI external/remote control.

Cubase Maps can be triggered externally via:
• Key Switches
• Program Change messages

AG Logic Maps can be triggered externally via:
• Key Switches
• Program Change messages
• Control Change Messages

The AG articulation system comes with a unique iPad (Lemur) workstation. The goal is that you can teleport the articulation maps (articulation names, group names & colors, instrument patch name) and create an iPad articulation control layout + mixer controls (CC assignments) automatically.
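The kind of map A.G describes, one entry per articulation carrying the switch type and value it triggers, might be modelled roughly like this in Python. The field names and example entries are hypothetical, not the actual Cubase or AG format:

```python
from dataclasses import dataclass

@dataclass
class Articulation:
    name: str          # label shown in the piano roll / main window
    switch_type: str   # "note", "program", "cc", or "channel"
    value: int         # keyswitch note, program number, CC value, etc.

# An illustrative map for a strings patch; entries are made up.
strings_map = [
    Articulation("Sustain", "cc", 1),
    Articulation("Staccato", "note", 24),
    Articulation("Tremolo", "channel", 3),
]

def lookup(articulation_map, name):
    """Resolve an articulation name to the MIDI switch it should emit."""
    for art in articulation_map:
        if art.name == name:
            return (art.switch_type, art.value)
    raise KeyError(name)
```

The point of the indirection is the one made throughout this thread: the composer picks an articulation by name, and the map decides whether that becomes a keyswitch, a program change, a CC, or a channel change for the library underneath.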


----------



## Peter Schwartz (Sep 13, 2017)

Thanks for the mention, Vik. Since the thread you linked to is now pretty old, I'd like to suggest that anyone interested in learning about SkiSwitcher or ARTzID visit the SkiSwitcher website (http://www.skiswitcher.com) to read about its capabilities and latest features.

I've never used Cubase or its expression maps, but I've been told by many customers that my systems offer more flexibility than expression maps. (Some of them have even said that they're better than expression maps. OMG!) As I said, I wouldn't know myself. But what I can absolutely vouch for is that both SkiSwitcher and ARTzID embed articulation information into every note, so there's never any mistracking of articulation switching. Furthermore, notes in octaves or chords can each sound with their own individual articulation.

Both systems support the one-instrument-per-track approach, but at the same time they don't lock anyone out from using the one-articulation-per-track approach when that proves to be the best method for realizing a particular part.



Vik said:


> Sure. I have seen some discussions about ARTzID (e.g. in this thread: https://www.logicprohelp.com/forum/viewtopic.php?f=1&t=125203) and I don't know how ARTzID and Cubase Expression Maps compare in terms of the number of steps needed, user-friendliness and all that.


----------



## babylonwaves (Sep 13, 2017)

And while you're at Peter's website checking out his articulation switcher, make sure you also check out mine. It supports a mix of single- and multi-track configurations, and UACC is embedded, so, if you like, you can use it for EVERY instrument - even those not produced by Spitfire.
Art Conductor is a bit smaller and simpler and gets the job done fast: www.babylonwaves.com - now isn't that a great row of advertisements we've got going here? But seriously, there are a couple of solutions for Logic users, and they're all better than what Logic itself offers right now.


----------



## Dollismine (May 14, 2020)

JPComposer said:


> I use Lemur to send program changes to my Cubase expression maps which in turn send the CC32 values to the instrument track, and they record fine into the articulation lanes of the expression map and playback ok (most of the time). If you can send Program Changes from Metagrid, I don't see any reason while it shouldn't work. I use win 10 though mac may be different.


...long time ago, sorry for the bump!!
Going by your post, what is (or was?) the configuration of your Program Change in Lemur?
I've tried it this way but I get a lot of erratic results.

Thanks


----------

