# Your way of doing orchestra articulations in 2021?



## pulpfiction (Jul 30, 2021)

Hi,

I'm wondering how people using orchestral sample libraries are doing their articulations in 2021.

It's probably obvious to most that keyswitches are best for live performance.

But how do you work apart from keyswitches? Do you use different tracks or different MIDI channels for your articulations, or do you use newer tools like Expression Maps (Cubase) or Reaticulate (Reaper)?

I don't see any particular advantage to the latter tools (Expression Maps, Reaticulate, etc.) over using different MIDI channels.

Thanks


----------



## cygnusdei (Jul 30, 2021)

Separate channels every day of the week and twice on Sundays!


----------



## pulpfiction (Jul 30, 2021)

cygnusdei said:


> Separate channels every day of the week and twice on Sundays!


Thanks!

Just to clarify: by "different MIDI channels" I mean the feature where a single track can address several MIDI channels, as opposed to using different tracks (one track per articulation)...


----------



## Piotrek K. (Jul 30, 2021)

I really dislike keyswitches, but I dislike the mess of dozens of patches loaded on separate tracks even more. My creativity dies when I see 15 tracks per instrument, each with its own set of CCs.

In essence I do anything I can to have one track per instrument with all articulations available via a visual interface. Lately I'm using Reaper with Reaticulate and it is pretty awesome. It takes some time to set things up, but that's a one-time job. And as an FL Studio user I was always a huge fan of the BRSO Articulate plugin. In general I just can't imagine using, for example, Synchron Player libraries without articulation maps.

I hope to drop articulation switching once and for all, though. The first step in that direction was buying Infinite Brass.


----------



## GNP (Jul 30, 2021)

I use Expression Maps only when there's no choice because the library itself has no keyswitches.
I also prefer to have separate channels due to mixing capability.
2021 is no different.


----------



## cygnusdei (Jul 30, 2021)

When you 'keyswitch' (or perform whatever proxy operation), there is no way to execute a legato from the first articulation to the second anyway, so they are in essence separate tracks.

On that: I thought 'tracks' and 'channels' were interchangeable. At any rate, it's whatever unit can be independently manipulated in terms of gain (CC7), and for that matter pan (CC10), expression (CC1/CC11), or even pitch bend.


----------



## muk (Jul 30, 2021)

Keyswitches for me, and all articulations of an instrument on one track. Apart from that, I spend time configuring each new library to work as similarly as possible to the rest of my setup. To that end I configure the keyswitches so that, no matter the library I choose, the same articulation is always on the same keyswitch. (Legato, for example, I always set to C0 on every library that allows it.) I also adjust the dynamic range of each library/instrument to fit into my template.
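That normalization idea (the same articulation always on the same keyswitch, whatever the library) amounts to a per-library remap table. A minimal sketch in Python; the note numbers and library names are made up for illustration, and C0's MIDI number varies by host's octave convention:

```python
# Sketch: one canonical keyswitch layout, plus a per-library remap table so
# that e.g. legato is always triggered by the same key. Note numbers and
# library names are placeholders, not any real product's factory map.

CANONICAL = {"legato": 24, "sustain": 25, "staccato": 26, "spiccato": 27}

LIBRARY_LAYOUTS = {
    "lib_a": {"legato": 36, "sustain": 37, "staccato": 38, "spiccato": 39},
    "lib_b": {"sustain": 24, "legato": 25, "spiccato": 26, "staccato": 27},
}

def remap_keyswitch(library, incoming_note):
    """Translate a canonical keyswitch note into the note the library expects."""
    by_note = {note: art for art, note in CANONICAL.items()}
    articulation = by_note.get(incoming_note)
    if articulation is None:
        return incoming_note  # not a keyswitch; pass through unchanged
    return LIBRARY_LAYOUTS[library][articulation]
```

In a DAW this lookup would live in a MIDI transform/script on the track, so the played key never changes even when the library underneath does.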

I tried using Cubase Expression Maps as the concept has a lot of potential. In practice it didn't work for me. Too many niggles and dropouts. Things that didn't work how I wanted, or couldn't be configured the way I wanted. In the end it was easier and quicker for me to work with plain ol' keyswitches.


----------



## holywilly (Jul 30, 2021)

Expression maps all the way. They make it easy to manage libraries with and without keyswitches, and also make it easy and fast to produce scores for live recording.


----------



## jbuhler (Jul 30, 2021)

I like to work with one track per instrument so the DAW resembles a score. For me that means keyswitches operated via articulation sets (Logic), and when possible (e.g., Spitfire) I connect them via UACC because it simplifies the setup. I also set up my articulation sets so they are consistent across libraries and use the same keyswitches. I follow the Babylon Waves scheme for the first octave and extend it a bit, and I use a 25-key keyboard to select articulations, but I'm thinking of assigning these to buttons in an iPad app or a hardware controller.

With UACC everything operates on the same channel if you want. Some instruments I have to load on separate channels, clone the automation so it goes to all channels, and then use the articulation set to map back to keyswitches. That's a bit of a pain.

Setting this up to separate longs and shorts for mixing (if that is something you do) is more complicated than an articulation per track, but I prefer the conceptual simplicity of a track per instrument.


----------



## chibear (Jul 30, 2021)

I’ve moved mainly to Chris Hein Orchestra libs. All my articulations are done via a value put into CC#6.


----------



## ALittleNightMusic (Jul 30, 2021)

I prefer one track per instrument and expression maps too. However, I have noticed, at least in some libraries or maybe it is Cubase, you might not be able to switch the articulation fast enough to sound right. In that world, using a separate instance on another track is the only workaround.


----------



## cygnusdei (Jul 30, 2021)

ALittleNightMusic said:


> I prefer one track per instrument and expression maps too. However, I have noticed, at least in some libraries or maybe it is Cubase, you might not be able to switch the articulation fast enough to sound right. In that world, using a separate instance on another track is the only workaround.


In my experience keyswitches sometimes work only if they're triggered before the note, not right at the note.


----------



## jbuhler (Jul 30, 2021)

ALittleNightMusic said:


> I prefer one track per instrument and expression maps too. However, I have noticed, at least in some libraries or maybe it is Cubase, you might not be able to switch the articulation fast enough to sound right. In that world, using a separate instance on another track is the only workaround.


I haven't had this issue with Logic articulation sets. Sending multiple articulations to the same instrument at the same time is not always predictable though—usually they stack correctly, but occasionally not. When they don't, I do have to clone the instrument.


----------



## ALittleNightMusic (Jul 30, 2021)

jbuhler said:


> I haven't had this issue with Logic articulation sets. Sending multiple articulations to the same instrument at the same time is not always predictable though—usually they stack correctly, but occasionally not. When they don't, I do have to clone the instrument.


Yeah, that's usually my experience (with CSS recently, which also needed a track delay). Could it be that Cubase doesn't correctly delay articulations? Granted, I was also using the CSS KSP panel, which delays things automatically too, so maybe that was messing with things as well.


----------



## babylonwaves (Jul 30, 2021)

cygnusdei said:


> In my experience keyswitches sometimes work only if they're triggered before the note, not right at the note.


They only work when triggered before the note. A keyswitch doesn't change an already-playing sample. If you use any kind of expression map (attribute mode), articulation set, etc. that lets you choose on a note level, you won't get into much trouble. Most modern libraries handle that well. If you use something timeline-based (directions), you need to switch at the right point in time (better early than too late).
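The "switch early, not late" rule can be sketched as a preprocessing pass that nudges keyswitch notes ahead of coincident note-ons. A toy example; the tick offset and the assumed keyswitch note range are arbitrary:

```python
# Sketch of "trigger the keyswitch before the note": events are
# (time_in_ticks, kind, note) tuples. Keyswitch note-ons are nudged earlier
# by a fixed offset so they always arrive before a note-on at the same
# timestamp. The range and offset below are placeholder assumptions.

KEYSWITCH_RANGE = range(0, 12)   # assume notes 0-11 are keyswitches
OFFSET_TICKS = 10

def nudge_keyswitches(events):
    out = []
    for time, kind, note in events:
        if kind == "note_on" and note in KEYSWITCH_RANGE:
            time = max(0, time - OFFSET_TICKS)
        out.append((time, kind, note))
    return sorted(out, key=lambda e: e[0])

# A keyswitch and a played note landing on the same tick:
events = [(480, "note_on", 3), (480, "note_on", 60)]
shifted = nudge_keyswitches(events)
```

This is essentially what attribute-style articulation systems do for you automatically at playback time.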


----------



## Rich4747 (Jul 30, 2021)

I set all my keyswitches to start at C0, and I use 1 generic expression map for everything.


----------



## jbuhler (Jul 30, 2021)

ALittleNightMusic said:


> Yeah, that's usually my experience (with CSS recently, which also needed a track delay). Could it be that Cubase doesn't correctly delay articulations? Granted, I was also using the CSS KSP panel, which delays things automatically too, so maybe that was messing with things as well.


For me it seems to be the product of certain instruments not being able to handle multiple note-on events arriving too closely together, and if you can't stack articulations on a particular instrument with keyswitches without articulation sets, then it doesn't seem to work to stack articulations with articulation sets. Different notes with different articulations at the same time seems in general to be more reliable, except if one of the articulations is legato. So I do usually have a separate legato track for most instruments, but also include legato in the main instrument track. That arrangement is also convenient for breaking out parts/divisi legato when needed.


----------



## tack (Jul 30, 2021)

babylonwaves said:


> A key switch doesn’t change an already playing sample.


Some libraries do actually do this, for example to control the tail sample of a release, or some mid-note embellishment.


----------



## babylonwaves (Jul 30, 2021)

tack said:


> Some libraries do actually do this, for example to control the tail sample of a release, or some mid-note embellishment.


Oh yes you’re right, I didn’t think of that. I was only referring to the articulation selection, not the ornament or tail.


----------



## Karmand (Jul 30, 2021)

I'll chime in, since I'm new to MIDI orchestration as of January. I like each instrument on one track, with that instrument playing all its articulations, as in real life. What I've been experimenting with lately is Babylon Waves. Since so many SF, Cine and other libraries have diverse key layouts, I figured these maps should work, and they aren't expensive. I do like that two samples can be placed in Kontakt on the same MIDI channel and each sample can respond to its own articulation call; today I placed a viola without trills and one with FX and trills together, and each sample responded to its call nicely. So it solved a problem where the SF libs are not set up the same way, and I don't need a separate track for one trill. I know some people like to mix with each track having its own articulation, but for me that makes it hard to match sound, space and such.


----------



## wills (Jul 31, 2021)

If you want to manage articulations well, and use MIDI controllers too, may I suggest Reaper. Cubase's Expression Maps are has-beens compared to Reaper's Reaticulate tool. The window for creating expression maps in Cubase Pro is a nightmare if you have a big library, and the expression list takes up a huge area in the MIDI editor; there is no room left for notes and MIDI CC lanes. Expression Maps are fine if you use instruments with a short list of articulations, but with big libraries they become unusable. In Reaper you also have the possibility to name all MIDI CCs. It is of course easier to work with MIDI CC when you can read "portamento" or "vibrato" on the CC lanes. In Cubase you will see CC21 or CC45 and have to keep every CC assignment for every instrument in your head; it is designed only for elephant memory! I used Cubase Pro for three years and switched to Reaper six months ago. Now I use all my virtual instruments with all their features (MIDI CC, articulations...) with great pleasure. I consequently use one track per instrument, like a real orchestra. It is a very nice way to work.


----------



## cygnusdei (Aug 1, 2021)

wills said:


> I consequently use one track per instrument, like a real orchestra


I can see how that could be an aesthetic principle that helps the programmer, although the computer is indifferent and the listener is ultimately unaware of it. On that point: one advantage of the separate-track approach is the opposite, i.e. the sound is not limited to just one articulation at a time! A common use: if a sustain articulation is too woolly, you can have the short articulation playing in tandem (with CC7 adjustment as necessary), effectively stacking on the fly, on demand, at note-by-note resolution. And that is something the listener would hear.
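As MIDI event data, that on-the-fly stacking could look like the sketch below: a note duplicated onto the channel hosting a short articulation, preceded by a CC7 message that balances the layer. Channel numbers and the CC7 value are placeholders:

```python
# Sketch of stacking a gain-balanced short articulation under a sustain.
# Events are (time, kind, channel, ...) tuples. The channel assignments
# and the CC7 level are arbitrary assumptions for illustration.

SUSTAIN_CH, SHORTS_CH = 0, 1

def stack_note(time, note, velocity, layer_gain=70):
    """Return the sustain note plus a gain-balanced copy on the shorts channel."""
    return [
        (time, "note_on", SUSTAIN_CH, note, velocity),
        (time, "cc", SHORTS_CH, 7, layer_gain),   # CC7 = channel volume
        (time, "note_on", SHORTS_CH, note, velocity),
    ]

stacked = stack_note(0, 60, 100)
```

With separate tracks this duplication is just copy-paste in the DAW; the point is that the layer gets its own independent gain.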


----------



## wills (Aug 1, 2021)

But I use a lot of phrases or sequences with many articulation changes. For example, I sometimes change articulations across a series of three notes: first marcato legato for a good attack, then a sustained note, and at the end a short détaché. With your solution I would be obliged to split that short sequence across three tracks; it would be a nightmare to edit. On another note, I never use "volume", because CC7 is like a knob on a sound machine for adjusting the level of the loudspeakers. It is very far from the playing techniques for soft or loud, which we can approach using CC1, CC11 or velocities (depending on the library). But all these remarks come from the fact that I want to simulate real players; I don't want to "create" new sounds. If we are open to new creative approaches, we can of course do anything...


----------



## Emmanuel Rousseau (Aug 1, 2021)

wills said:


> But I use a lot of phrases or sequences with many articulation changes. For example, I sometimes change articulations across a series of three notes: first marcato legato for a good attack, then a sustained note, and at the end a short détaché. With your solution I would be obliged to split that short sequence across three tracks; it would be a nightmare to edit. On another note, I never use "volume", because CC7 is like a knob on a sound machine for adjusting the level of the loudspeakers. It is very far from the playing techniques for soft or loud, which we can approach using CC1, CC11 or velocities (depending on the library). But all these remarks come from the fact that I want to simulate real players; I don't want to "create" new sounds. If we are open to new creative approaches, we can of course do anything...


Well, some of the best mockup artists on the planet are using a track per articulation with...rather good results


----------



## cygnusdei (Aug 1, 2021)

wills said:


> I sometimes change articulations across a series of three notes


Oh sure, whatever works for you; by all means do it. I use keyswitches myself when it's convenient, e.g. if there is a trill that appears in just one bar. But when the number of keyswitch triggers approaches the number of notes itself, it's diminishing returns.

On the switching of articulations, though: whether it's keyswitches or maps or separate tracks, from the computer's point of view it's the same stitch job, just in different guises. The audio modeling approach, by contrast, probably works differently.


----------



## cygnusdei (Aug 1, 2021)

Emmanuel Rousseau said:


> Well, some of the best mockup artists on the planet are using a track per articulation with...rather good results


I think it's because keyswitches, maps, separate tracks etc. are just different user interfaces that can achieve exactly identical rendering.


----------



## Emmanuel Rousseau (Aug 1, 2021)

cygnusdei said:


> I think it's because keyswitches, maps, separate tracks etc. are just different user interfaces that can achieve exactly identical rendering.


Yes, I agree ! I'm using Expression Maps and a single track per instrument (with sometimes an additional MIDI track when I want a very special layering of articulations that couldn't be done otherwise).


This was just a reply to @wills , who seemed to say "I wouldn't be able to use one track per articulation, because I'm using a lot of articulations and am looking for realistic results".


----------



## J-M (Aug 1, 2021)

Preferably all articulations in a single track. Expression maps combined with my pad (Lemur) makes life good...


----------



## Loïc D (Aug 1, 2021)

Articulation maps in Logic using Spitfire UACC (CC#32). I've programmed my other keyswitch-based libs the same way.
Then I’m using a combination of OSCulator + Open Stage Control + Keyboard Maestro to display current track KS and select it OR apply it to the selected notes in piano roll editor.
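The UACC approach (Spitfire's protocol selects the articulation with a value sent on CC#32 ahead of the notes) can be sketched like this; the articulation-to-value table below is a placeholder, not Spitfire's published chart:

```python
# Sketch of UACC-style switching: a single controller (CC32 in Spitfire's
# UACC) carries a value that selects the articulation, so all notes can
# stay on one channel. The name->value table is a made-up placeholder.

UACC_CC = 32
ARTICULATION_VALUES = {"long": 1, "legato": 20, "spiccato": 40}  # placeholders

def uacc_select(articulation, channel=0):
    """Emit the CC message that would precede notes of this articulation."""
    return ("cc", channel, UACC_CC, ARTICULATION_VALUES[articulation])

msg = uacc_select("legato")
```

Because it's one CC with one value scheme, an articulation set or hardware button page only has to emit a single message per switch, which is why it simplifies the setup across libraries.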

But also, since I switched to Aaron Venture’s Winds, I don’t need keyswitches anymore which is perfect to me.


----------



## wills (Aug 1, 2021)

As we see, everyone has their own method. In fact, when articulation management is a very easy process (as in Reaper, Logic, Studio One, Cakewalk...), it is very easy to use one track per instrument. If you use Cubase Pro, given the difficulty of preparing and using Expression Maps, it can be easier to use one track per articulation. That is only my opinion, of course, not a universal truth!


----------



## Wunderhorn (Aug 1, 2021)

wills said:


> The window for creating expression maps in Cubase Pro is a nightmare if you have a big library...


The AG X-DAW app has helped me a lot dealing with Cubase's expression maps creation or conversion.


----------



## ALittleNightMusic (Aug 1, 2021)

wills said:


> As we see, everyone has their own method. In fact, when articulation management is a very easy process (as in Reaper, Logic, Studio One, Cakewalk...), it is very easy to use one track per instrument. If you use Cubase Pro, given the difficulty of preparing and using Expression Maps, it can be easier to use one track per articulation. That is only my opinion, of course, not a universal truth!


There's no issue with Cubase expression maps. Set them up once, you're good to go. My entire 500+ track template is based on expression maps. Same with many other composers.


----------



## Babe (Aug 1, 2021)

For some instruments I use one track, for some two, with articulation maps. Two reasons why I use two tracks:

1. Sometimes articulations aren't properly balanced, and switching between them creates too much of a volume difference.

2. If there is a sudden volume change in the music, from soft to loud, even within the same articulation, the release tail of the soft note is increased.


----------



## cygnusdei (Aug 1, 2021)

@Babe Good point, but the tail is due to reverb though, right? Otherwise wouldn't the instrument send a 'note off' MIDI signal upon attack of the next note?


----------



## Zedcars (Aug 2, 2021)

tack said:


> Some libraries do actually do this, for example to control the tail sample of a release, or some mid-note embellishment.


But then that would be a different sample that the playback engine/script triggers after the main note sample has triggered.


----------



## tack (Aug 2, 2021)

Zedcars said:


> But then that would be a different sample that the playback engine/script triggers after the main note sample has triggered.


Of course. But that's beside the point. He was saying keyswitches always precede note-on, and I was saying there are times keyswitches modify active notes. They change the current sample -- into another sample.


----------



## Alex Fraser (Aug 2, 2021)

Strange how folk sort themselves into two tribes: Keyswitchers vs Separate tracks.
The best answer IMO, is both! Depending on the work and context.

For key switching, Logic's articulation system is doing it for me. I love the way I can send the switches to an iPad, main keyboard, 25 key mini etc etc, depending on what I fancy at the time.


----------



## wills (Aug 6, 2021)

ALittleNightMusic said:


> There's no issue with Cubase expression maps. Set them up once, you're good to go. My entire 500+ track template is based on expression maps. Same with many other composers.


Yes, reliability is not a problem. But if you want to prepare a Cubase Pro expression map for, for example, VSL's Synchron Strings or Chris Hein Solo Strings, it is a huge job. You have to enter about three pieces of data per articulation manually, and the dedicated window doesn't communicate with the virtual instrument (if you click on an articulation, nothing happens in the VSTi, which is a real difficulty when you want to check a new expression map). You also can't copy/paste to prepare one. It is consequently a huge amount of work to prepare an expression map (I did about a hundred!). And afterwards, if you want to view the expression map in your MIDI editor, the area reserved for it is very large; you have no room left for notes or MIDI CC lanes. In some other DAWs, such as Logic, Cakewalk, Studio One or Reaper, the building tool is very easy to use, and the articulations are clearly written as text on a single lane that takes up very little space in the MIDI editor. As mentioned before, I am pretty sure Steinberg is working on simplifying this feature...


----------



## ALittleNightMusic (Aug 6, 2021)

wills said:


> And afterwards, if you want to view the expression map in your MIDI editor, the area reserved for it is very large; you have no room left for notes or MIDI CC lanes. In some other DAWs, such as Logic, Cakewalk, Studio One or Reaper, the building tool is very easy to use, and the articulations are clearly written as text on a single lane that takes up very little space in the MIDI editor.


You can solve this by using "Attribute" expression map articulations instead of Direction.


----------



## wills (Aug 6, 2021)

ALittleNightMusic said:


> You can solve this by using "Attribute" expression map articulations instead of Direction.


In my opinion the attribute/direction distinction has no real benefit. Other DAWs don't make this distinction, and if you only use attributes, the presentation of the expression map, in lanes, is the same; it just changes the length of the blue block. You still have one lane per articulation, unless you hide the expression map lanes and use the info line at the top of the editor. In other DAWs you have just one lane with the articulations clearly written out, and everything works as directions.


----------



## MusiquedeReve (Aug 6, 2021)

Separate channels for me

As an aside (please remove if I should have started a new thread) - when I am using a tremolo violin patch, would I still use a rise/sustain/fall type of expression automation or would the tremolo be more consistent than a long articulation?

Thank you


----------



## fourier (Aug 6, 2021)

I tend to use separate channels, but I do load a keyswitch mapping for my shorts, which also alters other parameters simultaneously.

This is probably a digression, but I'll share a screenshot since there don't seem to be many people composing in Ableton. It's all a work in progress, but for now I've set up snapshots (articulation presets) for my CSS shorts.

Using the Max4Live devices Controlchange 8.5 and HoneyMapper, I route buses and keyswitches to my macro controls, which in turn are routed to the knobs and sliders on my KeyLab 88 MkII.

Loading single instances of a template doesn't remember the negative track delay, while loading the full template does. It would be helpful if Ableton made that a possibility in an update.

I was very tempted by the Cubase competitive crossgrade, but withstood the temptation. Are you able to set up things in a similar fashion there?


----------



## Dewdman42 (Aug 6, 2021)

There is often miscommunication on this point about per-channel or per-track articulations. It is important to distinguish the SOURCE and the DESTINATION as distinct factors.

Having articulations separated into separate instrument instances listening on separate MIDI channels has a couple of advantages that have been mentioned already, such as easily being able to level-balance the articulations against each other, as well as the possibility of layering them and of combining articulations from different sample libraries into a single virtual instrument.

At the other end is the SOURCE: the track where the MIDI notes reside. It is generally desirable to have the entire instrument on a single track, like staves on a score. Many people have expressed this idea one way or another. There are occasionally unique situations where isolating one articulation on its own SOURCE track might make sense, but that is generally the exception, not the rule.

The problem is that none of the current DAWs provides a complete solution for directing a single SOURCE track to multiple articulations on multiple MIDI channels, because they only route notes to the different channels, not CC and other MIDI expression.
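The routing described above (notes sent to per-articulation channels, with CC/expression cloned to each of them) can be sketched as a toy event router; the articulation-to-channel table and event shapes are arbitrary assumptions for illustration:

```python
# Sketch of single-SOURCE to multi-channel routing: notes from one track go
# to per-articulation channels, and CC/expression events are broadcast to
# every articulation channel so each layer receives the same curve.
# Channel assignments are placeholders.

ARTICULATION_CHANNELS = {"legato": 0, "staccato": 1, "pizzicato": 2}

def route(events):
    """events: (kind, payload) pairs. Notes carry (note, articulation_tag);
    CC events carry (cc_number, value). Returns channel-tagged events."""
    out = []
    for kind, payload in events:
        if kind == "note_on":
            note, articulation = payload
            out.append(("note_on", ARTICULATION_CHANNELS[articulation], note))
        elif kind == "cc":
            # Clone expression to all articulation channels, the step
            # most DAW articulation systems skip.
            for ch in ARTICULATION_CHANNELS.values():
                out.append(("cc", ch, *payload))
    return out

routed = route([("cc", (11, 90)), ("note_on", (60, "staccato"))])
```

A real implementation (a Scripter script, Reaticulate, or similar) would also handle note-offs, overlaps and per-articulation delays; this only shows the note-routing plus CC-cloning idea.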

For this reason keyswitching is currently the better solution for most people, since one source track corresponds to one instrument and all expression data on the track covers all articulations.

I definitely do not like having multiple SOURCE tracks for multiple DESTINATION MIDI channels, but I know some people prefer that so they can get the benefits of level-adjusting articulations, layering them, etc.

For Logic Pro I made a Scripter script that does the single-to-multi routing of expression data and works well, but I'm moving to DP now and back to the same old frustration on that front. In that scenario I prefer using keyswitched instruments so that I have a single SOURCE track, which is a higher priority for me.

In the future I would like to see DAWs add support for the expression routing, as well as per-articulation latency offsets and the ability to layer multiple MIDI channels on a per-articulation basis. There are a few other things, but I feel those are the glaring omissions from all current articulation management systems, and unfortunately I see no indication of that changing anytime soon.

If and when DAWs provide some way to assign articulation state on a note-by-note basis, or as directions, I would use it, despite the fact that they could all use some work IMHO.


----------



## Trash Panda (Aug 6, 2021)

Dewdman42 said:


> The problem is that none of the current DAWs provides a complete solution for directing a single SOURCE track to multiple articulations on multiple MIDI channels, because they only route notes to the different channels, not CC and other MIDI expression.


Sorry to be "that guy" but Reaper does this.


----------



## Dewdman42 (Aug 6, 2021)

Glad to hear it if so, but I would need to see it to believe it, and anyway I don't care about Reaper.


----------



## Trash Panda (Aug 6, 2021)

Dewdman42 said:


> glad to hear it if so, but I would need to see it to believe it and anyway I don't care about Reaper.


Fair enough on not caring about reaper. At least you can now see and believe if nothing else.

[Video Removed]


----------



## Dewdman42 (Aug 6, 2021)

Show me a better example where you have one source track, with one volume curve, and it's feeding a series of articulations that are listening on different MIDI channels, and the curve is transferred to each of those MIDI channels as the notes go there.


----------



## Dewdman42 (Aug 6, 2021)

PS: without doctoring the MIDI track. In other words, you are not manually configuring the MIDI channel of every MIDI event on that track. Anyone can do that with any DAW, but that's not a practical solution either.


----------



## Trash Panda (Aug 6, 2021)

Sorry, I don’t follow.


----------



## Dewdman42 (Aug 6, 2021)

That’s what I suspected


----------



## Trash Panda (Aug 6, 2021)

Are you asking to have MIDI notes of different channels on a single track, with CC on a single channel in that track, but sending to articulations on other channels as a different MIDI channel?


----------



## tack (Aug 6, 2021)

Dewdman42 said:


> Show me a better example where you have one source track, with one volume curve, and it's feeding a series of articulations that are listening on different MIDI channels, and the curve is transferred to each of those MIDI channels as the notes go there.


Reaticulate supports this. I'd prove it with a video, but I'd be wasting my time as you don't care about Reaper.


----------



## Dewdman42 (Aug 6, 2021)

What I suggested in my earlier post is that there is a need to have both MIDI notes and expression events on a single track, using a single MIDI channel assigned to the notes (not manually assigning MIDI channels to each event), such that the articulation management system in the DAW can send the notes to different MIDI channels where articulation instruments are waiting, one channel per articulation, and the corresponding expression events will also be sent to those corresponding channels.

You said Reaper can do this. I still need to see it to believe it.


----------



## Dewdman42 (Aug 6, 2021)

tack said:


> Reaticulate supports this. I'd prove it with a video, but I'd be wasting my time as you don't care about Reaper.


And I still don't believe it.


----------



## tack (Aug 6, 2021)

Dewdman42 said:


> And I still don't believe it.


You won't take my word for it as Reaticulate's author -- would you really take my word for it with a video? (I'll do one, if you will.)


----------



## Dewdman42 (Aug 6, 2021)

I am not knowledgeable about making videos, but what video would you be wanting me to make? Yes, if I see it in a video I will believe it.


----------



## tack (Aug 6, 2021)

Dewdman42 said:


> I am not knowledgable about how to make videos, but what video would you be wanting me to make?


No I meant that I would do the video, if you really want to see it.



Dewdman42 said:


> Yes if I see it in video I will believe it.


Ok then.


----------



## Trash Panda (Aug 6, 2021)

Dewdman42 said:


> What I suggested in my earlier post is that there is a need to have both midi notes and expression events on a single track, using a single midi channel assigned to the notes (not manually assigning midi channels to each event)...such that the articulation management system in the DAW can send the notes to different midi channels where articulation instruments are waiting, one channel per articulation....and the corresponding expression events will also be sent to those corresponding channels.
> 
> You said Reaper can do this. I still need to see it to believe it.


I follow. Like Tack said, Reaticulate can do that in Reaper and Sound Variations/Music Symbols can do that in Studio One.


----------



## Dewdman42 (Aug 6, 2021)

Trash Panda said:


> and Sound Variations/Music Symbols can do that in Studio One.



It most certainly does not! Prove this statement!

Actually, I know what S1 does: it works somewhat like Cubase directions, in that when a new articulation is encountered (on a new channel), it switches the whole track to that channel, including all expression events. But it does this in a sloppy way that will cause weird problems, which I have outlined in other threads. It won't handle poly-articulation chords, for example, and there is often a need for overlapping expression to go to two channels at once, even without an actual chord, to handle the release of the previous note, etc. So go ahead, see how that works for you, and you'll see it's problematic. It is something, though...

Listen, I'm not trying to be a pest. This thread is quickly escalating into yet another tribalistic "my DAW is better than yours". Please stop that; it's not productive. ALL DAWs currently have deficiencies in this area. Get over it, and please submit feature requests to your favorite DAW maker.


----------



## Trash Panda (Aug 6, 2021)

Dewdman42 said:


> most certainly it does not! Prove this statement!


Sure thing. I’m playing daddy right now, but I’ll put together a video once he’s in bed.


----------



## Dewdman42 (Aug 6, 2021)

Make sure you account for poly-articulation chords and overlap from one articulation to another, as described above.


----------



## storyteller (Aug 6, 2021)

wills said:


> If you want to manage articulations well, and use MIDI controllers too, may I suggest Reaper. Cubase's Expression Maps are has-beens compared to Reaper's Reaticulate tool. The window for creating expression maps in Cubase Pro is a nightmare if you have a big library, and the expression list takes up a huge area in the MIDI editor; there is no room left for notes and MIDI CC lanes. Expression Maps are fine if you use instruments with a short list of articulations, but with big libraries they become unusable. In Reaper you also have the possibility to name all MIDI CCs. It is of course easier to work with MIDI CC when you can read "portamento" or "vibrato" on the CC lanes. In Cubase you will see CC21 or CC45 and have to keep every CC assignment for every instrument in your head; it is designed only for elephant memory! I used Cubase Pro for three years and switched to Reaper six months ago. Now I use all my virtual instruments with all their features (MIDI CC, articulations...) with great pleasure. I consequently use one track per instrument, like a real orchestra. It is a very nice way to work.



With OTR 2’s release, I created and included an easy-to-use GUI editor for Reaticulate. You can still use your existing Reaticulate reabanks alongside it, but the editor will only work with banks created in it. There is a thread on the reaper forum HERE with more info. It is super fast to create maps with it. No more text editing!

@tack was awesome to work through any questions I had designing it. Big shout out to his contributions to the community.


----------



## Trash Panda (Aug 6, 2021)

storyteller said:


> With OTR 2’s release, I created and included an easy-to-use GUI editor for Reaticulate. You can still use your existing Reaticulate reabanks with it as well. The editor will only work with the banks you create with it. There is a thread on the reaper forum with more info. It is super fast to create maps with it. No more text editing
> 
> @tack was awesome to work through any questions I had designing it. Big shout out to his contributions to the community.


Oh snap! I bought OTR awhile back but never got around to installing it. Might have to sit down and go through the process now.


----------



## tack (Aug 6, 2021)

Dewdman42 said:


> make sure you account for poly-articulation chords and overlap from one articulation to another as described above..


You just moved the goalpost though.

With Reaticulate, Reaper can easily do this:



Dewdman42 said:


> What I suggested in my earlier post is that there is a need to have both midi notes and expression events on a single track, using a single midi channel assigned to the notes (not manually assigning midi channels to each event)...such that the articulation management system in the DAW can send the notes to different midi channels where articulation instruments are waiting, one channel per articulation....and the corresponding expression events will also be sent to those corresponding channels.


But this ...


Dewdman42 said:


> there is often a need for overlapping expression to go to two channels at once, even when not an actual chord to handle the release of previous note


... well, I'm not really sure what the use case is here. If you have a prior note under articulation 1, hold it, activate articulation 2, and trigger a note in parallel under the new articulation, subsequent CCs will divert to the channel for articulation 2, but they are not sent to both channels at once.

Note-offs are routed properly, but if you need to send a CC back to the first channel to finesse a release of that articulation, say, I can't imagine why you would want that to also go to the other articulation on the second channel, which you may not even want to release at the same time (or even if you did, may require a different curve to get the right sound).


----------



## Dewdman42 (Aug 6, 2021)

tack said:


> You just moved the goalpost though.



Not at all; this is a fundamental requirement. Do I need to explain why? I just want to make sure these half measures are not checked off as complete.



tack said:


> But this ...
> 
> ... well, I'm not really sure what the use case is here. If you have a prior note under articulation 1, hold it, activate articulation 2 and trigger a note in parallel under the new articulation, CCs subsequently will divert to the channel for articulation 2, but doesn't send them both channels at once.
> 
> Note-offs are routed properly, but if you need to send a CC back to the first channel to finesse a release of that articulation, say, I can't imagine why you would want that to also go to the other articulation on the second channel, which you may not even want to release at the same time (or even if you did, may require a different curve to get the right sound).



Several use cases to think about. One is uncommon, but some people do need it, which is to have poly-articulation chords on one track. I have been told some people use that to layer two different sounds for one attack, for example, but there could be other reasons for having overlapping notes with different articulations in the same track.

Another use case: even if you don't have intentionally overlapping notes, let's say you have a quantized melodic line where the end of each note butts right up against the start of the next. Fine so far. But when you release the first note, there is some amount of time where the sound is still releasing, and it needs to continue receiving CC, pitch bend (and aftertouch) expression data. If the track has already been diverted to a different channel at that point, the tail of the first note will miss that necessary CC/PB data.
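A toy Python sketch of that failure mode (the event model, channel numbers, and timings are invented for illustration; this is not any DAW's actual behavior):

```python
# Toy model of the release-tail problem: a naive channelizer assigns every
# event to whichever articulation channel is active at its timestamp, so
# CC data meant for the tail of a released note gets diverted to the next
# note's channel instead.

def naive_channelize(events, switches):
    """events: (time, kind, data) tuples; switches: sorted (time, channel)
    articulation changes. Returns events tagged with an output channel."""
    routed = []
    for time, kind, data in events:
        channel = switches[0][1]
        for t, c in switches:
            if t <= time:
                channel = c
        routed.append((time, kind, data, channel))
    return routed

events = [
    (0.0, "note_on", 60),   # sustained note, articulation on channel 1
    (2.0, "note_off", 60),  # released, but the sample tail keeps ringing
    (2.0, "note_on", 62),   # next note under a new articulation
    (2.1, "cc11", 40),      # expression fade that BOTH tails should hear
]
switches = [(0.0, 1), (2.0, 2)]
routed = naive_channelize(events, switches)
# the CC11 fade lands only on channel 2; channel 1's ringing tail is starved
```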

Ivan also posted an interesting video last week showing exactly this problem with S1 in particular, because of these discrepancies. He took it down, though, because people were fighting instead of listening.


----------



## tack (Aug 6, 2021)

Dewdman42 said:


> Not at all, this is a fundamental requirement


You may see it that way, but you still described a new behavior subsequent to your original post where I responded that Reaper can do that. Reaper _can_ do what you said first, but not what you said afterward.

I'm interested in the use cases though. Because I can probably implement it easily enough (provided it doesn't break existing valid use cases).



Dewdman42 said:


> One is uncommon, but some people do need it, which is to have poly-articulation chords on one track. I have been told some people use that to layer two different sounds for one attack, for example, but there could be other reasons for having overlapping notes with different articulations in the same track.



With Reaticulate there are two obvious options:

Create a custom layered articulation (say long+spiccato) which explicitly routes to both patches in parallel
Use separate source MIDI channels on the same track and notes on different channels. The MIDI channels can still work independently with respect to articulation selection. This would be similar to what I demonstrate here.
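The first option can be pictured as a tiny routing table. The code below is purely illustrative (the articulation name and channel numbers are invented, and this is not Reaticulate's actual configuration format):

```python
# A layered articulation fans one source note out to several destination
# channels so multiple patches sound for a single attack.

LAYERED = {
    "long+spicc": [1, 4],  # e.g. sustain patch on ch 1, spiccato on ch 4
}

def route_note(note, articulation):
    """Return one (note, channel) pair per destination patch."""
    return [(note, channel) for channel in LAYERED[articulation]]

copies = route_note(60, "long+spicc")  # middle C fired at both patches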



Dewdman42 said:


> But when you release the first note, there is some amount of time where the sound is still releasing, and it needs to continue receiving CC, pitch bend (and aftertouch) expression data. If the track has already been diverted to a different channel at that point, the tail of the first note will miss that necessary CC/PB data.


Ok, I understand this scenario. But it's not clear to me that this should _always_ happen. For example, if the purpose of the second articulation was something like fortepiano, the CCs would abruptly change and indeed you wouldn't want those to transfer back to the previous note.




Dewdman42 said:


> Ivan also posted an interesting video last week showing exactly this problem with S1 in particular, because of these discrepancies. He took it down, though, because people were fighting instead of listening.


That's unfortunate.


----------



## Dewdman42 (Aug 6, 2021)

tack said:


> I'm interested in the use cases though. Because I can probably implement it easily enough (provided it doesn't break existing valid use cases).



Good!



tack said:


> Ok, I understand this scenario. But it's not clear to me that this should _always_ happen. For example, if the purpose of the second articulation was something like fortepiano, the CCs would abruptly change and indeed you wouldn't want those to transfer back to the previous note.



I think you want the curve to apply to all the notes as if they were being performed on the same MIDI channel, which is predictable and understandable. If you had a very soft note followed by an FFF note, with a curve between them, that is exactly what you would hear on a single, non-channelized track. Whether that is a problem I leave to you, but as it is, if the curve is being cut off, that can often be a problem. So if it's optional, which way would be the default? I would personally like a multi-MIDI-channel approach to perform as if it were a single track on a single channel, predictably. If you really need a soft note with its own CC curve that doesn't blast up before its release is over, you can always put those notes on separate tracks.
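The "behave like one keyswitched channel" behavior argued for here could be sketched as a broadcast: each expression event is copied to every articulation channel that currently has a sounding (or still-ringing) note. Names and data model are hypothetical:

```python
# Broadcast model: a channelized track imitates a single-channel keyswitch
# track by copying each CC/PB/AT event to every channel with a sounding or
# still-ringing note, so all tails hear the same curve.

def broadcast(event, sounding_channels):
    """event: (time, kind, value). Returns one copy per listening channel."""
    time, kind, value = event
    return [(time, kind, value, ch) for ch in sorted(sounding_channels)]

# a crescendo point while notes ring on channels 1 and 2: both hear it
copies = broadcast((3.5, "cc11", 96), {2, 1})
```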

As I said, Ivan posted a video the other day which showed other usability issues with this approach, which I can't remember right now, but it's important too; maybe he will post it again. It had to do with jumping back and forth while editing and working with the track. Anyway, I repeat: there is no complete DAW-based solution for this. Period. Ivan sells a third-party solution for LogicPro that does in fact handle all of this, and I have a similar free solution on GitLab for LogicPro. Cubase doesn't handle it. DP doesn't handle it. Reaper doesn't handle it. It sounds like Reaticulate is 75% of the way there, as is S1, and that's good to hear. Hope you will consider improving it further.


----------



## tack (Aug 6, 2021)

Dewdman42 said:


> You're just bickering now Tack.


I'm more explaining why I'm not going to bother with the video, because while I can demonstrate what I originally said I could, I can't demonstrate the subsequent behavior you described (and seem more interested in). At least not in the way you described it.



Dewdman42 said:


> So if it's optional, which way would be the default?


That's the crux of it, yeah. I'll poll Reaticulate's users and see what they would prefer.

To your earlier point, there are two variations here -- one where the notes overlap, and one where they are merely adjacent -- but I don't think these should be treated differently, as I think that violates the Principle of Least Astonishment. Unfortunately, while I think the non-overlapping scenario you described is probably pretty safe, I'm more apprehensive about whether this is the optimal default behavior in the overlapping-note case.



Dewdman42 said:


> If you really need a soft note with its own CC curve that doesn't blast up before its release is over, you can always put those notes on separate tracks.


Or a separate channel on the same track, like in the divisi example I linked to. Whichever is default, this approach can be used to implement the other behavior.

There are some other nuances to consider that can complicate implementation. For example, there could be more than 2 channels to replicate performance data across -- although perhaps if you have quick successive notes (each with different articulations) it may not be necessary in practice for all of them to get the same events. What do you think?

There should also be a cutoff defining how long performance events are replicated to channels that have no active notes. You want enough time to massage tails, but some patches burn CPU when they receive CCs even while no notes ring, so you wouldn't want to replicate the events for longer than is necessary.
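The tail window tack describes could be sketched like this (the window length and the data model are invented for the example):

```python
# Keep replicating performance events to a channel only for a limited
# window after its last note-off: release tails still get finessed, but
# idle patches stop burning CPU on CC traffic.

TAIL_WINDOW = 1.5  # seconds of CC replication after the last note-off

def channels_to_feed(now, last_note_off):
    """last_note_off: dict of channel -> time of most recent note-off
    (None means a note is still sounding on that channel)."""
    feed = []
    for channel, off_time in last_note_off.items():
        if off_time is None or now - off_time <= TAIL_WINDOW:
            feed.append(channel)
    return sorted(feed)

# ch 1 released 0.5s ago (in window), ch 2 still sounding, ch 3 long idle
state = {1: 9.5, 2: None, 3: 6.0}
still_fed = channels_to_feed(10.0, state)  # channels 1 and 2 only
```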


----------



## Dewdman42 (Aug 6, 2021)

tack said:


> I'm more explaining why I'm not going to bother with the video, because while I can demonstrate what I originally said I could, I can't demonstrate the subsequent behavior you described (and seem more interested in). At least not in the way you described it.



I think it would still be interesting to see what rearticulate can do. I understand though, making videos takes time.




tack said:


> To your earlier point, there are two variations here -- one where the notes overlap, and one where they are merely adjacent -- but I don't think these should be treated differently, as I think that violates the Principle of Least Astonishment. Unfortunately, while I think the non-overlapping scenario you described is probably pretty safe, I'm more apprehensive about whether this is the optimal default behavior in the overlapping-note case.



I like that principle. Well, that's partly why I'm in favor of a "channelized" solution from a single source track behaving the same way as a normal track going to a single MIDI channel on a keyswitched instrument: all notes on the track get all the CC, pitch bend, and aftertouch (excluding Note Expression, of course).




tack said:


> There are some other nuances to consider that can complicate implementation. For example, there could be more than 2 channels to replicate performance data across -- although perhaps if you have quick successive notes (each with different articulations) it may not be necessary in practice for all of them to get the same events. What do you think?



There definitely could be more than 2! Especially if you want to support intentional poly-articulation chords and such. Which can be done in LogicPro, by the way. I personally have never needed it, but I have at times heard people give reasons why they do.




tack said:


> There should also be a cutoff defining how long performance events are replicated to channels that have no active notes. You want enough time to massage tails, but some patches burn CPU when they receive CCs even while no notes ring, so you wouldn't want to replicate the events for longer than is necessary.



So what I did in my LogicPro solution is provide a GUI with a slider where you configure how much "continuation" time you want after the note-offs, from zero to a pretty wide range.

You can read about it and see the code, copy and paste if it helps, whatever, I don't care...

https://gitlab.com/dewdman42/Channelizer/-/wikis/home


----------



## Dewdman42 (Aug 6, 2021)

Also, to add further (not to be perceived as moving the goalposts, but my brain does not always think of everything at once): another factor a channelized solution needs to take into account is "chasing." It's similar to how DAWs chase CCs when you press play from somewhere in the timeline: they scan back for the last set value of pitch bend and every CC# and make sure those values are "chased" to the point where playback starts.

Well, with a multi-channel approach, every time you switch to a new channel, the CCs need to be chased! I very much doubt S1 is doing that. I can't speak for Reaticulate, but you can tell us. I can tell you the DAWs don't do it with their current articulation managers.
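A minimal sketch of what per-channel chasing would look like (the controller names and values are hypothetical): remember the last value of each controller seen anywhere on the track, and fire those values at the new channel before any notes go there.

```python
# Per-channel chasing: when the articulation system diverts a track to a
# new MIDI channel, re-send the last known value of every controller and
# pitch bend on that channel first, just as DAWs chase CCs on playback.

def chase_events(last_values, new_channel):
    """last_values: dict of controller name -> most recent value seen on
    the track. Returns the primer events to fire on the new channel."""
    return [(name, value, new_channel)
            for name, value in sorted(last_values.items())]

seen = {"cc1": 72, "cc11": 100, "pitchbend": 8192}
primer = chase_events(seen, 5)  # fired before any notes hit channel 5
```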


----------



## JohnG (Aug 6, 2021)

Some of the solutions you guys are talking about seem a heck of a lot more annoying than just putting up a separate midi track.

I mean, hats off to you if you want to work this way but some of these things seem like a lot of bother, and a lot of details to remember.

I still like having separate tracks for different articulations a lot of the time. It also makes it clearer for the orchestrator -- pizz is separate from spiccato from sustained.

I don't always do that but it is a lot easier for the recipient to understand your intentions.


----------



## tack (Aug 6, 2021)

Dewdman42 said:


> every time you switch to a new channel, the CCs need to be chased! I very much doubt S1 is doing that. I can't speak for Reaticulate, but you can tell us.


Yes, chasing (to the new destination channel) is handled with Reaticulate (as demonstrated here in this ancient video).

And reverse chasing (for want of a better term) is too -- but only for specific CCs (like CC2 and CC64) to prevent note hanging. Ironically, the code to selectively chase those CCs for the anti-hanging feature is more complex than if I just blindly fired back all CC/PB/AT events like you're arguing.
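The selective "reverse chase" tack describes might look something like this sketch (the CC whitelist and data shapes are illustrative, not Reaticulate's actual code):

```python
# Selective reverse chase: after a channel switch, only controllers that
# can cause stuck notes (e.g. CC64 sustain, CC2 breath) are fired back at
# the *previous* channel, rather than blindly mirroring every CC/PB/AT.

ANTI_HANG_CCS = {2, 64}  # breath controller, sustain pedal

def reverse_chase(cc_num, value, prev_channel):
    """Return the event to send to the previous channel, or None if this
    controller doesn't need reverse chasing."""
    if cc_num in ANTI_HANG_CCS:
        return (cc_num, value, prev_channel)
    return None

# a sustain-pedal release must reach the old channel; expression need not
pedal_up = reverse_chase(64, 0, 3)
fade = reverse_chase(11, 90, 3)
```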


----------



## tack (Aug 6, 2021)

JohnG said:


> Some of the solutions you guys are talking about seem a heck of a lot more annoying than just putting up a separate midi track.


In fairness, the past few posts have been getting well into the implementation weeds that users should never have to worry about. While we can debate about how the ideal articulation management system should work, I bet we can all agree that from a UX perspective it should all Just Work.


----------



## Dewdman42 (Aug 7, 2021)

tack said:


> Yes, chasing (to the new destination channel) is handled with Reaticulate (as demonstrated here in this ancient video).
> 
> And reverse chasing (for want of a better term) is too -- but only for specific CCs (like CC2 and CC64) to prevent note hanging. Ironically, the code to selectively chase those CCs for the anti-hanging feature is more complex than if I just blindly fired back all CC/PB/AT events like you're arguing.



Excuse me, what am I arguing? I said nothing about doing anything blindly.

Glad to hear that Reaticulate is chasing CC, pitch bend, and aftertouch while channelizing articulations! Now if we can just get the DAW makers to do it.


----------



## StillLife (Aug 7, 2021)

Sound Variations in Studio One 5. Here's hoping Spitfire etc. will join this fantastic system, just as VSL has done.


----------



## wayne_rowley (Aug 7, 2021)

Call me set in my ways, but I still prefer and use one articulation per track. It gives me the most flexibility when moving between articulations and in mixing/balancing them.

Wayne


----------



## Heinigoldstein (Aug 7, 2021)

I want to work in a score system and don't want to jump around a dozen systems just for the 1st violins.
For me there could be two solutions, but unfortunately neither seems to exist in a DAW yet (or maybe I've missed it so far):

1) Articulation Maps/IDs with individual (negative) delay options

2) Some kind of score stacks (like track stacks in Logic) where you could handle individual tracks, but with the possibility to show and edit all tracks of a stack in one score system.


----------



## PaulieDC (Dec 30, 2021)

Sorry to stir up the dust with a newbie question, but I want to make sure I understand the simpler picture: I'm going to use BOB as my library for my Berklee class in January (new acronym, Berlin Orch w/Berklee). I use Cubase and bought Babylon Waves.

So it's template time: I like the idea of playing in the flute or violin or whatever with a single articulation so I can get the melody down. Then I'd like to go back and change articulations where I need, and I assume that Babylon Waves will let me go back to change articulations where needed. I understand it's really Expression Maps and that BW just makes the setup WAY easier. Am I on track so far?

I also will replay the bits that are legato on a second track if they don't sound great. So I'm building my template with one instrument per track and I'm hoping Expression Maps created by BW will allow me to do the above... play in a line, then head back and quickly tweak/change articulations. 

Am I off the rails yet, or getting it so far? Might be a totally DUH question but I don't want to get too far into the template and find out I'm nuts. Thanks!


----------



## pulpfiction (Dec 31, 2021)

Hi


PaulieDC said:


> So it's template time: I like the idea of playing in the flute or violin or whatever with a single articulation so I can get the melody down. Then I'd like to go back and change articulations where I need, and I assume that Babylon Waves will let me go back to change articulations where needed. I understand it's really Expression Maps and that BW just makes the setup WAY easier. Am I on track so far?


Correct, as long as Babylon Waves includes maps for your library; I'd guess it does for the Berlin stuff.

Otherwise, you have to set them up manually as Expression Maps.

I use Reaper + Reaticulate. Here I can programme the expressions for each library separately or use the same for everything (that's what I do).

But in principle Expression Maps works similarly and should meet your requirements.

But maybe a Cubase user who knows more about this topic will chime in.


----------



## PaulieDC (Dec 31, 2021)

pulpfiction said:


> Hi
> 
> Correct.
> 
> ...


Actually that’s great info, just confirms what I had hoped. Now I just need to fire it up and work it out. Thanks! 👍🏼


----------

