# The annual question... Who is using Cubase Expression Maps



## jononotbono (Jan 12, 2020)

So, this seems to crop up every year, so I thought I'd just get this started rather than waiting for it to happen...

Who is using Cubase Expression Maps with their sample libraries? Who prefers separate tracks per articulation?

What are your favorite things about each way? What are the downsides to using Expression Maps?

Maybe this is well-trodden ground, but through experience and time new approaches and techniques get developed, and something once not accepted as a good way of working sometimes turns out to be one!

I'm kind of interested in reducing track count in a template but I have my reservations about Expression Maps. So, convince me... 


Jono


----------



## Dewdman42 (Jan 12, 2020)

What are your reservations?


----------



## jononotbono (Jan 12, 2020)

Off the top of my head...

Not being able to instantly see which art is currently selected when looking at the project window.

Not being able to layer arts and have separate control over each layered art

Balancing the arts across a whole template

The sheer time it takes to set this up properly


----------



## Guy Rowland (Jan 12, 2020)

I've tried three times - with really good forum help right here - to get on with Expression Maps. And three times I've failed. I hate them. Unintuitive doesn't begin to cover it. And not just in the set up, in use. I distinctly recall being unable just to drag a staccato articulation and make it spiccato. It's just awful, over-engineered and clumsy.

All I really want is a partitioned area of the key editor exempt from transposition, with articulation labels. As it is, I standardise all my basic articulations among instruments, and it works pretty well.


----------



## jononotbono (Jan 12, 2020)

Yeah, I just really want to love Expression Maps but every time I have set them up, I quickly go back to disliking them immensely.

Maybe I'll ask again next year


----------



## Dewdman42 (Jan 12, 2020)

So just to understand: you'd rather use nothing in Cubase than use them, because they don't do everything you wish they did? I don't disagree they could be improved, but aside from the setup hassles, it seems like they're still better than nothing.

I haven't spent much time with Cubase yet (my experience is with Logic Pro), but I want to get more into Cubase and will certainly try to make use of Expression Maps as well as I can. Logic Pro's articulation system is also imperfect, but better than nothing.


----------



## ed buller (Jan 12, 2020)

I used to swear by em... I have a fancy touchscreen too!... But now I have every articulation on its own track. Yup, the template is massive... but I never hear the wrong articulation playing!!!

best

ed


----------



## Dewdman42 (Jan 12, 2020)

So is the problem that Cubase has bugs in that area of the product or that expression maps are unintuitive and complicated to understand and setup such that the results don’t make sense?


----------



## Guy Rowland (Jan 12, 2020)

Dewdman42 said:


> So is the problem that Cubase has bugs in that area of the product or that expression maps are unintuitive and complicated to understand and setup such that the results don’t make sense?



The latter is a pretty good summary. I really would cope with the clumsy set up - just - if the results were worth it, but they so aren't.


----------



## Dewdman42 (Jan 12, 2020)

Can you be more specific?

I’m asking this not to be disagreeable but just to get more clarity from those who have worked with it. How does it fall short in the final result?


----------



## Jdiggity1 (Jan 12, 2020)

I use them, but not exclusively. It really depends on the library and what works best with it. Or if it's a commonly used library or not.
Most commonly, I have my tracks broken up into playing styles, with an expression map for each.
So I'll have a 'Longs' track, a 'shorts' track, a 'legato' track, and an 'FX' track. Apart from the legato track, each one will have its own expression map.
I find it especially handy for libraries like Hollywood Strings or CSS that have several different lengths or styles of short notes. It makes it much easier to add some variation and realism in ostinatos or patterns using shorts: make these 3 notes spiccato, these 2 notes spiccatissimo, these ones marcato.






You can apply the same logic to FX or runs. You probably won't need to stack minor runs and major runs together, right? Well, with expression maps you only need one Runs/FX track, and you choose which run or effect you want with the EM. Much cleaner than having 12 different tracks just to cover all the variations.

Another reason I use them, is that every instrument track you have active will add to the file size and save times, so if I already have my 'main' libraries set up as separate tracks per articulation, I might set up my 'supplemental' libraries as a single track with an expression map (if it doesn't already have its own keyswitch), just to save space and feel like I'm being more efficient.

If I find myself wanting to 'stack' or 'layer' articulations from a track i have set up with an expression map, I just duplicate the track. Boom. Layer away.

Edit: Added another screenshot of a bassoon track. Can just be helpful sometimes to see the whole performance in one view.


----------



## rgames (Jan 12, 2020)

The two biggest workflow enhancements for me over the last decade are Expression Maps and SSDs.

I think the value of ExpMaps will depend on your background - I find that people who come from a traditional music background favor them because they make MIDI more like traditional notation and it's easy to keep everything for one instrument group on one track.

When you look at a score the pizz and arco aren't on separate lines. They're on one line with a note on which technique is active so you can see the harmonic structure independent of the playing technique. ExpMaps let you do that in MIDI without having to remember which keyswitch goes with which artic. My template has 30 artics for some instruments, all on one track. There's no way I'd remember which keyswitch goes with which technique across all libraries.

And here's the other benefit: when set up correctly they work across libraries. So if I have a viola line with legato, staccato and trill and I drag it to a clarinet, guess what? It correctly plays legato, staccato and trill.
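rgames' cross-library point is easy to model: if each map keys articulations by name, moving a phrase between libraries is just a lookup swap. A minimal Python sketch, with entirely hypothetical maps and keyswitch numbers:

```python
# Hypothetical expression maps: articulation name -> keyswitch note.
# The numbers are made up for illustration, not any real library's layout.
VIOLA_MAP = {"legato": 24, "staccato": 25, "trill": 26}
CLARINET_MAP = {"legato": 12, "staccato": 13, "trill": 14}

def remap_phrase(phrase, target_map):
    """phrase: list of (articulation_name, note_number) pairs.
    Returns (keyswitch, note) pairs for the target library, so the same
    part still plays legato, staccato and trill after dragging."""
    return [(target_map[art], note) for art, note in phrase]

phrase = [("legato", 60), ("staccato", 62), ("trill", 64)]
print(remap_phrase(phrase, CLARINET_MAP))  # [(12, 60), (13, 62), (14, 64)]
```

The point being: the phrase stores names, not keyswitches, so nothing about it has to change when the target library does.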

They are clunky to setup sometimes but once you do they last a long time. I'm still using ExpMaps from 5-7 years ago. [EDIT: I just checked and my oldest expression maps still in use are from 2010.]

rgames


----------



## tack (Jan 12, 2020)

As the author of an articulation management system for a different DAW, I'm always interested in these threads, and to see the frustrations users experience with their own DAWs.

One thing that perplexes me about the Cubase implementation is the UI. I just don't understand how it scales when you have a library with dozens of articulations. It seems like the interface is just dominated by articulation rows.

Which of these UI approaches do you prefer?

[screenshots comparing two UI approaches omitted]
----------



## Jdiggity1 (Jan 12, 2020)

tack said:


> One thing that perplexes me about the Cubase implementation is the UI. I just don't understand how it scales when you have a library with dozens of articulations. It seems like the interface is just dominated by articulation rows.


Yeah the UI is another reason I split into performance style tracks (longs, shorts, fx, etc.).
But also, what many people don't realise is that you can change and view the articulations without even having the articulation lane open.
You can select the notes you wish to change and assign them an articulation from the articulation drop-down menu that appears in the Info tab at the top.


----------



## givemenoughrope (Jan 12, 2020)

In a dramatic twist...

I’m basically using jdog’s approach but WITH tack’s fancy and incredible kontakt multiscript. 

Except for Expression Maps (which may or may not be worse, better, or equal to tack's articulation management in that other DAW, no idea) and the new audio alignment feature in C10 (forget the name), I would gladly jump over to that other DAW and give it a go. Life's short. Why not...


----------



## jononotbono (Jan 12, 2020)

Jdiggity1 said:


> Another reason I use them, is that every instrument track you have active will add to the file size and save times, so if I already have my 'main' libraries set up as separate tracks per articulation, I might set up my 'supplemental' libraries as a single track with an expression map (if it doesn't already have its own keyswitch), just to save space and feel like I'm being more efficient.
> 
> If I find myself wanting to 'stack' or 'layer' articulations from a track i have set up with an expression map, I just duplicate the track. Boom. Layer away.



Yes, duplicating instrument tracks for layering is very easy and attractive. However, this all goes to shit when using VEPro. It's why in VEPro I actually have duplicates of, for example, Spitfire Performance Legatos (so there are 2 of each). 

The whole thing with save times increasing with Instrument track only templates seriously grinds my gears and most people that go on and on about disabled Instrument track templates, never talk about the save times. I once had 55 sec save times. Definitely not for me as much as I love the idea on paper.


----------



## Jdiggity1 (Jan 12, 2020)

jononotbono said:


> Yes, duplicating instrument tracks for layering is very easy and attractive. However, this all goes to shit when using VEPro. It's why in VEPro I actually have duplicates of, for example, Spitfire Performance Legatos (so there are 2 of each).
> 
> The whole thing with save times increasing with Instrument track only templates seriously grinds my gears and most people that go on and on about disabled Instrument track templates, never talk about the save times. I once had 55 sec save times. Definitely not for me as much as I love the idea on paper.


Oh. You use VEP.

You'll come to your senses eventually...

(kidding!!!!)


----------



## Dewdman42 (Jan 12, 2020)

Could any of these frustrations be solved by routing the output of a Cubase MIDI track through some kind of MIDI processing plugin, for the sake of channelizing, layering, or doing more sophisticated keyswitching than what Expression Maps can do on their own?


----------



## Jimmy Hellfire (Jan 12, 2020)

I've always used expression maps and have just recently learned new things about them I previously never understood. I have now created even more complex and extensive maps and it was totally worth it. It's all cleaner and more compact than before while covering even more articulations and variations of stuff I can choose from freely.

I would never want to work without them. The feature is not perfect and does need an overhaul, but even in its current state it's invaluable, and I truly believe that if someone says they're not worth it, they probably never really understood the concept.


----------



## Dewdman42 (Jan 12, 2020)

jononotbono said:


> Yes, duplicating instrument tracks for layering is very easy and attractive. However, this all goes to shit when using VEPro. It's why in VEPro I actually have duplicates of, for example, Spitfire Performance Legatos (so there are 2 of each).



I don't understand the problem with VePro here


----------



## Dewdman42 (Jan 12, 2020)

Jimmy Hellfire said:


> I've always used expression maps and have just recently learned new things about them I previously never understood. I have now created even more complex and extensive maps and it was totally worth it. It's all cleaner and more compact than before while covering even more articulations and variations of stuff I can choose from freely.
> 
> I would never want to work without them. The feature is not perfect and does need an overhaul, but even in it's current state it's invaluable, and I truly believe that if someone says they're not worth it, they probably never really understood the concept.



Any chance you can elaborate about what you figured out?


----------



## jononotbono (Jan 12, 2020)

Dewdman42 said:


> I don't understand the problem with VePro here



There is no problem with VEPro. The problem is that if you are using a MIDI track connected to a Cubase Rack Instrument, and you want to layer an articulation that you can play on that MIDI track, then when you duplicate the MIDI track you don't get two separate tracks with their own outputs. You get a duplicate with identical parameters, so you cannot layer articulations like that. Hope this helps.


----------



## Dewdman42 (Jan 12, 2020)

I'm still not understanding. By the way I'd rather try to find a way to avoid duplicating a source midi track also. But anyway you can have two midi tracks feeding one rack instrument..right? The only difference would be that the midi channel would need to be different on the duplicated midi track. yes? Might it also be possible to route a single source midi track through another midi track that only does one thing to clone the notes to a second midi channel..or something like that?


----------



## ed buller (Jan 12, 2020)

rgames said:


> When you look at a score the pizz and arco aren't on separate lines. They're on one line with a note on which technique is active so you can see the harmonic structure independent of the playing technique. ExpMaps let you do that in MIDI without having to remember which keyswitch goes with which artic. My template has 30 artics for some instruments, all on one track. There's no way I'd remember which keyswitch goes with which technique across all libraries.
> 
> And here's the other benefit: when set up correctly they work across libraries. So if I have a viola line with legato, staccato and trill and I drag it to a clarinet, guess what? It correctly plays legato, staccato and trill.



this is why I spent so long programing them...It's SUCH a great idea...but TBH i find it goes wrong so often !.....I gave up. 

best

ed


----------



## jononotbono (Jan 12, 2020)

Dewdman42 said:


> I'm still not understanding. By the way I'd rather try to find a way to avoid duplicating a source midi track also. But anyway you can have two midi tracks feeding one rack instrument..right? The only difference would be that the midi channel would need to be different on the duplicated midi track. yes? Might it also be possible to route a single source midi track through another midi track that only does one thing to clone the notes to a second midi channel..or something like that?



The point is, with an instrument track, you duplicate the track and then you have a second instance of the instrument with its own dedicated output. You now have two different tracks with the same instrument, but completely independent.

With VEPro and MIDI... let's say all 16 MIDI channels are used... and you want to duplicate one of the arts. You can't do that, because there are no more MIDI channels. It is exactly why, when building a template in VEPro, I have already thought of this and created a duplicate with its own MIDI channel.


----------



## youngpokie (Jan 12, 2020)

I am using Expression Maps with SA Studio Orchestra in Cubase. It took a really long time to set up, the way I did it was one Kontakt instance per instrument or per group (e.g. Oboe 1 or Oboes a 2 or Violins I). The maps are locked to CC32 for UACC KS and send to channels corresponding to each respective patch. As I use it, I find a mistake every now and then, but I can fix it in 5 seconds and simply re-save the map.
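For anyone unfamiliar with UACC, the mechanic youngpokie describes boils down to sending CC32 with the articulation's code before each note, and letting the patch switch on that value. A toy sketch (the codes in the dict are placeholders, not Spitfire's actual UACC chart):

```python
# Placeholder UACC codes: articulation name -> CC32 value.
# Consult Spitfire's published UACC chart for the real assignments.
UACC_CODE = {"long": 1, "legato": 2, "spiccato": 3, "pizzicato": 4}

def events_for_note(articulation, note, velocity=100, channel=0):
    """Return raw MIDI messages with CC32 first, so the articulation
    switch is processed before the note sounds."""
    return [
        ("control_change", channel, 32, UACC_CODE[articulation]),
        ("note_on", channel, note, velocity),
    ]

print(events_for_note("spiccato", 60))
# [('control_change', 0, 32, 3), ('note_on', 0, 60, 100)]
```

One advantage of locking maps to a single CC like this is that every patch in the template responds to the same switching scheme, so a mistake in one map is a five-second fix, as described above.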

Then in the MIDI Editor, I set up 2 presets: one for articulations and velocity only, the other for controllers. So I use the articulation preset only once per MIDI part and can then edit without any visual clutter in the other.

I really really love them, but there are some bugs. For example, I cannot set multiple types of trill to a single trill symbol and so I have to write trill in words instead of using notation. The other is that the articulation lanes in MIDI editor are the same color and so sometimes not easy to find right away.

I am sure there's a way to make them even more convenient (e.g. the divisi), but I don't have the time to dig into that.


----------



## 24dBFS (Jan 12, 2020)

Used them for a while, stopped using them, used them again, and so forth. I see some of my clients use them a lot and some not at all, so I think it all depends on the workflow. It takes a very long time to set them up properly for every library we use, but once done correctly they really are a time saver.

What I am always worried about is that Steinberg will give us some update of Cubase/Nuendo and all the stuff will stop working or just work differently. That has already happened a few times with their updates: suddenly orchestral templates done with C9 had to be redone in C10 because some things didn't carry over, etc.

For now I mostly use a combination of Instrument Tracks with multiple articulations switched via UACC (for SF libraries) triggered from my touchscreen, or via KS; some are set up with Expression Maps, but not as many as I used to. Enable/Disable tracks or VEP7 instances is the best combo for my workflow. Cheers!


----------



## Dewdman42 (Jan 12, 2020)

jononotbono said:


> The point is, with an instrument track, you duplicate the track and then you have a second instances of the instrument with it's own dedicated output. You now have two different tracks with the same instrument but completely different versions.


Well, if you are using MIDI tracks feeding a rack instrument... then I don't see that being the case. Don't use instrument tracks.



> With VEPro and Midi... Let's say all 16 Midi tracks are used... and you want to duplicate one of the arts, you can't do that because there are no more midi channels. It is exactly why, when building a template in VEPro, I have already thought of this and created a duplicate with it's own midi channel.



I think what I am understanding from you is that in the non-VEPro method you have been employing, you have an instrument track where the MIDI channel is irrelevant. If you duplicate it, then you are duplicating not only the MIDI but also the instrument. They are separate instrument tracks, so everything can be duplicated.

With VEPro you can have two mixer channels listening to the same MIDI channel, which can provide the layering without any tricky stuff in Cubase. Right?


----------



## babylonwaves (Jan 12, 2020)

I think it is a great technology with a rather horrible user interface, resulting in a mixed user experience. Yes, it takes time to set it up and get used to it, but all in all it can enrich your music immensely. Working with attributes allows you to use chords that consist of different playing techniques. I like to have the option to switch top notes to con sordino to make things sound a little less in your face. Or, as mentioned before, make ostinatos sound less static by varying the length of single notes (stacc/spicc/spiccatissimo). Try to do that with a track-per-articulation concept...
I usually have a track for long notes and a track for short notes, so when I really need to cheat and layer things, I borrow the second track for a moment.

It takes discipline for sure, but after getting used to it, I would not like to go back anymore.

One thing I recently started to discover for myself is using maps in a different way. Here, I have a Kontakt multi loaded with different mixes of HZ percussion. I can switch between the mixes for every single note in the drum pattern. Below shows Kontakt in Logic, but the idea is the same.

All you need is a map which uses MIDI channels to switch.
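The channel-switching map described above can be modeled as a simple rewrite of each note's MIDI channel. A sketch assuming a hypothetical Kontakt multi where each percussion mix listens on its own channel:

```python
# Hypothetical slot layout: mix name -> MIDI channel of the Kontakt slot.
MIX_CHANNEL = {"close": 0, "room": 1, "hall": 2}

def channelize(pattern):
    """pattern: list of (mix_name, note, velocity) triples.
    Returns note-on tuples routed to the channel for each mix, which is
    all the 'map' has to do to switch mixes per note."""
    return [("note_on", MIX_CHANNEL[mix], note, vel)
            for mix, note, vel in pattern]

groove = [("close", 36, 110), ("room", 36, 90), ("hall", 38, 120)]
print(channelize(groove))
```

Each hit in the pattern picks its mix independently, which is exactly what makes the per-note switching possible.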


----------



## Reid Rosefelt (Jan 12, 2020)

I use the Babylon Waves Cubase expression maps and find them really useful.


----------



## jononotbono (Jan 12, 2020)

Dewdman42 said:


> Don't use instrument tracks.



I'm not.




Dewdman42 said:


> I think what I am understanding from you is that in the non-VEPro method you have been employing, you have an instrument track where the MIDI channel is irrelevant. If you duplicate it, then you are duplicating not only the MIDI but also the instrument. They are separate instrument tracks, so everything can be duplicated.
> 
> With VEPro you can have two mixer channels listening to the same MIDI channel, which can provide the layering without any tricky stuff in Cubase. Right?



I'm not using Instrument Tracks. And I am definitely using VEPro so I'm lost as to what you are talking about.

I have to ask, do you use VEPro?


----------



## Dewdman42 (Jan 12, 2020)

Yes of course I use vePro. Sounds like we're missing each other somehow and I don't understand what you said. Good luck then.


----------



## Symfoniq (Jan 12, 2020)

jononotbono said:


> Yeah, I just really want to love Expression Maps but every time I have set them up, I quickly go back to disliking them immensely.
> 
> Maybe I'll ask again next year



Same. I tried several times to use expression maps in Cubase. Just not worth the hassle and quirks IMO.


----------



## jmauz (Jan 12, 2020)

People get so snippy so easily here...


----------



## youngpokie (Jan 12, 2020)

I forgot to mention one other thing - the Expression Maps also have an option for dynamics with velocity and CCs. As I understand it, this option is combining velocity, CC1, CC11 and then creates percentages for dynamics from ppppp to fffff and uses a combination of all 3 to drive them. Not sure, but remember seeing something about VST3 volume. I haven't used that option, but if anyone has, would be interesting to hear...


----------



## rgames (Jan 12, 2020)

What quirks are you guys finding? The only quirks I've encountered in the last several years have turned out to be user error in programming the maps. It can be a bit complicated, but it's just a MIDI mapper. It requires attention to detail and some thought about the best way to map everything out. But once you set up the maps, it does what you tell it to do (at least it does for me!). You can certainly create cumbersome maps but, well... don't do that!

My only real gripe is the controller lane display - it does get cramped. As mentioned above it would be much better to just show which one is active in the lane and give you some way to change it. There is the indicator above but that's not the best way to show it because you can't see what's coming up.

rgames


----------



## Dewdman42 (Jan 12, 2020)

Do you know of any good places to get more information about programming the maps, aside from whatever is included with Cubase docs? I'm a pretty technical guy, but I must admit when i first dabbled with this for one afternoon I was lost and confused.

The way they have decided to handle the controller lane for expression maps mystifies me too. I mean, if the current articulation is mutually exclusive with the others, then you don't need a 30-row grid like that. I could see something like that being useful if they provided a way to have layered or overlapping articulations or something; as it is, it's just wasted space.


----------



## MarcusD (Jan 12, 2020)

tack said:


> As the author of an articulation management system for a different DAW, I'm always interested in these threads, and to see the frustrations users experience with their own DAWs.
> 
> One thing that perplexes me about the Cubase implementation is the UI. I just don't understand how it scales when you have a library with dozens of articulations. It seems like the interface is just dominated by articulation rows.
> 
> ...



Cubase would be better if the articulation list could be collapsed and expanded. When expanded, you can draw them in as usual. When collapsed, you'd see one row listing which articulation is currently triggered. Maybe have a right-click pop-up list to select a different articulation, so it works both ways. That'd be cool. When there are a lot of maps it can feel clumsy.


----------



## youngpokie (Jan 12, 2020)

Dewdman42 said:


> Do you know of any good places to get more information about programming the maps, aside from whatever is included with Cubase docs? I'm a pretty technical guy, but I must admit when i first dabbled with this for one afternoon I was lost and confused.



I used this for inspiration; it was super helpful regardless of the library (I don't have this one):

How to create Expression Maps for Cinematic Studio Strings


----------



## Pablocrespo (Jan 12, 2020)

I cannot recommend using attributes instead of directions enough. Please try it. Don’t use the lanes. 

You just select the notes and then pick the articulation from a menu. I even recorded the mouse movements and can trigger them from a touch screen. 

Please do a simple one with attributes and try it for a week or so. It's a game changer, for me at least.


----------



## jononotbono (Jan 12, 2020)

What are the best tutorials on Expression Maps? I believe the greatest one I have ever seen is the one that JXL did in his Studio Time series.

Hmmm, I might watch it again and have another play about with them.


----------



## youngpokie (Jan 12, 2020)

jononotbono said:


> What are the best tutorials on Expression Maps. I believe the greatest one I have ever seen is the one that JXL did in his studio time series.
> 
> Hmmm, I might watch it again and have another play about with them.


Posted a link up thread


----------



## ed buller (Jan 12, 2020)

rgames said:


> What quirks are you guys finding? The only quirks I've encountered in the last several years have turned out to be user error in programming the maps. It can be a bit complicated but it's just a MIDI mapper. It requires attention detail and some thought about the best way to map everything out. But once you set up the maps it does what you tell it to do (at least it does for me!). You can certainly create cumbersome maps but, well... don't do that!
> 
> My only real gripe is the controller lane display - it does get cramped. As mentioned above it would be much better to just show which one is active in the lane and give you some way to change it. There is the indicator above but that's not the best way to show it because you can't see what's coming up.
> 
> rgames



My gripes... they don't work consistently... Some libraries (Spitfire) just don't like em and play up. Sometimes you're playing back a Pizz section and it's Arco... drives me fucking spare... never have that problem now!


best

ed


----------



## Jdiggity1 (Jan 12, 2020)

ed buller said:


> My gripes... they don't work consistently... Some libraries (Spitfire) just don't like em and play up. Sometimes you're playing back a Pizz section and it's Arco... drives me fucking spare... never have that problem now!
> 
> 
> best
> ...


Yep. Admittedly I've had the same issue. It's essentially like the midi note is read before the expression map data is, so there's a delay in switching articulations.
I've worked around this sometimes by inserting a "dummy note" somewhere out of the instrument's range, with the same articulation as the upcoming notes, as if to give it time to prepare.
Doesn't happen all the time, but I suspect it happens more on busy sessions with high latency due to plugins etc.
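The dummy-note trick above can be modeled like this; the event format, note number, and lead time are all illustrative, not anything Cubase actually exposes:

```python
def add_dummy(events, keyswitch, phrase_start, lead_time=0.25):
    """events: list of (time, kind, value) tuples, sorted by time.
    Prepends a short out-of-range note (note 0, below any instrument's
    playing range) carrying the upcoming keyswitch, so the patch has
    already switched by the time the real phrase starts."""
    dummy = [
        (phrase_start - lead_time, "keyswitch", keyswitch),
        (phrase_start - lead_time, "note_on", 0),   # inaudible placeholder
        (phrase_start - lead_time + 0.05, "note_off", 0),
    ]
    return sorted(dummy + events, key=lambda e: e[0])

phrase_events = [(2.0, "note_on", 64), (2.5, "note_off", 64)]
print(add_dummy(phrase_events, keyswitch=25, phrase_start=2.0))
```

The dummy note does nothing audible; its only job is to force the articulation switch to happen ahead of the notes that matter.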


----------



## ed buller (Jan 12, 2020)

Jdiggity1 said:


> Yep. Admittedly I've had the same issue. It's essentially like the midi note is read before the expression map data is, so there's a delay in switching articulations.
> I've worked around this sometimes by inserting a "dummy note" somewhere out of the instrument's range, with the same articulation as the upcoming notes, as if to give it time to prepare.
> Doesn't happen all the time, but I suspect it happens more on busy sessions with high latency due to plugins etc.



Yup.......total ballache.........I am happy now

best

e


----------



## jeffreycl (Jan 12, 2020)

I've never had any issues using expression maps. I do understand the issue with screen real estate being taken up by articulations. But I use TouchOSC to manipulate the screen, so if I want to change articulations, I press the articulation button and up pop my articulations. After making changes, I press the appropriate button to switch to the CCs I want to manipulate next; the articulations go away and I just see the appropriate CC lanes.

However, Pablo's post earlier about using attributes versus directions sure did give me something to consider. I have been using directions but may just change. He's given me something to think about. My whole world is starting to spin out of control... wait... that might just be the wine.


----------



## Pablocrespo (Jan 12, 2020)

jeffreycl said:


> I've never had any issues using expression maps. I do understand the issue with screen real estate being taken up with articulations. But, I use TouchOSC to manipulate the screen so if I want to change articulations, I press the articulation button and up pops my articulations. After making changes, I press the appropriate button to change to the CC's I want to manipulate next and the articulations go away and I just see the appropriate CC lanes.
> 
> However, Pablo's post earlier about using attributes (although he referred to them as articulations) versus direction sure did give me something to consider. I have been using direction but may just change. He's given me something to think about. My whole world is starting to spin out of control...wait...that might just the wine causing that.



Oh. Yes. Attributes!!! Not near the computer and my memory failed me. I will edit the post. 

I use a Steinberg CMC PD (with keyswitches mapped from C-2 upward so I don't lose any of the keyboard) to select articulations when recording and trying stuff.

Then in the key editor I can use the attributes list on top, with the touch screen or the mouse. I can also copy a part from CSSS to Spitfire Solo Strings and the attributes remain (I have them mapped similarly).
It has changed my workflow in a big way.


----------



## jononotbono (Jan 12, 2020)

Jdiggity1 said:


> Yep. Admittedly I've had the same issue. It's essentially like the midi note is read before the expression map data is, so there's a delay in switching articulations.
> I've worked around this sometimes by inserting a "dummy note" somewhere out of the instrument's range, with the same articulation as the upcoming notes, as if to give it time to prepare.
> Doesn't happen all the time, but I suspect it happens more on busy sessions with high latency due to plugins etc.



Yeah this kind of behaviour is totally unreliable and unacceptable. I would like to know if this ever happens with Tom’s @Real JXL template as he’s a guy that will surely have a bulletproof template. As I’ve previously said, I want to love expression maps. Perhaps I’ve been using them wrong in the past. Maybe not.


----------



## rgames (Jan 12, 2020)

jononotbono said:


> Yeah this kind of behaviour is totally unreliable and unacceptable. I would like to know if this ever happens with Tom’s @Real JXL template as he’s a guy that will surely have a bulletproof template. As I’ve previously said, I want to love expression maps. Perhaps I’ve been using them wrong in the past. Maybe not.


Make sure you send the expression map change before the MIDI note. I think you can attach the expression map data to the note and maybe that's what you're doing. That's not a good idea because, remember, it's just MIDI data and you can't be sure how the instrument will react to simultaneous MIDI data. That problem has nothing to do with expression maps, that's a MIDI timing problem.

It's like quantizing a keyswitch to the note it's affecting: sometimes it works, sometimes it doesn't. You have to send the keyswitch in advance of the note. Same thing with Expression Maps.
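The keyswitch-timing advice amounts to shifting every switch event a little earlier than the note it affects. A toy sketch (times in beats; the 1/32-beat lead is an arbitrary choice):

```python
def offset_keyswitches(events, lead=1/32):
    """events: list of (time, kind, value) tuples.
    Moves every 'keyswitch' event 'lead' beats earlier than its original
    position, so it is never simultaneous with the note it affects."""
    return sorted(
        ((t - lead, k, v) if k == "keyswitch" else (t, k, v)
         for t, k, v in events),
        key=lambda e: e[0],
    )

events = [(1.0, "keyswitch", 24), (1.0, "note_on", 60)]
print(offset_keyswitches(events))  # keyswitch now lands at 0.96875
```

With both events originally at beat 1.0, the instrument's reaction is undefined; after the offset, the switch is guaranteed to arrive first.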

Again, Expression Maps are just a MIDI mapper. They work as well or as badly as MIDI. So yeah they're not perfect, but they're no worse than MIDI.
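The idea can be sketched in plain JavaScript - the event shape and the 10 ms lead time are invented for illustration, not anything Cubase does internally:

```javascript
// Sketch: pre-send each keyswitch ahead of the note it affects, so the
// instrument never receives a "simultaneous" keyswitch + note.
// The event shape and the 10 ms lead are assumptions for illustration.

const KEYSWITCH_LEAD_MS = 10;

function offsetKeyswitches(events) {
  // events: [{ type: 'keyswitch' | 'note', timeMs: number, ... }]
  return events
    .map(e =>
      e.type === 'keyswitch' ? { ...e, timeMs: e.timeMs - KEYSWITCH_LEAD_MS } : e
    )
    .sort((a, b) => a.timeMs - b.timeMs); // re-serialize in time order
}
```

The same principle applies whether the "keyswitch" is a note, a CC or a program change: it has to arrive on the wire strictly before the note it governs.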

rgames


----------



## jononotbono (Jan 12, 2020)

rgames said:


> So yeah they're not perfect, but they're no worse than MIDI.



With Midi on separate tracks there is zero error. Everything always plays as programmed. So I’m not sure I can agree with that.


----------



## rgames (Jan 12, 2020)

jononotbono said:


> With Midi on separate tracks there is zero error.


Well then you have attained a level of perfection that I cannot! Even without expression maps my VIs don't always respond the way they're supposed to.

rgames


----------



## emasters (Jan 12, 2020)

I found them confusing at first. But after creating many expression maps, it's now straightforward. I can relate to the folks who struggle, as the process does seem more complex than it needs to be. That said, it helps keep the track count down (versus a track per articulation), and is helpful when transposing. What I really like is the approach with Iconica, where HALion will set up the expression maps inside Cubase as needed - really slick, especially if instruments change in the sampler. Wish more instruments did this. It would really simplify the whole thing.


----------



## jononotbono (Jan 12, 2020)

rgames said:


> Well then you have attained a level of perfection that I cannot! Even without expression maps my VIs don't always respond the way they're supposed to.
> 
> rgames



Not at all. If I have a midi track with data and that midi track is connected to a Rack instrument and the midi channel is assigned to, for example, String Spiccato... Then it ALWAYS plays String Spiccato.

If you use Expression maps, then you never know with 100% accuracy if the art will change.

You said...



rgames said:


> So yeah they're not perfect



Thanks


----------



## Jdiggity1 (Jan 12, 2020)

Expression maps have two 'articulation types' - Direction or Attribute.
Direction works like a latch keyswitch where you trigger it once and all future notes will follow that articulation until you trigger a different one.
Attribute works on a per-note basis, letting you choose a different articulation for each note.
To use a screenshot I posted earlier, you can see that the longer notes are using the Direction type, indicated by the long bars in the articulations lane. The short notes and trills are triggered as Attributes, which is made obvious by the fact that each note has its own little articulation indicator.






It sounds like Richard uses Direction, which allows you to change articulation _before _a note is triggered. That method essentially solves the issue described above, which is caused by using the Attribute method, where it is essentially trying to trigger a keyswitch AND midi note at the same time.

I should also add, this issue of delayed switching does not seem to happen if your expression map is set up to trigger different MIDI channels, which is how I like to use mine _most _of the time.
You set up a custom multi with different articulations on sequential midi channels, and have each articulation in the expression map route to the appropriate MIDI channel.
Note to @jononotbono : If you set it up this way (as a multi), you can still have separate MIDI tracks for each articulation for the purposes of stacking. Just have the expression map loaded on channel 1, and split it out into as many MIDI tracks as you need. Like so...






Now I can have all midi data on the "Floot Exp Map" channel using the expression map to switch between articulations, OR I can use the individual MIDI tracks. Ultimate flexibility.
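A minimal sketch of the routing idea in plain JavaScript - the articulation names and channel numbers are invented, and this only illustrates the mapping, not Cubase's actual format:

```javascript
// Sketch of the channelizing idea: each articulation in the expression map
// routes to its own MIDI channel of a custom multi. The instrument name,
// articulation names and channel numbers are invented for illustration.

const flootExpMap = {
  long:     1, // sustains on channel 1
  staccato: 2, // shorts on channel 2
  trill:    3, // trills on channel 3
};

function channelize(note, articulation) {
  const channel = flootExpMap[articulation];
  if (channel === undefined) {
    throw new Error('unmapped articulation: ' + articulation);
  }
  return { ...note, channel }; // same note data, rerouted to the art's channel
}
```

Because the switching is done by channel rather than by an extra keyswitch event, there's nothing that can arrive late relative to the note - which matches the observation that channelized maps don't show the delayed-switching issue.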


----------



## RoyBatty (Jan 12, 2020)

tack said:


> As the author of an articulation management system for a different DAW, I'm always interested in these threads, and to see the frustrations users experience with their own DAWs.
> 
> One thing that perplexes me about the Cubase implementation is the UI. I just don't understand how it scales when you have a library with dozens of articulations. It seems like the interface is just dominated by articulation rows.
> 
> ...



The fact that you can edit Reaticulate maps in a single file using a text editor like Brackets makes them so much nicer to work with than Cubase's clunky click-fest of an interface. I also like that it is closer to live keyswitching, so you can set up multiple switches independently and don't have to have a slot for every freaking variation/combination.


----------



## studioj (Jan 12, 2020)

babylonwaves said:


> working with attributes allows you to use chords that consist of different playing techniques. i like to have the option to switch top notes to con sordino to make things sound a little less in your face. or, as mentioned before, make ostinatos sound less static by varying the length of single notes (stacc/spicc/spiccatissimo). try to do that with a track per articulation concept ...
> i usually have a track for long notes and a track for short notes. so when I really need to cheat and layer things, i borrow the second track for a moment.


After experimenting a bit with Cubase expression maps and attributes, I cannot get them to reliably trigger a chord with more than one articulation - unlike Logic, which does this very well. In Cubase I must stagger the notes slightly to get different articulations within a vertical structure (using Spitfire libs with UACC keyswitching). Are you getting different results?
This was one thing that disappointed me about Cubase Expression Maps after working with Logic's Articulation Sets for a while. It's also unfortunate that you cannot set the input trigger for an art to be a CC - only a keyswitch or program change. One more gripe: if working with a map that uses different MIDI channels to change the art, it resets to the first articulation on transport start every time, even if the first slot is set to empty. With just a little update Steinberg could really improve this system. It's too bad they've let it sit for so long, seeing as they were the first to get something like this going.
Oh, final gripe: file management within map sets is not very well thought out or flexible.


----------



## Dewdman42 (Jan 12, 2020)

anyone using groups?


----------



## samphony (Jan 12, 2020)

One big downside for me:

no batch import of expression maps


----------



## trumpoz (Jan 12, 2020)

I used Expression Maps in Cubase for a while, but then stopped for the precise reason that I like to stack and layer articulations. Whilst the concept is logical from the point of view of someone who learned music the traditional way, in practice with sample libraries there are cases where stacked articulations just sound more appropriate.


----------



## Dewdman42 (Jan 12, 2020)

Played around some more with it tonight. I think I have a handle on it now. The groups feature is pretty cool - I will definitely use that. I would love to hear how other people are structuring their expression maps. As the CSS article above showed, it's also possible to combine channelizing with some creative multi-instrument use to achieve a few things.

I experienced the same thing as someone else noted, with poly-articulation chords, it doesn't work right if the chord notes all start exactly on the same spot. I didn't seem to have any problems so far with the keyswitches being late, but I'll be on the lookout for that.

Does anyone know what the DYNAMICS lane does?


----------



## Jimmy Hellfire (Jan 12, 2020)

Dewdman42 said:


> Any chance you can elaborate about what you figured out?



I never understood what the second, third and fourth articulations meant. Previously I was using expression maps in a simple way - just create an entry for each articulation I wanted to cover and enter the respective name in the Art. 1 column (spicc, stacc, long etc.)

I wasn't aware that you could use Art. 1-4 simultaneously and that they could represent different categories or attributes. Admittedly, that's not all that interesting for sample libraries with a simple layout where you have your spicc, stacc, long, legato, marcato, pizz, bartok pizz, col legno, tremolo, that's it. You don't really need any more precise specification.

But for very complex libraries like Dimension Strings, you just need a more intelligent approach to organize this stuff. Or things like Synchron Strings, where you have all these variations - the longs can be soft attack, normal attack or marcato, but also no vibrato, light, strong, or xfade, etc. That's many parameters which, if you wanted to cover all variations of long notes, would absolutely clutter up your expression map.

I realized that using the other articulation columns, I could further specify: long - which type of attack? - which type of vibrato? etc. In the MIDI editor itself, it just says "long", but under the list of articulations there are now additional lanes where I can specify which type of attack and vibrato I want for that note. Since these attributes are also available for legatos, it works the same: I pick "legato" and specify the rest below. Since tremolo can have a marcato as well, the same applies. This way, I can reduce the number of lanes in the articulation list while actually covering more specifics.

Another example could be Big Bang Orchestra Andromeda, where every single articulation can be senza piccolo or con piccolo. The best way to handle this is with a sort of global "switch", so Art.2 is perfect for this. The same could be used for global things like normal or muted, etc.
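The payoff can be sketched with a little arithmetic - the attack and vibrato lists below are examples in the spirit of the Synchron Strings longs described above, not the library's actual layout:

```javascript
// Why the Art.1-4 columns help: with one slot per combination the map needs
// attacks × vibratos slots, but with separate attribute lanes it only needs
// attacks + vibratos. The lists below are illustrative examples.

const attacks  = ['soft', 'normal', 'marcato'];
const vibratos = ['none', 'light', 'strong', 'xfade'];

const slotsPerCombination = attacks.length * vibratos.length; // one slot per combo
const slotsAsAttributes   = attacks.length + vibratos.length; // one lane entry each
```

With only two small attribute sets that's already 12 slots versus 7 lane entries, and the gap widens fast as you add more categories.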


----------



## babylonwaves (Jan 13, 2020)

jononotbono said:


> Yeah this kind of behaviour is totally unreliable and unacceptable. I would like to know if this ever happens with Tom’s @Real JXL template as he’s a guy that will surely have a bulletproof template.


as for issues related to Attributes - usually, it's the instruments that act up. not cubase.


----------



## laurikoivisto (Jan 13, 2020)

i'm using @babylonwaves Art Conductor for expression maps and they work like a charm!


----------



## jononotbono (Jan 13, 2020)

babylonwaves said:


> as for issues related to Attributes - usually, it's the instruments that act up. not cubase.



Im actually going to give Expression Maps another go.


----------



## Pablocrespo (Jan 13, 2020)

Use attributes, and if you have any question send me a PM...if I know I will help you


----------



## Tfis (Jan 13, 2020)

My workflow:

1. Do your patches from scratch
2. Use Attributes
3. First slot must be empty
4. use the same ks for all libs (A0 is always sustain, every instrument)
5. transpose your (low)instruments by 12 or 24, the ks won't be affected
6. you don't need the articulation lane, different colors for the events should do the job.


----------



## jononotbono (Jan 13, 2020)

Tfis said:


> My workflow:
> 
> 1. Do your patches from scratch
> 2. Use Attributes
> ...



Why must the first slot be empty? I remember a few years ago I made expression maps for 8Dio Adagio and Agitato, and I had to do that. It's some kind of bug, right? And if so, why hasn't it been fixed yet? I haven't noticed because I don't use Expression Maps, and I thought it would have been fixed by now.


----------



## J-M (Jan 13, 2020)

*raises hand


----------



## jononotbono (Jan 13, 2020)

Also, regarding the workflow of expression maps, is it fair to say that the workflow is to use something like a sustain patch, play the part, and then replace the arts with the ones you want? Then, once they have been selected and all Attribute data drawn in, you have to tweak any MIDI data accordingly to make everything flow, sit at the correct levels, etc.

Also, as an example, are people using Spitfire Time Machine articulations mixed with standard articulations. If so, I’m guessing they are bringing up the corresponding CC lane to tweak the TM stretch functions of the TM patch. I usually have TM patches grouped together on separate tracks but this is definitely a different workflow so I’m curious what people are doing.


----------



## babylonwaves (Jan 13, 2020)

jononotbono said:


> why must the first slot be empty?


i'm curious as well.


----------



## fahl5 (Jan 13, 2020)

I've worked with them since they were introduced, and they made my workflow for orchestral music dramatically more efficient than working with keyswitches.
I tend to program universal maps that I can use the same way across as many different instruments as possible, which may require programming my own customized patch presets where possible, but that ultimately makes the maps easier to use and the results more interchangeable between tracks.
Since I currently tend to combine different sample libraries this has become more difficult and I'm forced to use more maps, but I would still never change to a DAW without a comparable function.
I mostly assign the articulations to a selected MIDI event via the Cubase Info Line, which makes programming detailed, "articulated" orchestral music far easier than it has ever been before.


----------



## Jdiggity1 (Jan 13, 2020)

jononotbono said:


> Also, regarding the workflow of expression maps, is it fair to say that the workflow is to use something like a sustain patch, play the part, and then replace the arts with the ones you want? Then, once they have been selected and all Attribute data drawn in, you have to tweak any MIDI data accordingly to make everything flow, sit at the correct levels, etc.



Your expression map will work like a keyswitch, so you don't have to play it in with the first articulation in the EM list. Either choose the most appropriate, or change keyswitches while you play. The EM will know which articulation/keyswitch you are using and apply that to the notes you record.


----------



## Dewdman42 (Jan 13, 2020)

jononotbono said:


> why must the first slot be empty?



I found this related discussion: https://www.steinberg.net/forums/viewtopic.php?t=86999

I was noticing this behavior last night and it was annoying me too. Steinberg must have some reason for it, but it is rather annoying. Bottom line: your first slot should either be blank, or perhaps hold some kind of default "home" articulation. It sounds like whenever you hit stop, the expression map pops back to the first slot. If it's empty, no keyswitches will be sent, but channelizing will still follow that first slot, so that at least is somewhat predictable.

The thing I found weird, and which threw me off, is that when I hit stop, the keyswitch for the first slot is sent. Then when I hit play, if the first note also uses that slot, Cubase doesn't send the keyswitch again (because it doesn't need to - it was sent when I hit stop).

The bummer is that if you hit stop, no further keyswitch will be sent; so if the last performed articulation required channelization, you won't be able to play your keyboard and hear that channelized articulation until you engage it with a keyswitch again.


----------



## jononotbono (Jan 13, 2020)

Jdiggity1 said:


> Your expression map will work like a keyswitch, so you don't have to play it in with the first articulation in the EM list. Either choose the most appropriate, or change keyswitches while you play. The EM will know which articulation/keyswitch you are using and apply that to the notes you record.



And what about using track offset? Different arts have different start times. This is easy to adjust when they're on separate tracks.


----------



## Rich4747 (Jan 13, 2020)

I use one expression map for all my VSTs. Simple and quick. Using Attributes is the key, IMO. I just select which MIDI note I want to be what; Attributes don't use the lanes - just grab notes and set them to what you want. In my opinion it's much easier to just move my keyswitches into my expression map.


----------



## Dewdman42 (Jan 13, 2020)

jononotbono said:


> And what about using track offset? Different arts have different start times. This easy to adjust when on separate tracks etc



Not supported now.

Here are a few things not currently handled by Expression Maps:

1. Timing offsets per articulation, to compensate for slow attack times
2. CCs, pitch bend and aftertouch are not channelized along with notes when using a channelizing expression map
3. _Easy_ layering of an articulation across more than one channel (potentially with different keyswitches per layer, etc.)

I think layering can probably be achieved with VE Pro and Expression Map channelizing, but the other two need more help in Cubase.
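A hypothetical sketch of what per-articulation offset compensation could look like, in plain JavaScript - the articulation names and millisecond values are invented, not measured from any library:

```javascript
// Hypothetical sketch of the missing feature: per-articulation timing
// offsets that pull each note earlier by its patch's attack latency.
// Articulation names and millisecond values are invented for illustration.

const attackOffsetMs = {
  legato:    -120, // slow legato transitions need the most lead
  sustain:    -60,
  spiccato:   -50,
  pizzicato:    0, // effectively instant attack
};

function compensate(notes) {
  // notes: [{ articulation: string, startMs: number }]
  return notes.map(n => ({
    ...n,
    startMs: n.startMs + (attackOffsetMs[n.articulation] || 0),
  }));
}
```

A single track offset can only apply one of these values to everything, which is exactly why mixed articulations on one track fall out of time without per-articulation handling.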


----------



## jononotbono (Jan 13, 2020)

I guess the solution I would personally use to get around not being able to use track offset (which is hugely important) would be creating macro commands that nudge MIDI data by set values and putting them on a touch screen. For example, -50ms is a good start for Spitfire SSO string shorts. Highlight the notes, press the macro button, and they nudge back. Manually. Hmmm.


----------



## youngpokie (Jan 13, 2020)

I’m confused about track offset. I use it with expression maps and I only need to nudge the expression slightly ahead of the note. Works quite well, what am I missing?


----------



## Dewdman42 (Jan 13, 2020)

For now, yeah. Or you can get your hands dirty with scripting using LuaProtoPlug or something similar. I think some crafty plugins could make up for the above-mentioned deficiencies until/unless Steinberg addresses them in the Expression Map directly... which would of course be much more ideal.

LogicPro is suffering from the same deficiencies in their own articulation Management.


----------



## Dewdman42 (Jan 13, 2020)

youngpokie said:


> I’m confused about track offset. I use it with expression maps and I only need to nudge the expression slightly ahead of the note. Works quite well, what am I missing?



In my opinion you should not have to nudge the expression maps earlier. People said that earlier in the thread, but I haven't experienced it. If anything, that would only be needed when using Direction articulations. Attribute articulations are in fact attached to the notes and can't be nudged earlier... nor do they need to be. Yet another reason to use Attribute-style expression maps. Direction articulations are maybe more useful for longer-running things set up on Group 2, like dynamic levels - and sure, nudge those ahead of the first note affected.

What we've been talking about regarding timing is more related to the inherent sample attack latency that exists in some libraries, especially strings. And if you mix and match articulations on one source track, different articulations may have different amounts of attack latency. So you either have to nudge all the notes earlier yourself to make up for the attack latency... or have the articulation management system do it for you, which right now it does not.


----------



## Guy Rowland (Jan 13, 2020)

Wow, it's all coming back to me scanning these posts...

*shudder*

Crazy system.


----------



## jononotbono (Jan 13, 2020)

youngpokie said:


> I’m confused about track offset. I use it with expression maps and I only need to nudge the expression slightly ahead of the note. Works quite well, what am I missing?



If one art has a specific delay time and another art has a different one, then track offset can only compensate by a single value, so one of the arts will still be out of time.

Hopefully that doesn't sound confusing.


----------



## Dewdman42 (Jan 13, 2020)

Guy Rowland said:


> Wow, it's all coming back to me scanning these posts...
> 
> *shudder*
> 
> Crazy system.



I don't really see it that way. I have spent a lot of time with LogicPro's articulation management and understand its limitations, it is also far from perfect, but its very useful. If you have a specific need that it doesn't address, it can be frustrating.

As of now I see Cubase Expression Maps as being slightly more capable than LogicPro's system, primarily because of the Group concept, which LogicPro does not have. LogicPro perhaps makes up for it by having a scripting system that some third parties have exploited to fill in SOME of the gaps, but the gaps mentioned above around timing, layering and channelizing remain there too.

Still, in many cases it works terrifically. Not sure why you would call it a "crazy" system.


----------



## Jimmy Hellfire (Jan 13, 2020)

Guy Rowland said:


> Wow, it's all coming back to me scanning these posts...
> 
> *shudder*
> 
> Crazy system.



I think this might be a bit of PTSD. It actually really is quite a simple concept!


----------



## richhickey (Jan 13, 2020)

rgames said:


> Make sure you send the expression map change before the MIDI note. I think you can attach the expression map data to the note and maybe that's what you're doing. That's not a good idea because, remember, it's just MIDI data and you can't be sure how the instrument will react to simultaneous MIDI data. That problem has nothing to do with expression maps, that's a MIDI timing problem.
> 
> It's like quantizing a keyswitch to the note it's affecting: sometimes it works, sometimes it doesn't. You have to send the keyswitch in advance of the note. Same thing with Expression Maps.
> 
> Again, Expression Maps are just a MIDI mapper. They work as well or as badly as MIDI. So yeah they're not perfect, but they're no worse than MIDI.



This is just misinformation that is going to confuse people.

First - MIDI is a serial protocol and there is no such thing as 'simultaneous MIDI' on a single channel. If there is a CC message and a note on message, one comes before the other.

Second - Cubase expression 'attributes' are _on the notes_, so there's nothing the user can do re: sending them first. Cubase sends them first. Even a little ahead, to deal with one of the two things that break this system (and cause people to blame concurrency, phases of the moon etc):

1) Kontakt has a bug that causes it to process notes before CCs, so if you are trying to use e.g. Spitfire UACC you can get articulation change CCs after the notes. https://www.native-instruments.com/forum/threads/kontakt-wrongly-reorders-cc-events.345041/

2) VST3 (and only 3) separates CCs from notes, breaking the serial MIDI stream into two parts in a way that can't be reconstructed in order. https://www.steinberg.net/forums/viewtopic.php?f=246&t=174160&p=931253#p931251

These two problems cause us all vast amounts of wasted time and are the result of big industry players thinking they know better and ignoring the MIDI standard. Quite frankly, it's like building on sand.
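The VST3 issue can be demonstrated with a small sketch in plain JavaScript - the event shapes are invented, but the logic shows why a timestamp-only merge of two separated streams cannot recover the original interleaving of same-tick events:

```javascript
// Sketch of the VST3 issue described above: once CCs and notes are split
// into separate streams, a timestamp-only merge cannot recover the original
// interleaving of events that share the same tick. Event shapes are invented.

function splitStreams(events) {
  return {
    ccs:   events.filter(e => e.type === 'cc'),
    notes: events.filter(e => e.type === 'note'),
  };
}

function naiveMerge({ ccs, notes }) {
  // Sorting by timestamp alone cannot restore same-tick ordering:
  // a stable sort just keeps all the CCs ahead of all the notes.
  return [...ccs, ...notes].sort((a, b) => a.time - b.time);
}
```

So a CC placed between two same-tick notes (say, a UACC articulation change meant for the second note only) cannot survive the round trip - exactly the "can't be reconstructed in order" problem.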


----------



## Dewdman42 (Jan 13, 2020)

richhickey said:


> This is just misinformation that is going to confuse people.
> 
> First - MIDI is a serial protocol and there is no such thing as 'simultaneous MIDI' on a single channel. If there is a CC message and a note on message, one comes before the other.
> 
> Second - Cubase expression 'attributes' are _on the notes_, so there's nothing the user can do re: sending them first. Cubase sends them first. Even a little ahead, to deal with one of the two things that break this system (and cause people to blame concurrency, phases of the moon etc):



+1



> 1) Kontakt has a bug that causes it to process notes before CCs, so if you are trying to use e.g. Spitfire UACC you can get articulation change CCs after the notes. https://www.native-instruments.com/forum/threads/kontakt-wrongly-reorders-cc-events.345041/



It's probably CCs before notes - we don't know the internals for sure - but my own observation was that when CC automation is used in Kontakt, then at any given sample time, all the related CC automation is processed before any notes. Which may or may not be considered a bug... there are sound reasons why that would be expected to happen. But on the other hand, this architecture makes it impossible to rely on the serial nature of MIDI to use different CC values across multiple notes of a chord all on the same clock tick.

The unfortunate thing is that if you were using CCs in between the notes of a chord as switches for different articulations on different notes - all of them at the same sample time - then the last CC message would win and all notes of the chord would use that articulation. That is, if and only if Kontakt's CC automation is being used. If the instrument itself is scripted or coded to receive the incoming note and CC messages directly, then I don't necessarily think the ordering would be munged, unless KSP scripting has specific bugs of its own.




> 2) VST3 (and only 3) separates CCs from notes, breaking the serial MIDI stream into two parts in a way that can't be reconstructed in order. https://www.steinberg.net/forums/viewtopic.php?f=246&t=174160&p=931253#p931251



I was not aware of this, thanks for pointing it out. If it's true, then CCs are not acceptable to use as keyswitches with VST3 instruments. A situation similar to Kontakt's CC automation would quite possibly occur, where all of the CC parameters for a given sample time are processed before any notes. And perhaps rightly so - but it then makes it impossible to interleave CCs between notes at a given sample time and expect them to remain in the intended order.

I expect that using a slightly different start time for each note should resolve that problem in both of the above cases, though.


----------



## Dewdman42 (Jan 13, 2020)

I think the above general observation about separate serial streams is probably also why the Expression Map system cannot handle poly-articulation chords (which LogicPro can handle, by the way).

The Expression Map must be processed before any notes. So all related keyswitches for that MIDI clock tick are sent first, according to the Expression Map slot... then finally any notes.

I believe Steinberg could still fix that, since with the Attribute articulation type each note carries the specific info needed to send the right keyswitches serially in front of it. Cubase doesn't need to handle all of the expression map data before doing all the notes. This should be a feature request - being able to do this is useful for divisi handling. However, it's also possible to nudge the notes of a chord slightly so that everything interleaves serially as desired.

But the above issue of CCs being handled in a separate serial queue still means the instruments themselves can't receive one serial list... and so, if that is true of VST3, then non-note keyswitches should be avoided - which is troubling. It seems Steinberg has overlooked the possibility of needing different kinds of expression applied differently to different notes of a chord on the same MIDI tick.


----------



## richhickey (Jan 13, 2020)

Dewdman42 said:


> Its probably CC's before notes, but we don't know for sure the internals



It's notes before CCs. You can reproduce it in Logic with this script: put the script in as a MIDI FX, load it, drop Kontakt on the channel and load its KSP MIDI monitor in verbose mode. I don't see the note come second consistently until about 6 ms of delay:


```
/*
Test Kontakt event order: send a CC immediately before each note-on,
then delay the note and observe when the reported order stabilizes.
*/

var NeedsTimingInfo = true;

function HandleMIDI(event) {
    if (event instanceof NoteOn && event.velocity > 0) {
        // Send an arbitrary CC first, then the note, nominally on the same tick
        var cc = new ControlChange(event);
        cc.number = 2;
        cc.value = 42;
        cc.send();
        event.sendAfterMilliseconds(0); // increase this until order is stable, ~5-6 ms
    } else {
        event.send();
    }
}
```


----------



## Dewdman42 (Jan 13, 2020)

Hmm, that's interesting. I do get the same results now.

I don't think it used to do that, did it change with Kontakt6 or something?

The thing is, Spitfire UACC would totally break if this were actually happening - and CSS too - so it makes me wonder whether the KSP engine is just reporting it wrong. It makes absolutely no sense that NI would process note-ons before CCs on the same beat... and actually, for this mode, it makes no sense that they would be on separate queues either.

Well, anyway, if that's true, then CC keyswitches should really be avoided at all costs - both for VST3 instruments and for any Kontakt instrument, even when not VST3.


----------



## Dewdman42 (Jan 13, 2020)

And frankly, you should not have to delay the note by 6 ms to get them in sequence! That's even worse - something is seriously broken if it's true - but I'm still doubtful... I think this could be a KSP multiscript issue. It could still be an issue for many people, though.


----------



## Tfis (Jan 13, 2020)

Never had problems with Kontakt and expression maps.


----------



## DS_Joost (Jan 14, 2020)

The more I read about it, the more I am very, very glad I just use one articulation per track.

Once in my career, I dealt with huge templates and expression maps and Lemur on an iPad and all that jazz... until I realized I was putting much more time into maintaining a template and all the technical hufflepuff that comes with it instead of making music.

It was at that moment that I threw all of that overboard. It's really freeing not having to deal with what is essentially lipstick on a pig. It's fluff. Very technical fluff. And it has zero to do with actually making music. Which is what we all want to do, right?


----------



## Markus Kohlprath (Jan 14, 2020)

Pablocrespo said:


> Use attributes, and if you have any question send me a PM...if I know I will help you


This and change the color in the midi editor to sound slot. Then every midi event has its own color per articulation. No better way to get a fast overview IMO.


----------



## Alex Fraser (Jan 14, 2020)

DS_Joost said:


> The more I read about it the more I am very, very glad I just use on articulation per track.
> 
> Once in my career, I dealt with huge templates and expression maps and Lemur on an iPad and all that jazz... until I realized I was putting much more time into maintaining a template and all the technical hufflepuff that comes with it instead of making music.
> 
> It was at that moment that I threw all of that overboard. It's really freeing not having to deal with what is essentially lipstick on a pig. It's fluff. Very technical fluff. And it has zero to do with actually making music. Which is what we all want to do, right?


Haha, I know what you mean.

Having previously detested old school keyswitching methods, I've found Logic's own home-brew effort to be just the right balance of tech and "not having to fuss with it." Although I have been tempted to add TouchOsc on the iPad into the mix. Thanks for reminding me not to!


----------



## babylonwaves (Jan 14, 2020)

Spitfire users should make use of UACC KS (not UACC). That's the most reliable way. And, from what I can see, KS messages which consist of a Note and a single CC are also fine, you just have to know how you build the Expression Map in this case. What's not so fine is multiple CCs in certain configurations, because Kontakt will not deal gracefully with those situations.


----------



## richhickey (Jan 14, 2020)

Dewdman42 said:


> Hmm, that's interesting. I do get this same results now.
> 
> I don't think it used to do that, did it change with Kontakt6 or something?
> 
> ...



I'm still using Kontakt 5 and it is broken there too. Not fixed (yet) in 6.

And yes, if you do a search you'll see people have plenty of problems/gremlins with UACC - e.g. Babylon Waves suggesting people use UACC KS (which don't get reordered) instead, when using Logic.

It is my understanding (but I have not tested) that Cubase sends expression map attribute CC messages ahead of time specifically to accommodate this and similar behavior by plugins. That's why other people will say it works fine for them (mostly).

Note this is not a problem with UACC, it's a problem with Kontakt. For me, it made working with OT Berlin series and notation programs/DAW emaps quite difficult. That's why, for all their growing pains, I welcome the dedicated players from these companies. At least they can support them themselves.


----------



## babylonwaves (Jan 14, 2020)

richhickey said:


> Babylon Waves suggesting people use UACC KS (which don't get reordered) instead, when using Logic.


when using cubase as well.


----------



## DS_Joost (Jan 14, 2020)

Alex Fraser said:


> Haha, I know what you mean.
> 
> Having previously detested old school keyswitching methods, I've found Logic's own home-brew effort to be just the right balance of tech and "not having to fuss with it." Although I have been tempted to add TouchOsc on the iPad into the mix. Thanks for reminding me not to!



The thing is, the more tech you incorporate into your setup, the more room there is for technical problems. Each and every one of these things has to work perfectly or your flow is disrupted.

I tend to think of flow as a delicate thing. Once you have it, you don't want to stop. There's nothing more frustrating than having to stop in your tracks to fix your iPad, or your internet connection drops and you can't connect, or Cubase suddenly throws out your controller mapping (has happened to me before), or your VEPro project is corrupted and you have to grab a backup, or... I guess you know what I mean.

Because flow is so delicate and breakable, I tend to minimize the things that can break. I have Studio One, it's rock solid, and what I use are track presets. I type CTRL+F, search for HS 1st VLN Leg, and boom, there's your preset, you press enter and you can immediately start recording.

There are numerous ways to improve save and load times, instrument loading times, or workflow in general. The danger, however, is falling into the trap of spending so much time shaving off seconds and building shortcuts that you end up doing technical busywork instead of what you are actually supposed to do: focus on notes, expression, and orchestration, because those are the cornerstones of what we actually do.

I know my workflow could be faster. But the thing is, I don't care. I know my workflow works now, and tomorrow, and the day after that. I know the chance of technical problems is minimized because I do not work with three different pieces of software and hardware at the same time. But the most important thing is, I do not have to think about expression maps, about articulations triggering or not, about the loading order of my software, about whether I should EQ inside VEPro or inside my DAW, about my internet connection, about the fact that VEPro only takes VST2, not VST3, instruments while my DAW accepts both, about latency, about which instrument has its keyswitches where, about disabling or enabling instruments in order of importance, about incoming or outgoing MIDI ports and channels, and all that.

CTRL+F, type name, enter. Play in notes. If wrong, undo and repeat. Rinse and repeat. Again and again and again. Dive into the midi editor. Transpose notes, try different combinations. Massage the CC's a bit. Not with difficult and unwieldy tools, just by drawing freeform.

Having fewer shortcuts to think about forces you to become better at playing stuff in, and to rely on your knowledge of musical instruments and how they are going to sound. A free mind is open to experimentation. And an experimenting mind is a focused one.

Not everything new is automatically an improvement. While my save and load times may be a little longer than most, I think my time spent on actual composition is actually greater.

Unless of course I am procrastinating by posting lengthy replies on an internet forum!


----------



## Alex Fraser (Jan 14, 2020)

DS_Joost said:


> The thing is, the more tech you incorporate into your setup, the more room there is for technical problems. Each and every one of these things has to work perfectly or your flow is disrupted.
> 
> I tend to think of flow as a delicate thing. Once you have it, you don't want to stop. There's nothing more frustrating than having to stop in your tracks to fix your iPad, or your internet connection drops and you can't connect, or Cubase suddenly throws out your controller mapping (has happened to me before), or your VEPro project is corrupted and you have to grab a backup, or... I guess you know what I mean.
> 
> ...


Absolutely, 100%.
My music tech setup is incredibly basic compared to most user rigs - purposefully, as nothing kills my flow more than technical issues or the added cognitive load that comes with managing it all.

Actually, I think music tech has gotten _more_ complicated in the last few years, not less. But that's a post for another day.


----------



## Dewdman42 (Jan 14, 2020)

babylonwaves said:


> Spitfire users should make use of UACC KS (not UACC). That's the most reliable way. And, from what I can see, KS messages which consist of a Note and a single CC are also fine, you just have to know how you build the Expression Map in this case. What's not so fine is multiple CCs in certain configurations, because Kontakt will not deal gracefully with those situations.





richhickey said:


> I'm still using Kontakt 5 and it is broken there too. Not fixed (yet) in 6.
> 
> And yes, if you do a search you'll see people have plenty of problems/gremlins with UACC - e.g. Babylon Waves suggesting people use UACC KS (which don't get reordered) instead, when using Logic.
> 
> ...



Interesting. What, if anything, has EvilDragon had to say about this issue?

So I had a chance to run a test with CSS. CSS is able to use CC58 to switch articulations, or regular keyswitches. In any case, I also enabled the multi-script midi monitor. Then I ran Richhickey's LPX script to feed it notes with CC58 right in front of the note on same timestamp.

What happened is that CSS changed the articulation and played the note with the correct articulation (good), while the multi-script MIDI monitor displayed the note as coming before the CC (bad).

I was not able to get CSS to fail.

When I tried adding a few milliseconds of delay to the note (in the script), eventually the multi-script MIDI monitor would show the CC before the note... and right around 5 ms it would be inconsistent, sometimes correct, sometimes in reverse order. But CSS always played the right thing, which means CSS is somehow able to process the CC before the note, while the KSP engine is reporting otherwise.

My feeling is that Kontakt is specifically broken somewhere in the KSP engine. It's possible that Spitfire is stumbling over this more than CSS for some reason. But anyway, it doesn't really matter... end users don't care what the reason is; if Kontakt is unreliable in this regard, then it's unreliable.


----------



## A.G (Jan 14, 2020)

richhickey said:


> Note this is not a problem with UACC, it's a problem with Kontakt.



Hi guys,
I'm happy to outline some info here as a Kontakt Instruments & MultiScript developer who has a long experience with Logic & Cubase articulation systems creation.

1. The Kontakt MultiScript is indeed broken. I found that CC sending is delayed, and the delay is equal to the DAW audio buffer setting. If you change the DAW audio buffer from, say, 256 to 512 samples, the CC delay will change accordingly.
The KSP MultiScript Monitor is correct: I created a much simpler KSP monitor and MIDI event processor, and the CC delay still persists.

2. The CC delay bug does not affect the KSP Instrument level. If you insert a Monitor in the Instrument script editor you will find that the CC is sent before the Note event, as expected.
That's why UACC with CC32 and other similar instruments (which use CC switching) work as expected!
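In concrete terms, a one-buffer delay is easy to quantify; this little calculation (the 48 kHz sample rate is just an assumption for the example) shows how the lateness scales with the buffer setting:

```python
# If the multiscript delivers CCs one audio buffer late, the delay in
# milliseconds is simply buffer_size / sample_rate. Doubling the buffer
# doubles the CC delay, matching the behavior described above.

def buffer_delay_ms(buffer_samples, sample_rate=48000):
    return buffer_samples / sample_rate * 1000.0

for buf in (128, 256, 512, 1024):
    print(f"{buf:5d} samples -> CC arrives {buffer_delay_ms(buf):.2f} ms late")
```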

BTW, we at Audio Grocery are on a deadline to release *Kontakt 64 v2.0* (a KSP MultiScript which allows you to use all 64 Kontakt channels via any DAW, with intelligent CC, pitch bend, poly and channel aftertouch cloning). I had to drop the CC switching method, replacing it with Velocity KS and Program Change methods offered by a UI menu.

Here is an animated image where I demonstrate the Kontakt MultiScript & Instrument KSP CC receiving behavior.


----------



## tack (Jan 14, 2020)

Dewdman42 said:


> I was not able to get CSS to fail.


I wonder, did you try sending _multiple_ CC58 events? For example, have an articulation for advanced legato sustains con sordino. This requires 3 CC58 events.

I ask because another issue I've observed is that Kontakt coalesces proximate CCs unless they are followed by a note. So when you send the 3 CC58 values, the CSS patch only actually ever sees the _last_ one.

You actually have to add a fairly nontrivial delay between the events to avoid this aggregation -- empirically, enough delay such that the event is processed in the next audio block. (Which is to say, the amount of event coalescing is proportionate to the size of the audio buffer. Similar in principle to the delay issue A.G. mentioned.)
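A toy model of that coalescing behavior (this is a simulation of what's observed from the outside, not Kontakt's actual code) makes the failure mode obvious:

```python
# Toy model: within one audio block, consecutive values on the same CC
# collapse to the last one unless a note event lands between them.

def coalesce(events):
    """events: list of ('cc', num, val) or ('note', pitch, vel) tuples."""
    out, pending = [], {}              # pending: latest value per CC number
    for ev in events:
        if ev[0] == 'cc':
            pending[ev[1]] = ev        # a later value overwrites an earlier one
        else:
            out.extend(pending.values())  # a note flushes the pending CCs
            pending.clear()
            out.append(ev)
    out.extend(pending.values())
    return out

# Three CC58 values meant to build up a compound articulation,
# followed by the note they should apply to:
block = [('cc', 58, 10), ('cc', 58, 20), ('cc', 58, 30), ('note', 60, 100)]
print(coalesce(block))  # only ('cc', 58, 30) survives before the note
```

Under this model the patch only ever sees the last CC58 value, which is exactly why a multi-CC articulation silently selects the wrong thing unless you space the CCs at least an audio block apart.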

I've since reverted to note-based keyswitching with CSS for this reason.

I personally haven't ever experienced any raciness with Spitfire UACC such that CC32 fails to deliver before the note. And even though UACC is CC-based, the system is more reliable than CSS because articulations are described by a single CC, not multiple, so coalescing isn't a factor.


----------



## tack (Jan 14, 2020)

A.G said:


> BTW. We at Audio Grocery are on a dead line to release *Kontakt 64 v2.0* (KSP MultiScript which allows you to use all Kontakt 64 channels via any DAW with an Intelligent CC, P.Bend, Poly & Ch. AT cloning).


Is that similar to FlexRouter? FlexRouter allows addressing the 64 channels via some other "keyswitch" (like a note, program change, CC, etc.) Of course the number of source channels can still only be 16, but you can route into any of the 64 channels internal to Kontakt.


----------



## Dewdman42 (Jan 14, 2020)

I didn't try that, but I will. Thanks for the heads up. I guess using note velocity on C-2 is probably the way to go instead of using a CC. VST3 is also problematic across the board for CCs.

I have some suspicions about why Kontakt might be doing that, which have to do with adhering to VST3.


----------



## Dewdman42 (Jan 14, 2020)

tack said:


> I ask because another issue I've observed is that Kontakt coalesces proximate CCs unless they are followed by a note. So when you send the 3 CC58 values, the CSS patch only actually ever sees the _last_ one.



This is a problem I had in the past when I was trying to use CC automation to drive Kirk Hunter stuff from an articulation set and script. It worked fine except when there was a poly-articulation chord with all notes on the same timestamp. In that case, the last CC on that timestamp would affect all notes of the chord, even though Logic Pro was sending CC-note-CC-note-CC-note.

I was assuming this was related to CC's being separated from notes somewhere in kontakt to go through the automation mapping section...and then Kontakt would not know any way to re-correlate them with the notes since all 6 events have the same timestamp.

I am thinking that with CSS, if it's not using CC automation, then this problem is not related to automation per se, but the CCs are still being separated from the notes at some early stage, and then Kontakt is unable to re-correlate them. The only logical thing at that point is to play all CCs followed by all notes, but the interleaving is lost.

Anyway, when I was working through that before with Kirk Hunter I was using just one CC in front of each note of a chord, but it's the same issue if you have 3 CCs going to a single note. The last one wins if Kontakt is processing all CCs on a given timestamp (or process block) before all notes. Which is as stupid as can be... but it's quite possible they are doing that, and the VST3 standard appears to follow the same line of thinking.


----------



## tack (Jan 14, 2020)

Dewdman42 said:


> But still the CC's are being separated from notes at some early stage and then kontakt is unable to re-correlate them.


I'm not sure what you mean here? I mean, CCs are inherently independent events from notes.



Dewdman42 said:


> In that case, the last CC on that timestamp would effect all notes of the chord, even though LogicPro was sending CC-note-CC-note-CC-note.


Well that's interesting. Possibly I over-generalized about notes "resetting the clock" (so to speak) on coalescing CCs. In fact, it's also possible this whole behavior is more DAW-specific than I originally thought. I thought I had convinced myself enough to blame Kontakt by inserting a MIDI logger FX before Kontakt and seeing what I expected to see, but I probably ought to have tested other DAWs before making the conclusion.


----------



## Dewdman42 (Jan 14, 2020)

tack said:


> I'm not sure what you mean here? I mean, CCs are inherently independent events from notes.
> 
> 
> Well that's interesting. Possibly I over-generalized about notes "resetting the clock" (so to speak) on coalescing CCs. In fact, it's also possible this whole behavior is more DAW-specific than I originally thought. I thought I had convinced myself enough to blame Kontakt by inserting a MIDI logger FX before Kontakt and seeing what I expected to see, but I probably ought to have tested other DAWs before making the conclusion.



Traditionally CC's and Notes have been stored in a single buffer queue. That is what is passed to a plugin through a callback function. The plugin can then iterate through that buffer one event at a time, in order; and render sound to the audio buffer according to the timestamps of each event. They can all have the same timestamp and will be rendered in the same place, but they are still processed by the plugin one at a time in the same order they exist in that buffer. This is why for decades people have been able to put CC-note-CC-note-CC-note on the same timestamp, but the CC's would be handled by the instrument in between each note while doing the processing. This would effectively change the settings of the instrument in between each note rendering, etc. 

VST3 has separated CCs from notes... and now, according to that standard, the plugin MIDI buffer is NOT supposed to have any CCs, only notes. That means the host has had to separate the notes from the CCs and put them into separate buffers in order to comply with VST3 requirements. This loses the intended ordering, though, when timestamps are the same.

Kontakt is not VST3, but they must be making some similar assumption. At some early stage they are separating CC's from Notes and putting those events into separate buffers. Or perhaps they aren't splitting them into separate buffers like VST3, but simply making the brain dead assumption that all CC's in a process block should be processed before any notes are processed, which actually is the more likely thing that is happening now that I think about it.

What Kontakt SHOULD be doing is going through the one singular MIDI buffer (VST2), processing each event in the order it appears in the buffer queue. With VST3 it's not even possible to do that, apparently. It's also possible that NI has been changing Kontakt in order to comply with VST3 and has broken the serial nature of the MIDI protocol in the process. Any and all VST3 plugins should be suspect of this, I might add.
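The consequence of splitting CCs from notes is easy to demonstrate with a minimal model (pure illustration; this is what the hypothesis above would predict, not either spec's actual code):

```python
# A VST2-style host hands the plugin one interleaved queue; a VST3-style
# split delivers all CCs before all notes at the same timestamp. For a
# poly-articulation chord, the split stream makes every note see only the
# last CC, exactly as described for the Kirk Hunter chord case above.

events = [('cc', 58, 1), ('note', 60), ('cc', 58, 2), ('note', 64),
          ('cc', 58, 3), ('note', 67)]   # CC-note pairs, same timestamp

def render(stream):
    """Return the articulation each note ends up played with."""
    current, played = None, []
    for ev in stream:
        if ev[0] == 'cc':
            current = ev[2]               # CC updates the instrument state
        else:
            played.append((ev[1], current))
    return played

interleaved = render(events)                              # VST2-style order
split = render([e for e in events if e[0] == 'cc'] +
               [e for e in events if e[0] == 'note'])     # CCs-first order

print(interleaved)  # [(60, 1), (64, 2), (67, 3)] -- each note keeps its CC
print(split)        # [(60, 3), (64, 3), (67, 3)] -- last CC wins for all
```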


----------



## Dewdman42 (Jan 14, 2020)

Basically, with any VST3 instrument... or Kontakt... we should never assume it will be OK to use CCs as switches, especially if you have poly-articulation chords or need several CCs in a row before the note. In those cases, don't use them.

For normal CC expression, the approach of VST3 and Kontakt is not too terrible. If there are three CC7 messages in a process block, it just uses the last one and starts rendering notes, which makes perfectly fine sense in that case. But they have forsaken the long-standing expectation that all MIDI events, including CCs and notes, have a deterministic serial order. That is now gone from VST3, and it sounds like it's gone in Kontakt too.


----------



## tack (Jan 14, 2020)

Dewdman42 said:


> VST3 has separated CCs from notes... and now, according to that standard, the plugin MIDI buffer is NOT supposed to have any CCs, only notes. That means the host has had to separate the notes from the CCs and put them into separate buffers in order to comply with VST3 requirements. This loses the intended ordering, though, when timestamps are the same.
> 
> Kontakt is not VST3, but they must be making some similar assumption. At some early stage they are separating CC's from Notes and putting those events into separate buffers.


Really interesting info about VST3 there, and also a plausible hypothesis of what Kontakt may be doing behind the scenes. Another idea -- much more of a reach, mind you -- is that there may be parallel (multithreaded) processing of events which results in some raciness and general unpredictability of events with the same timestamp. Disabling multithreading in the engine could disprove that hypothesis. (Yours seems more plausible.)

I'm sure there was some rational justification for this buffer separation of CCs and notes but at first blush I find it baffling and limiting. What about non-CC events like program changes, aftertouch, SysEx, etc.? Are these relegated to the same buffer as CCs?


----------



## Dewdman42 (Jan 14, 2020)

One of the reasons DAWs generally put each plugin on a thread, and well-behaved plugins generally don't fork more threads for audio processing except under very careful circumstances, is exactly to avoid race conditions. Kontakt should not ever be allowing that kind of race condition. I believe Kontakt may assign different instruments to different threads if you have that turned on, which may or may not interfere with the thread load balancing algorithm used by the host DAW. But this should really not be a factor in the problem we're talking about, IMHO. If it is, that would be really stupid multi-threaded programming, frankly speaking.

As far as PC, AT, PB... I believe those are not supposed to be in the VST3 MIDI buffer either. Steinberg feels that PC, for example, should never be in MIDI; it should be a higher-level VST preset thing. So, as I understand it, no PC messages in the VST3 MIDI buffer. I'm not sure about PB and AT, but I believe they are the same: they are supposed to be handled in VST3 as parameters, like automation, separate from MIDI, and in their view the MIDI buffer should just have note data. NoteExpressions were added later as a way to attach things like CC and PB to notes, but they abstract a layer above the MIDI stream and separate out the events in the process, thus losing the serial order of the buffer that was inherently always there through VST2.


----------



## Dewdman42 (Jan 14, 2020)

If you think that is bad, just wait, it's going to get worse. We have MIDI 2.0 and MPE coming down the road now, and VST3 was not designed with them in mind. These three competing standards are now trying to reconcile their differences, and I predict years and years of struggle in that regard. Here's a recent presentation from Steinberg you might find interesting:



The part that burned my hide is when the guy from Steinberg said that hosts will be expected to work out the translations between these things. Yeah, right. So that means we can expect 10 years of different hosts handling all of these things slightly differently...


----------



## tack (Jan 14, 2020)

Dewdman42 said:


> I believe kontakt may assign different instruments to different threads if you have that turned on.


Kontakt definitely distributes voice processing over threads even from a single patch. It also definitely does not distribute sample loading/decompression over threads. 



Dewdman42 said:


> If so that would be really stupid multi-threaded programming, frankly speaking.


Yes, but then most thread-related bugs are really stupid once you discover them. 

In this case, if the order of events with the same timestamp is undefined per spec, then nondeterminism isn't technically wrong and could just be a side effect of some optimization of parallel processing. Not that I think it's truly the explanation for the behavior here, but since we've wandered off into speculationland, I figured I'd pack my bags too.


----------



## Dewdman42 (Jan 14, 2020)

tack said:


> Kontakt definitely distributes voice processing over threads even from a single patch. It also definitely does not distribute sample loading/decompression over threads.



how do you know that?

I didn't say they never fork threads, I said it has to be done very carefully. If they create race conditions then it's poor programming, pure and simple. I don't think thread forking happens as much as people think. When you look at your CPU meter it's showing average load across at least one second of computer time, which is an eternity; all the cores may be getting utilized, but it doesn't mean they are all being utilized at the same time.




> Yes, but then most thread-related bugs are really stupid once you discover them.



Multi-threaded programming is a deep and complicated topic. There are many situations where it should in fact be avoided, because both race conditions and locking can leave cores underutilized, and there is extra overhead involved in threads switching in and out of context. It's not as simple as just forking threads and cranking on the data. With MIDI and audio the situation is further restricted, because audio processing can't be done until the plugins and MIDI events in front of it have done what they were supposed to do first. A MIDI or audio signal chain fundamentally has to be processed mostly serially, with only a few opportunities for plugins to carefully process some things on a thread that are not dependent on that.



> In this case, if the order of events with the same timestamp is undefined per spec, then nondeterminism isn't technically wrong and could just be a side effect of some optimization of parallel processing. Not that I think it's truly the explanation for the behavior here, but since we've wandered off into speculationland, I figured I'd pack my bags too.



Non-determinism is absolutely wrong according to decades of midi functionality that has worked that way. Steinberg has tried to change the rule to say that timestamps are golden and serial order doesn't matter, but it has worked that way for decades where the order does matter... Anyway, be that as it may....VST3 does not respect order between different types of events. And nor does Kontakt.


----------



## rgames (Jan 14, 2020)

richhickey said:


> First - MIDI is a serial protocol and there is no such thing as 'simultaneous MIDI' on a single channel. If there is a CC message and a note on message, one comes before the other.


That's the point - if they're simultaneous in the sequencer then they'll sometimes be out of order in MIDI because it's serial and I haven't seen consistency in which one gets sent first.

That's also why I've had trouble with the ones attached to the notes (attributes? can't remember which is which). Those don't always work for the instruments I use because sometimes the keyswitch is sent first, sometimes not. With the ones in lanes you can make sure they're sent in the right order.

rgames


----------



## Dewdman42 (Jan 14, 2020)

rgames said:


> That's the point - if they're simultaneous in the sequencer then they'll sometimes be out of order in MIDI because it's serial and I haven't seen consistency in which one gets sent first.



No. Well, yes with VST3 and Kontakt, hehe. But no: with VST2, and for decades before it, MIDI events are present on a buffer queue that has a deterministic order. Even if you have several events in a row with the same timestamp in your event list, they will be processed by the plugin in the same order they appear; that is how it has worked for decades. Same with a serial MIDI cable, except that it doesn't have timestamps for the actual rendering; the MIDI events are always processed in the order they transmit down the cable, which until recently has always been the same order they appear in your event list.

VST3 has done away with that deterministic ordering though..and apparently kontakt does not respect it very well either, FWIW...

But expression maps are not MIDI events, by the way. Attribute expressions can't be nudged earlier; they are attached to the notes, and Cubase makes sure to send them first.

However, with Kontakt you probably had some problems with CCs; see the earlier comments in this thread. That is bad design in Kontakt, frankly. But all VST3 instruments should be suspect of this as well until further notice.


----------



## tack (Jan 14, 2020)

Dewdman42 said:


> how do you know that?


Depending on which statement you're referring to, how I determined that sample loading/decompression wasn't parallelized is documented in the paper I linked to in my previous post, and determining voice processing on a single instrument is trivially examined through Process Explorer. You can see the individual threads (according to the engine config for multiprocessor support) and watch them scale up in CPU utilization as you play more voices.



Dewdman42 said:


> multi-threaded programming is a deep and complicated topic.


Speaking as someone who's been coding for over 30 years and professionally engineers highly multithreaded (and distributed) infrastructure services, you'll get no argument from me. I didn't mean to trivialize it. It's why languages like Rust are so exciting -- the idea of provably (data) race-free code (notwithstanding compiler bugs anyway) is groundbreaking.




Dewdman42 said:


> A midi or audio signal chain fundamentally has to be processed mostly serially, with only a few opportunities for plugins to carefully process some things through a thread that are not dependent on that.


This isn't exactly true, as long as you're willing to tolerate increased latency. The processing journey of any given block is serial, yes, but you can parallelize different moments in time. This is what Reaper's anticipative FX processing does. In fact, technically that type of processing can be done across FX on a single track. I had mistakenly thought Reaper did that too (it turns out it doesn't, but it _could_). The diagram I made in this post may better explain why audio is not fundamentally resistant to parallel processing, provided the lowest possible latency is not a requirement.
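A quick sketch of that idea (pure illustration, not Reaper's implementation): each track's chain stays serial internally, but because the tracks are independent of one another, their blocks can be rendered on separate workers and only the final mix is serialized.

```python
# Parallelism across *independent* tracks: each track's serial FX chain
# runs on its own worker, and the results are summed into the mix bus.
from concurrent.futures import ThreadPoolExecutor

def track_fx_chain(samples, gain):
    # Stand-in for a track's serial FX chain (here: a single gain stage).
    # Blocks *within* a track must still be processed in order.
    return [s * gain for s in samples]

def render_block(tracks):
    # tracks: list of (samples, gain) pairs. Independent tracks are
    # processed concurrently, then the columns are summed into the mix.
    with ThreadPoolExecutor() as pool:
        rendered = pool.map(lambda t: track_fx_chain(*t), tracks)
        return [sum(col) for col in zip(*rendered)]

tracks = [([1.0, 1.0, 1.0], 0.5), ([2.0, 2.0, 2.0], 0.25)]
print(render_block(tracks))  # [1.0, 1.0, 1.0]
```

The catch, as discussed below, is that nothing downstream of a serial dependency (a shared bus, a send, a tail-producing effect) can be parallelized this way, which is why it buys throughput rather than lower latency.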



Dewdman42 said:


> Non-determinism is absolutely wrong according to decades of midi functionality that has worked that way.


That's really the best argument. Even if it's not in the spec, it becomes a de facto spec if it works a certain way for decades. And, really, it's not like processing MIDI is particularly demanding, so no conceivable optimization could justify breaking that behavior.

Thanks for the YouTube link. Will watch it tonight.


----------



## rgames (Jan 14, 2020)

Dewdman42 said:


> midi events are present on a buffer queue that has a deterministic order...even if you have several events in a row with the same time stamp in your event list, they will be processed by the plugin in the same order they appear...


We're saying the same thing. The issue is that simultaneous MIDI events (i.e. events with the same time stamp), even if they are sent in the correct order, don't always get processed the right way due to scripting in the instrument.

I will say, however, that was based on my experience with ExpMaps a decade ago. So maybe simultaneous events are handled better now. I just keep using the lanes because I'm used to them and I don't think about it any longer. Well, until now 

rgames


----------



## Dewdman42 (Jan 14, 2020)

tack said:


> Depending on which statement you're referring to, how I determined that sample loading/decompression wasn't parallelized is documented in the paper I linked to in my previous post, and determining voice processing on a single instrument is trivially examined through Process Explorer. You can see the individual threads (according to the engine config for multiprocessor support) and watch them scale up in CPU utilization as you play more voices.



We can see there are some threads, and we can see cores are spinning, but honestly, unless you know the devs, we have no idea what threading strategies they are using, and mostly I see a lot of conjecture on the internet, including on this forum, about what plugins are doing on cores. Most of the time, plugins are not forking threads for audio processing in the main callback, and they probably should not be. They might have other threads for GUI tasks or other work unrelated to the main audio thread; that can absolutely be done on another thread without blocking the main audio thread or thrashing with it over a resource.




> This isn't exactly true, as long as you're willing to tolerate increased latency. The processing journey of any given block is serial, yes, but you can parallelize different moments in time.



Well, sorta. Audio is serial in nature. So is MIDI. And later audio depends on the tails of earlier audio. You really can't process the later audio until the earlier audio is done being processed. In digital, it's done in chunks of time, i.e. the process block.

Within the confines of a single plugin it might be possible for a plugin programmer to process just that one isolated process block in a couple of threads, as long as later points of audio don't depend on results from earlier points within that process block; and frankly, the overhead of doing this would probably not be worth it. It's somewhat theoretical; I really don't think most plugins are doing anything like that. We're kind of debating something we don't actually know the answer to without seeing the code.

we can see threads being forked for other purposes of course, such as handling the GUI or other housekeeping that we don't want the main audio thread to do, and the main audio thread won't be held up waiting for that other thread.

I really do not believe that possibility has anything to do with Kontakt's inability to keep CC's and notes in order. And if it does, I'm still calling it really stupid programming and exactly the kind of situation where multi-threading should be avoided.



> This is what Reaper's anticipative FX processing does. In fact, technically that type of processing can be done across FX on a single track. I had mistakenly thought Reaper did that too (it turns out it doesn't, but it _could_). The diagram I made in this post maybe better explain why audio is not fundamentally resistant to parallel processing, provided the lowest possible latency is not a requirement.



It was just a matter of time before Reaper was brought up on this Cubase thread, wasn't it?

Outside the confines of a single plugin, I think it's really, really hard for a host DAW to spin off multiple threads for a single audio signal chain, because later audio is very dependent on earlier audio. Earlier audio can create delays and tails and things that the later audio needs to know about. It really can only be done within the confines of a single plugin's process block, where the DSP programmer knows exactly what can be done concurrently...

I am not privy to Reaper's source code, but I would gather that anticipative FX processing attempts to process tracks as far ahead as possible. If you have lots of tracks, there could be lots of threads, but the same problem applies: the audio signal chain has to be handled serially at that level, with few opportunities to concurrently process a single audio path. You can slice the time of the process block into smaller slices and assign each to a thread, but the second slice can't start until the first slice is totally done, etc. You aren't going to gain that much, in exchange for a lot of complicated multi-threaded programming and the overhead associated with it.

We're floating off topic now..



> That's really the best argument. Even if it's not in the spec, it becomes a de facto spec if it works a certain way for decades. And, really, it's not like processing MIDI is particularly demanding, so no conceivable optimization could justify breaking that behavior.





Exactly. And they could have solved it, too. If they want to abstract non-note data into an abstract layer outside the MIDI stream, that is fine with me, but then they needed to still use an index to preserve the original order of events as intended. It could definitely have been done. It was just ignored as a requirement, with the emphasis placed on timestamps as bible scripture and order becoming irrelevant. But that does break some situations, and it is particularly bad for articulation management.
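The index idea can be sketched in a few lines of Python. This is purely illustrative, not Steinberg's actual data structures: each event carries a monotonically increasing sequence number assigned on arrival, so that sorting by timestamp can never scramble events that tie.

```python
# Hypothetical sketch: preserve original event order when notes and CCs
# share a timestamp by tagging each event with an arrival-order index.
from dataclasses import dataclass, field
from itertools import count

_seq = count()

@dataclass(order=True)
class MidiEvent:
    timestamp: int                                        # sample offset within the buffer
    seq: int = field(default_factory=lambda: next(_seq))  # arrival order, breaks ties
    data: tuple = field(default_factory=tuple, compare=False)

# Events arriving in this order: a CC switch, then the note it should
# affect, both stamped at sample offset 0.
events = [
    MidiEvent(0, data=("cc", 1, 127)),
    MidiEvent(0, data=("note_on", 60, 100)),
]

# Sorting by (timestamp, seq) keeps the CC ahead of the note even though
# the timestamps tie; sorting by timestamp alone could legally reorder them.
ordered = sorted(events)
assert [e.data[0] for e in ordered] == ["cc", "note_on"]
```

With that one extra field, "timestamps are the bible" and "order is preserved" stop being in conflict.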


----------



## Dewdman42 (Jan 14, 2020)

rgames said:


> We're saying the same thing. The issue is that simultaneous MIDI events (i.e. events with the same time stamp), even if they are sent in the correct order, don't always get processed the right way due to scripting in the instrument.



No, we still aren't saying the same thing. You are making a far-reaching generalized statement that is not completely factual.

The specific problem is that CCs interleaved with notes within a single buffer of time cannot be trusted for ordering, and only with VST3 and Kontakt. Also, if you have a sequence of CC switches from the same controller, then Kontakt will only respect the last one.

In all VST2 instruments other than Kontakt, it will work fine!

Notes will always be in the correct order with other notes, whether you are using Kontakt or VST3! The problem only arises when you have interleaved CC and note events within the same audio buffer for Kontakt, or with the same timestamp for VST3.

If you have a series of CC events on the same controller number, then Kontakt ignores all but the last one. But all other instruments should be fine.

ATTRIBUTE expression maps are attached to the notes. That fundamentally has to work properly or the program is broken. However, if you try to use Cubase expression maps with Kontakt or VST3 instruments, and any of the switches are CC switches, then the above problems could be present. The problem is in Kontakt, not Cubase.

With Direction expression maps, it's not clear to me whether they could sometimes be late. I would hope not, but it's possible, and you can always nudge them earlier if you want to, no biggie.

The one problem that Cubase does have, related to this topic, is that if you try to create a chord with a different attribute expression map attached to each note, only one of them wins, with any instrument; this is not a Kontakt problem. That is a Cubase problem. Cubase appears to process all the expression maps for the audio buffer before processing all the notes. Therefore the last expression map to send a series of switches wins, and the whole chord will use that articulation. That could be fixed in Cubase; people should request it.

Or just stagger the timing of your chords a bit in that case.


----------



## tack (Jan 14, 2020)

Dewdman42 said:


> We can see there are some threads, we can see cores are spinning, honestly unless you know the devs, we have no idea what threading strategies they are using


It isn't a stretch. You see the threads, you see the CPU utilization of each thread, you load a single patch, you incrementally add voices to the patch and observe the utilization of each thread uniformly increase as voices are added. Then you gradually reduce the number of threads the Kontakt engine is configured to use, and observe that the number of voices that can be sustained before dropouts progressively decreases. The conclusion that voices are processed in parallel is impossible to escape.



Dewdman42 said:


> It was just a matter of time before Reaper was brought up in this Cubase thread, wasn't it.


 Well the basic approach isn't unique to Reaper. It just happens to be what I'm familiar with and have spent a lot of time examining.



Dewdman42 said:


> You can slice up the time of the process block into smaller time slices and assign each to a thread, but the second slice can't start working until the first slice is totally done, etc.. You aren't going to gain that much but a lot of complicated multi-threaded programming and overhead associated with it.


But this _is_ how things like anticipative FX processing work -- parallelizing the processing of different moments in time (and adding latency in the process) -- and although it certainly adds complexity to the code, the proof is in the pudding: with anticipative FX enabled, the core utilization and the amount of FX you can cram into a project increase dramatically. Even with as little as 50 ms of a head start you can realize some significant gains. And it's not just because you're preprocessing a serial audio chain to better absorb transient spikes; it's truly increasing parallelism.



Dewdman42 said:


> We're floating off topic now..


Wayyyy off topic. But isn't this fun?


----------



## Dewdman42 (Jan 14, 2020)

You can't parallel process something when you don't yet know the input.

You can certainly process a lot of stuff way in advance in a DAW, in a non-parallel manner per audio path. That would be anticipative. But the lookahead processing still has to start from a common point and process the audio serially from there... it can just do it faster than real time, ahead of time.

There is nothing about the word "anticipative" that indicates parallelization. One would assume that concurrent programming would be used whenever possible and it makes sense, but fundamentally, in order to crunch DSP, you need the input data, and for a serial audio path, you don't have the needed data until the previous chunk of time is done. You really can't parallelize that at the DAW level.


----------



## jononotbono (Jan 14, 2020)

Truly loving how much conversation has been sparked so far by the ole Expression Map situation. It's definitely a huge Elephant in the (DAW) room! 

Please, can I just turn my computer on and have everything just work every time!?


----------



## tack (Jan 14, 2020)

Dewdman42 said:


> There is nothing about the word "anticipative" that indicates parallelization. One would assume that concurrent programming would be used whenever possible and it makes sense, but fundamentally, in order to crunch DSP, you need the input data, and for a serial audio path, you don't have the needed data until the previous chunk of time is done. You really can't parallelize that at the DAW level.


I think we're at an impasse here. I'm not sure if you looked at the diagram in the post I linked to, but it better explains how you _can_ parallel process a serial audio chain (by accepting a bit of increased latency) and distribute that processing over multiple threads. I'm not sure I'm able to say anything more compelling about it. (The thread was about parallel processing a single track, but the principle is the same for parallel processing across multiple tracks as well.)


----------



## tack (Jan 14, 2020)

jononotbono said:


> Please, can I just turn my computer on and everything just work everytime!?


It's nothing short of a miracle that it (mostly) does.


----------



## Dewdman42 (Jan 14, 2020)

More latency doesn't change anything. You can't run later DSP on anything until the earlier DSP is done. Longer latency just means it's doing more overall lookahead... but it's not more parallel, other than perhaps having one thread do the lookahead processing and another thread handle the playback of what was already calculated ahead of time.


----------



## Dewdman42 (Jan 14, 2020)

You can't process future DSP until the historical DSP is done; it's as simple as that. It doesn't matter how far ahead your lookahead is.

Maybe you should create a new thread for this... it's way off topic here.


----------



## rgames (Jan 14, 2020)

Dewdman42 said:


> The problem is in kontakt, not Cubase.


Correct. But the upshot is the same: simultaneous MIDI data causes problems.


----------



## tack (Jan 14, 2020)

Dewdman42 said:


> You can't process future DSP until the historical DSP is done, its as simple as that. It doesn't matter how far ahead your look ahead is.


True from the perspective of a single audio block. But while the audio block from project time t is traversing through the chain, an audio block from time t+1 can be working its way through the chain at an earlier stage. This adds a latency of 1 block, but it allows parallel processing of the two blocks, each of which is sitting at a different stage of the FX processing pipeline. Scale this up to n blocks, to provide parallelism across n threads, at a cost of n audio blocks of latency. That can be done theoretically. Meanwhile, in practice, anticipative FX in Reaper works such that tracks without interdependencies can be processed in parallel, so you definitely get more than 2 busy threads.
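The pipelining described above can be sketched as a toy in Python: each FX stage runs on its own thread, connected by queues, so block t+1 can enter stage 1 while block t is still in stage 2. Illustrative only; a real engine would use lock-free ring buffers and real DSP, not Python queues and toy "gain"/"offset" effects.

```python
# Toy FX pipeline: two stages on two threads, blocks flow through in order.
# Parallelism across the chain is bought at the cost of pipeline latency.
import queue, threading

def stage(fx, inbox, outbox):
    while True:
        block = inbox.get()
        if block is None:          # sentinel: shut down and propagate
            outbox.put(None)
            return
        outbox.put(fx(block))

q0, q1, q2 = queue.Queue(), queue.Queue(), queue.Queue()
threads = [
    threading.Thread(target=stage, args=(lambda b: [s * 0.5 for s in b], q0, q1)),  # "gain" FX
    threading.Thread(target=stage, args=(lambda b: [s + 1.0 for s in b], q1, q2)),  # "offset" FX
]
for t in threads:
    t.start()

for block in ([1.0, 2.0], [3.0, 4.0]):   # two audio blocks, fed back-to-back
    q0.put(block)
q0.put(None)

out = []
while (b := q2.get()) is not None:
    out.append(b)
for t in threads:
    t.join()

print(out)   # [[1.5, 2.0], [2.5, 3.0]] -- serial chain, processed by two threads
```

The output order is preserved and each block still traverses the chain serially; the win is that the two stages are busy simultaneously on different blocks.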



Dewdman42 said:


> maybe you should create a new thread for this.. Its way off topic here.


Nah, not much more I think I can say on this. Mods can feel free to move the posts out if they so deem.


----------



## jononotbono (Jan 14, 2020)

tack said:


> It's nothing short of a miracle that it (mostly) does.



I always say this. But then I always say, "I remember when I used to just play Guitar and record songs"


----------



## Dewdman42 (Jan 14, 2020)

on a portastudio


----------



## jononotbono (Jan 14, 2020)

tack said:


> Nah, not much more I think I can say on this. Mods can feel free to move the posts out if the so deem.



Please don't. It's all relevant. You wait till I Lara Croft swan dive back into Expression Maps. I'll be posting like a dedicated whore, and it will be information from all of you that will help me climb the mountain. Whether there's an end or a top to it, no one has yet found out...


----------



## DS_Joost (Jan 15, 2020)

jononotbono said:


> I always say this. But then I always say, "I remember when I used to just play Guitar and record songs"



Which was the point of my post


----------



## stigc56 (Jan 15, 2020)

The main problem with Expression Maps is time! It takes forever to make them, and the interface is clumsy and illogical. While working with them I'm often tempted to change the presets in VEPro (or locally) as well, and then more time is simply gone forever. Time that could be used practicing piano or reading VI-Control.
We should be far better at sharing. I've said it before and I will say it again: share your maps and save some time. So here are my maps for VSL Synchronized WoodWinds.


----------



## richhickey (Jan 15, 2020)

rgames said:


> Correct. But the upshot is the same: simultaneous MIDI data causes problems.



There is no such thing as "simultaneous MIDI data" and that phrase will only further engender misunderstanding among those who don't actually know how VST/AU and MIDI work at a technical level.

No two MIDI _messages_ (on the same port) can ever occur "simultaneously" because it is a serial protocol, defined by a series of bytes. DAWs add timestamps to indicate an intended time for _realization_, because the data is sent in chunks/buffers and not processed in real time, so the target VI needs to know at what offset in the result buffer to generate audio. _But the messages still have order._

If a system gets message A _followed by_ message B, both to be rendered at timestamp X, and processes message B before A it is _broken._ (Kontakt)

If a system gets message A _followed by_ message B, both to be rendered at timestamp X, and splits them into two lists such that their relative order is lost, it is _broken_. (VST3)

I don't think this is something we should just accept. Nor is it something we should generalize about as being something essential about MIDI - it's not. These are bugs/design flaws.
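To make the list-splitting failure concrete, here's a toy sketch. The tuples are hypothetical stand-ins, nothing to do with the actual VST3 structures, but they show why splitting one ordered stream into per-kind lists keyed only by timestamp destroys information:

```python
# One ordered stream: a note, then a CC, both at timestamp 0.
stream = [
    ("note_on", 60, 0),   # (kind, value, timestamp)
    ("cc1", 127, 0),
]

# Split into separate note and controller lists, as a host might.
notes = [e for e in stream if e[0] == "note_on"]
ccs = [e for e in stream if e[0] != "note_on"]

# Merging back by timestamp alone is ambiguous: Python's sort is stable,
# so the result depends entirely on which list you happen to put first.
merged_a = sorted(notes + ccs, key=lambda e: e[2])
merged_b = sorted(ccs + notes, key=lambda e: e[2])
assert merged_a != merged_b   # same events, two "valid" orders: info was lost
```

The original stream had exactly one order; after the split, no consumer can recover it.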


----------



## InLight-Tone (Jan 15, 2020)

Well, JunkieXL uses them?!? Kidding aside, I think Expression Maps are great. I'm in Logic now, using its built-in articulation system, but the systems are similar.

The main draw for me is that you can reduce a template by hundreds if not thousands of tracks and still have the same information available.

I'm using a largish disabled-track template, the same as I did in Cubase. I'm finding Logic's implementation to be far more efficient than Cubase's. When I activate a track I like having all the articulations there so I can go through them, instead of going from track to track when they are separate. I am completely allergic to VEP.

When composing it's nice to have Expression Maps so you can concentrate on a performance like a real player, switching articulations during a given line.
That said, I often write in the one-articulation-per-track style, but you can just duplicate a track and switch the articulation, with the same result, so instead of having 10 separate tracks you have one or two.

Having Expression Maps gives the best of both worlds and greatly reduces track count and clutter. You're only going to use those String FX once in a great while; no need to have a dedicated track for them...


----------



## A.G (Jan 15, 2020)

stigc56 said:


> The main problem about Expression Maps is time! It takes forever to make them, the interface is clumsy and illogical.


The torment is over: X-DAW Art PRO is a fact (coming soon)!
*X-DAW* offers brand new batch functions, so you can create a complete Map preset in seconds...

Thanks for sharing your Cubase Maps!
I used one of your free presets to demonstrate the upcoming *X-DAW Art PRO* (if you do not mind).
*X-DAW* will offer tons of new features for quick batch Articulation Map creation... In the animated demo image below I show only a few quick conversions:

• Load a Cubase Expression Map and convert it to a Logic Articulation Set.
• Edit the imported Cubase Expression Map preset and store it as a new Cubase Map preset.
• Teleport the preset to AG Lemur Articulation Remote Workstation.

NOTE 1: *X-DAW* can import any Logic Articulation Set and convert it to a Cubase Expression Map or teleport it to AG Lemur Remote Workstation as well.

NOTE 2: AG Art PRO Logic customers can convert Cubase maps and use the AG Scripter Text points, Art ID or Text + Art ID Combo editing directly in the Logic main window or in the Piano Roll and other editors.

NOTE 3: *X-DAW* is compatible with macOS Catalina (we are running the final tests).
Stay tuned...


----------



## VinRice (Jan 15, 2020)

I use both Logic and Cubase. The Logic implementation has less functionality but is simpler to understand and more reliable. Not a surprise really.

For 'straight' orchestral writing they are invaluable, being able to see the line in its entirety on one track keeps your head in the game. I've balanced all the articulations in the Spitfire libraries to taste. For more media-driven stuff where sonic novelty and 'quirk' are the order of the day I will generally create a project-specific template where each 'sound source' needs its own track for conceptual and mixing purposes. Horses for courses.


----------



## Tfis (Jan 16, 2020)

@stigc56 
Why did you make an expression map for each instrument and not just one for all woodwinds?


----------



## chrisr (Jan 16, 2020)

A.G said:


> The torment is over: X-DAW Art PRO is a fact (coming soon)!



This looks great - I'll soon be jumping back into expression maps and this looks like it could be a big time saver. Will keep an eye out for further announcements.


----------



## babylonwaves (Jan 16, 2020)

rgames said:


> That's also why I've had trouble with the ones attached to the notes (attributes? can't remember which is which). Those don't always work for the instruments I use because sometimes the keyswitch is sent first, sometimes not. With the ones in lanes you can make sure they're sent in the right order.


Really? I've never seen a problem with that.


----------



## stigc56 (Jan 16, 2020)

Tfis said:


> @stigc56
> Why did you make an expression map for each instrument and not just one for all woodwinds?


Because they don't have the same articulations; there are differences between 1st Clarinet and 2nd Clarinet, for example.


----------



## A.G (Jan 16, 2020)

chrisr said:


> The torment is over: X-DAW Art PRO is a fact (coming soon)!
> 
> This looks great - I'll soon be jumping back into expression maps and this looks like it could be a big time saver. Will keep an eye out for further announcements.



*X-DAW Art PRO* Cubase <=> Logic articulation map import/export and conversion are cool indeed; however, there are a lot of other ultimate features... In the upcoming videos, I'll demonstrate how to create a complete Articulation Map preset, including all articulation names, KS assignments, etc., in 30 seconds, for example for any Spitfire BBC library. The number of Instrument Articulations can be ANY (30, 50, 100, etc.); X-DAW will do it all in seconds because we created ultra-fast batch creation/editing functions, which are very simple to use, by the way.
Stay tuned


----------



## Dewdman42 (Jan 16, 2020)

How does it go about converting Expression Map groups into LPX articulation sets?


----------



## A.G (Jan 16, 2020)

Dewdman42 said:


> how does it go about converting ExpressionMap groups into LPX articulation sets?


The DAWs' articulation systems are not exactly the same. In this regard, we developed a special engine which decides what to convert and how to convert it. In any case, all essential articulation switching assignments are converted so the VIs work perfectly after Cubase <=> Logic conversion (Cubase users can make a multi-selection of Maps and change the Attribute/Directional type in the X-DAW Editor if they need that).

BTW, the Logic articulation system is still poor in comparison to Cubase's. For example, instruments such as Sample Modeling (The Trumpet, Trombone, Tuba, etc.), which require articulation switching "on the fly" (during note sustain), are not supported by the Logic Art system since it does not offer a "Directional" type. In this case, Logic X-DAW users must use the AG Art Scripter plugin, which supports articulation Text point switching ("Directional").


----------



## Dewdman42 (Jan 16, 2020)

I understand they are different; I was just wondering how you are resolving those differences.

I will assume, since you didn't provide an answer to my question, that groups are not translated to articulation sets, since that feature is not there. I guess that means that all expression map "slots" are converted to a corresponding articulation ID in Logic Pro. Can you confirm that?

The group feature of expression maps mainly provides a way to have a more consolidated list of choices in the articulation lane. When translated to Logic Pro, that would simply be a much longer list for the articulation set, displaying all slots. Yes?


----------



## A.G (Jan 16, 2020)

Dewdman42 said:


> I guess that means that all expression map "slots" are converted to a corresponding articulation ID in logicPro. Can you confirm that?


I confirm: the VI articulation switching in Logic will work as in Cubase. Watch the animated X-DAW image I provided before carefully.



Dewdman42 said:


> The group feature of expression maps mainly provides a way to have a more consolidated list of choices in the articulation lane. When translated to Logic Pro, that would simply be a much longer list for the articulation set, displaying all slots. Yes?


At this moment we are doing our best to keep X-DAW as simple as possible.

BTW: After the 1st release, we will listen to all user suggestions and do our best to implement extra features where DAW-to-DAW mapping allows it (or where we see some advantage).


----------



## Dewdman42 (Jan 26, 2020)

My main question for you is about the Cubase Expression Map grouping feature. This allows you to have up to four simultaneous groups of switches, each group being a mutually exclusive set of keyswitches which can be somewhat independent of each other. So you can activate up to 4 "articulations" at the same time. For example, you could have one group of dynamics-related switches, and another group that selects the specific sample sound to use, and in Cubase you can select both rows at the same time and have both in effect.

We obviously can't do that with Logic Pro; we have one articulation ID. Though with A.G. you have the possibility of using the timeline instead of the articulation ID, it's not clear to me whether I could, for example, have two independent timelines, or the articulation ID as one group and a timeline as the other, in order to essentially have two groups at the same time, in the same way as Cubase expression maps?

If so....then will X-DAW convert back and forth between the LogicPro way and the Cubase way of handling even those advanced scenarios?

I saw you said somewhere that you are going to add forwarding of PitchBend, ProgramChange and Aftertouch when channelizing is used, in addition to CCs. Is that true? Will that show up in ArtPro 6.4 or only in X-DAW, and when can we expect to see it?

Another question I have is whether your Logic Pro version is able to recognize the MIDI port and handle it properly. AU3 supports up to 8 ports going to a single AU3 instrument plugin, so the maps we create with a tool like AG need to be able to assign a different articulation set to each source track, with up to 127 source tracks using up to 8 ports feeding into a single AG Scripter engine, in order to feed into one instance of VePro AU3, for example. What does it currently do with regard to MIDI ports? By the way, Scripter is able to see and set a port attribute on all MIDI events, so I think this should be possible, but I can't see anything in your GUI that specifies anything about the port, so I'm not sure what, if anything, your product is currently doing to account for that aspect, or will in the future.


----------



## A.G (Jan 27, 2020)

Dewdman42 said:


> My main question for you is about the Cubase Expression Map grouping feature. This allows you to have up to four simultaneous groups of switches, each group being a mutually exclusive set of keyswitches which can be somewhat independent of each other. So you can activate up to 4 "articulations" at the same time. For example, you could have one group of dynamics-related switches, and another group that selects the specific sample sound to use, and in Cubase you can select both rows at the same time and have both in effect.



I'm sorry if I didn't understand your Cubase Expression Map Groups description correctly, but as far as I remember the Groups' purpose is quite different from your description. For example, how do you assign another Output Mapping for, say, the Art. 2 group to trigger another Articulation KS or CC?
Could you give an example with a real software instrument which can change articulations as you describe?

BTW, the AG Logic articulation system supports what you want via the AG Scripter Text points (up to 16 separate, instrument-specific, independent articulation controls, for example). I'm waiting for your Cubase Groups reply and example instrument; after that I'll outline how to do it in Logic using the AG Maps Editor & Scripter.


----------



## Dewdman42 (Jan 27, 2020)

It works as I described. How do you think it works? Perhaps language is a barrier here.

Some instruments can benefit from having separate sets of keyswitches in parallel. I gave one example already, earlier. Each set is a mutually exclusive set of keyswitches, and the sets can be used in parallel. This could be, for example, when there is a set of keyswitches related to dynamics and a separate set of keyswitches related to the specific articulation to use. In such a scenario, that is exactly what Cubase expression maps make possible with their group feature: up to 4 groups.

So, related to the A.G automation point option, are you saying that I can have multiple automation parameters in use, PER INSTRUMENT, where two or more automation lanes would affect one instrument at the same time, with one set of keyswitches associated with one automation lane and another set of keyswitches for the other lane? If so, that is cool to know, and yes, your X-DAW should translate that to/from Cubase, because that is exactly what Expression Map groups are for.

I did a bit more diving into AGPro last night, but sadly, you still do not support AU3 MIDI ports, so I can't use it. It has a lot of other nice features though; the automation points are a key feature for sure, and add to what Logic Pro can do.


----------



## A.G (Jan 27, 2020)

Mr. Guru, you still have not provided the Cubase art groups instrument example I'm waiting for...

Well, here is a real instrument example:
Sample Modeling Trumpet v3.
Some of the articulations are triggered via a single KS and some others via two KS; this can be assigned in the Cubase Output Mapping, which is the same for Art. 1, Art. 2, Art. 3, Art. 4. The Trumpet mute types are driven by CC100 values and need separate, independent articulation control. Can you assign the Trumpet mutes in the Art. 2 expression map column (group 2), so the mutes change as expected?

Here is another Cubase guru who published some info about the Expression Groups (it is quite different from what you say):

[Link: "Expression Map Groups" thread on www.steinberg.net]


----------



## Dewdman42 (Jan 27, 2020)

I have at least one instrument that can benefit from multiple groups, including Kirk Hunter instruments, and perhaps some others as well: any time I want primary keyswitches to change the sound, but have other independent keyswitches sent automatically in an independent way.

I am not familiar with the Sample Modeling stuff, but from the sound of what you are explaining, if there is a mute switch that can be used independently of the main articulation sound, then yes, that would be a great example of using Cubase Expression Map groups. In the expression map you still have to set up a long list of all possible combinations as "slots", but the way it appears to the user in the piano roll is as two independent groups of articulations which can be used in parallel. So you can use one group for the primary sound and the second group for whether you want it muted or not, independently and in parallel.

When you translate something like that to Logic Pro without AGPro, you can really only have a super long list of articulations, one for every expression map slot: all combinations. But that is not as nice to work with as Cubase expression maps, which consolidate it down to a reduced footprint and usage in the piano roll. However, with AGPro I think you could convert that into parallel automation lanes, yes?


----------



## A.G (Jan 27, 2020)

Dewdman42 said:


> I have at least one instrument that can benefit from multiple groups, including Kirk Hunter instruments, perhaps some others as well, any time I want to have primary keyswitches change the sound, but other independent keyswitches that I want sent automatically in an independent way.



Could you provide a Cubase demo project with a demo MIDI track and a few notes in a region?
Create an expression map with at least three articulation Sound Slots where you have mapped the Art. 1 column/group to the primary KS and another KS set in the Art. 2 column for the same slot (I wonder how you map separate KS for Art. 1 & Art. 2, given that groups 1-4 offer a single Output Mapping per slot).

Draw an articulation change in the MIDI region.
I'd like to track the articulation change output in a MIDI monitor to see if the Art. 2 KS automation will be sent.


----------



## Dewdman42 (Jan 27, 2020)

If I get some time later I will; I can't today.


----------



## Dewdman42 (Jan 27, 2020)

here ya go

This was done using just a small subset of the keyswitches in Kirk Hunter Concert Strings 2. The KH UI looks like this:






_*(Note: the displayed note pitches are one octave different in the KH UI compared to the Cubase UI)*_

So in the articulations lane above, you can use C#1 - C2 to select the articulation, and then you can use a separate, parallel keyswitch in the Divisions lane to select whole, half, quarter, or solo using keyswitches D#3 - F#3.

So we have an example of two independent sets of keyswitches which can be used independently. Normally in Logic Pro you'd have to create a unique articulation ID for every possible combination of those two lanes (and actually more, because the Features lane adds another dimension of possible keyswitch combinations), which results in a really long and difficult-to-use list in Logic Pro.

For Cubase...

So basically you have to create a *slot* for EVERY POSSIBLE COMBINATION. At that point it's just like Logic Pro: a super long list of possible combinations. I'll do a simple example right now, handling only Smooth and Pizz with 4 divisions to choose from:







I used *Direction*-type articulations for the section size selector. It doesn't have to be that way, but I think it kind of makes sense in this case to enable a division and leave it that way until further notice, even while the articulation changes around it.

The above expression map shows up in Cubase Piano roll looking like this:






Notice how I select the division group separately from the articulation group. They can be independently selected from the UI. Cubase then selects the appropriate slot row from the expression map to apply, which has the complete combination of keyswitches needed.

I have attached the CPR file as well, and you can monitor the MIDI to see it in action. It works.


----------



## richhickey (Jan 28, 2020)

Dewdman42 said:


> For Cubase...
> 
> So basically you have to create a *slot* for EVERY POSSIBLE COMBINATION, At that point its just like LogicPro, super long list of possible combinations. I will do simple example right now only to handle Smooth and Pizz with 4 divisions to choose from:



Yeah. These systems just don't scale and they neglect the basic fact that the VIs already know how to do the combinations. If only Cubase expression map direction groups could be set to generate _independent_ control messages (like the VIs expect) then we could avoid the combinatorial explosion. The Dorico folks say they have ideas about this but right now the story there is the same as Cubase.


----------



## Dewdman42 (Jan 28, 2020)

I don’t understand the problem you are complaining about. Expression Map grouping provides a way to do that. The only downside is that inside the expression map editor you have to create a lot of slots, but once it’s set up it works as you are hoping. No?


----------



## Dewdman42 (Jan 28, 2020)

One upside of Cubase’s need to create a slot for every combination is that it’s possible for “combinations” to be more than, or different from, the sum of their parts. It’s cumbersome to set up in the typical simple cases, but this provides more flexibility and could be used in advanced hybrid expression maps.

The editor could be better. There ought to be a way to simply define each keyswitch once and have it automatically build the slot list with all combinations.
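That auto-build idea amounts to a cross product over the groups. A minimal sketch in Python, with hypothetical keyswitch assignments loosely based on the Kirk Hunter example above (not the actual expression map file format):

```python
# Define each group's switches once; generate every slot combination.
from itertools import product

groups = {
    "articulation": {"Smooth": "C#1", "Pizz": "D1"},
    "division": {"Whole": "D#3", "Half": "E3", "Quarter": "F3", "Solo": "F#3"},
}

slots = []
for combo in product(*(g.items() for g in groups.values())):
    names, switches = zip(*combo)   # e.g. ("Smooth", "Whole"), ("C#1", "D#3")
    slots.append({"name": " + ".join(names), "keyswitches": list(switches)})

print(len(slots))            # 2 articulations x 4 divisions = 8 slots
print(slots[0]["name"])      # Smooth + Whole
```

Two groups of 2 and 4 switches yield 8 slots; add a third group and the list multiplies again, which is exactly the combinatorial explosion the editor currently makes you type out by hand.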


----------



## youngpokie (Jan 28, 2020)

Dewdman42 said:


> One upside of Cubase’s need to create a slot for every combination is that it’s possible for “combinations” to be more than, or different from, the sum of their parts. It’s cumbersome to set up in the typical simple cases, but this provides more flexibility and could be used in advanced hybrid expression maps.
> 
> The editor could be better. There ought to be a way to simply define each keyswitch once and have it automatically build the slot list with all combinations.



I've been following you in this thread and would really appreciate it if you could do a tutorial or a write-up on Groups and combinations specifically (you had divisi indicated and it really piqued my curiosity). I've been thinking about how to set up divisi in Spitfire Studio Strings/Brass, on horns, etc., and I think it might be possible with MIDI channels and groups, but I'm not quite sure how to go about it yet.


Thanks


----------



## Dewdman42 (Jan 28, 2020)

I’m not familiar with the specifics of spitfire but tell me more about what you want to accomplish and I’ll put some thought into it


----------



## richhickey (Jan 28, 2020)

Dewdman42 said:


> I don’t understand the problem you are complaining about. Expression Map grouping provides a way to do that. The only downside is that inside the expression map editor you have to create a lot of slots, but once it’s set up it works as you are hoping for. No?



No they don't. They force you to say the same thing over and over. I want to say exactly once that e.g. non-vib is CC1-0 and vib is CC1-127, put them in a group (for exclusion purposes, nvib turns off vib and vice-versa) and make that into a direction (for shared single editing lane). I want one CC1 message sent _only_ whenever vib changes, not on every note.

I do not want to say over and over that vib+this sends CC1 and vib+that sends CC1 for any library that has vib on its own controller (e.g. OT).

There are lots of libs that put vib, accent, mutes etc on individual controllers (OT, Synchron etc). The Cubase expression map system is oriented around (small) lists of combo artics in a single address space.

More details here: https://www.steinberg.net/forums/viewtopic.php?f=246&t=135922&p=936764#p936764
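The behaviour being asked for here, one CC sent only when the state changes, is simple to express in code. A generic sketch, not tied to any DAW's actual engine:

```python
# Sketch of the desired model: track the current vib state and emit
# the controller only when it actually changes, instead of re-sending
# the keyswitch combo on every note. (Generic pseudo-MIDI events.)

VIB_CC = 1                      # CC1 carries vib on/off in this example
STATE = {"vib": None}           # last value sent; None = nothing sent yet

def events_for_note(pitch, velocity, vib_on):
    """Return the MIDI events needed for one note under this model."""
    events = []
    value = 127 if vib_on else 0
    if STATE["vib"] != value:                      # state changed -> one CC
        events.append(("cc", VIB_CC, value))
        STATE["vib"] = value
    events.append(("note_on", pitch, velocity))    # the note itself
    return events

# Three notes, vib changing only once: CC1 is sent twice, not three times.
out = [events_for_note(60, 100, False),
       events_for_note(62, 100, False),
       events_for_note(64, 100, True)]
```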


----------



## Dewdman42 (Jan 28, 2020)

I agree it’s cumbersome to set up in the expression map editor, as I mentioned already, but it’s not cumbersome like that to use in the piano roll. Steinberg could simply improve the editor a bit and the entire setup process would be a non-issue.


----------



## Dewdman42 (Jan 28, 2020)

And again, think a bit about what I said about flexibility. Being able to have a different set of switches for each combination, so that a combination is more or different than the sum of its parts, could potentially be useful.

That is the paradigm Cubase went for. It’s better than any other DAW! Yes, you have to create a big long slot list, but what the musician sees in the piano roll will be short and sweet, without repeats.

Further, you can create hybrid setups. For example, let’s say you are using an instrument that does not have a switch for divisi, so you have to use a multi instrument with the whole division on channel 1, half division on channel 2, etc. Then you can create a group like I did above where the channel and switch are combined in each slot in a way the instrument doesn’t know about.


----------



## Dewdman42 (Jan 28, 2020)

Regarding the question about keyswitches being sent redundantly to instruments: that is probably the safest way, but I agree it would be nice if during playback Cubase kept track of which switches need to be sent and which don’t need to be sent redundantly. Logic Pro is smarter about that. I don’t see it as a big problem though; it’s more explicit, with slightly more MIDI traffic in some situations.


----------



## richhickey (Jan 28, 2020)

Dewdman42 said:


> And again, think a bit about what I said about flexibility. Being able to have a different set of switches for each combination, so that a combination is more or different than the sum of its parts, could potentially be useful.
> 
> That is the paradigm Cubase went for. It’s better than any other DAW! Yes, you have to create a big long slot list, but what the musician sees in the piano roll will be short and sweet, without repeats.



Paying for flexibility you don't use isn't a win. And I think you are underestimating how large the combinations can be for some libraries (e.g. Synchron Dimension Strings, OT Berlin Strings).

And no, it's not strictly better than Logic. In Logic, I do only the non-directive artics with articulation IDs. I do vib/mute/accent/sul etc in a Scripter script and expose their discrete values as script parameters. They each get their own editing lane, with picklists, and I only need to make changes when the state changes. You are not limited to 4 groups. The editing and visibility is better than Cubase's with its wasteful and unscalable 2D use of space for directions in the piano roll. Plus each script parameter can get its own automation and controller mapping - emaps force a flattening of control as well.

A script with 5 parameters, plus ~30-40 articulation IDs, does a job like Synchron DS that would take several hundred entries in a Cubase expression map. No matter how much they improve the emap editor, that's not something I want to create or maintain.

For flexibility, I'll take Scripter any day. There's really no contest - script parameters are strictly more powerful, easier to setup, easier to enter, and more capable of control/automation than emap directions.


----------



## Dewdman42 (Jan 28, 2020)

It's only in the editor. If Cubase improved their expression map editor, I don't think you'd be complaining about the flexibility. Expression map files are XML; there is a software dev project for someone! Hide the slots entirely and expose a simpler interface that generates expressionmap XML with hundreds of slots.
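As a sketch of that dev project: define each switch once per group and generate the slot list programmatically. The element names below are simplified placeholders; a real tool would have to emit Cubase's actual .expressionmap schema, which this does not attempt:

```python
# Define each keyswitch exactly once per group, then auto-generate the
# full combination slot list as XML. Element names are hypothetical.
from itertools import product
import xml.etree.ElementTree as ET

groups = {  # one definition per switch, per group (names/keys made up)
    "artic": [("legato", "C0"), ("staccato", "C#0")],
    "vib":   [("non-vib", "D0"), ("vib", "D#0")],
}

root = ET.Element("ExpressionMapSketch", name="Demo Strings")
for combo in product(*groups.values()):
    names = [n for n, _ in combo]
    keys  = [k for _, k in combo]
    slot = ET.SubElement(root, "Slot", name="+".join(names))
    for key in keys:                      # one output event per keyswitch
        ET.SubElement(slot, "KeySwitch", note=key)

xml_text = ET.tostring(root, encoding="unicode")
```

Four switch definitions expand into four slots here; for a realistic instrument the same few lines of definitions would expand into the "hundreds of slots" the editor currently makes you enter by hand.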

BTW, I didn't say Expression Maps are "strictly" better than Logic in an all-encompassing way. They each have pros and cons, but in this area related to groups, Cubase wins because Logic has nothing built in (excluding, of course, that you can program your own scripts to do whatever you want with articulation IDs, automation points or various other means).

FWIW, I am still primarily using Logic Pro because of Scripter, and also because Scripter fully supports up to 8 AU3 MIDI ports, so I can have advanced articulation management with a combination of a semi-crippled articulation set and Scripter to augment it. It's working pretty well, but it's still a home-grown programmed solution; most musicians are not able to do that. I think Cubase Expression Maps are quite good, and I don't understand all the animosity about them. I think a lot of people don't understand how groups work and get easily confused setting up their expression map and/or irritated by having to create "hundreds" of slots. I agree that is cumbersome to set up.


----------



## tack (Jan 28, 2020)

Dewdman42 said:


> That is the paradigm cubase went for. It’s better then any other daw!


I'm not sure it's necessarily better than Reaticulate, as it works quite similarly. But it works differently with regard to Rich's gripe (inasmuch as Reaticulate will only send the MIDI event when vib/non-vib changes).

Reaticulate also supports groups in a similar way (and also only supports 4, a number I'd like to increase in future versions), but it also provides a feature I'm not sure is available in other articulation management systems: something I (poorly) call Conditional Output Events.

From the release notes:


> For example, consider a library such as Berlin Brass with its expansion packs, where trumpet articulations can be performed unmuted, or with straight mutes, or with harmon mutes. You _could_ have separate programs for each articulation with each type of mute -- and this is a perfectly cromulent approach to be sure -- but it's now also possible to have a single program for each articulation and the type of mute be defined in another group.
> 
> [bank definition pruned]
> 
> ...



Does Cubase do this as well?

Rich mentioned the combinatorial explosion when it comes to articulation layering, and indeed that's a problem with Reaticulate as well that I would like to find a proper solution for (at least for those cases where patches natively support it, or a more conventional art-per-channel multi setup). I have ideas on how to implement it, but it requires a pretty serious design overhaul.


----------



## Dewdman42 (Jan 28, 2020)

tack said:


> I'm not sure if it's necessarily better than Reaticulate, as it works quite similarly. But works differently in regards to Rich's gripe (inasmuch as it will only send the MIDI event when vib/nonvib changes).



I didn't know you also implemented a grouping feature similar to Cubase expression maps. Good to know! Though I can't stand using Reaper in general, so it's moot for me.


----------



## Dewdman42 (Jan 28, 2020)

tack said:


> but also provides a feature I wonder if is available in other articulation management systems: something I (poorly) call Conditional Output Events.



Please tell us more about this.


----------



## tack (Jan 28, 2020)

Dewdman42 said:


> Though I can't stand using Reaper in general, so its moot for me.


After 20 years and several times as many bottles of whisky, it can actually be made pleasant to use!


----------



## tack (Jan 28, 2020)

Dewdman42 said:


> Please tell us more about this.


There's a bit more context on the website:

Release notes: https://reaticulate.com/news.html#conditional-output-events
And the filter_program bullet on the specification page: https://reaticulate.com/reabank.html#output-events-specification (need to scroll down a bit as I can't direct-link to the bullet)
But long story short, it's a way for you to contextualize an articulation based on the state of another group. (Contextualize as in emit different MIDI events upon activation based on another group.)
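In generic terms, a conditional output event could be sketched like this, using the trumpet-mute example from the release notes quoted earlier. Keyswitch numbers and names are made up, and this is not Reaticulate's actual syntax:

```python
# Sketch of "conditional output events": when an articulation program
# is activated, the events it emits depend on the current state of
# another group (here a hypothetical mute group).

mute_state = {"mute": "open"}    # set by the mute group's own programs

# Hypothetical keyswitch table: same articulation, different switch
# depending on which mute set should play it.
KEYSWITCH = {
    ("sustain", "open"):     ("note", 24),
    ("sustain", "straight"): ("note", 36),
    ("sustain", "harmon"):   ("note", 48),
}

def activate(articulation):
    """Return the output events for an articulation, conditioned on mute."""
    return [KEYSWITCH[(articulation, mute_state["mute"])]]

first = activate("sustain")          # open-horn sustain
mute_state["mute"] = "straight"
second = activate("sustain")         # same program, different output
```

The point is that one "sustain" program suffices, rather than one program per articulation-times-mute combination.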


----------



## Dewdman42 (Jan 28, 2020)

youngpokie said:


> I've been following you in this thread and would really appreciate it if you could do a tutorial or a write up on Groups and combinations specifically (you had divisi indicated and it really peaked my curiosity). I've been thinking about how to set up divisi in Spitfire Studio Strings/Brass on horns, etc, and I think it might be possible with midi channels and groups but not quite sure how to go about it yet.
> 
> 
> Thanks



So like I said earlier, I'm not super familiar with Spitfire. But assuming that you need to use separate instances inside Kontakt for each division, you can put each division on a separate MIDI channel and then set up the Cubase Expression Map more or less like this... Here is an example; the actual keyswitches would be different for you, but anyway....

Just set up the keyswitches for the specific articulation, and then when you create the combination slots, specify that one single keyswitch AND the MIDI channel depending on the division. Here is a simple example with two articulations and 4 divisions:












Start with that idea and see where you get with it...
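The slot layout being described, where each slot combines one articulation keyswitch with the division's MIDI channel, can be sketched like this (all names, channels, and keyswitch notes are hypothetical):

```python
# Each slot pairs the articulation's keyswitch with the MIDI channel
# of the division's instrument, so the instrument never needs its own
# divisi switch.
from itertools import product

ARTIC_KEYSWITCH = {"smooth": 24, "pizz": 25}      # one switch per articulation
DIVISION_CHANNEL = {"whole": 1, "half": 2, "quarter": 3, "eighth": 4}

slots = {
    f"{artic} {div}": {"channel": DIVISION_CHANNEL[div],
                       "keyswitch": ARTIC_KEYSWITCH[artic]}
    for artic, div in product(ARTIC_KEYSWITCH, DIVISION_CHANNEL)
}
# 2 articulations x 4 divisions -> 8 slots, from only 6 definitions.
```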


----------



## youngpokie (Jan 28, 2020)

Dewdman42 said:


> I’m not familiar with the specifics of spitfire but tell me more about what you want to accomplish and I’ll put some thought into it




OK. Here's an example: there are, let's say, 3 existing patches for legato: Violins I (16), Violins I (8 A) and Violins I (8 B). These legato articulations are mutually exclusive for playback, so I figured they could all be in one Group.

So then I was thinking I could trigger legato (16) via a normal keyswitch on channel 1, and then use a combination of 3 additional keyswitches to:
- turn off (16) and 
- trigger (8), A and B, on channels 2 and 3 respectively, 
all from the sound slot and a single legato articulation.

That way, I play unison legato monophonically, then press key switch(es) and play a two note melody and trigger divisi from the other patches. What I am not sure about is how to make the maps recognize two voices for each of the divisi patches rather than only pick the highest or lowest note and play it in both patches at the same time (essentially skipping the lower or upper voices anyway).

Appreciate your thoughts.


----------



## Dewdman42 (Jan 28, 2020)

I'm heading out the door right now, so I will look more closely later tonight. I have to think about the legato stuff to see what might complicate things there in terms of what you're describing. But anyway, look at the example I gave a minute ago and maybe you'll think of some stuff.

Yes, you're right about the MIDI track voices. Expression Maps don't do anything about splitting voices to different instruments or anything like that. You'd have to use a Kontakt multi-script for that aspect of it. But you could have a particular channel devoted to, say, half division, then have the expression map send half division to that channel, as I showed above, then use a multi-script on that channel to split the part to two different Kontakt instruments holding the A and B instruments.

Or if it's not a Kontakt instrument, then you'd have to use an extra scripter plugin such as LuaProtoPlug or BlueCatAudio PlugNScript to split the two voices on that channel to two channels before sending to Spitfire.
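Whatever scripting host is used, the core of such a voice-splitting script is small. A generic sketch of the routing logic only (channel numbers are hypothetical, and a real script would also have to track note-offs and handle more than two voices):

```python
# Sketch of auto-divisi voice splitting: route the currently sounding
# notes so the upper voice goes to one channel (patch A) and the
# lower voice to another (patch B). Not an actual Kontakt multi-script.

CHANNEL_A, CHANNEL_B = 2, 3      # hypothetical destination channels

def split_voices(held_notes):
    """Route a two-note chord: top note -> channel A, bottom -> channel B."""
    ordered = sorted(held_notes, reverse=True)   # highest pitch first
    routed = []
    for i, pitch in enumerate(ordered):
        channel = CHANNEL_A if i == 0 else CHANNEL_B
        routed.append((channel, pitch))
    return routed
```

For example, a held fifth `[60, 67]` would send 67 to channel 2 and 60 to channel 3, one voice per divisi patch.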


----------



## Dewdman42 (Jan 29, 2020)

youngpokie said:


> What I am not sure about is how to make the maps recognize two voices for each of the divisi patches rather than only pick the highest or lowest note and play it in both patches at the same time (essentially skipping the lower or upper voices anyway).
> 
> Appreciate your thoughts.



So like I said earlier, you could try to use a Kontakt multi-script to auto-divisi two voices to two channels... or, if you don't want to mess with that, you can handle it with expression maps, but basically you have to use *ATTRIBUTE*-style articulations; then you can assign different articulations to the upper and lower voice (which correspond to whether you want them going to patch A or patch B).















But another question I have is why do you need to use a different patch for the upper and lower voice? Really, I think you just need to send both the upper and lower voice to a single half-division patch. The point is that the patch itself should be thinner sounding and less voluminous, so that two voices going through it sound balanced compared to a single voice going into a whole-division patch. Right? I might be missing something... If they are monophonic patches in order to support legato, maybe that is why?

In any case, as above, you can just explicitly assign *attribute*-style articulations to each upper and lower voice, and then they should be channelized to the correct patch... Works for me here with KH that way.


----------



## A.G (Jan 29, 2020)

Dewdman42 said:


> I have attached the CPR file also and you can monitor the midi to see it in action. It works.


Thanks for the Cubase project attachment and the nice image graphics!
I confirm that all works as expected in your example project.

I'm sorry to say, but the Art Groups design is only a crappy Cubase visual multi-lane solution. The lack of a separate "Output Mapping" for the Art. 1, Art. 2, Art. 3, Art. 4 assignments forces you to create additional slots for all possible combinations, which crowds the vertical Articulation view in the Piano editor. For example, if the instrument comes with 30-40 articulations and several separate articulation controls you must create in Art. 2, 3, 4, then the Piano editor may exceed 100 lanes, and I guarantee that you will not be able to work with that number of narrow lanes.

AG Editor Multi Presets do that in a very elegant way via AG Scripter Multi text-point articulation lanes in Logic.
Here is a non-official video:


----------



## Dewdman42 (Jan 29, 2020)

Yeah, I think the editor in Cubase basically sucks. All someone needs to do is make an editor that doesn't require you to actually look at the long list of slots generated from 4 groups. I don't particularly think Steinberg's approach of searching through the slots is a "crappy" design, but the editor could be better and could easily hide that detail from the user by populating the long table automatically.

I quite like using the expression map lanes when groups are used properly, albeit after having to set up many slots to achieve the goal. 

If you look at the current Cubase editor, the bottom-right pane has the list of keyswitches that the user will see in the piano roll. All they should have to do is edit that list and then the slot list should be automatically generated, with the possibility to override slots for advanced users, I suppose.


----------



## A.G (Jan 29, 2020)

Dewdman42 said:


> The fact that Steinberg is using their approach of searching through the slots, I don't particular think is a "crappy" design,



The Cubase editor design is very old and crappy, without batch Art map creation & multi-selection map editing functions. You need a day to build a complete Expression Map preset for a complicated instrument, while this may take a few minutes in X-DAW, whose preset can be exported as:
- Cubase Expression Map.
- Logic Articulation Set.
- AG Logic Scripter (supporting Art Text Points, Art ID & Program switching) with CC cloning.
- iPad AG Lemur layouts.

Now X-DAW can import Cubase Expression Maps & Logic Articulation Sets, which allows you endless possibilities.
I hope you see the power of the AG design?

BTW, I hope you watched the Vimeo video in my previous post, where I show the X-DAW Multi-mode Art Groups solution for Logic. Do you realize that the AG system offers 16 Art Groups (Cubase offers only 4)? What if you load 8 AG Scripters? That means 16 * 8 = 128 separate Art Group lanes. 128 groups blow the Cubase design away totally!


----------



## Dewdman42 (Jan 29, 2020)

Me? I am not going to be able to use AG for now because it does not properly support AU3 MIDI ports in Logic Pro. I have already moved on to making my own scripts for Logic. For Cubase Expression Maps, I am content to use the Cubase Expression Map editor for now.


----------



## A.G (Jan 29, 2020)

Dewdman42 said:


> I am not going to be able to use AG for now because it does not properly support AU3 midi ports in LogicPro.


It does. An AG Editor preset can be saved as a Logic Articulation Set, and the LPX Art Sets work with AU3.

Please optimize your posts! You take up more than 90% of any topic here duplicating info or your personal workarounds.

This will help the community to follow the topic easily!


----------



## Dewdman42 (Jan 29, 2020)

That negates the point of using your product. I have more features in my own scripts at this point. Please stop spamming us about your product, Ivan. I know where to find it if I want to try it.


----------



## youngpokie (Jan 30, 2020)

Dewdman42 said:


> ...you can handle it with expression maps, but basically you have to use *ATTRIBUTE* style articulations and then you can assign different articulations to the upper and lower voice (which correspond to whether you want them going to patch A or patch B.
> ...
> But another question I have is why do you need to use a different patch for the upper and lower voice? really I think you just need to sound both upper and lower voice to single half division patch.



OMG, thank you so much for this! I'm going to experiment tonight if I find the time, this looks almost exactly like what I was thinking about. I wasn't sure from looking at the screengrab if it's possible to run the same articulation in divisi yet, like for example legato in both voices, but can't see why not...

To answer your point about the patches, Spitfire Studio Strings includes separately recorded divisi patches with some differences between them, and the combination of these sounds better to me than doubling a half section. That might also be because in the piece I'm working on right now there's non-stop legato in a fairly elaborate melody, and divisi across the entire string section on melody highlights.

So, overall it would be incredible if I can have a single track for Violins I (16) unisons and divisi (8 x 8) at the same time, for both writing and track count, and my hopes just went up that it just might be possible.

Thanks a million!


----------



## jononotbono (Feb 6, 2020)

Rewatching the JXL Expression Map Studio Time Video. Has anyone here successfully balanced all of their library patches using the Controller Output Mapping and setting CC7 in context with all other patches? The mountain of work ahead of me is almost making me sweat just thinking about this. Are you just making the patches within a library balanced with each other or are you doing EVERY library you own? If you don't get the balance right between arts, I guess it's a case of tweaking the Expression Maps until it's correct. I'm up for the challenge but holy shit, this is an extreme amount of work. Wondering which lunatics here have done it and what golden advice you have before I start trekking off down this dusty ole road! 

Also, have you set up arts on a touch screen using bidirectional communication, so when you select a track the touch screen displays the arts for each MIDI channel? See, I'm thinking about making a main page on a touch screen, and in the centre of the main page there is a little box where the arts show up and change on a track-by-track basis. I think this is key to making this whole workflow not dogs brown.


----------



## jononotbono (Feb 6, 2020)

I'm just kind of wondering why anybody would bother using CC7 in the output mapping if, for example, you have 16 arts in Kontakt on separate MIDI channels and you can therefore balance the volumes in Kontakt? I guess if you ride a volume fader the balance would be out, and by programming CC7 in the output setting, it will never exceed the value you set. Brain is a bit fried at the minute, but that's the only reason I can think of why you would do that.

Also, what about libraries that have more than 16 arts for one instrument, where you therefore need to use more than 1 MIDI port? Am I right in thinking an Expression Map can only use 1 MIDI port (per Expression Map)?


----------



## Dewdman42 (Feb 6, 2020)

I don't think it's a good idea to use CC7 for level-balancing your keyswitched articulations. CC7 affects the entire channel, so as soon as you have some overlap of notes or poly-articulation chords it will get messy. A better approach is to put those articulations each on a separate listening MIDI channel and adjust the level of each single-articulation instrument.


----------



## Dewdman42 (Feb 6, 2020)

And yeah, to your last statement, I agree. If they are on separate channels, that is good for what I just said, but putting the CC7 message into the expression map means you don't have to worry about setting the instrument levels when loading them in: they can all be set at unity gain and the CC7 messages will balance them against each other. It's an automatic way to reload that level-balancing configuration.


----------



## Dewdman42 (Feb 6, 2020)

But another consideration is that if you choose to use CC7 for level-balancing your template, that means you can't effectively use CC7 in your source-track expression. Which may be fine, depending on how you like to work, or may not be. If you move a CC7 slider in your source track and it gets sent to all the articulation channels somehow, they would suddenly not be level-balanced anymore. So basically CC7 would become a verboten controller in your actual MIDI tracks when used that way: use CC11 and never touch CC7.

In that light, it would be better to do as you suggested in your last post: just set the instrument levels independent of CC7. Then CC7 can bring ALL articulations up and down together while maintaining that balanced state; it would then work similarly to CC11. Some people do like to use both CC11 and CC7 for expressive programming in their tracks.


----------



## jononotbono (Feb 6, 2020)

Dewdman42 said:


> and yea to your last statement, I agree, but if they are on separate channels, that is good for what I just said, but putting the CC7 message into the expression map just means you don't have to worry about setting the instrument when you are loading them in, they can all be set at unity gain and the CC7 messages will set them balanced to each other. Its an automatic way to re-load that level balancing configuration



Ok, I'm a little confused. So what you are saying is to set the max CC7 in the Output setting as Tom is doing in his video? And not use Kontakt CC7 (on each art with different midi channels)?

What happens when you want more volume with an art but you've set it? Just tweak it louder?



Dewdman42 said:


> But another consideration is that if you choose to use CC7 for level balancing your template, then that means you can't effectively use CC7 in your source track expression. Which may be fine, depending on how you like to work, or may not be. if you move a CC7 slider in your source track and it gets sent to all the articulation channels somehow, they would suddenly not be level balanced anymore. So basically CC7 would become a verboten controller in your actual midi tracks, when used that way. Use CC11 and never touch CC7.
> 
> So in that light, it would be better to do as you suggested in your last post...just set the instrument levels independent of CC7. Then allow CC7 to to bring ALL articulations up and down together while maintaining that balanced state, it would then work similar as CC11. Some people do like to use both CC11 and CC7 for expressive programming in their tracks.



And so, if I'm understanding this correctly...

In the Expression Maps, set the output setting to set a max volume level with CC7. Do this with everything to balance everything how I want. Then, surely, if I touch a fader that's assigned to CC7, wouldn't that just affect the volume of the art with that MIDI channel and not all of them at the same time? Isn't setting the CC7 level in the Expression Map setting just affecting the Kontakt CC7 level? I'm not in front of Cubase to test this at the minute, hence why I'm asking.

If I can get away with using my current VE Pro template that has all separate arts on separate MIDI channels, and only rebuild my Cubase template, that would be good, but I'm just trying to understand this volume balancing between Expression Maps and Kontakt volume.

Sorry, been a long day... so far.


----------



## Dewdman42 (Feb 6, 2020)

We're getting into the area of pros and cons, and everyone may have their own slightly different workflow, so take any of my comments as merely observations to take into consideration.... I haven't watched JXL's video either, so please forgive me if I say something stupid related to that...



jononotbono said:


> Ok, I'm a little confused. So what you are saying is to set the max CC7 in the Output setting as Tom is doing in his video? And not use Kontakt CC7 (on each art with different midi channels)?
> 
> What happens when you want more volume with an art but you've set it? Just tweak it louder?



First, yes. If you don't have any kind of _CC forwarding feature_ in place, then you could put a CC7 event on channel 7, for example, and that would only affect that one articulation. See my comments about CC7 use later in this post.

Secondly, I misspoke earlier based on my own bias, which is that I prefer to have one source track, with all expression and notes programmed on that one source track. That means I want not only notes forwarded to channels 1-16, but also any CC11, CC1, etc., all forwarded automatically to the destination channels where the articulation instruments are listening. In Logic Pro I have a script that does that forwarding, and I'm working on something to do it in Cubase too.

In that light, changing CC7 in the source track would negatively impact all articulations, bringing them all to the same CC7 level the first time I used CC7 in my source track, or whenever I moved any CC7 slider anywhere on that source track.

If you don't have something automatically forwarding your CC events, then this won't be an issue for you.
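For what it's worth, the forwarding decision itself is simple to sketch. This is generic logic, not the actual Logic or Cubase script, and the channel layout is hypothetical:

```python
# Sketch of CC forwarding: expressive CCs written on the single source
# track are mirrored to every channel an articulation instrument
# listens on, while CC7 is deliberately left alone so the per-channel
# balance survives.

ARTICULATION_CHANNELS = range(1, 17)   # channels 1-16
DO_NOT_FORWARD = {7}                   # CC7 reserved for level balancing

def forward_cc(cc_number, value):
    """Fan one source-track CC out to all articulation channels."""
    if cc_number in DO_NOT_FORWARD:
        return []                      # balance CCs stay per-channel
    return [(ch, cc_number, value) for ch in ARTICULATION_CHANNELS]
```

So a CC11 swell drawn once on the source track reaches all 16 articulation channels, while a stray CC7 move cannot wreck the balance.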

That being said, if you don't have some kind of forwarding capability, then you have a different problem: how do you go about programming expression into your track? You would have one source track with all the notes, and an expression map that channelizes all the notes to channels 1-16 for the articulations, but then you'd have to set up, I guess, 16 source tracks just to hold the CC11, CC1, (CC7) or other expression for each of the articulations. For me personally, at that point it starts to break down as a working model; I'd rather just have 16 separate source tracks with both notes and CCs, each on its own track for each articulation... which negates the whole point of using expression maps.

see what I mean?

So anyway, if you use expression map channelizing, you will generally need to solve the problem of how to program in your crescendos and decrescendos and other expressive controllers, and channelize all of that to the 16 channels where your articulation instruments are listening. This applies not only to CC7 but to any other controllers.




> In the Expression Maps, set the output setting to set a Max Vol level with CC7. Do this with everything to balance everything how I want. Then, surely, if I touch a fader that's assigned to CC7, wouldn't that just affect the volume of the art with that Midi Channel and not all of them at the same time? Isn't setting the CC& level in the Expression Map Setting not just affecting the Kontakt CC7 level? I'm not in front of Cubase to test this at the minute hence why I'm asking.



Without the CC forwarding stuff, yes, you are correct: touching CC7 on channel 1 should only change the articulation listening on channel 1. Still, I prefer to have CC7 pretty much always set to 127 and rarely if ever change it. See below.



> If I can get away with using my Current VEPro template that has all separate arts on seperate midi channelsand only rebuild my Cubase template that would be good but I'm just trying to understand this vol balancing between Expression Maps and Kontakt Volume.
> 
> Sorry, been a long day... so far.



See above for what will happen to CC7 messages, depending on how you're working.

Regarding CC7 in general, I like to think of it more as a set-it-once parameter that I don't want to change expressively during the track. But there are differing opinions on how to use it and there is no hard and fast rule. Some people can and do use CC7 expressively in their tracks and if that works for them, then great!

But my thought about CC7 in general is that it should generally be set to 127 and left there. The reason is that I want the actual instrument to produce its maximum dynamic range. If I need to adjust the volume level of the instrument, I will almost always prefer to do that with a fader in the mixer that is hosting it. A mixer fader has more precision than CC7, and basically I can adjust the actual volume of each instrument while still preserving the complete dynamic range that was programmed into that instrument and captured by the samples. Using CC7 less than 127 is like compressing it, IMHO.

But in any case, whether we are talking about CC7 or the mixer fader, for orchestral work I still view these as set-once settings at the start of the track, and I will use CC11 or other programmed CCs to affect the dynamics of each instrument. I view CC7 or the mixer fader as simply a way to compensate for the fact that some instrument was programmed by the maker too darn soft or too darn loud and needs a one-time balance adjustment. But for expression in the musical piece, I avoid using mixer automation or CC7 automation like the plague.

It may be necessary to tweak something. Maybe there is some situation where you want to make the staccato articulation louder than it has been all the way up to a certain point of the composition; you want it to stand out or get buried, so you want to "express" that with CC7... I think that's what you are asking. But the thing is, you are venturing into unrealistic orchestration at that point. The right way to make staccatos softer would be to tell the instruments to play softer, which means lower velocity, lower CC11 or whatever the instrument provides for expression, leaving CC7 entirely alone.

I view CC7 as more of an overall template balancer... but like I said, I would much prefer to balance my template with mixer faders than with instrument volume (CC7), for the reason I stated above. Either way, I see it as a set-it-and-forget-it setting, used to compensate for one instrument being generally programmed louder or softer than others.

Not everyone agrees, though; some people use CC7 extensively for expression, and it's entirely up to you.


----------



## babylonwaves (Feb 6, 2020)

@jononotbono

there are MIDI messages which carry information for the entire MIDI channel, and others which are more atomic, such as Notes and Poly Aftertouch. Control Changes are messages for an entire MIDI channel, like Program Changes. That's fundamental MIDI - nothing complicated.
My advice: attaching a CC to an expression cannot work reliably. Maybe, a bit, for Directions, if you ignore that the next CC will influence the tail volume of what happened before. But for Attributes, it'll just create chaos.

HTH


----------



## Uiroo (Feb 6, 2020)

@jononotbono Which libraries do you want to make expression maps for? You could check out some expression maps others have uploaded and see how they work for you. I can send you mine if that helps.

I didn't include volume correction via CC7 in my expression maps. I just hope the articulations are relatively consistent in volume, and if not, I compensate for that with modulation and velocity.


There are some tricky things about expression maps if you want to have a lot of articulations in one track, just in case you might want to do that.
I have 33 for the first Violins from SSS, for example.

The problem you'll run into if you try that is that things will get buggy if you try to program keyswitches on the second MIDI channel in Kontakt when you've already done that for the first MIDI channel.
Keyswitches will get stuck.
The solution for that is:
Have a sound-slot at the very top assigned to midi channel 2 that triggers all keyswitches, with no remote assigned. Sounds weird but works. So C-2, C#-2, D-2, etc, all in one sound-slot.
Unfortunately that somehow didn't do it for getting keyswitches to work on midi channel 3 in kontakt, so I have single articulations on channel 3-16 that just get triggered via midi output alone.

So 1 and 2 can have keyswitches, 3-16 can't, but that way you can have so many articulations that the expression map gets crowded enough :D

But having ONE track for 1st violins in SSS instead of...
Violins 1
Violins 1 Legato Sul G
Violins 1 Decorative Techniques
Violins 1 Core techniques
Violins 1 Performance Legato
Violins 1 Legacy Legato performance
... is just awesome.

I mean, having 6 tracks with keyswitches(I'LL NEVER TOUCH THOSE AGAIN!) for ONLY the first violins alone, that's just madness :D


----------



## Dewdman42 (Feb 6, 2020)

not sure what you mean about keyswitches getting stuck. can you please clarify?


----------



## JamieLang (Feb 6, 2020)

I use them whenever I do strings. I'm not a "MIDI composer", per se, where I have some huge template and it's either/or...I'm not ABOUT to put different articulations on different tracks. 

....the alternative workflow is to use key switches like the instrument designer intended, as I see it. Maybe this is me being "String centric" about this--but a legato line with some marcato on notes to accent tied into some quick detache can NOT be achieved with separate tracks per articulation--because it would mean separate INSTRUMENT INSTANCES on a different MIDI channel--the legato won't "end" the same way going from 4 notes to "silence" and 4 notes changing to marcato...how do you add any kind of portamento to any of the above? 

So, when you play a line in... and you want to make a bunch of notes marcato... or slide between others... how do you do that with "one track per articulation"? I can just loop the phrase, select notes and choose their articulations... until the phrase sounds/feels natural, from one window, FWIW.

Now...I don't get super fancy with them. I've often dreamed of spending the time to get maps set up to be able to very quickly toggle between libraries and it "make up for differences" as best to ease just changing out this string lib for that one...but...honestly--I just set up some basic ones ad hoc for the project/instrument/part I need them for.


----------



## jononotbono (Feb 6, 2020)

JamieLang said:


> I use them whenever I do strings. I'm not a "MIDI composer", per se, where I have some huge template and it's either/or...I'm not ABOUT to put different articulations on different tracks.
> 
> ....the alternative workflow is to use key switches like the instrument designer intended, as I see it. Maybe this is me being "String centric" about this--but a legato line with some marcato on notes to accent tied into some quick detache can NOT be achieved with separate tracks per articulation--because it would mean separate INSTRUMENT INSTANCES on a different MIDI channel--the legato won't "end" the same way going from 4 notes to "silence" and 4 notes changing to marcato...how do you add any kind of portamento to any of the above?
> 
> ...



Have you got any examples of your string programming online? Be interested in hearing something you’ve done to hear your technique!


----------



## Uiroo (Feb 6, 2020)

Dewdman42 said:


> not sure what you mean about keyswitches getting stuck. can you please clarify?


What happens is that if you change to another articulation via expression map, the one you had previously is still active, so you have two articulations at the same time. 

Now that I think about it, I don't know if every kontakt library has the function to have two articulations activated simultaneously, so that might not be an issue with every library. With Spitfire it's an issue.


----------



## Dewdman42 (Feb 6, 2020)

That is an allowable situation though. Why is it causing a problem?


----------



## jononotbono (Feb 6, 2020)

babylonwaves said:


> @jononotbono
> 
> there are MIDI messages which carry information for the entire MIDI channel, and others which are more atomic, such as Notes and Poly Aftertouch. Control Changes are messages for an entire MIDI channel, like Program Changes. That's fundamental MIDI - nothing complicated.
> My advice: Attaching a CC to an expression cannot work reliably. Maybe, a bit, for Directions if you ignore that the next CC will influence the tail volume of what happened before. But for Attributes, it'll just create chaos.
> ...



Seriously considering buying your Art Conductor package for Cubase. At that price it would be a quick start to trying this out.


----------



## Uiroo (Feb 6, 2020)

Dewdman42 said:


> That is an allowable situation though. Why is it causing a problem?


Well, it's a problem if you don't want it to happen.


----------



## Dewdman42 (Feb 6, 2020)

what is happening that is wrong?


----------



## Dewdman42 (Feb 6, 2020)

Uiroo said:


> What happens is that if you change to another articulation via expression map, the one you had previously is still active, so you have two articulations at the same time.



you said two articulations are active at the same time, but what is the problem that you're having EXACTLY. This is a problem because why? Articulation 1 sends a key switch followed by a note, and while that note is still sustaining, the next note sends a different keyswitch followed by the note... you said something about "stuck" switches. What do you mean by "stuck"


----------



## Uiroo (Feb 6, 2020)

Ok, example:
I click on Col Legno via Expression Maps, Col Legno gets triggered, cool.
Now I click on Longs, play some notes and Col Legno AND Longs gets triggered, but I only wanted Longs, because the Col Legno Expression Map is no longer active.


----------



## Dewdman42 (Feb 6, 2020)

can you post a copy of the expression map you're trying to use?


----------



## Uiroo (Feb 6, 2020)

Not necessary; it works with the workaround. The problem I talked about is hard to explain, but if anyone ever has a similar issue, they can try the solution I proposed.


----------



## jononotbono (Feb 7, 2020)

@babylonwaves 

So, I've just been having a look at the Cubase E Map list. It's an impressively large collection. I'm curious about the Spitfire Symphonic libraries (and SCS). Have you included the Time Machine Shorts?


----------



## jononotbono (Feb 7, 2020)

Here's a general question about Expression Maps.

I understand what Direction and Attribute do. But my question is, why would you ever use Attribute? I ask this because it's blatantly obvious that arts do not get selected (all the time) unless you place the data slightly before the notes. So surely with Attribute, when you click on each note, you are going to have to draw another one just before it? Or draw them all just before? Why, then, wouldn't you just use Direction and be done with it?

In this JXL video, Tom shows how an art doesn't play and he's using Direction. I'm imagining Attribute being a complete shit show if this is the case?!

Here's the video at exactly that spot. If anything shows Expression Map unreliability, this video clip is it! Restore my faith people!


----------



## Dewdman42 (Feb 7, 2020)

attributes do always send the keyswitch before the note. Rumors to the contrary are not true. Also, if you move the note, the attribute moves with it. Poly-articulation chords are possible with it.

direction has its uses too, but my recommendation would be to use attribute most of the time and only use direction when you specifically need it

The main reason people have troubles with the order is when they are trying to use cc switches into kontakt or any vst3 instrument. There are design flaws in those when attempting to use cc switches.


----------



## jononotbono (Feb 7, 2020)

Dewdman42 said:


> attributes do always send the keyswitch before the note. Rumors to the contrary are not true. Also, if you move the note, the attribute moves with it. Poly-articulation chords are possible with it.
> 
> direction has its uses too, but my recommendation would be to use attribute most of the time and only use direction when you specifically need it


Then I am curious why JXL is using Direction. Needless to say, he's one of the biggest film composers in the world right now, and it's safe to say this guy knows his tech better than most. Sure, we all have our ways of doing things, but if your recommendation is to use Attribute, why doesn't he? Curious, did you watch the video I just linked? If Attribute were more reliable, why wouldn't Tom be doing that?


----------



## jononotbono (Feb 7, 2020)

@babylonwaves

Hey man, reading the manual and also curious about the standardization of keyswitching you have used. Here's a screenshot from the manual. So, whereabouts are Staccatissimo, Short Digs, etc. that are all in SCS and SSS?


----------



## kimgaboury (Feb 7, 2020)

You would think that the big orchestral libraries would all come with extensive expression maps, alternate versions, etc. already set up. I'd pay extra for that. Who wants to spend hours setting that up (when you don't have assistants)?


----------



## jononotbono (Feb 7, 2020)

kimgaboury said:


> You would think that the big orchestral libraries would all come with extensive expression maps, alternate versions, etc. already set up. I'd pay extra for that. Who wants to spend hours setting that up (when you don't have assistants)?



Well, I agree, but people like to set stuff up the way they want it, and it's impossible to provide that kind of thing for everyone. I too am looking for a way of easing the suffering, but I don't think it's possible! Unless, of course, you buy some of these packs, which are really just to get someone started. Well, that's my impression anyway!


----------



## Dewdman42 (Feb 7, 2020)

His video comes up as unavailable for me.

Did he explain why he's using Direction? Direction has certain specific advantages, and perhaps he wants those. I have no idea why he chose it in that case. As I said above, when you need one of those advantages, by all means use Direction!

For one thing, I am more confident that Attributes will always fire before the notes they are attached to. Directions are not attached to notes, so maybe they will fire first or maybe not. If they are CC switches, then you could have problems either way.

Directions aren't attached, so you can make broader strokes instead of having to articulate every note. Directions can trigger after the note attack, as a feature. Etc. There are pros and cons both ways.


----------



## jononotbono (Feb 7, 2020)

Dewdman42 said:


> His video comes up as unavailable for me.



That's a shame because you really should watch it. It's basically the best video out there on Expression Maps. When you have the time, search for *"Creating Expression Maps [Studio Time: S3E13]"* on Junkie XL's YouTube channel.


----------



## babylonwaves (Feb 7, 2020)

jononotbono said:


> Hey man, reading the manual and also curious about the standardization of keyswitching you have used. Here's a screenshot from the manual. So, whereabouts are Staccatissimo, Short Digs, etc. that are all in SCS and SSS?


the diagram just shows the lowest octave. the KS in there are universal for every library. from C-1 and higher you find the rest of the remaining KS, your short digs etc.
this way you find the most important ones always in the same place and the rest further up. i have the universal ones on my metagrid main page and if i need the more "exotic" ones, i just select them from the list in cubase/logic.
as for your other question about the time machine patches, i believe we don't have them - simply because i never did look at them. doh! but thanks for pointing this out, they'll come with the next sub-release.
thinking about it, you could also just use the combined template (which contains everything) and pick the shorts there - they will trigger the right articulations in the TM patches.


----------



## Alex Fraser (Feb 7, 2020)

babylonwaves said:


> as for your other question about the time machine patches, i believe we don't have them - simply because I never did look at them. doh! but thanks for pointing this out, they'll come with the next sub release.


I was planning to use the standard Babylon "CB" maps in Logic - the Time Machine articulations respond to the same UACC KS velocities as the standard patches.

But now you've said this, I'll be refreshing my inbox repeatedly for the update.


----------



## jononotbono (Feb 7, 2020)

babylonwaves said:


> the diagram just shows the lowest octave. the KS in there are universal for every library. from C-1 and higher you find the rest of the remaining KS, your short digs etc.
> this way you find the most important ones always in the same place and the rest further up. i have the universal ones on my metagrid main page and if i need the more "exotic" ones, i just select them from the list in cubase/logic.
> as for your other question about the time machine patches, i believe we don't have them - simply because i never did look at them. doh! but thanks for pointing this out, they'll come with the next sub-release.
> thinking about it, you could also just use the combined template (which contains everything) and pick the shorts there - they will trigger the right articulations in the TM patches.



Interesting and thanks

Yes, the TM patches are great and I use them a lot now, so I would definitely add those to the maps. Perhaps they would actually be on a separate MIDI track. The main reason is that these bad boys use a lot of RAM, so if they are all loaded in VEPro on their own MIDI port and instance of Kontakt, they can be disabled to save RAM instead of being always enabled.

Can you explain to me how expression maps work across different MIDI ports? For example, in SSS there are over 30 arts for VLN1. Needless to say, they have to use a few MIDI ports, as you can only have 16 channels per port. Is this easy to do with expression maps?

I noticed that your Berlin Strings maps have an expression map for each part of each section (one for shorts, one for sustains, etc.). How come these aren't all in one expression map? Is it because of MIDI ports?

Just tryin to get my head around all of this and minimize wasting my time.


----------



## Dewdman42 (Feb 7, 2020)

Cubase treats different ports as separate tracks, so as far as I can see you can only have 16 MIDI channels per expression map. The track itself has to choose which port to send all its MIDI to.

Maybe there is something wrong with the video link you posted today at a certain time point; it says unavailable. The one-hour video I searched for yesterday and started to watch, but he was taking too long to say what he needed to say, and I haven't had an hour to sit through it. I plan to at some point.


----------



## jononotbono (Feb 7, 2020)

Dewdman42 said:


> Cubase treats different ports as separate tracks, so as far as I can see you can only have 16 MIDI channels per expression map. The track itself has to choose which port to send all its MIDI to.
> 
> Maybe there is something wrong with the video link you posted today at a certain time point; it says unavailable. The one-hour video I searched for yesterday and started to watch, but he was taking too long to say what he needed to say, and I haven't had an hour to sit through it. I plan to at some point.



Go to 10 mins in and watch from there. You will instantly see how unreliable Expression Maps are.


----------



## Dewdman42 (Feb 7, 2020)

With direction only?


----------



## jononotbono (Feb 7, 2020)

He doesn't use Attribute so yes, this is with Direction. It would be helpful if you watched his video as he talks about all sorts of stuff including using CCs and Volume balancing in the Output Settings.

And as I have previously said, JXL knows his stuff, more than most people. There is a reason why he doesn't use Attribute. And my guess is... how can Attribute possibly change the arts properly if they are attached to the note and aren't detected before the notes? It's impossible. It's why people use Direction and drag them slightly before each note. Attributes are ON the note, not before. Anyway, thanks for all your input here, but let's not go round in circles with this stuff. Ah, what am I talking about. It's VI-C!


----------



## Dewdman42 (Feb 7, 2020)

jononotbono said:


> He doesn't use Attribute so yes, this is with Direction.



In my view that's why. 



> It would be helpful if you watched his video as he talks about all sorts of stuff including using CCs and Volume balancing in the Output Settings.


I hear you. Can't today



> And as I have previously said, JXL knows his stuff, more than most people. There is a reason why he doesn't use Attribute. And my guess is... how can Attribute possibly change the arts properly if they are attached to the note and aren't detected before the notes? It's impossible. It's why people use Direction and drag them slightly before each note. Attributes are ON the note, not before. Anyway, thanks for all your input here, but let's not go round in circles with this stuff. Ah, what am I talking about. It's VI-C!



No. Attributes are attached to notes. Cubase knows to send the keyswitches before the notes because they are attached.

Directions are not attached to notes, so a Direction is more like a CC lane event. It might happen before the note or it might not... you would need to nudge it earlier if you want to make sure it's before. In my view, Cubase OUGHT to always send Directions before notes on any given sample time, but hey, take that up with Steinberg. In the meantime, if you want to use Directions, you might have to nudge them earlier. Attributes you should not have to nudge.

The advantage of Directions is that you can put one in once, like a CC lane message, and it's good until further notice. A Direction simply sends the keyswitches at the point where you put it... I have heard it stated here by Ivan that if you put a Direction after a note is already sustaining, it may send those keyswitches at that point in time too, and some rare sample libraries do have keyswitches you can engage while holding a note, so that would come in handy there.

Directions are convenient: if you have a long phrase of staccatos, you can set one before the phrase (nudged early) and then you don't have to attach Attributes to each note. But you do have to nudge it early.

Attributes are attached to the notes and just work.

Try it you will see.
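A toy sketch of the event ordering described above, using a hypothetical `(time, type, pitch)` event model rather than Cubase's internal representation: an attribute expands so the keyswitch note-on precedes the musical note and its note-off follows the note's end.

```python
# Hypothetical event model: expand an attribute-tagged note into the
# ordered MIDI events described above. Events are (time, type, pitch).

def expand_attribute(note, keyswitch):
    """Return events with the keyswitch bracketing the musical note."""
    start, end, pitch = note
    return [
        (start, "note_on", keyswitch),   # keyswitch sent first...
        (start, "note_on", pitch),       # ...then the note itself
        (end, "note_off", pitch),
        (end, "note_off", keyswitch),    # keyswitch released after the note
    ]

# A middle C (pitch 60) lasting one beat, tagged with keyswitch pitch 24:
for event in expand_attribute((0.0, 1.0, 60), keyswitch=24):
    print(event)
```

Because the keyswitch travels with the note, moving the note moves its articulation too, which matches the behavior reported for Attributes.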


----------



## jononotbono (Feb 7, 2020)

Dewdman42 said:


> Attributes are attached to the notes and just work.



So you are saying Attribute never misses an art?

I've used Direction many times (always having to put the changes slightly before each note) because I always found Attribute not to work well. I made expression maps for the whole 8Dio Adagio and Agitato libraries a few years back (and gave them away here somewhere), but it's been so long that I haven't bothered since, and no doubt they would be bad now. Although I do remember having to put an empty Sound Slot first because of some bug at the time. Yes, I will try Attribute when I make an expression map this weekend. Having to click each one in, for every note? That sounds like a massive pain in the ass. And that's only the start if they aren't reliable. But hey, you are saying they are fine.

Anyone else use Attribute?


----------



## lucor (Feb 7, 2020)

jononotbono said:


> So you are saying Attribute never misses an art?
> 
> I've used Direction many times (always having to put the changes slightly before each note) because I always found Attribute not to work well. I made expression maps for the whole 8Dio Adagio and Agitato libraries a few years back (and gave them away here somewhere), but it's been so long that I haven't bothered since, and no doubt they would be bad now. Although I do remember having to put an empty Sound Slot first because of some bug at the time. Yes, I will try Attribute when I make an expression map this weekend. Having to click each one in, for every note? That sounds like a massive pain in the ass. And that's only the start if they aren't reliable. But hey, you are saying they are fine.
> 
> Anyone else use Attribute?


Attributes are BY FAR the superior method imo. So much easier to change things around.
And you don't have to "click each one in", you just select all notes that you want to be e.g. Spiccato and then up top in the midi editors inspector there is a drop-down menu where you can assign your articulation.


----------



## jononotbono (Feb 7, 2020)

lucor said:


> Attributes are BY FAR the superior method imo. So much easier to change things around.
> And you don't have to "click each one in", you just select all notes that you want to be e.g. Spiccato and then up top in the midi editors inspector there is a drop-down menu where you can assign your articulation.



I shall give it a go.


----------



## Dewdman42 (Feb 7, 2020)

It hasn't happened to me yet, but I am just getting started with Cubase, so there is that. I also did some MIDI monitor testing to confirm it was always sending the keyswitch in front of the note... and it was. The Attribute method works basically like Logic Pro articulation IDs: it sends the keyswitch in front of the note, and the note-off of the keyswitch comes after the note ends. Logic behaves exactly the same way with articulation IDs.

I have read people on other forums saying to use only Attribute-type expressions because of various problems they had trying to use Directions. I don't think it needs to be all one way or the other, though. Directions have their place; it's just that they aren't actually attached to the notes, so you might have to nudge them a little earlier when you do use them.

As for assigning them to notes, you can lasso a group of notes, and above the piano roll there is an attribute control that lets you assign the same attribute to a bunch of notes in one go. So it's not as bad as having to click on the expression map lane for every note. As to whether that is more work than clicking on one Direction and then nudging it earlier... I leave that for you to decide.


----------



## Dewdman42 (Feb 7, 2020)

and to re-iterate...be careful with CC switches and kontakt.


----------



## jononotbono (Feb 7, 2020)

Dewdman42 said:


> and to re-iterate...be careful with CC switches and kontakt.



Why? JXL clearly shows that they work in his expression maps. I'm not understanding why you are against using CCs in Expression Maps. What are the potential problems?


----------



## Dewdman42 (Feb 7, 2020)

CCs with Kontakt are what I'm saying to be careful with. In some cases it might work fine with other instruments, and even with Kontakt it can work sometimes, but sometimes you may have trouble. So when you have the option to use a normal keyswitch instead of a CC switch, I recommend that.

There are some other long-winded threads on here about it if you want to read more, but Kontakt has some design flaws in the way it handles CCs, so there are some situations where CC switches will not work as intended in Kontakt. VST3 instruments also have this problem.

This comes up especially when using either Logic Pro Articulation Sets or Cubase Expression Maps, because they try to send the keyswitches on the same sample time.


First issue: if you have a chord with different articulations on each note of the chord, the whole chord will end up with the same articulation; one note's articulation will be applied to all the notes of the chord. The workaround is to stagger the MIDI events slightly so that they aren't on the same timestamp; then the individual articulations will be recognized. You do not need that workaround with note-based keyswitches. This only happens with CC switches.


Second issue: if for some reason you need to send a series of CC switches before a note and they use the same CC number, only the last CC will be received by Kontakt. For example, if you had to send CC58=5, CC58=10, (note), Kontakt would only see CC58=10, (note).

This is due to a design flaw in Kontakt. Often when people are having problems with expression maps, I would argue they are really suffering from Kontakt's flaws. It shows up more prominently with expression maps because, with a quantized chord, the expression map tries to send everything on the same timestamp, which is exactly what it should do, but Kontakt is just lame about receiving it.

VST3 instruments also generally have this flaw, more so issue #1 than issue #2.

In any case, if you have an option to avoid CC switches, then I say do so....at least with kontakt.
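The stagger workaround for same-timestamp CC switches can be sketched like this. The `(time, message)` event model and the `stagger()` helper are hypothetical illustrations, not a real Kontakt or Cubase API:

```python
# Hypothetical event model: (time, message) pairs. The stagger() helper
# nudges events that share a timestamp apart by a tiny offset so a
# CC-switched chord is not delivered on a single sample time.

def stagger(events, offset=0.001):
    """Shift same-timestamp events apart by `offset` each, preserving order."""
    out, last_time, bump = [], None, 0
    for time, msg in sorted(events, key=lambda e: e[0]):
        bump = bump + 1 if time == last_time else 0  # count repeats of this timestamp
        last_time = time
        out.append((time + bump * offset, msg))
    return out

# A quantized chord where every note carries its own CC switch:
chord = [(1.0, "CC58=5 then C4"), (1.0, "CC58=10 then E4"), (1.0, "CC58=20 then G4")]
for event in stagger(chord):
    print(event)
```

After staggering, the three switch-plus-note pairs arrive at 1.000, 1.001, and 1.002 instead of all at once, which is the manual workaround described above expressed programmatically.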


----------



## Dewdman42 (Feb 7, 2020)

Regarding JXL's CC7 approach, I already gave my opinion on that. It isn't really because of the Kontakt flaw, though... well, it might be if you had a chord with a different articulation on each note (for example, with divisi writing).

But in general, if you never have polyphonic tracks, or you know for sure that on any given track there will never be two simultaneous notes with different articulations, and that you will never need a sequence of CC switches per note, then CC switches can be fine, including CC7 if you must.


----------



## Uiroo (Feb 7, 2020)

jononotbono said:


> In this JXL video, Tom shows how an art doesn't play and he's using Direction. I'm imagining Attribute being a complete shit show if this is the case?!


That specific clip worried me quite a bit before I started making expression maps, but I've never had that bug since I started working with them.
Works great.

I'm using Direction; why anyone would use Attribute is beyond me.
The great thing about Directions is that they don't trigger the articulation at a specific point, but for the whole area until the next Direction.

So no playback madness when skipping around in your piece because you didn't play through all the necessary keyswitches.

edit: if you really want to assign the articulation to each note, OK, that might solve it; it's just not my preferred method.


----------



## Dewdman42 (Feb 7, 2020)

I had the JXL video playing in the background while working on some other stuff... One interesting workflow he pointed out: with Directions you can set up the articulations before recording any notes. Then you can record your notes while the articulations change automatically as you play - pre-programmed patch changes. Might come in handy. You can't do that with Attributes; they have to be attached to notes that are already recorded. So that is a cool use case for DIRECTIONS.

He didn't cover GROUPS at all, which I feel is a very powerful feature of Expression Maps. When you use groups, it suddenly makes a lot of sense to use DIRECTIONS for, say, the secondary group. For example, let's say your first group is the basic articulation (i.e., staccato, spiccato, etc.) and the second group is divisi. It might make sense to put the second group as DIRECTIONS... or it might not; perhaps dynamics would fit better there. In his demo he could have put light, medium, strong as group 2, and then staccato, long as the first group. Honestly, I don't think he set up that expression map the best way for what he was trying to achieve. But the GROUP feature is hard for a lot of people to wrap their heads around, so there is that.

He also didn't cover dynamics mapping, and he seems to be throwing CC7 into his articulations as an alternative to using a Cubase Dynamics Map. As I said earlier, whatever works for you, but I would still advise against putting CC7 into your fundamental expression map. Keep dynamics separate. CC7 affects the whole channel, so unless you are also channelizing the articulations to separate instruments, that can eventually cause issues... and anyway, he really should have put CC7 into group 2 rather than embedding it in group 1.
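As a rough mental model of groups (the mapping below is hypothetical, not an actual Cubase data structure): each sound slot is selected by the combination of one entry from each group, so group 2 can change independently of group 1.

```python
# Hypothetical mapping: the active sound slot is chosen by one entry from
# group 1 (basic articulation) combined with one entry from group 2
# (here, a dynamics layer). Keyswitch strings are made-up placeholders.

slots = {
    ("staccato", "light"):  "KS C0 + KS C1",
    ("staccato", "strong"): "KS C0 + KS D1",
    ("long", "light"):      "KS D0 + KS C1",
    ("long", "strong"):     "KS D0 + KS D1",
}

def active_slot(group1: str, group2: str) -> str:
    """Look up the keyswitch combination for a (group 1, group 2) pair."""
    return slots[(group1, group2)]

# Changing only group 2 leaves the group 1 articulation untouched:
print(active_slot("staccato", "light"))
print(active_slot("staccato", "strong"))
```

This is why two groups of N and M entries cover N x M combinations without needing a separate slot drawn for every pairing in the lane.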


----------



## Dewdman42 (Feb 7, 2020)

Uiroo said:


> So no playback madness when skipping parts in your piece because you didn't play through all the necessary keyswitches.
> 
> edit: if you really want to assign the articulation to each note, ok, that might solve it, not really my preferred method.



Actually, Attribute-type articulations are more likely to ensure that you can start playback anywhere and the keyswitches will always be sent. But it's good to hear that DIRECTION articulations also _chase_ when they need to.


----------



## Uiroo (Feb 7, 2020)

Dewdman42 said:


> I had JXL video playing in the background while working on some other stuff... One other interesting workflow he pointed out is that with Directions you can setup the articulations before recording any notes. Then you can record your notes while the articulations change automatically as you play. Pre-programmed patch changes. Might come in handy. You can't do that with attributes. The attributes have to be attached to notes that are already recorded. So that is a cool use case for DIRECTIONS.


Yeah, that's quite cool actually. I often program stuff first to get the part writing down and then play it in after that, so that's quite handy


----------



## Pablocrespo (Feb 7, 2020)

jononotbono said:


> So you are saying Attribute never misses an art?
> 
> I've used Direction many times (always having to put them slightly before each art change) before as I always found Attribute not to work well. I made expression maps for the whole 8Dio Adagio and Agitato libraries a few years back (and gave them away here somewhere) but it's been so long I haven't bothered since and they would have been bad no doubt. Although I do remember having to put an empty Sound Slot as the first for some bug at the time. Yes, I will try Attribute when I make an expression map this weekend. Having the click each one in, for every note? This sounds like a massive pain in the ass. And that's only the start if they aren't reliable. But hey, you are saying they are fine.
> 
> Anyone else use Attribute?


I've been using attribute here for years and never ever missed an art (not using groups or anything fancy). Do a couple of stress tests and you will see if it happens to you.

And if your touchscreen app can record mouse movements, set up a button for every attribute in the dropdown menu in the info line; you will smack your head for not trying it before:

Select some notes, press the pizz button, there you go, all pizz, and so on.
It speeds up your workflow tenfold.


----------



## jononotbono (Feb 7, 2020)

Pablocrespo said:


> and if you have your touchscreen app and it can record mouse movements have a button for every attribute in the dropdown menu in the info line, you will smack your head for not trying it before:



Well, I was going to set up a bidirectional thing so the art buttons change on each MIDI track. So essentially, each button fires off a note value, or a program change. That in itself is a massive amount of work, so I'm going to build a small Exp Map in a bit as I'm now behind Cubase. Why record mouse clicks? Interesting.


----------



## Akarin (Feb 7, 2020)

jononotbono said:


> Why? JXL Clearly shows that they work in his expression maps. I'm not understanding why you are against using CCs in Expression Maps? What are the potential problems?



Mostly because Cubase doesn't retrigger CC value changes mid-phrase. If you stop the playback mid-phrase and restart it, it will revert to the default articulation. This doesn't happen with keyswitches.


----------



## Pablocrespo (Feb 7, 2020)

jononotbono said:


> Well, I was going to set up a bidirectional thing so the art buttons change on each MIDI track. So essentially, each button fires off a note value, or a program change. That in itself is a massive amount of work, so I'm going to build a small Exp Map in a bit as I'm now behind Cubase. Why record mouse clicks? Interesting.



I use mouse recording because Cubase doesn't let you select the attributes menu by going to the info line and pressing Tab until you reach it; Tab skips over it (Steinberg magic!), so a keyboard macro is not possible.

But I use dtouch to record the mouse and set a button to activate it.
It has an expression map section that changes with the keyswitches, but I haven't used it yet. You can select a MIDI channel and the expression map dynamically changes, I think.


----------



## jononotbono (Feb 7, 2020)

Pablocrespo said:


> I use mouse recording because Cubase doesn't let you select the attributes menu by going to the info line and pressing Tab until you reach it; Tab skips over it (Steinberg magic!), so a keyboard macro is not possible.
> 
> But I use dtouch to record the mouse and set a button to activate it.
> It has an expression map section that changes with the keyswitches, but I haven't used it yet. You can select a MIDI channel and the expression map dynamically changes, I think.



Ah, gotcha. Well, I have a Slate Raven so Batch Commander can do this. Hmmm, there seems to be a lot of different ways of doing this stuff!


----------



## babylonwaves (Feb 7, 2020)

jononotbono said:


> Can you explain to me how expression maps work using different Midi Ports? For example, in SSS, there are over 30 arts for VLN1. Needless to say, they have to use a few Midi Ports as you can only have 16 per port. Is this easy to do with expression maps?
> 
> I noticed with your Berlin Strings Maps that you have Expression maps for each part of each section (1 for shorts, 1 for sustains etc)? How come these aren't all in one Expression Map? Is it because of Midi Ports?



You can combine the core/deco/legato instruments in SF libraries into one big thing using the same MIDI channel and it all just works (when you use UACC KS to control the instruments). That's why there is only one instrument for the entire set of articulations. You can't do this with OT Capsule, and since their defaults aren't designed to offer this comfort, Art Conductor supports only what's available in their stock instruments.


----------



## jononotbono (Feb 7, 2020)

babylonwaves said:


> You can combine the core/deco/legato instruments in SF libraries into one big thing using the same MIDI channel and it all just works (when you use UACC KS to control the instruments). That's why there is only one instrument for the entire set of articulations. You can't do this with OT Capsule, and since their defaults aren't designed to offer this comfort, Art Conductor supports only what's available in their stock instruments.



Ok, that makes sense. So a decent feature request for Cubase would be to allow different MIDI ports within Expression Maps, so you could technically have 48 x 16 arts in one map. Silly number, I know, but that's the maths of it, if it was possible.


----------



## Dewdman42 (Feb 7, 2020)

I agree that would be super cool, but it's unlikely because of the way Cubase handles MIDI track routing. Each track has to choose a port, and then that track has a bunch of attributes, including which expression map for that port. It wouldn't matter if expression maps themselves had a PORT attribute unless they completely upgraded MIDI track routing in Cubase, which is unlikely.

I'm not sure you really want so many things in one expression map anyway, it would be ridiculous to try to maintain that many slots. Could easily be literally thousands of slots, especially if you use the groups feature at all. Not only that, but then you would have to have this ginormous singular expression map that is laid out exactly like your overall project template with very little ability to shift tracks around or reuse in different ways. Much better to have smaller modular expression maps.

But i hear you, there isn't a very good way to handle multi-channel articulations across more than 16 channels across multi ports. Only LogicPro can do that right now.

Cubase generally is a bit limited by the fact that each MIDI track can only send to one MIDI port at a time. I have an idea for a VST3 plugin that could work around that limitation, but it's just a fantasy right now. I also have some ideas about using scripting, but that still requires some MIDI routing improvements, either in Cubase or in one of the third-party plugin hosts like BlueCatAudio Patchworks. Otherwise, you're simply limited to 16 articulation channels per instrument...


----------



## JamieLang (Feb 10, 2020)

jononotbono said:


> Have you got any examples of your string programming online? Be interested in hearing something you’ve done to hear your technique!



I don't really have strings solo'd or anything, but there are strings I did on the album here...

jamielang.bandcamp.com

from memory--Girl I used to Know and With Each Passing Hour have them...likely others, but it's been a while. I just did the strings for a new record last year--which found me doing an abnormally high amount of them (I think 5 or 6 of the 9 songs?)...I guess I could post some solo'd stuff from it--but, I don't think I have a terribly unique technique--it's LITERALLY how most string instruments** were intended to be used. It was, as example to go back--the advantage in the VSL Vienna Instruments over the Gigastudio version of their samples with that add on that served as a kind of early input scriptor...I forget what that was called...good riddance. 

**prior to the latest versions of the uber input scripted "playable" Kontakt instruments where they're shooting for you not controlling articulations at all, but instead having AI determine what makes the most sense.


----------



## Guitarsound77 (Feb 11, 2020)

Dewdman42 said:


> I agree that would be super cool, but its unlikely because of the way Cubase handles midi track routing. Each track has to choose a port, and then that track has a bunch of attributes including which expression map for that port. It wouldn't matter if expression maps themselves had a PORT attribute unless they completely upgrade midi track routing in cubase, which is unlikely.
> 
> I'm not sure you really want so many things in one expression map anyway, it would be ridiculous to try to maintain that many slots. Could easily be literally thousands of slots, especially if you use the groups feature at all. Not only that, but then you would have to have this ginormous singular expression map that is laid out exactly like your overall project template with very little ability to shift tracks around or reuse in different ways. Much better to have smaller modular expression maps.
> 
> ...




Cubase CAN send MIDI to more than one MIDI port. Just use MIDI sends. There are 4 MIDI sends + the main MIDI out.


----------



## Dewdman42 (Feb 11, 2020)

MIDI sends do not change the situation. There is no way to specify, on a per-note basis, which send to route the MIDI to. All that does is duplicate all MIDI to a couple of other ports if you want. The track inspector only allows you to specify which channel to send MIDI to. If you use a send, then, for example, all events on MIDI channel 4 will go both to the main MIDI port and to each of the configured sends... These sends are not recognized as independent MIDI channels by anything inside the track, including the inspector, expression maps, MIDI plugins, etc. They can only designate channels 1-16. Then sends will duplicate whatever is there to a few other ports if you wish. That is not good enough to support expression maps addressing multiple ports.


----------



## shawnsingh (Feb 19, 2020)

Dewdman42 said:


> I had JXL video playing in the background while working on some other stuff... One other interesting workflow he pointed out is that with Directions you can setup the articulations before recording any notes. Then you can record your notes while the articulations change automatically as you play. Pre-programmed patch changes. Might come in handy. You can't do that with attributes. The attributes have to be attached to notes that are already recorded. So that is a cool use case for DIRECTIONS.
> 
> He didn't cover GROUPS at all, which I feel is a very powerful feature of Expression Maps. When using groups then it suddenly makes a lot of sense to use DIRECTIONS for maybe the secondary group. For example, lets say your first group is the basic articulation, (ie, staccato, spiccato, etc.), then the second group is divisi. Might make sense to put the second group as DIRECTIONs.. or might not. Perhaps dynamics could be better there. For example in his demo he could have put light, medium, strong as group 2, and then staccato, long as the first group. I don't think he really setup that expression map the best way for what he was trying to achieve honestly. But the GROUP feature is hard for a lot of people to wrap their head around. so there is that.
> 
> He also didn't cover dynamics mapping, and he seems to be throwing CC7 into his articulations as an alternative to using the Cubase Dynamics Map. As I said earlier, whatever works for you, but I would still advise against putting CC7 into your fundamental expression map. Keep dynamics separate. CC7 affects the whole channel, so unless you are also channelizing the articulations to separate instruments, that can cause issues eventually... and anyway, really he should have put CC7 into group 2 rather than embedded in group 1.




Well I haven't been through this entire thread, but wanted to jump in because I'm just now discovering the potential greatness of groups. Here is what I think I understand about groups:

- "articulations" are what appear in the midi editor, not "sound slots"

- you can have up to four simultaneous articulations that apply to each midi note - one articulation from each group.

- cubase will try its best to find a sound slot that matches the simultaneous articulations that are specified. that sound slot then has a corresponding output mapping which is what cubase outputs / transforms.

I wish it were possible to associate key switches to articulations too, in addition to assigning to sound slots.

I see how it may become a bit of nightmare to define all possible sound slots for multiple simultaneous articulations. But after that initial setup, it seems like the workflow could be better for me - it would allow me to make keyswitches more consistent across libraries (for initial midi recording, prototyping, and for copy-pasting across libraries), while still having the ability to manually edit with all the unique articulation options of each library for better end results. But it will be a lot of work to set it up...
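The matching-and-fallback idea described above can be sketched in a few lines. This is a simplified illustration of the concept, not Cubase's actual algorithm; the slot names and keyswitch notes are made up:

```python
# Hypothetical sound slots: each maps a set of active articulations
# (at most one per group) to an output keyswitch. Purely illustrative.
SOUND_SLOTS = {
    frozenset({"legato"}): "C0",
    frozenset({"con sordino"}): "D0",
    frozenset({"legato", "con sordino"}): "E0",
}

def resolve(articulations):
    """Find the sound slot matching all active articulations.
    If no exact match exists, fall back by dropping one
    articulation at a time until a slot matches."""
    active = frozenset(articulations)
    if active in SOUND_SLOTS:
        return SOUND_SLOTS[active]
    for art in sorted(active):
        reduced = active - {art}
        if reduced in SOUND_SLOTS:
            return SOUND_SLOTS[reduced]
    return None

print(resolve(["legato", "con sordino"]))  # exact match: E0
print(resolve(["legato"]))                 # plain legato: C0
print(resolve(["legato", "flautando"]))    # no slot for flautando, falls back: C0
```

The fallback step is what makes the "best match" behavior possible: a combination with no dedicated slot still degrades gracefully to the nearest defined one.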


----------



## Dewdman42 (Feb 20, 2020)

Yes correct. Groups are a great way to work but setting them up is a complete pain because the expression map editor basically sucks


----------



## Sevenfold (Mar 11, 2020)

A.G said:


> Expression Map Groups
> 
> 
> Hi everyone, so I’ll start with my goal and then go into what I’m doing to achieve this. Then hopefully you can help me figure out why its not working. So I use LASS and have an expression map built with 5 long note articulations: Sus (arco) midi channel 1 Sordino midi channel 2 Harmonics...
> ...



The post Ivan referenced on Steinberg's forum was helpful for understanding Expression Map Groups. For me it showed more clearly the problem that Cubase was likely trying to solve initially: as a composer, it would be natural to include both Legato and Sordino articulations at once, especially while scoring a piece. I could create a Sound Slot that has a "Legato and Sordino" articulation, but they are really separate arts. So I'd want to mark notes or passages with both articulations.

If I have a sound library that includes a specific instrument for that combination, I'd want that "Legato and Sordino" instrument to play it. But if I don't, I'd likely want at least a Legato instrument to play it. Using the Groups feature, Cubase can fall back to the primary articulation, Legato. It seems very flexible to me, and if I have a library with many separate arts I'd have to map them out anyway.










But then I tried to create what the above forum post described... and it got murky. If I go into the Score Editor as mentioned, I get 2 Con Sordino arts:






First of all, the Score Editor version of the Expression Map list is horrible. The names are cut off at 6 characters. I already applied the "legato" art at this point, and wanted to add the Con Sordino to the score. Which "con so" is the one I want though? When I selected the first one, Cubase ended up applying 2 articulations from the same group, Group 1. It just played the "Legato" instrument. Wrong guess, so I chose the 2nd Con Sordino and it displayed as "con sord.2" in the Score Editor, which is nonsensical. But at least it somehow played the correct combination "Legato + Sordino" instrument. I guess success? It seems Cubase won't allow the same articulation name twice, so it forces a number at the end. Somehow it knows that "con sord." and "con sord.2" are the same articulation though, but it's not useful for scoring.






I'm thinking I'll just scratch the Group feature off my list of useful features for now. It was exciting then it wasn't. I'm more confused now than before I thought I understood how to use it.


----------



## youngpokie (Mar 12, 2020)

Sevenfold said:


> The post Ivan referenced on Steinberg's forum was helpful for understanding Expression Map Groups.



I'm not clear why you have two sordino articulations in your example. My understanding (perhaps wrong) is that if you create legato and sordino separately, in different groups, you can then create a combined Legato con Sordino articulation.


----------



## Sevenfold (Mar 12, 2020)

youngpokie said:


> I'm not clear why you have two sordino articulations in your example. My understanding (perhaps wrong) is that if you create legato and sordino separately, in different groups, you can then create a combined Legato con Sordino articulation.



I was building out the example in the Cubase forum post I referenced, but you're correct that it wouldn't make sense to have the same articulation in more than one group. That would eliminate some of the confusion but I'm still struggling with the workflow. It feels a bit rigid rather than fluid. There are a number of times where it would be useful though.


----------



## Dewdman42 (Mar 12, 2020)

In order to have pizzicato and legato, and separately mute and non-mute:

You would put pizz and legato each in group 1, and sordino and non-sordino in group 2. You need 4 sound slots in total, for all combinations of the 4.

It may also be possible to treat a missing group-two value as meaning non-sordino, I think, but you still need all 4 sound slots. If you use attribute expressions, then no entry for sordino can mean that, but if you try to use the direction type, you will need an explicit mute-off articulation.

I will post graphics of this example later.
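The slot count above is easy to sanity-check mechanically; a throwaway sketch (the articulation names are just placeholders):

```python
from itertools import product

# Group 1: the basic articulation; Group 2: the mute state.
group1 = ["pizzicato", "legato"]
group2 = ["con sordino", "senza sordino"]

# Every note carries one value from each group, so the map
# needs one sound slot per combination: 2 x 2 = 4 slots.
slots = [f"{g1} + {g2}" for g1, g2 in product(group1, group2)]
for slot in slots:
    print(slot)
print(len(slots))  # 4
```

With more groups the count multiplies quickly (e.g. adding a 3-value dynamics group would mean 2 x 2 x 3 = 12 slots), which is why large grouped maps get painful to maintain.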


----------



## Dewdman42 (Mar 12, 2020)

Here are some screen shots...

Basically, if you use *ATTRIBUTE* style expressions, you can do it like this:






Then you will see on your lanes this:






In the above example, the first note is con sardino and the second two notes do not have it. 

The above only works if the Con Sordino articulation is an *ATTRIBUTE* type expression, which means it will need to be explicitly set on every muted note.

If you want to use the *DIRECTION* expression type, then you will need both a mute and unmute articulation. It would look something like this:






With the above, there are both mute and unmute lanes... and in this mode you can turn mute on once for a phrase and turn it off again later, until further notice, etc.


----------



## jononotbono (May 25, 2020)

Ok, so enough is enough. I'm about to start building expression maps.

Should I use Program Changes or Keyswitches?


----------



## Jimmy Hellfire (May 25, 2020)

Dewdman42 said:


> In the above example, the first note is con sardino and the second two notes do not have it.



I do love me some con sardino. True gelato of course is a must. I wonder why nobody samples prosciutto bowing though!


----------



## Dewdman42 (May 25, 2020)

Hehe yea. My iPhone spell check gets worse every year.


----------



## jononotbono (May 25, 2020)

What's best to map out the Expression maps? Key switches, Midi Channels, CC Values, Program Changes?


----------



## Dewdman42 (May 25, 2020)

It depends


----------



## jononotbono (May 25, 2020)

Well, I don't wanna use CC values. I guess using a fader to select short arts in CS2 or CSS is fine, but I'd rather not have a fader do any of that. I want to use buttons on my touch screen to select them.


----------



## method1 (May 25, 2020)

I've been using Metagrid, that way I can combine various note selection presets with articulations.
I wish MG had a computer based editor though, it can be slow going setting up each button.


----------



## jononotbono (May 25, 2020)

And so the pain and suffering begins.


----------



## jononotbono (May 25, 2020)

Ok, so looking at the JXL Expression Map video again, specifically at the Output Mapping section. Is he using CC7 as an attempt to balance his arts in his template, or does this act as a MIDI chase kind of thing where the volume is always going to be at 90? Or the volume will never exceed 90?

Is anyone else here including a CC7 max volume?


----------



## method1 (May 25, 2020)

Not familiar with the library but that's a velocity ceiling on that articulation.


----------



## jononotbono (May 25, 2020)

method1 said:


> Not familiar with the library but that's a velocity ceiling on that articulation.



No. It's not velocity. It's volume. CC7.


----------



## method1 (May 25, 2020)

yeah oops, volume.


----------



## Reid Rosefelt (May 25, 2020)

Jimmy Hellfire said:


> I do love me some con sardino. True gelato of course is a must. I wonder why nobody samples prosciutto bowing though!



Also _al dente_. Nobody samples that either.

It's kind of like a Bartok snap, but with a bit more oomph, as it's done with needlenose pliers.


----------



## jononotbono (May 25, 2020)

Ok, so something I'm trying to get my head around. Say you have 1 MIDI track and you use it, for example, for VLN1, but you have more than 16 arts. If I create a 2nd MIDI track and use a different VEPro port, I obviously can have 16 more arts, but is there a way of having all 32 arts on 1 MIDI track? Some libraries, such as Spitfire Chamber Strings, have more than 16 arts for VLN1.


----------



## Mr Mindcrime (May 25, 2020)

jono - This may be a little off topic, but here goes. Every time I sit down to create a one-size fits all template I realize I have way more libraries than I probably should have. It would take me weeks maybe more, to create a template that had every articulation (not to mention mic positions) at my immediate disposal...fully routed and mixed. So the question is...when you create an orchestral template, do you put everything you own in it, or just a basic orchestral representation? And if it's the latter, do you set up blank tracks to be able to quickly insert patches as needed? How do you choose what to put in your main template?????


----------



## jononotbono (May 25, 2020)

Well, as an example, I have every single art from Spitfire Chamber Strings in my template. And SSS, SSB. The list goes on really. Never see the point in buying stuff never to use it. However, I do want to lower my track count.

So as an example, here are all Spitfire Symphonic Strings Ensemble Patches. There are 23 patches. How do I have all 23 of these in one Midi track? If I do it with Midi channels I can only do 16. I feel like my brain is about to click with this stuff but I need to get through the final barrier haha


----------



## rgames (May 25, 2020)

jononotbono said:


> ok so something I'm trying to get my head around. If you have 1 midi track and you use this, for example, for VLN1. But you have more than 16 arts. If I create a 2nd midi track and use a different VEPro port, I obviously can have 16 more arts but is there a way of having all 32 arts on 1 midi track? Some libraries, such as Spitfire Chamber Strings, have more than 16 arts for VLN1.


It sounds like you're using MIDI channels. Can you use CCs? Then you get up to 127. I have 35 artics loaded into each of my string tracks using CCs.


----------



## rgames (May 25, 2020)

jononotbono said:


> Well, as an example, I have every single art from Spitfire Chamber Strings in my template. And SSS, SSB. The list goes on really. Never see the point in buying stuff never to use it. However, I do want to lower my track count.
> 
> So as an example, here are all Spitfire Symphonic Strings Ensemble Patches. There are 23 patches. How do I have all 23 of these in one Midi track? If I do it with Midi channels I can only do 16. I feel like my brain is about to click with this stuff but I need to get through the final barrier haha


Ok - you can definitely use CC because that's exactly how I have it set up. Four or five MIDI channels and a bunch of CC32 values to select the artic on each channel.
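As a rough sketch of what such a CC-based articulation switch looks like on the wire (raw MIDI bytes; the channel, CC32 value, and note here are made-up examples, not any library's actual mapping):

```python
def cc(channel, controller, value):
    """Build a 3-byte MIDI Control Change message.
    Status byte is 0xB0 ORed with the 0-based channel."""
    return bytes([0xB0 | (channel & 0x0F), controller & 0x7F, value & 0x7F])

def note_on(channel, note, velocity):
    """Build a 3-byte MIDI Note On message (status 0x90)."""
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

# Hypothetical: select articulation #5 on channel 1 via CC32,
# then play middle C. The expression map (or the instrument's
# own CC32 mapping) interprets the CC value as the articulation.
msgs = [cc(0, 32, 5), note_on(0, 60, 100)]
print([list(m) for m in msgs])  # [[176, 32, 5], [144, 60, 100]]
```

Because a CC carries a 7-bit value, one controller gives far more articulation addresses than the 16 MIDI channels do, which is the advantage rgames is describing.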


----------



## Dewdman42 (May 25, 2020)

jononotbono said:


> ok so something I'm trying to get my head around. If you have 1 midi track and you use this, for example, for VLN1. But you have more than 16 arts. If I create a 2nd midi track and use a different VEPro port, I obviously can have 16 more arts but is there a way of having all 32 arts on 1 midi track? Some libraries, such as Spitfire Chamber Strings, have more than 16 arts for VLN1.



no you can’t. Each midi track has to point to one and only one midi port in cubase


----------



## jononotbono (May 25, 2020)

rgames said:


> Ok - you can definitely use CC because that's exactly how I have it set up. Four or five MIDI channels and a bunch of CC32 values to select the artic on each channel.



Thanks man. So I have this working using Program Changes, with everything assigned to MIDI Channel 1. Drawing in the data in articulation lanes works fine.

So the question I now have is: what should I assign on a touch screen so I can select the different program changes? This is probably horrendously simple, but I can't seem to select the arts via touch screen yet.


----------



## method1 (May 25, 2020)

If you are using Metagrid, the SF UACC arts are already included, so select from the list.
Otherwise send midi cc 32 and the matching value.


----------



## jononotbono (May 25, 2020)

Trying to avoid using UACC so that every library I have is consistent and uses the same method. But sure, UACC has worked for me in the past. Also not using MG at the minute.


----------



## method1 (May 25, 2020)

OK, so then it'd be CC32 at the chosen value for the articulation.


----------



## jononotbono (May 25, 2020)

ok, I'll try that thank you!

I've just noticed that having a different Program Change number on each art, but with everything on the same MIDI channel, actually outputs all arts on the same MIDI channel number, so they all play together, which is obviously bad.

Is there a reason why JXL is using Key switches in his Exp Map video instead of Prog Changes?


----------



## rgames (May 25, 2020)

jononotbono said:


> So the question I now have is what is it I should be assigning on a touch screen to I can select the different program changes? This is probably horrendously simple but I can't seem to select the arts via touch screen yet.


Not sure if you mean a Lemur-type tablet setup or a touch screen computer. If the former, just set it up so that the expression map uses PCs and then assign the appropriate PCs to the buttons in the Lemur project. That way your tablet/Lemur will send the PC, Cubase will receive it and assign the appropriate artic. That's how I have it set up. The blue buttons on the upper left hold all the artics and each one has unique PC. Those get sent to the expression map I showed above and the expression map assigns the appropriate artic.


----------



## g.c. (May 25, 2020)

Seconding rgames here.
Using Program Changes and Kontakt Banks
In the Kontakt Bank slots I load my basic 20 working artics, then leave slots 21 to 30 open, then load the rest of what I want available for the instrument into bank slots 31-127 (127 being the extreme).
Everything is then Globally purged in Kontakt.
I assign all Expression Maps just as if they were individual Kontakt instruments, each Map to an independent midi channel and independent Stereo Output, no omni.
So for me, a String Pizz Trem is not an every day artic, so it sits purged in that collection of arts from 31 to 127. When I do need it, I drag it into slot 21, or whatever and play it.
The gui gets real small after about 25 expression slots loaded into the Cubase Key Editor, and I don't want all of the many artics active all of the time.
I also coordinate all of the Key Switch assignments in my Exp Maps to match those in my VSL Matrixes, so I only have to learn 1 Key Switch system for the entire template.
Keeping the gui thinned out also gives me the visual space for notating both the articulation name and its corresponding Key Switch to be read from the Key Editor controller lane.
I have 7 basic exp map templates set up, governed by instrument family ranges.
i.e. Orch Basses, Tuba, Contra Bassoons, Bcls etc. all fit 1 of the 7 templates; Flutes, Vlns, etc. fit another, and so on.
g.c.


----------



## babylonwaves (May 26, 2020)

jononotbono said:


> Is there a reason why JXL is using Key switches in his Exp Map video instead of Prog Changes?


From what I've found, program changes are okay if you use directions, but with attributes you really should use notes. Also, with program changes it's going to be harder for you to program the maps: notes you can play to try things out, while program changes are a bit more abstract, for many reasons.

When you have to use multiple MIDI channels, make sure that your controllers are all sent to all channels; anything else ends up with a lot of controllers where you can't always see which channel (= articulation) they were sent to.

A universal system for the articulations you really use will save you a lot of time. If you keep the number of articulations low, you'll be able to learn it. Or you can put them on a trigger controller: basically anything that lets you attach MIDI strings to buttons.


----------



## stigc56 (May 26, 2020)

jononotbono said:


> And so the pain and suffering begins.


Have been there!!!


----------



## method1 (May 26, 2020)

Just a punt for @babylonwaves' Art Conductor: he's done most of the work of standardising articulations already, a huge time saver for the non-masochists among us.


----------



## Dewdman42 (May 26, 2020)

babylonwaves said:


> from what i've found, program changes are okay if you use directions. but with attributes you really should use notes. also,



One clarification. I think the above is mainly true if you are using VST3 instruments or kontakt. With other instruments it may be ok to use PC messages in attributes. I also prefer to use Keyswitches though to avoid any issues ever.


----------



## jamieboo (May 28, 2020)

I use expression maps exclusively. But I also have multiple tracks for each instruments.
This is so I can still layer if I need to, and I find it makes the logistics of CC lanes more workable.


----------



## jonvog (Oct 24, 2020)

I was wondering how you deal with the bug introduced in 10.5? Have you encountered it? Have you found a way to work around it? https://www.steinberg.net/forums/viewtopic.php?f=286&t=177485


----------



## Akarin (Oct 24, 2020)

jonvog said:


> I was wondering how you deal with the bug introduced in 10.5? Have you encountered it? Have you found a way to work around it? https://www.steinberg.net/forums/viewtopic.php?f=286&t=177485



Never encountered this issue. I wonder if it's an issue of some hardware sending MIDI messages.


----------



## Dewdman42 (Oct 24, 2020)

That is most likely user error, not a bug. Direction expressions need to be placed a few MIDI ticks earlier than the notes they are supposed to affect. Attribute-style expressions don't need that UNLESS you are trying to send CC switches to Kontakt, in which case attribute probably shouldn't be used; use direction a few ticks early. That's a bug in Kontakt.


----------



## method1 (Oct 24, 2020)

It's not user error unfortunately, I sporadically have similar problems with 10.5 and expression maps, even with directions ahead of the note. In the case of attributes sometimes articulations randomly change.
Steiny needs to sort this out. And the disabled track problem, that one is a real bummer.


----------



## Dewdman42 (Oct 24, 2020)

Well you say that but lots of people aren’t reporting that. I agree expression maps are fiddly to set up but that means it can sometimes be hard to set them up “properly”. Send us a cubase project you’re having problems with and maybe we can help.


----------



## Dewdman42 (Oct 24, 2020)

Are you using Kontakt, by the way?

Are you using VST3 instruments?


----------



## Uiroo (Oct 24, 2020)

Dewdman42 said:


> Direction expressions need to be placed a few MIDI ticks earlier than the notes they are supposed to affect.


Uh, are you sure about that? 
When I'm writing, everything is quantised and my direction expressions are as well; it doesn't cause any problems.


----------



## jonvog (Oct 24, 2020)

It's clearly a bug as far as I can tell. It doesn't happen on 10, and I always tend to put direction expressions a bit earlier. Maybe I'm gonna try attributes just for the sake of it. The weird thing is: often articulations get changed back to default in the middle of a phrase, with no expression map element even near. I mostly use BBCSO Pro with expression maps. So any chance that this is a bug with the new Spitfire player / BBCSO?

EDIT: Then again, it doesn't happen with 10.0, so probably it is a Cubase thing, unfortunately...


----------



## Dewdman42 (Oct 24, 2020)

Yes, they should be early if you want to be sure. There are a few forum threads on this topic if you search.

Direction expressions should be thought of kind of like a CC automation lane, except that it sends switches, which could be keyswitches.

They are not attached directly to any notes; they are entirely separate events. So it's not always deterministic which will be sent first, between notes and direction events, when they fall on the same timestamp. Numerous people have reported seemingly random problems that were solved by nudging the direction changes slightly ahead.
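The same-timestamp ambiguity can be shown with a toy example (assumed data model, not Cubase's actual scheduler): a sort by time alone cannot decide which of two same-tick events goes first, so whichever order they happen to be stored in is what gets sent.

```python
# Two events on the same tick. A stable sort by tick preserves storage
# order, so the engine is free to send the keyswitch either before or
# after the note -- the articulation result differs between the two.

a = [(960, "keyswitch", 25), (960, "note_on", 64)]  # switch stored first
b = [(960, "note_on", 64), (960, "keyswitch", 25)]  # note stored first

print(sorted(a, key=lambda e: e[0]))  # keyswitch first: change lands in time
print(sorted(b, key=lambda e: e[0]))  # note first: old articulation sounds
```

Nudging the direction event a few ticks earlier removes the tie entirely, which is why that workaround makes the behaviour deterministic.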


----------



## Dewdman42 (Oct 24, 2020)

jonvog said:


> It's clearly a bug as far as I can tell. It doesn't happen on 10, and I always tend to put direction expressions a bit earlier. Maybe I'll try attributes just for the sake of it. The weird thing is: often articulations get changed back to default in the middle of a phrase with no expression map element anywhere near. I mostly use BBCSO Pro with expression maps. So any chance this is a bug in the new Spitfire player / BBCSO?



I’m still inclined to say the problem is either related to BBCSO, or that you are accidentally sending notes which also happen to function as keyswitches, or a CC that does, etc.


----------



## jonvog (Oct 24, 2020)

Dewdman42 said:


> I’m still inclined to say the problem is either related to BBCSO, or that you are accidentally sending notes which also happen to function as keyswitches, or a CC that does, etc.


I hoped so badly (and still do) that it's just me. But often I'll play a phrase, the bug happens, I play it again, it happens again, I play it a third time, and everything plays as it's supposed to.
I'd say the expected behaviour in a DAW is that it plays everything exactly the same way every time (apart from randomization effects and things like round robins, LFOs, etc.).


----------



## jonvog (Oct 24, 2020)

Guaranteed Repro of Expression Map Bug

> Hi again… So I finished making a second test project and put preferences and EM together in a zip file. Video here (0:35), another video here (2:04). Other notes: this uses Kontakt 6 and...

www.steinberg.net

Just saw this thread. Someone apparently built a reproducible scenario. 
So weird that it only happens to very few people...


----------



## Dewdman42 (Oct 24, 2020)

Ok well if you think it’s a bug then contact steinberg.


----------



## jonvog (Oct 24, 2020)

Yeah, I opened a ticket a few weeks ago; nothing as of today. I was just wondering if anyone over here has come across this and has some workarounds, as you guys seem to have a fairly good understanding of how expression maps work... seems like we have to wait for what Steinberg says. Thanks anyway!


----------



## Dewdman42 (Oct 24, 2020)

jonvog said:


> Guaranteed Repro of Expression Map Bug
> 
> Hi again… So I finished making a second test project and put preferences and EM together in a zip file. Video here (0:35), another video here (2:04). Other notes: this uses Kontakt 6 and...
> ...



I loaded the above project into Cubase 10.5.20; it plays fine as expected, no problems. I let it play in a loop for 5 minutes, no problem at all.


----------



## jonvog (Oct 24, 2020)

I tried it myself as well and had no issues with this specific project either. But it turns out it is indeed a spec question. I set up a similar test scenario with 120 Kontakt instances and the problems started to occur, especially when I don't just let it run but stop and start quickly in between, or click to another point in the timeline while playing.

Here's what I wrote over at the Steinberg forum, in case you guys want to test it. I hope it's not against the rules to crosspost in this circumstance.



> Ok, I tried it. First everything went without problems. But then I thought, what if my system is too fast for this, so I built a modified benchmark with lots of Kontakt instances. The more Kontakt instances I had, the quicker I ran into problems. I ended up with 120 instances; YMMV depending on buffer settings and system power. So at least in my case it is clearly system-spec related. I don't know if it is more of a RAM issue, a CPU issue, or whatever. An easy way to trigger the faulty behavior, if it doesn't do it on its own, is quickly stopping and starting by hitting the spacebar twice while the test is running.
> Anyway, here is my project, if you want to try it with an easily scalable test project that is bigger than the one by J Buckingham, but very similar. Just duplicate some tracks if you have a very beefy system, or delete some. Requirements: Kontakt w/ factory selection.
> 
> 
> ...



Sorry for hijacking this thread this way. But I just can't believe there aren't many more people experiencing this, and for me it makes 10.5 unusable (or expression maps unusable), so I am maybe a bit emotional regarding this topic.


----------



## Dewdman42 (Oct 24, 2020)

jonvog said:


> I tried it myself as well and had no issues with this specific project either. But it turns out it is indeed a spec question. I set up a similar test scenario with 120 Kontakt instances and the problems started to occur.



To me that implies a chase condition; see my earlier comments about DIRECTIONs needing to be nudged early. In this test case the DIRECTION expression is not nudged earlier, and every time it loops around it's hard to say for sure what happens on that first note.



> Especially when I don't just let it run but stop and start quickly in between, or click to another point in the timeline while playing.



Ok, that is somewhat interesting. I would reckon that is a chasing issue, and it's possible that DIRECTION expressions do not chase. 



> Sorry for hijacking this thread this way. But I just can't believe there aren't many more people experiencing this, and for me it makes 10.5 unusable (or expression maps unusable), so I am maybe a bit emotional regarding this topic.



I hear you, that can be frustrating. Try switching to ATTRIBUTE style in any case.


----------



## jonvog (Oct 24, 2020)

Could be that it is somehow related to chasing. Directions do chase. The bug doesn't happen all the time, only sometimes and under considerable load. Sometimes it reverts only some of the tracks, and so on.
I tried it with attributes. It does happen as well, but a lot less. So it seems to be an improvement, but still not the ultimate solution, unfortunately. Thanks a lot anyway! I do think attributes really improve things. We'll see if that's enough to let me work with 10.5.


----------

