# Templates - track for every articulation or every instrument? What do you do in YOUR template?



## Yogevs

I haven't decided yet on what I'd rather have: a separate track for each articulation, or multi-patches and a separate track per instrument.
If Logic had sub-folders I would probably just go with a track per articulation and sub-folders for each instrument, but since that isn't the case I haven't decided yet.
So far I've been doing the "per articulation" method so I have fine control over the sound of each articulation (staccato sometimes needs different EQ than legato) and so it's clearer at a glance what's going on.
BUT - I do fear it's limiting my ability to write, as having to deal with different tracks probably pushes me to write in a way that rarely uses different articulations at once. It's also kind of hard to grasp (there are a TON of tracks).

What do you do in YOUR template?


----------



## BenG

One track per instrument, like it would appear on a full score. I really prefer the more minimalist approach, with no clutter getting in the way of my writing.


----------



## Yogevs

BenG said:


> One track per instrument, like it would appear on a full score. I really prefer the more minimalist approach, with no clutter getting in the way of my writing.



Do you actually use samples for final products, or do you usually write for live performers? I guess that would also be a factor (what if I need a volume change between articulations?)


----------



## Uiroo

Yogevs said:


> Do you actually use samples for final products, or do you usually write for live performers? I guess that would also be a factor (what if I need a volume change between articulations?)


You change the volume. The CC for main volume is handy for that, so it stays in the MIDI (volume automation can be a PITA).
I actually never do that, though; with the libraries I have, the articulations are fairly consistent as far as I can tell.

I also much prefer the one-instrument-one-track workflow, but that's with Cubase and Expression Maps. It makes it easy to transfer everything into the notation program.


----------



## Yogevs

Uiroo said:


> I also much prefer the one-instrument-one-track workflow, but that's with Cubase and Expression Maps. It makes it easy to transfer everything into the notation program.


That's probably not possible anyway, though, with all the different libraries everyone has.


----------



## Kent

Uiroo said:


> You change the volume. The CC for main volume is handy for that, so it stays in the MIDI (volume automation can be a PITA).
> I actually never do that, though; with the libraries I have, the articulations are fairly consistent as far as I can tell.
> 
> I also much prefer the one-instrument-one-track workflow, but that's with Cubase and Expression Maps. It makes it easy to transfer everything into the notation program.


In Logic I use the Articulation Sets, and I can set a per-Articulation CC7 level. Bit of a chore to set up but then I never have to worry about it again!
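A sketch of the idea only: Logic's Articulation Sets are configured in a GUI, not code, so this plain-JavaScript table (with made-up articulation names and levels) just illustrates the mechanism of emitting a fixed CC7 alongside each articulation switch.

```javascript
// Hypothetical per-articulation CC7 (channel volume) levels, 0-127.
// The point: each articulation switch also sets a stored volume, so
// e.g. staccato and legato patches sit at consistent loudness.
const cc7ForArticulation = { legato: 96, staccato: 84, pizzicato: 100 };

// Build the CC7 event to send alongside the articulation's keyswitch;
// returns null when no level is stored for that articulation.
function cc7Event(articulation) {
  const value = cc7ForArticulation[articulation];
  return value === undefined ? null : { type: 'ControlChange', number: 7, value };
}
```

Once the table is filled in (the chore), every switch carries its level automatically, which is the "never worry about it again" part.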


----------



## Uiroo

kmaster said:


> In Logic I use the Articulation Sets, and I can set a per-Articulation CC7 level. Bit of a chore to set up but then I never have to worry about it again!


Yeah, that works in Cubase too. Tom Holkenborg does it. I somehow don't find a use for it.


----------



## I like music

I have only one library per section, and none of them use keyswitches (the exceptions being pizzicato, tremolo and harmonics). And since the woodwinds and the brass don't have recorded ensembles, my whole template reads like this:

Flute 1, Flute 2, Piccolo, Oboe 1, 2 etc ... same for brass, same for strings.

It results in there being very few tracks in the template. I _think_ I have it to a point where all I need to use is dynamics and don't ever touch the volume automation. It has made life much simpler.

The only real pain in the ass is when I realise that, say, all 6 horns needed to be half a dynamic higher. Then I need to go and edit each horn separately.


----------



## Haakond

I have one track per instrument and use Babylon Waves for articulation switching in Logic. I think having one track per articulation would be very overwhelming for me.


----------



## Germain B

I go with one track per instrument too, loaded with a multi-articulation patch. I find it way more convenient for my workflow, using Cubase and its expression maps. I've never had to tweak the volume depending on the articulation used.

I also have some ensemble tracks (trumpets a3, horns a4...) set up the same way, but mostly for quick sketching; then I split the track into individual instruments (trumpets 1, 2 and 3).


----------



## blackzeroaudio

For the most part I use one track per instrument. The only exception is strings, where I have two per instrument: separate tracks for longs and shorts.

Just easier for my workflow that way.


----------



## awaey

Yogevs said:


> I do fear it's limiting my ability to write, as having to deal with different tracks probably pushes me to write in a way that rarely uses different articulations at once. It's also kind of hard to grasp (there are a TON of tracks).
> 
> What do you do in YOUR template?


You can find the answer here:
https://vi-control.net/community/threads/synthestration-com-midi-mock-up-project-files.95405/


----------



## BenG

Yogevs said:


> Do you actually use samples for final products, or do you usually write for live performers? I guess that would also be a factor (what if I need a volume change between articulations?)



Mostly samples (sadly). As for articulations, all changes are handled in Kontakt with separate stereo outs for each articulation. I even have custom multis from different samplers that are all handled with expression maps.


----------



## Yogevs

Seems like the consensus here is one instrument per track. I guess I should AT LEAST try it out (and Logic's Articulation Sets, why not).


----------



## Rory

Yogevs said:


> Seems like the consensus here is one instrument per track. I guess I should AT LEAST try it out (and Logic's Articulation Sets, why not).



You might find it useful to check out Spitfire's template for its BBC SO library, and to watch this recent video by Christian Henson and Jake Jackson.


----------



## Rory

Further to the above post, this screen capture shows how the BBC SO template works in Logic (there are versions for other DAWs), with the Woodwinds stack open. In the video, which runs 30 minutes and is quite detailed, Henson and Jackson explain _why_ it is set up this way.

If you want to check out the template, it can be downloaded at https://www.spitfireaudiothepage.com/templates. There are three versions. This happens to be the "Hybrid" version for BBC Pro. There are also versions for BBC Core and BBC Discover. While the screen capture has "Stem 9 Stack" at the top, it comes as an unassigned stack at the bottom. For my own purposes, I have assigned it to a piano and moved it to the top.


----------



## Yogevs

One thing I didn't understand in the video: why is he recording the tracks (or stems) instead of just bouncing?


----------



## Rory

Yogevs said:


> One thing I didn't understand in the video: why is he recording the tracks (or stems) instead of just bouncing?



My recollection is that they explain that. Note that they also talk about distinguishing between instrument tracks with preassigned articulations and what they call FX Tracks, which are also articulation tracks. The Strings Stack screen capture below shows the two types.

Henson and Jackson apparently spent a lot of time on this over a couple of years. My working assumption is that they know what they're talking about. Henson has made a few videos about templates, and in my view there's a lot to be learned from them, even if you don't adopt all of it.


----------



## JonS

Yogevs said:


> I haven't decided yet on what I'd rather have: a separate track for each articulation, or multi-patches and a separate track per instrument.
> If Logic had sub-folders I would probably just go with a track per articulation and sub-folders for each instrument, but since that isn't the case I haven't decided yet.
> So far I've been doing the "per articulation" method so I have fine control over the sound of each articulation (staccato sometimes needs different EQ than legato) and so it's clearer at a glance what's going on.
> BUT - I do fear it's limiting my ability to write, as having to deal with different tracks probably pushes me to write in a way that rarely uses different articulations at once. It's also kind of hard to grasp (there are a TON of tracks).
> 
> What do you do in YOUR template?


Ideally I would prefer one articulation per track, with each track having its own MIDI channel. However, this can become absurd very quickly: if you own a lot of VI libraries, it would take multiple computers to run that many instances of VEPro, since one can only have 768 MIDI channels per VEPro Instance. That may seem like a lot of channels, but if you typically fill one 16-channel instance of Kontakt with one instrument and its 16 articulations, you can only fit 48 instruments per VEPro Instance. No wonder top composers have 6 or more servers just for VEPro.
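A quick sanity check of that channel math, taking the 768-channel-per-Instance limit quoted above at face value (plain JavaScript, nothing VEPro-specific):

```javascript
// Channel-budget math for a one-track-per-articulation template.
// 768 is the per-Instance MIDI channel limit quoted above; 16 assumes
// one full 16-channel Kontakt instance of articulations per instrument.
const channelsPerInstance = 768;
const articulationsPerInstrument = 16;

const instrumentsPerInstance = Math.floor(channelsPerInstance / articulationsPerInstrument);
console.log(instrumentsPerInstance); // 48 instruments before a second Instance is needed
```

At that rate, a hypothetical 100-instrument orchestral template would need 1600 channels, i.e. three Instances.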


----------



## JohnG

Yogevs said:


> Seems like the consensus here is one instrument per track. I guess I should AT LEAST try it out (and Logic's Articulation Sets, why not).



Maybe that's the consensus but I don't see how people can work that way. 

Using a single midi track for all articulations of any instrument (from trumpet to cello) causes huge problems with many libraries because the attacks of one articulation ("sustain") are often noticeably behind, in time, from others ("spiccato").

It's most obvious at a fast tempo, in which one instrument is clearly dragging, but the same thing arises at slow tempos, where everything swims around in a way it never would with good live playing.

The best live players are actually far more accurate than samples. So when you mix in a "sweetening" session with live players, all these problems move from "sort-of-tolerable" to "glaring." It's not even a matter of sliding everything forward or backward -- it's just "off."

I've been at this for some time and I am still, even with libraries I've owned for a good while, constantly fiddling with midi offsets.


----------



## Ashermusic

JohnG said:


> Maybe that's the consensus but I don't see how people can work that way.
> 
> Using a single midi track for all articulations of any instrument (from trumpet to cello) causes huge problems with many libraries because the attacks of one articulation ("sustain") are often noticeably behind, in time, from others ("spiccato").
> 
> It's most obvious at a fast tempo, in which one instrument is clearly dragging, but the same thing arises at slow tempos, where everything swims around in a way it never would with good live playing.
> 
> The best live players are actually far more accurate than samples. So when you want to arrange a "sweetening" session with live players, all these problems move from "sort-of-tolerable" to "glaring."
> 
> I've been at this for some time and I am still, even with libraries I've owned for a good while, constantly fiddling with midi offsets.



A track per instrument doesn't mean a single region per instrument, at least not in Logic Pro. And it causes no problems here.


----------



## JohnG

I don't use Logic, as you may know, Jay. Consequently, "region" is kind of a local thing to ye logike folke.

Either way, if you're using a key switched instrument with multiple articulations, you would still have to address the different speeds at which they speak, no?


----------



## Ashermusic

JohnG said:


> I don't use Logic, as you may know, Jay. Consequently, "region" is kind of a local thing to ye logike folke.
> 
> Either way, if you're using a key switched instrument with multiple articulations, you would still have to address the different speeds at which they speak, no?



I play, and then I tweak.

Time to switch to a modern DAW, John.


----------



## Kent

JohnG said:


> I don't use Logic, as you may know, Jay. Consequently, "region" is kind of a local thing to ye logike folke.
> 
> Either way, if you're using a key switched instrument with multiple articulations, you would still have to address the different speeds at which they speak, no?


I don't do this currently, but it's pretty trivial to set this up per-articulation on a single "instrument-per-track" track in Logic's Scripter using @Dewdman42 's script. And, of course, @NoamL 's classic Thanos script does something very similar using the same tools.


----------



## BassClef

Haakond said:


> I have one track per instrument and use Babylon Waves for articulation switching in Logic. I think having one track per articulation would be very overwhelming for me



Just a hobbyist here... this is what I do in Logic. My full template is a little over 100 tracks, divided into 7 groups, each routed to a separate bus for reverb, etc. The Babylon Waves articulation sets are the best money I have spent on this hobby!


----------



## JohnG

kmaster said:


> it's pretty trivial



That is a goofy thing to say. It is not "pretty trivial" to create offsets for every articulation in any template, even if you can do it in a single track.

@Ashermusic I learned DP about 20 years ago and don't know half of what it can do. Maybe it can, maybe it can't...

If I remember this conversation, I will ask someone smarter than I am.


----------



## Kent

JohnG said:


> That is silly. It is not "pretty trivial" to create offsets for every articulation in any template.
> 
> Whether or not you can do it in a single midi track is interesting, but it's not a fast process.


Trivially more intensive than applying an offset per "articulation-per-track" track; not trivial compared to not having to do anything at all...


----------



## JohnG

I think it depends on what you're accustomed to. I don't find it hard to do per-articulation, per track.

DP developed folders a long time ago so I just squash down a lot of tracks when not in use.

Also, it's easier for the orchestrator(s) to see what you want; that's not my main reason, but it can be a nice side benefit.


----------



## JonS

JohnG said:


> Maybe that's the consensus but I don't see how people can work that way.
> 
> Using a single midi track for all articulations of any instrument (from trumpet to cello) causes huge problems with many libraries because the attacks of one articulation ("sustain") are often noticeably behind, in time, from others ("spiccato").
> 
> It's most obvious at a fast tempo, in which one instrument is clearly dragging, but the same thing arises at slow tempos, where everything swims around in a way it never would with good live playing.
> 
> The best live players are actually far more accurate than samples. So when you mix in a "sweetening" session with live players, all these problems move from "sort-of-tolerable" to "glaring." It's not even a matter of sliding everything forward or backward -- it's just "off."
> 
> I've been at this for some time and I am still, even with libraries I've owned for a good while, constantly fiddling with midi offsets.


You are completely right about this, John. However, if someone doesn't have the resources to buy more computing power, they may be forced to work with a one-track-per-instrument approach and use keyswitches, CC changes or expression maps. I much prefer working with one track per articulation myself, for many reasons, some of which you stated above. Once someone starts adding more and more VI libraries, things can get out of control quickly if you want to layer one library with others, as again you need a lot of VEPro servers to handle all these separate tracks for each articulation. The only time I find myself not having each articulation assigned to its own track is when I am running out of CPU resources, but ideally one articulation per track is the way to go IMHO.


----------



## storyteller

For most instruments, I have one articulation per track, rolled into a master track folder. However, for libraries like Century Strings and Century Brass, I've found a hybrid approach works better for my workflow. I have the following tracks per instrument: legato, longs, shorts, arcs, fx. Those tracks use keyswitching to quickly switch between things like fanfare, feathered, spiccato, and staccato. If I'm moving quickly, sometimes I just bring one articulation of a library into a project on a single track. I almost never use keyswitching, with the exception of the Century series (as described above). But I might go back to one articulation per track there too at some point.


----------



## Ashermusic

JohnG said:


> I think it depends on what you're accustomed to. I don't find it hard to do per-articulation, per track.
> 
> DP developed folders a long time ago so I just squash down a lot of tracks when not in use.
> 
> Also, it's easier for the orchestrator(s) to see what you want; that's not my main reason, but it can be a nice side benefit.




I don't disagree. I just think like a score page.

And I have never had either the necessity or the luxury of hiring an orchestrator; I've always done my own.


----------



## Dewdman42

JonS said:


> Ideally I would prefer one articulation per track with each track having its own MIDI channel. However, this can become absurd very quickly since if you own a lot of VI libraries this would require many computers to run so many instances of VEPro as one can only have 768 MIDI channels



You can have more than one VEPro instance on each physical server.


----------



## Dewdman42

JohnG said:


> Maybe that's the consensus but I don't see how people can work that way.
> 
> Using a single midi track for all articulations of any instrument (from trumpet to cello) causes huge problems with many libraries because the attacks of one articulation ("sustain") are often noticeably behind, in time, from others ("spiccato").
> 
> It's most obvious at a fast tempo, in which one instrument is clearly dragging, but the same thing arises at slow tempos, where everything swims around in a way it never would with good live playing.
> 
> The best live players are actually far more accurate than samples. So when you mix in a "sweetening" session with live players, all these problems move from "sort-of-tolerable" to "glaring." It's not even a matter of sliding everything forward or backward -- it's just "off."
> 
> I've been at this for some time and I am still, even with libraries I've owned for a good while, constantly fiddling with midi offsets.



JohnG, this is very interesting. I have a question for you: when working with live players to sweeten, do you find that you set an offset for any one particular articulation once for the whole cue, and that articulation then lines up with what the players are doing fairly consistently?

This is a very compelling reason to use a separate track per articulation, though I personally also prefer to think like a score page, as Asher always says.

I want to make a suggestion, which unfortunately won't help you in DP but might help users of other DAWs.

Basically, you can put each articulation on a separate channel in VEPro, and use a single track in the DAW that channelizes each event per articulation using an Articulation Set, Scripter, or an expression map in Cubase. Then use Latency Fixer in VEPro on each listening articulation channel to offset each articulation.

This gives the best of both worlds, perhaps: timing control over each articulation, but also one track per instrument in the DAW.

When using track offsets of any kind, there are some other little gotchas to be aware of. Only NoteOn events should really be offset early to compensate for late or slow sample attacks; NoteOff events should NOT be offset early, nor should other kinds of MIDI events, unless they are being used as keyswitches.

That poses a problem when using a simple MIDI track offset for this. The same thing goes for using Latency Fixer, which offsets the audio coming from the plugin: the attack would sound right, but the release would sound early, and other sustained expression might sound early too.

A script solution can offset only NoteOn events selectively, which alleviates the above conundrum.
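A minimal sketch of that NoteOn-only idea in plain JavaScript (not any DAW's actual API; the channel numbers and offset values here are made up): shift only NoteOn events earlier by a per-channel amount, and leave NoteOffs and CCs where they are.

```javascript
// Per-channel negative offsets in ms: how much earlier each articulation's
// NoteOn should fire to compensate for a slow sample attack. Values are
// illustrative, e.g. channel 2 = legato with a ~60 ms speaking delay.
const noteOnOffsetMs = { 1: 0, 2: -60, 3: -15 };

// Shift NoteOn events only; NoteOff and CC keep their original timestamps
// so releases and sustained expression don't land early. A real version
// would also shift any events used as keyswitches, as noted above.
function alignEvent(event) {
  const shift = noteOnOffsetMs[event.channel] || 0;
  if (event.type === 'NoteOn') {
    return { ...event, timeMs: event.timeMs + shift };
  }
  return event;
}
```

In Logic's Scripter the same decision would live in HandleMIDI(), testing `event instanceof NoteOn`; a plain negative track delay can't make that distinction, which is exactly the conundrum described above.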


----------



## JonS

Dewdman42 said:


> You can have more than one VEPro instance on each physical server.


I understand that; sorry if I did not communicate it more clearly. I have had as many as 8 Instances of VEPro running at once on the same computer, and I'm sure some people run far more than that, but there is only so much CPU power and RAM one can get out of each Mac or PC. So unless you've got a Mac Pro with 1TB of RAM, which many can't afford, you have to either add more servers or put each instrument on one track instead of many.


----------



## Dewdman42

In the context of this discussion that is not really relevant, mostly. We're talking about whether to spread your articulations out over up to 768 channels. But in that situation, for any given instrument using say 16 articulations, as you put it that is only 48 instruments, which also means only 48 of those 768 channels would be busy at once, so I don't think this approach is significantly more burdensome on the CPU. It might take a bit more RAM, but that is solvable; I never even come close to using all my RAM.

The point is, you don't necessarily need to buy a VEPro server farm to use track-per-articulation.

Mainly, the reason for having multiple VEPro servers these days is to have literally your entire sample library loaded and available as a massive template, with everything you own ready to play on a whim. OK, that is a valid reason for some people, but it isn't relevant to whether or not you can put each articulation on its own channel.


----------



## JohnG

Hi @JonS

I'm not sure that I fully understand all your comments. Nevertheless, you are certainly correct that this issue -- midi offset -- is a total pain in the neck.

Even if the sample library has been edited with great care, even quite similar instruments with the same type of articulation (spiccato violin 1 compared with spiccato viola) will inevitably often differ. Even within the same instrument patch with the same articulation selected (marcato cello), vagaries in editing mean it's not as locked onto the grid as a really great player would be.

Result?

No matter how much care one puts into one's midi, it is not going to be as tight as studio players. Accordingly, it is extremely tedious for the engineer to line all this stuff up after the fact and create a successful marriage between the samples and the live guys. Not to mention the actual blending-the-sounds part!

I end up with pretty big bills for engineering and orchestration because of all this, but I really want it to sound good, so I try not to begrudge all that. If I had time, I could orchestrate myself, and I do if there's a cue that I want to think through or it's a smaller job and I have plenty of time (ha). I got my first proper break from Eric Schmidt, who hired me as an orchestrator in the 90s. Otherwise I review the orchestrators' work, edit on an iPad with that (somewhat clumsy) pencil, and then grind through the process.

Getting from inspiration to a fully mixed recording is pretty agonizing, no matter how great all the people are. I don't know why we do this!


PS @Ashermusic I also lay out my stuff like a score page, with one exception -- I have all the non-pitched stuff near the bottom in case I want to transpose, so I don't have to pick around. So some items are a bit out of order.


----------



## Dewdman42

For those following along, here is a Logic Pro Scripter script I threw together for kmaster the other day:









Steve Schow / ArtAlign · GitLab (gitlab.com)





This basically lets you define a latency value per port/channel, for up to 768 channels. Yes, you have to edit the script a bit to use it, because Scripter doesn't provide good enough GUI tools in this case. That may or may not be considered "trivial" by some of you.

What it then does is provide the same functionality as a negative track offset... EXCEPT... it only slips NoteOn events earlier; it does not slip NoteOff or other MIDI events earlier.


----------



## JonS

JohnG said:


> Hi @JonS
> 
> I'm not sure that I fully understand all your comments. Nevertheless, you are certainly correct that this issue -- midi offset -- is a total pain in the neck.
> 
> Even if the sample library has been edited with great care, inevitably, even quite similar instruments, with the same type of articulation (spiccato, violin 1; compared with spiccato, viola) often will be different. Even in the same instrument patch with the same articulation selected (marcato cello), vagaries in editing mean that it's not as locked onto a grid as a really great player will make it.
> 
> Result?
> 
> No matter how much care one puts into one's midi, it is not going to be as tight as studio players. Accordingly, it is extremely tedious for the engineer to line all this stuff up after the fact and create a successful marriage between the samples and the live guys. Not to mention the actual blending-the-sounds part!
> 
> I end up with pretty big bills for engineering and orchestration because of all this, but I really want it to sound good, so I try not to begrudge all that. If I had time, I could orchestrate myself, and I do if there's a cue that I want to think through or it's a smaller job and I have plenty of time (ha). I got my first proper break from Eric Schmidt, who hired me as an orchestrator in the 90s. Otherwise I review the orchestrators' work, edit on an iPad with that (somewhat clumsy) pencil, and then grind through the process.
> 
> Getting from inspiration to a fully mixed recording is pretty agonizing, no matter how great all the people are. I don't know why we do this!
> 
> 
> PS @Ashermusic I also lay out my stuff like a score page, with one exception -- I have all the non-pitched stuff near the bottom in case I want to transpose, so I don't have to pick around. So some items are a bit out of order.


John, again I totally agree with you. MIDI offset is a problem at times with some articulations, and adjusting attack and release points needs separate tracks so the notes can be properly massaged. Other times it's volume disparities: maybe the flautando is always way too quiet to be heard (e.g. HZ Strings or Tundra), or perhaps a marcato is too loud compared to the sustain or staccato level. Sometimes I've found that permanently leaving certain volume faders higher or lower in my VEPro template smooths out these issues. IMHO it is so much easier to deal with all of this when each articulation is on its own track/channel; I either fix it in DP or deal with it in VEPro.

On a side note, when I have over 600 tracks in the Mixing Board in DP, my scroll bar vanishes unless I deselect enough tracks in the Track Selector sidebar. The magic number is around 522 tracks from what I can tell. I've told MOTU about this.


----------



## JohnG

I think using key switches or not is largely a matter of preference and habit. You still have to deal with midi offset. Ugh.

I am surprised you would have so many tracks on a mixing board. I use it for audio, primarily. I don't ever "mix" midi with that tool, just draw lines.


----------



## David Kudell

My problem with one track per articulation is that when I want to write a convincing melody line, it often involves spiccato, staccato, and a marcato long. How are you doing that when those are all on separate tracks, and how is an orchestrator supposed to make sense of it?

I recently started dividing strings, brass, and WW into longs and shorts. The longs tracks have a full instrument per track with legato, longs, and every other articulation; the shorts folders usually have just the spiccato articulation. This leads to almost twice the number of tracks, but I deactivate all tracks until I use them.


----------



## JonS

JohnG said:


> I think using key switches or not is largely a matter of preference and habit. You still have to deal with midi offset. Ugh.
> 
> I am surprised you would have so many tracks on a mixing board. I use it for audio, primarily. I don't ever "mix" midi with that tool, just draw lines.


I should have said that I don’t mix with MIDI specifically in the mixing board either, but I noticed this being a major problem a while ago and alerted motu. Though, there are times I want to Solo or Mute a MIDI track(s) or just visually see where numerous pan positions are at once and the Mixing Board can be helpful.

The only tracks in my mixing board are usually just audio and aux tracks, but DP has some bug in the software if there are too many tracks the scrolling bar vanishes so you can’t scroll at all in the mixing board. Some of my Aux tracks are in V-Racks too. I send midi CC info to DP with a SL Mixface and when I have to adjust or draw lines (though I technically prefer Bars to Lines or Points when I show Lanes) i do this in the Sequence window, or sometimes if it’s just a permanent volume or pan adjust I enter that in the Event list or just adjust it in VEPro. Usually when I change pan or volume levels in VEPro they tend to be a permanent adjustment to my template going forward as some articulations are just out of balance with other articulations within an instrument or I tend to want all the articulations of a specific instrument panned in a particular position.

Even though there is automation in VEPro 7, I usually still do all the active changes and automation on a project or cue (Chunk) inside DP not VEPro, my permanent part of my template also uses V-racks for instruments and some of my aux tracks, though I have aux tracks both in and out of V-racks. I have not even explored how automation works in VEPro 7 yet, but I will at some point since I bet its useful.

DP’s Channel Strip is a very good way of tracking the track one is focused on at that moment and it is something I use all the time.

I’ve suggested so many ideas to Motu about DP over the decades. Just told them they need to add a cancel button when trying to Edit Track Color Schemes in the View menu under Colors as once you open that window and select a color theme (if you want to try it out and see what it looks like) the only way to get back your old color assignments is by closing the file and hoping you did not save it after you selected a different color scheme. I would also like to see the ability to add Clippings from the Content Browser by simply having that Content Browser accessible in the pop up menu that appears when one right clicks (which I do with a Trackpad by tapping with two fingers) into part of the Tracks or Sequence windows without having to drag and drop directly from a sidebar. And Copy to Clipping Window should also be listed in this same popup menu so I don't have to go to the Edit menu and select Copy to Clipping Window. The program has certainly gotten better over the years. I’ve been using it since it was just Performer back in the 1980s. How about being able to sort plugins inside DP Preferences in the Audio Plug-ins tab so that you can see just the FAILED to load plugins, which is also something I would like to see as a list when adding new plugins to DP and they fail during the examining process when you launch DP the first time after you have installed new plugins.

There are features Cubase and Logic have that I would like to see added to DP. Cubase has the ability to lock into place certain tracks at the top of what looks like the Tracks window so they don’t disappear when you scroll down, DP needs that capability. Also, they can lock in place faders 🎚 in their mixing board which one cannot do inside DP once you start scrolling left or right. Again,some faders like the Master fader, Dialogue, SFX, Click track, I want fixed or locked in place inside the Mixing Board so they always appear on screen. In DP’s world being locked applies to SMPTE, but I suggested they have an additional lock in place button regarding visual lock on the display screen.

I also just sent VSL about 12 suggestions on how they should improve VEPro. One is the ability to assign the number of threads uniquely to each specific Instance on a project-by-project basis, so every Instance does not necessarily get the same number of threads on all projects. I want the Instances I rely on most to get more threads than ones used rarely, e.g. sound design. I would also like to be able to rename a channel at the bottom of each fader in VEPro's Mixer view and press an arrow key or Enter to jump to the next channel's name box, to make the process flow faster when adding a lot of new channels to a template, whether because you bought a new VI library or are simply rebuilding the template. I would also like some kind of "locate channel" button or window, so if I need to find a folder or channel in any Instance inside VEPro I can select it and have that channel scrolled to and brought into focus. That way I could jump straight to the SSW folder or Strikeforce folder or HZ Strings folder in its corresponding Instance, without scrolling left and right through hundreds of faders 🎚 in every Instance.


----------



## LudovicVDP

David Kudell said:


> My problem with one track per articulation is that when I want to write a convincing melody line, that often involves spiccato, staccato, and a marcato long. How are you guys doing that when those are all on separate tracks, and how is an orchestrator supposed to make sense of it?



Same here. 
I play my line with the keyboard, then I start correcting my bad playing and giving each note the correct articulation it needs. I don't see myself moving notes from MIDI track to MIDI track because I test a spic, then maybe a stacc, then whatever...

I usually have tracks for "shorts" playing only ostinatos and stuff like that, and tracks for "longs" where I have multiple articulations on each track so I can handle melodies.

Not saying at all that it's the way to go... It's just what works for me.


----------



## Henu

+1. 

I've recently started to explore a bit more with ensemble patches and quick templates, where I can understand the articulation splitting a bit better. But when writing idiomatically and with real orchestration in mind, articulation-per-track is a pure nightmare.


----------



## JonS

David Kudell said:


> My problem with one track per articulation is that when I want to write a convincing melody line, that often involves spiccato, staccato, and a marcato long. How are you guys doing that when those are all on separate tracks, and how is an orchestrator supposed to make sense of it?
> 
> I recently started dividing strings, brass, and WW into longs and shorts. The longs has a full instrument per track with the legato, longs, and every other articulation. The shorts folders usually have just the spiccato articulation. This leads to almost twice the number of tracks but I deactivate all tracks until I use them.


There are two ways to deal with that reality. One is to create a copy of the file and consolidate tracks for the orchestrator yourself; the other is to make a copy and have someone else consolidate the tracks. Either way, any MIDI offsets have to be undone first in the duplicate file. If you are working with live musicians, or just an orchestrator, or both, building a MIDI mockup is only one part of the puzzle 🧩. I don't know about Cubase, but DP lets you apply MIDI offsets with an insert plug-in, so removing or re-enabling those offsets before consolidating tracks for an orchestrator is very easy to do.


----------



## Kent

JonS said:


> I will try to explain this a different way. John, as I assume you know everything I am about to state, VEPro does not yet know how to shift resources once a simple Thread count is tied to each Instance. Thus, VEPro cannot tell if an Instance is being used a lot or a little or not at all and allow other Instances to share its assigned Threads (Cores) if they are needing more CPU Juice.
> 
> So let's say for example someone has an 8-core Mac with 16 threads total and sets the VEPro Preferences to 2 threads per Instance. If this is just a server Mac then one can simultaneously run 8 2-thread Instances on that Mac. Where this gets tricky is if one has a lot of VI libraries and they cannot all fit inside 8 Instances, as each Instance only allows 768 MIDI channels and a library like BBCSO has something like 250-300 articulations just by itself. If one owns all or many of the libraries by CSS, Cinesamples, Spitfire, OT, VSL, EW, etc., this quickly becomes a huge problem: if you want playable access to your entire collection of VI libraries at any time, then one must buy more servers to house VEPro and spread the VI libraries across them, as I imagine is precisely what you do. Buy more servers, get more RAM, and gain the ability to access more VI libraries at any given time. But if someone cannot afford more PCs or Macs to use as VEPro servers, one alternative is either to forgo access to every single VI library at once and only enable (activate) the Instances holding the libraries needed for that project or cue, or to set up a template based on one instrument per track instead of one articulation per track, at which point you don't plow through nearly as many MIDI ports and channels as you do when every single articulation is on its own channel. However, even with the one-instrument-per-track approach, and even if you globally purge all samples in all Kontakt instances inside each VEPro Instance, the amount of RAM still in use with everything purged can be surprising. So, once again, one may still be forced to buy additional PC or Mac servers to house more VI libraries.
> 
> Again, I much prefer to work with each articulation being on its own unique channel but this means I need several VEPro servers to handle this if, as I do, want access to my entire VI library at once.


You know, you _can _use more instances than you have threads.


----------



## JonS

kmaster said:


> You know, you _can _use more instances than you have threads.


I have heard that before but I have never tried it. How many instances do you use at once, what do you set your per-instance thread count to in VEPro preferences, and how many cores/threads does your computer have? And are you talking about a VEPro server or the main DAW computer? I tend to want to leave some cores/threads available for OSX and DP.


----------



## JohnG

David Kudell said:


> My problem with one track per articulation is that when I want to write a convincing melody line, that often involves spiccato, staccato, and a marcato long. How are you guys doing that when those are all on separate tracks, and how is an orchestrator supposed to make sense of it?



I will often just write with a "sus" patch, or maybe a "marcato sus" or "spic sus" patch (depending on the library), then, as necessary, drag the notes to the other tracks. Sometimes you have to do that, and sometimes it sounds OK enough as is.

As for orchestrators, it's dead easy for them to discern your intentions, because each articulation is labeled and on a separate stave.


----------



## Dewdman42

JonS said:


> I have heard that before but I have never tried doing that. How many instances do you use at once, what do you set your thread count per instance to in VEPro preferences and how many cores/threads does your computer have? Are you talking about on a VEPro server or the main DAW computer since I tend to want to leave some cores/threads available for OSX and DP?



Try not to get too lost in the weeds on this.

First, one clarification. The setting you were adjusting is the count of "threads", not cores. More threads will end up using more cores, yes, but you aren't directly controlling cores; you're controlling how VePro allocates threads.

At any given time you have dozens of threads across all the running apps and processes in your operating system; they all share time on the 8 cores, taking turns. The OS decides how to give each thread time on the 8 cores.

VePro has many tasks that can be done in parallel. If you had 100 channels, it could theoretically be possible for VePro to use 100 threads, one for each channel! Of course, the operating system is going to make all 100 of those threads take turns on the 8 cores anyway, and yes, they have to share time with the threads of other programs like DP.

At some point, having too many threads becomes a hindrance, because the overhead of all those threads taking turns gets too great. That's why, for programs that let you control the thread count, you will generally see recommendations to use 1x or perhaps 2x the number of cores. It's not an exact science. If you create too many threads you'll have thread-thrashing working against you. If you create too few, there may be times when not all of the cores get fully utilized, which depends on numerous factors: what other programs are running on your system, how many channels of audio you're processing at once, etc.

But generally, if you have the number of threads set to 1-2x the number of cores, you're going to be in good shape in VePro. Don't overthink that part.

However, that setting in VePro is per instance, which poses an interesting dilemma when you are using multiple instances. What should the setting be? I would argue you want it set so that, across all the running instances, the threads add up to 1-2x the core count. But try some different values to see if anything works better. If you are using two instances, maybe make it 1x the core count; with four instances, half the core count, perhaps. Try it.
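The rule of thumb above (per-instance threads chosen so the total lands around 1-2x the core count) can be sketched as a tiny helper. This is purely illustrative arithmetic; the function name, the `multiplier` parameter, and the 2-thread floor are my assumptions, not anything VEPro actually exposes:

```python
def threads_per_instance(cores, instances, multiplier=2, minimum=2):
    """Suggested per-instance thread setting (names are illustrative).

    Aims for a total of `cores * multiplier` threads spread evenly
    across `instances`, never dropping below `minimum` per instance.
    """
    target_total = cores * multiplier
    return max(minimum, target_total // max(instances, 1))

print(threads_per_instance(8, 2))   # two instances on 8 cores -> 8 threads each
print(threads_per_instance(8, 4))   # four instances -> 4 threads each
print(threads_per_instance(8, 16))  # many instances -> falls back to the 2-thread floor
```

With 8 cores this suggests 8 threads each for two instances, 4 each for four, and the 2-thread floor once the instance count exceeds the target total, matching the "not an exact science" guidance above.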

What happens if you have more than 8 instances? I personally think you should put at least 2 threads on each instance, so for me that would be the smallest value, and for an 8-core machine maybe 16 would be the largest. But people should try different values to see what happens; it's not an exact science.

The only thing that is a little frustrating: if you are dividing up your project into instances for workflow reasons, and let's say you end up with 8 instances at 2 threads each, what about a section of music where nearly all the channels in one particular instance are playing and the other 7 instances are silent? Then you don't have enough threads allocated to that one very busy instance.

This is part of why I think it makes sense to have either one or two instances, or else dozens or hundreds of instances with only a channel or two on each. Those two scenarios allow for the smoothest allocation of threads in a way that covers all situations generically.

In actuality, I wouldn't stress about it too much; my experience is that VePro works rather well in any of these situations, including the in-between scenarios. Just be careful, in how you work, to avoid an instance with only a few threads having to play back many channels at once.

It's amazing to me that a few years ago people were in awe and counting their lucky stars that they could build a server farm and have their entire humongous sample library available at a moment's notice. These days people are upset if they can't do it all on one laptop.


----------



## ed buller

I just LOVE having a separate track for every articulation. No hassle with sounds not switching, it's so easy to adjust levels and make transitions smooth... and with folders the 750+ track count is hidden.

e


----------



## Dewdman42

In answer to the original question: lately I am enjoying the use of separate channels per articulation. However, I prefer to use one source track per instrument (like a score). That means I need channelizing technology to dynamically channelize notes on a per-articulation basis. This is easily achievable in LogicPro, Cubase and Reaper. It's not so easy in other DAWs unless you basically hard-code the MIDI channel into each note as the way of dictating the articulation, while keeping everything on one track. I don't know if that's even possible in DP, FWIW.

By channelizing per-articulation from a single source track, I achieve the benefits of a single source track (like a score), but also the benefits of having each articulation on its own listening channel. I can adjust the timing more easily on a per articulation basis that way, as well as balance out the volume of each articulation as well. When done in combination with VePro this becomes really smooth because I can submix the articulations in Vepro and return it back to my DAW as a single instrument audio feed. Wunderbar!

There are only a few downsides that I see. One is that the VePro mixer can get kind of wide. Of course there are folders and such that can be used, so this is a minor niggle really. Also, if you are using instruments such as Kontakt and PLAY, you don't actually need 16 VePro channels to represent all the articulations; in those cases you can host 16 articulations on separate MIDI channels inside one instrument instance.

Another disadvantage is that some instrument libraries are not designed to work well this way. They are keyswitched and with little ability to isolate a single articulation per instrument instance. If they are based on Kontakt, however, you can always load the full instrument, keyswitch it and purge it, then it will essentially be what is needed.

Some of them may have legato transitions that have been crafted to transition between different articulations, or other kinds of behavior in between differing articulations, which would not be well supported by a channel-per-articulation approach, but I find this to be rare.

Overall I find the benefits of channel-per-articulation to be the way to go, but also I want track-per-instrument as the source track. So the key is figuring out how to channelize your single instrument track into 16+ midi channels on a per-articulation basis...then you can basically have the best of both worlds.
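As a rough illustration of the channelizing idea described above, here is a minimal sketch. The articulation names, channel numbers, and note tuples are invented for the example; a real setup would use Logic's Articulation Sets, Cubase Expression Maps, or a Reaper script rather than this toy function:

```python
# Map each articulation to a MIDI channel (0-based). Purely illustrative.
ARTICULATION_CHANNEL = {"legato": 0, "staccato": 1, "spiccato": 2, "marcato": 3}

def channelize(notes):
    """Re-channelize notes from one source track.

    notes: list of (pitch, velocity, articulation) ->
           list of (pitch, velocity, channel)
    """
    return [(pitch, vel, ARTICULATION_CHANNEL[art]) for pitch, vel, art in notes]

# One melodic line mixing articulations, written on a single track:
line = [(60, 100, "legato"), (62, 110, "staccato"),
        (64, 115, "spiccato"), (65, 100, "marcato")]
print(channelize(line))
# -> [(60, 100, 0), (62, 110, 1), (64, 115, 2), (65, 100, 3)]
```

The point is simply that the line stays together on one source track while each note is routed to its own per-articulation listening channel downstream.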


----------



## benmrx

I've tried and tried to go with the single-track approach using expression maps and various other techniques, but I always find I prefer a single track per articulation. I've tried setting up 'basic' tracks with an expression map for the bread 'n butter articulations, and tracks with expression maps just for longs and/or shorts. I feel I've gone down every conceivable path. Separate tracks just seem easier for me.

It feels more straightforward, and IMO offers a more intuitive workflow when mixing. Say you want more verb on your longs vs. shorts, or you want to deliver stems split out by longs vs. shorts. Also, there are a number of libraries that simply have too many articulations to use an expression map and keep things streamlined. A great example is SCS: there are just so many articulations that an expression-map workflow makes me feel like I'm working with gloves on. And when using libraries from different developers, it becomes impossible to keep an identical expression-map layout between libraries, simply because not all libraries share the same articulation sets.

Separate tracks per articulation also offer an easier (IMO) workflow when it comes to layering articulations. By that, I mean layering a marcato with a sustain or layering staccatos from two different libraries. 

Also, at least with Cubase/Nuendo combined with a Streamdeck and Keyboard Maestro it takes a couple of clicks/button presses to implement a new library into my template, routed, roughly mixed, and ready to go regardless of what or how many articulations the library contains. 

I've built macros that offer a 'somewhat' similar method to expression maps, so I can select a range of notes, press a button, and they move to whatever track/articulation I need.
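In spirit, that kind of macro might look like the toy sketch below. The track names and note tuples are made up for illustration; an actual implementation would go through Reaper's API or Keyboard Maestro rather than plain Python:

```python
# Two hypothetical articulation tracks; notes are (start_beat, pitch) tuples.
tracks = {
    "Vlns sus":   [(0.0, 60), (1.0, 62), (2.0, 64)],
    "Vlns spicc": [],
}

def move_selection(selection, source, dest):
    """Move the selected notes from one articulation track to another."""
    for note in selection:
        source.remove(note)
        dest.append(note)

# Select the middle note and send it to the spiccato track with one "button press":
move_selection([(1.0, 62)], tracks["Vlns sus"], tracks["Vlns spicc"])
print(tracks["Vlns sus"])    # -> [(0.0, 60), (2.0, 64)]
print(tracks["Vlns spicc"])  # -> [(1.0, 62)]
```

The macro's job is just bookkeeping: delete from the source track, re-create on the destination, with routing and mixing already set up per track.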

In the end, to each their own.


----------



## JonS

kmaster said:


> You know, you _can _use more instances than you have threads.


From the people I’ve spoken to, I was told not to assign more threads to an instance than your computer is capable of. So if, for example, I have 8 cores and 16 threads, I was told not to assign more than 2 threads to each instance if all 8 instances are active. Are you suggesting assigning more than two threads to an instance in this example?


----------



## Kent

JonS said:


> From the people I’ve spoken to, I was told not to assign more threads to an instance than your computer is capable of. So if, for example, I have 8 cores and 16 threads, I was told not to assign more than 2 threads to each instance if all 8 instances are active. Are you suggesting assigning more than two threads to an instance in this example?


Yes, it won’t break anything and is worth a shot


----------



## JonS

kmaster said:


> Yes, it won’t break anything and is worth a shot


From what VSL has told me, that won’t help at all and will inevitably make every instance run more sluggishly.


----------



## Kent

JonS said:


> From what VSL has told me, that won’t help at all and will inevitably make every instance run more sluggishly.


It might! But it might not. On my old computer I regularly ran about 45-60 instances on an i7 6700K.


----------



## Dewdman42

Let's say you have 8 instances, with a setting of 16 threads per instance. 

I think the only time that would turn into too many threads would be if and when all 8 instances are processing a significant number of channels, in which case you'd have a lot of threads overall and it would probably flood the system.

However, if your reason for having a buzzillion channels across 8 instances is more to have every possible sound ready and waiting, and in reality only 1 or 3 of the instances ever end up processing heavily, then you'd have a manageable number of threads hitting the system, a lot of threads sitting idle, and I don't see that as a problem.

See what I mean? It depends on your usage pattern. Using 2 threads per instance only makes sense if you are sure that all 8 instances will always be used rather evenly, with CPU processing commonly happening in all of them at the same time.

If you think it's more likely that at any given moment 1 or 2 of the instances would be busy while the others sit idle, with the instances taking turns being busy depending on the music, then you would want more threads allocated per instance; more than 2, for sure.


----------



## BlackDorito

Interesting discussion. I work almost exclusively in Sibelius because I want to view a nice-looking score, with good navigation, as I go along. So all my 'tracks' are staves connected to multi-artic instruments. I don't play in much material [.. and I can tell you it gets particularly exciting if you want to play a jazz passage and import it into Sibelius]. But anyway, I've gotten to the point where it is second nature to add articulation markings as in:




If you need to adjust for volume differences, you might have to set a few controllers:





This gives absolute control in the score. It's kind of the opposite of spontaneous real-time music generation, but it seems to work well for visually conceiving and implementing complex music (.. for me). When I'm done, I export and run Ozone.

EDIT: I should add that I find myself tweaking the start times of notes quite often due to differences across libraries and articulations. Easily done in Sibelius.


----------



## muk

ed buller said:


> I just LOVE having a separate track for every articulation. [...] so easy to adjust levels and make transitions smooth....
> e



Interesting. For me it's the other way around. Transitions are one of the two main reasons why having one instrument per track is much faster for me. Say I switch from a legato to a staccato in a line. When I have both articulations on one track, I just keyswitch and I am done. The player takes care of the transition for me, stopping the legato note at the right time when the staccato begins. When I have the staccato on a separate track, the last legato note will not get a note off command. Thus I have to edit it manually, fiddle with the note length to make it stop at the right time, and adjust cc1 to have a natural sounding volume curve to transition into the staccato note on the other track. All of this is taken care of for me automatically when both articulations are on the same track.
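The manual edit being described can be sketched as follows, assuming notes are simple (start, length) tuples in beats: when the next note lives on a separate articulation track, the preceding legato note gets no note-off from the player, so its length has to be trimmed by hand to release just before the next note begins. The helper and the 0.0625-beat gap are invented for illustration:

```python
def trim_to_next(prev_note, next_start, gap=0.0625):
    """Shorten a held note so it releases just before the next note starts.

    prev_note: (start, length) in beats; next_start: start of the note
    that was moved to a separate articulation track.
    """
    start, length = prev_note
    # Never lengthen the note; only trim it if it overlaps the next one.
    return (start, min(length, max(0.0, next_start - start - gap)))

# A 4-beat legato note followed by a staccato note at beat 2 on another track:
print(trim_to_next((0.0, 4.0), 2.0))  # -> (0.0, 1.9375)
# A note that already ends before the next one is left untouched:
print(trim_to_next((0.0, 1.0), 4.0))  # -> (0.0, 1.0)
```

With both articulations on one keyswitched track this bookkeeping never arises, which is exactly the point being made above.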

The second main factor is recording. I am a trained pianist, so hitting the right keyswitches at the right time isn't too hard for me. With all articulations on one track, I record one time and have the complete flute track done. With one track per articulation, I have to record seven or eight passes. Or even more, depending on how many articulations the instrument gets to play.

That's why one track per instrument is quicker for my workflow. It's also easier to keep track of everything for me, as I am accustomed to a score layout. Goes to show how everybody has different preferences in this regard. You really have to try for yourself to find the best workflow for you.


----------



## CT

I like something between these extremes. Maybe one track for "legato," one for most/all other long notes (keyswitched), another for different short lengths (lengths controlled by the modwheel, which feels more musical for me in the moment than keyswitching to create a phrase made up of different shorts), and then maybe one more track for other decorative short stuff which would also use keyswitches.


----------



## GNP

It depends on the library and how they've set up their keyswitches, but I personally prefer one articulation per track. That way, I can cheat by having both at once.

If I were to write for a live orchestra, I'd cheat that way too. For example, record the staccatos separately, then record the arco expressivos separately, in a passage that requires both, and *dare I say*, overlapping in tones and octaves (which is a sharp no-no in proper orchestration).



I know it's not the "proper orchestrational" way to do it, but when somebody is getting killed by a monster on screen, I want it all at the gates: staccatos AND all the other articulations raging at once!

The advantage when it comes to MIXING is also very nice. I can separately EQ the spiccatos/staccatos to sound sharp and nice, but leave the expressivos creamier, since they're on separate channels.
If all articulations live on one instrument track, you can't really achieve that. So from *both a compositional AND a mixing standpoint*, one articulation per track is really most convenient.


----------



## Yogevs

Lots of great discussion here!


----------



## Gerbil

Like so many, I've tried every different way there is. These days, I have a basic template arranged as a standard score with each instrument set up on a single track. If I need to overlay other articulations then I just hit K on my keyboard and the selected instrument KS track is duplicated and I use the arts from that. The simpler the better. It's only about 50 tracks in total and 60% of the time that's all I use.

If I need to add any 'flavour' libraries to spice things up (eg: sections from SStO) then I select the arts I need via my Streamdeck XL. Takes two seconds in Reaper. So I'll select an icon that takes me to, say, SStB. Then select the instrument icon which takes me to all the arts for that instrument, including a KS option, and choose the one I want. It did take a lot of setting up, with many profiles within profiles on the Streamdeck, but it was well worth it and works very quickly, like switching presets on a synth.

Mixing is not something I have preset as it depends what I'm working on. The fx are ready to go per instrument though.


----------



## CGR

muk said:


> Interesting. For me it's the other way around. Transitions are one of the two main reasons why having one instrument per track is much faster for me. Say I switch from a legato to a staccato in a line. When I have both articulations on one track, I just keyswitch and I am done. The player takes care of the transition for me, stopping the legato note at the right time when the staccato begins. When I have the staccato on a separate track, the last legato note will not get a note off command. Thus I have to edit it manually, fiddle with the note length to make it stop at the right time, and adjust cc1 to have a natural sounding volume curve to transition into the staccato note on the other track. All of this is taken care of for me automatically when both articulations are on the same track.
> 
> The second main factor is recording. I am a trained pianist, so hitting the right keyswitches at the right time isn't too hard for me. With all articulations on one track, I record one time and have the complete flute track done. With one track per articulation, I have to record seven or eight passes. Or even more, depending on how many articulations the instrument gets to play.
> 
> That's why one track per instrument is quicker for my workflow. It's also easier to keep track of everything for me, as I am accustomed to a score layout. Goes to show how everybody has different preferences in this regard. You really have to try for yourself to find the best workflow for you.


Very similar to my production setup.


----------



## Yogevs

I've moved from having a track per articulation to having a track per patch. As an example, Areia has basic articulations and advanced articulations per instrument, so I have two tracks for it.

I realised that having separate tracks just stops me from including a variety of articulations when I'm writing lines, and I mostly end up using the articulation I started with. I sometimes included other articulations, but I had to actively force myself to think about it. When it's all in the same track, it comes much more naturally to me.

Since the only big orchestral libraries I have are Audio Imperia's (Nucleus and Areia), and both have the *AMAZING* Sample Start feature, I don't really need to deal with aligning the different articulations. They just work.

I think I'm spoiled, as when I use other libraries (I have some solo Tina Guo, Taylor Davis and some more freebie stuff) I get kind of frustrated with all the games I have to play with the notes for them to sound good.
I AM spoiled by Audio Imperia!


----------

