# Why do you want Short/Long folders/tracks?



## Spid (Jul 8, 2022)

Hey guys,

I'm still trying to figure out my first template, and I was wondering about the benefit of having Short and Long folders. I understand that Shorts and Longs might have different reverb, so it could be good to have different reverb sends, but what would be the benefit of having separate folders/tracks for Shorts and Longs?

I'm on Logic, and the way I plan to build my template is to have articulations handled by articulation sets, so I would group as many articulations as possible on a single track… and therefore have tracks that reflect the different libraries. However, I notice a lot of people have two or sometimes three tracks for the same library by splitting Short, Long and FX. So I wonder: what is the benefit of having Shorts and Longs on two different tracks? Is it all about reverb sends, or do you plan to play both tracks at the same time (say, to layer some spiccato on a sustain)?

Since I'm gonna build most of the articulation maps myself, I want to make sure I make the best choice now, so I won't have to redo them later if I change my mind about separating Shorts and Longs…

I hope it makes sense. Any feedback is welcome


----------



## Marius Kappes (Jul 8, 2022)

I started out like you. A few things that made me use multiple tracks per instrument are:
1. Track delay for longs and shorts is often different.
2. Longs are usually controlled with CC1 while shorts use velocity, so it's oftentimes simpler to have them on two different tracks.
3. Not only reverb, but also mic mix: shorts usually use the close mic a little more to be more aggressive.
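The first two points above can be sketched in code. This is a hypothetical illustration, not any library's actual behavior: the delay values, fixed long velocity, and event format are all made-up for the example.

```python
# Sketch: why longs and shorts often end up on separate tracks.
# Longs take their dynamics from CC1 (mod wheel); shorts take theirs from
# note velocity, and usually need a different negative track delay so the
# transient lands on the grid. Events are plain dicts standing in for MIDI.

SHORT_DELAY_TICKS = -60   # assumed pre-delay so the short's attack hits the beat
LONG_DELAY_TICKS = -200   # longs often speak later, so a bigger assumed offset

def render_note(kind, pitch, dynamics, start):
    """Return the MIDI-ish events for one note. dynamics is 0-127."""
    if kind == "short":
        # Shorts: dynamics baked into velocity, shifted by the short delay.
        return [{"type": "note_on", "pitch": pitch,
                 "velocity": dynamics, "time": start + SHORT_DELAY_TICKS}]
    # Longs: fixed velocity, dynamics ride on CC1 sent with the note.
    return [
        {"type": "cc", "controller": 1, "value": dynamics,
         "time": start + LONG_DELAY_TICKS},
        {"type": "note_on", "pitch": pitch, "velocity": 100,
         "time": start + LONG_DELAY_TICKS},
    ]
```

With one track per articulation group, each track gets its own delay constant and its own dynamics convention, which is exactly what a single shared track makes awkward.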


----------



## onnomusic (Jul 8, 2022)

Yeah, everything that Marius just mentioned. But also, depending on where your music is going, if you have to deliver stems, you usually deliver longs and shorts separately. So this could be an important reason as well.


----------



## William The Concurer (Jul 12, 2022)

So keyswitching is now redundant, in this case. I thought it was such a great way of working. But there are some libraries that will not disable/unload the shorts, longs and other articulations, so they cannot be separated out in one instance. The Spitfire "player" is one. This means you're having to load up the full set of articulations, but only use the longs/shorts/FX, depending on relevance. Isn't this a total waste of resources and memory?

In the past I've done many TV things where I'd stem out the high and low instruments, but never this longs, shorts thing. Other than the reverb issue for longs and shorts I see no point, and in many cases the built-in mic setting will take care of it anyway. 

I just don't get this, so maybe someone might explain.

Thank you.


----------



## jcrosby (Jul 12, 2022)

In my case it’s a matter of treating shorts/longs differently.

Shorts might get some light transient shaping and/or a more up-front mix. Longs I might treat in such a way that I prefer more ambience, or prefer they behave more like a bed, where they sit behind the shorts…


----------



## JimDiGritz (Jul 13, 2022)

William The Concurer said:


> So keyswitching is now redundant, in this case. I thought it was such a great way of working. But there are some libraries that will not disable/unload the shorts and longs and other articulations, so they cannot be separated out in one instance.*The Spitfire "player" is one. This means you're having to load up the full set of articulations, but only use the longs /shorts/FX, depending on its relevance. Isn't this a total waste of resources and memory?*
> 
> In the past I've done many TV things where I'd stem out the high and low instruments, but never this longs, shorts thing. Other than the reverb issue for longs and shorts I see no point, and in many cases the built-in mic setting will take care of it anyway.
> 
> ...



The Spitfire player does allow articulation unloading:






Just click the pencil icon at the top right of the articulation selection panel.


----------



## JimDiGritz (Jul 13, 2022)

Also, perhaps it's just me and my workflow right now, but I often want to copy/shift sections of shorts and legato/longs around the composition (sometimes into other instruments as a starting point).

If all the violin parts are sequential in a single track I would have to cut out sections and faff around.

I can see, however, that if you are composing strictly for a physical orchestra then you would probably want to keep strictly to the sections to allow export to Dorico etc.


----------



## William The Concurer (Jul 13, 2022)

JimDiGritz said:


> The Spitfire player does allow articulation unloading:
> 
> 
> 
> ...


Excellent, and thank you. Not a very obvious (or elegant) way of doing it, I think, and it means one more click, when it could have been achieved by right/control-clicking the articulation on the main page, like SAM, Cinematic Studio, etc. Well, that's what comes from not using their player very much (though I do like it!). That and poor eyesight. Actually, now I will have to use it more, as I bought the BBCSO and Abbey Road, so this will help immensely.

But, ultimately, I simply cannot get on with separating out things like this. I'm from a more "classical" background, so I want to be able to keep a performance coherence across the various articulations, _in one region_, and not split them up clumsily like this. I'm using orchestral libraries for realism, not pads or fillers. That I can do elsewhere. Even though I do write hybrid music for games/TV, I'm pretty strict about orchestral faithfulness, etc., whenever possible.

I guess you could do the performance all in one keyswitching region, copy it to, e.g., a staccato patch, then mute the appropriate longs, and vice versa, etc. Horrible and time-consuming, imo, but thank God for copy and paste in this respect. In the end, it's all about the best compromise for your own workflow.

Thanks once again for that info.

-w


----------



## William The Concurer (Jul 13, 2022)

William The Concurer said:


> So keyswitching is now redundant, in this case. I thought it was such a great way of working. But there are some libraries that will not disable/unload the shorts and longs and other articulations, so they cannot be separated out in one instance.The Spitfire "player" is one. This means you're having to load up the full set of articulations, but only use the longs /shorts/FX, depending on its relevance. Isn't this a total waste of resources and memory?
> 
> In the past I've done many TV things where I'd stem out the high and low instruments, but never this longs, shorts thing. Other than the reverb issue for longs and shorts I see no point, and in many cases the built-in mic setting will take care of it anyway.
> 
> ...


This thread is well timed. I'm about to start a major, transformative template, but I'm struggling with the concept of splitting up the articulations like this. In the past I'd separate highs and lows, but nothing more than that.

I'd recommend this video from Spitfire's Paul Thomson. It's extremely clear. I've spent a lot of time researching, and his approach is the most similar to the way I'd like to work. His not using summing stacks was a surprise, until he explained why. Also, the track-stack approach results in a different way of working with FX bussing and subgrouping: a track stack is, in effect, a subgroup.



-w


----------



## Spid (Jul 13, 2022)

I'm trying to build my first template, so I'm gathering all the inspiration I can to find the workflow that works best for me.

So what I'm trying to do right now is to have big folders for the main instrument categories, so when I collapse everything I can quickly navigate to the track I need, just like searching by instrument category on the old workstation keyboards.






Then, inside each category, I would have the different instruments of each library. I would build each track using multi-outs to have the different articulations on different outputs.






And I'm building custom articulation sets to have all articulations available without using keyswitches if I don't want to, but also to make copy/paste easier between tracks.

So if I want a specific articulation, I can just select its track, and if I want all of them, I go to the main track that hosts the plugin (with all MIDI channels).






So in the piano roll, I can easily change the articulation on a phrase, and it changes the MIDI channel and articulation, and therefore the audio output changes too.
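Conceptually, that articulation-set routing can be sketched like this. This is only an illustration of the idea, not Logic's actual internals; the articulation names and channel assignments are invented for the example.

```python
# Sketch of what an articulation set effectively does here: each
# articulation is rerouted to the MIDI channel where that patch is loaded,
# so one track drives longs, shorts and SFX on different channels (and
# therefore different multi-outs and channel strips).

ARTICULATION_SET = {
    "sustain":  1,   # channel 1: longs
    "legato":   1,
    "staccato": 2,   # channel 2: shorts
    "spiccato": 2,
    "fx_rips":  3,   # channel 3: SFX
}

def route(note_events):
    """Stamp each note with the channel its articulation lives on."""
    return [dict(ev, channel=ARTICULATION_SET[ev["articulation"]])
            for ev in note_events]
```

Changing a phrase's articulation in the editor is then just changing the lookup key; the channel, and with it the audio output, follows automatically.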

And in the Mixer, I can route different articulations to different buses with different reverbs. I haven't done all the routing yet, it's still a work in progress...







But you can see that when different notes use different articulations, the audio comes out on separate channel strips.

That's the idea... at least for now. As I said, it's a work in progress, and I'm trying to gather the best ideas from all your feedback, weigh the pros and cons, and figure out what the best layout is for me... YMMV.


----------



## Spid (Jul 13, 2022)

PS: I forgot, in the editor I can set the MIDI notes to be colored by articulation:






So when it's time to dig deeper into editing to make something more realistic, I can easily see what I'm doing... I hope.


----------



## storyteller (Jul 13, 2022)

I understand that a lot of mix engineers like to have shorts and longs separately for strings, but I tend to think that a lot of the "blah-ness" of modern music is due to concepts like this which feeds into over-mixing. This isn't a knock on mix engineers. In many ways, they are better than ever. Mixes are more "perfect" than ever. But I think that when everyone strives for achieving this group-think concept of perfection, the character and soul of a recording can be lost. I mean, this is just my opinion on how the material to be mixed is presented to the mix engineer. The most authentic and moving scores and orchestral recordings to me were done without separating shorts and longs. At most, section mics exist... So the articulations are presented as they are heard rather than micro-managing articulations separately for the mix. 

Anyway, this might be an unpopular opinion, but it is a long way of saying I like to keep shorts and longs on the same track with the same settings. If the shorts are too blurry for a song, then I might alter the entire string section mix to give it more clarity or automate the mic positions during the song. This might make the longs a little drier, but that is the beauty of getting a section to feel right for the song.


----------



## Spid (Jul 13, 2022)

I'm not a mix engineer, I have some notions of course, but I know I'm not a mix engineer, so I don't know what would be best. If I follow my own reasoning, I would tend to think that the musician is in the same room regardless of whether they play long or short notes, so I would tend to keep the same room/reverb for both.

That was actually one of the things I was wondering about before starting my template... because I could just keep one track for all articulations and have them all in the same articulation set to pick from... but since I'm no expert and not a mix engineer, I'm keeping the option to route them to different buses if I want a different mix later on.

But by doing it the way I do right now (things might change later), I can just select the output bus (Main/Long, or Short if I want a dedicated Short bus), the Main also being the instrument stack.

As I said, this is where I'm at right now... experimenting and building articulation sets... I might change my mind later on. I'm just trying to keep the best of both worlds for now.


----------



## blaggins (Jul 13, 2022)

I've struggled with this same question. A lot of the advice on this forum is to split longs and shorts. Being a beginner but wanting to do things the "right way" from the get-go, I have implemented split longs/shorts on all my orchestral instruments. I keep one track per instrument with articulation maps in Cubase, and splitting longs/shorts is honestly a major PITA. It doubles track counts and creates a bunch of complexity in routing internally in Kontakt (for multis), etc.

The whole thing seems to hang on the concept that you want to add more reverb to your longs than your shorts, but generally I'm not even doing that in my mixes! (Let me tell you, my mixes have much bigger problems than tweaking reverb sends between longs and shorts will ever solve...) But the advice to do this is repeated a lot (hell, even Paul says it in the video in this thread).

I do often wonder how much of a difference it really makes though (like, is it worth it for ME), and if it's perhaps more situational and dependent on the broader context of the orchestration. Do folks ALWAYS add more reverb to their longs, or is it more an occasional thing for a bit of extra here and there?

Lately I've also been wondering if I'm going about this all wrong. I have "fixed" microphone levels, uniform across all articulations, but then the ability to play with reverb send amounts for longs vs. shorts. If I vary the microphone balance, I vary it for all articulations at once. However, many VSTs allow you to balance mics differently per articulation. It occurs to me that I could add more outrigger or ambient mic to my longs vs. my shorts and "bake in" that balance at the VST instead of playing with the reverb send. Anyone doing things this way?


----------



## Spid (Jul 13, 2022)

@tpoots, you actually summed up my situation very well. I share the same concerns. My main concern is actually the number of tracks, because it's limited to 1,000 in Logic, and with 2, 3 or more tracks per instrument it can grow very quickly. I'm already at 700+ tracks and I haven't even added 5% of the libraries I originally wanted in my template... So I'm worried about doing it the "wrong way".

When I started, I wanted to get it "right" immediately so I wouldn't have to redo it over and over... but I'm not even done with 3 or 4 library brands and I've already noticed I need to redo some, because I messed up the output buses.

And lastly, it takes me forever to build this template. Maybe I'm just slow, but it's really a long process I don't wish to redo many times in the future.

So overall, I wonder if I shouldn't try something else... just keeping one track per instrument, but having it loaded on different MIDI channels in Kontakt/Sine/etc., with multiple outputs. Then if I later need a different mix for shorts or such, I could easily do it on the different outputs (I would only need to press the + on the channel strip in the Logic mixer), and yet keep a reduced number of tracks.

And in the piano roll editor, if I want to work on one articulation, I can always select notes per articulation, or per MIDI channel (which would match 1: Long, 2: Short, 3: SFX...).

I'm just thinking out loud here... as said it's brainstorming time now while I'm building the template. So the more comment and feedback I can get, the better. I really appreciate everyone's inputs here. Even if I don't pick your way to do it, it doesn't mean your comment was not useful. So thank you all!


----------



## storyteller (Jul 13, 2022)

Spid said:


> @tpoots you actually summed up so well my situation. I share the same concerns. My main concern being the number of tracks actually, because it's limited to 1,000 in Logic, and by having 2, 3 or more tracks per instrument, it can increase very quickly. I'm already at 700+ tracks and I didn't even add 5% of the libraries I wanted to add in my template at first... So I'm worry of doing it the "wrong way".
> 
> When I started I wanted to get it "right" immediately to not have to redo it over and over... but I'm not even done with 3 or 4 library brand that I already notice I need to redo some because I messed up the outputs buses.
> 
> ...


I have similar conversations with users of OTR when they are setting up their templates. I built OTR with track templates for every type of template design possible. However, the choice of how to build a large template really boils down to your priorities. For example, you could easily have 33 or more tracks dedicated to one instance of Kontakt (16 MIDI, 1 VI, 16 stereo returns). Or you could have one track per instrument if you take the time to set up articulation maps and such.

*If your priorities are...*
A lot of unique instruments? You will need the one-track-per-instrument approach. Individual articulations per track? You will severely sacrifice the number of instruments to keep a realistic track count in your template. That is just the simple mechanics of a template.

My template is about 1400 total tracks... 1200+ of those are individual instruments. I use a one-track-per-instrument approach with all articulations set up in @tack's Reaticulate for Reaper (the same thing as expression maps in other DAWs). Those instruments sit on 4x VEPro servers (3x 128GB, 1x 32GB) with all articulations and mic positions loaded. All instruments are mapped to standard MIDI CCs for my template. The same template designed with an articulation-per-track approach would probably be 8000-10000 tracks... which is not even remotely manageable.
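The arithmetic behind those figures can be checked quickly. The 33-track and 1200-instrument numbers are from the post above; the per-instrument group count of 7 is an assumption chosen to land in the quoted 8000-10000 range, not a number anyone stated.

```python
# Back-of-the-envelope check of the track counts discussed above.
# One multi-timbral Kontakt instance can consume 16 MIDI tracks,
# 1 instrument track, and 16 stereo returns.
KONTAKT_INSTANCE_TRACKS = 16 + 1 + 16

def template_size(instruments, groups_per_instrument=1):
    """Total track count for a given splitting strategy."""
    return instruments * groups_per_instrument

one_per_instrument = template_size(1200)   # one-track-per-instrument approach
per_articulation = template_size(1200, 7)  # ~7 groups each: longs, shorts, FX...
```

With 1200 instruments, one track each keeps the template at 1200 tracks, while even a modest seven articulation groups per instrument pushes it to 8400, past what Logic's 1,000-track ceiling or any human navigation can handle.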

Anyway - hope that sheds some light on it from another user's perspective.

*EDIT:* Oh, and I do keep all tracks in my template disabled (enabled on the VEPro side) until they are used. This keeps the CPU load down and lets me keep a buffer of 256 to 384. Otherwise, having that many enabled tracks will severely (and unnecessarily) tax your CPU.


----------



## Spid (Jul 13, 2022)

Yeah, it helps... I started with a track per group of articulations, for instance Long and Short, but then Trills, Swells, Runs, SFX, etc... and I noticed I was easily getting to 500 tracks for just one library such as the OT Berlin Series. So I would eventually end up with the 8k-10k tracks you describe if I tried to do that with all my libraries.

So I started on a middle ground with Long, Short and SFX... but the more we talk, the more I realise that I will probably change my template to one track per instrument. One sure thing: I'm learning to use Articulation Sets in Logic (like expression maps in other DAWs) and I like the way they work. And I'm experimenting *right now* with different MIDI channels and outputs for the different articulation groups (Long, Short, SFX...), all contained in one track, with the player in multi-out mode, the library loaded on different MIDI channels, and custom articulation sets...

Right now I keep everything in the DAW with unloaded tracks, but eventually, when I have enough money for more computers, I will probably add a couple of VEP servers and move some stuff out of the DAW to keep it permanently loaded there. I'm doing it this way because 1) I don't have the money to buy a couple of Mac Studios/Minis now... but also 2) to get more experience with each library and see which ones take too long to load and need to be on a VEP server, and which ones I can keep on my internal 8TB SSD...

It's all trial and error, I guess... I'm slowly learning.


----------



## NuNativs (Jul 13, 2022)

You can always do one track per instrument with articulation maps set up to keep the track count down, then duplicate select tracks and re-route them where you feel the shorts need to be separated out for processing, if timing becomes an issue you can't look past.


----------



## Spid (Jul 13, 2022)

NuNativs said:


> You can always do a 1 track per instrument with articulation maps setup to keep down track count, then duplicate select tracks and re-route where you feel the Shorts need to be separated out for processing if timing becomes an issue you can't look past.


Actually, if you do it with multi-channel, you don't even need to duplicate anything; you can permanently route different articulation groups to separate output buses, so you can apply (or not) different treatment/reverb... while having only one track per instrument.

I think that's gonna be my final choice... one MIDI track with an articulation set and multi-outs for the different articulation groups. Easy to navigate, easy to copy/paste MIDI notes between instruments, no keyswitches lying around, and yet different outs on different channel strips...

I'm gonna dig a little bit more into this solution and redo part of my template to see if it makes sense or not when it's time to use it.


----------



## William The Concurer (Jul 13, 2022)

Spid said:


> yeah, it helps... I started with a track per group of articulations, for instance, Long, Short, but then Trill, and Swells, and Runs, and SFX, etc... But I noticed I was easily getting to 500 tracks for just one library such OT Berlin Series. So I would eventually end up in the 8k 10k tracks like you describe if I try to do so with all libraries.
> 
> So I started to do "middle-ground" by having Long, Short and SFX... but the more we talk, the more I realise that I might probably change my template to have 1 track per instrument. One sure thing, I'm learning to use Articulation Set in Logic (like Maps in other DAW) and I like the way it works. And I'm experimenting *right now* the different midi channel and output for different articulation group such Long, Short, SFX... but all contained in 1 track with the player in multi-outs and the library loaded on different midi channels, with custom articulation sets...
> 
> ...


Trust me, you don't need farms these days. Most people are ditching them because a single Mac Studio will demolish most of them in terms of power. And don't underestimate the new Mac Minis either; incredible power and value. Some people aren't even using VEPro anymore; it's one more layer of trouble you don't need. We're moving into a whole different era of power in the coming five years. In the end, it's the quality of the music that counts. Right?

From what I see, you are definitely going in the right direction, and knowledgeable enough to accomplish it.


----------



## William The Concurer (Jul 13, 2022)

storyteller said:


> I understand that a lot of mix engineers like to have shorts and longs separately for strings, but I tend to think that a lot of the "blah-ness" of modern music is due to concepts like this which feeds into over-mixing. This isn't a knock on mix engineers. In many ways, they are better than ever. Mixes are more "perfect" than ever. But I think that when everyone strives for achieving this group-think concept of perfection, the character and soul of a recording can be lost. I mean, this is just my opinion on how the material to be mixed is presented to the mix engineer. The most authentic and moving scores and orchestral recordings to me were done without separating shorts and longs. At most, section mics exist... So the articulations are presented as they are heard rather than micro-managing articulations separately for the mix.
> 
> Anyway, this might be an unpopular opinion, but it is a long way of saying I like to keep shorts and longs on the same track with the same settings. If the shorts are too blurry for a song, then I might alter the entire string section mix to give it more clarity or automate the mic positions during the song. This might make the longs a little drier, but that is the beauty of getting a section to feel right for the song.


Indeed, you make some good points. There’s a lot of blah-ness, for certain, but there’s another, “darker” reason for wanting as many stems as possible.


When I was first doing TV, the word "stems" didn't exist. Stereo mixes were it, but then submixes started creeping in; the forerunner of stems. You know: take out the melody and the busy stuff. I can understand the desire for different mixes (and even stems), mainly to keep out of the way of the v/o, etc. But now production companies can mix and match stems from _entirely different cues_ and create new ones, thus amassing an enormous library of music without having to keep hiring composers.


This is a development from the days when production companies would avoid hiring composers for new cues and instead use existing music in shows it was not originally intended for. I'm still getting royalties from modern TV shows that are using my music from around 18 years ago. So, as long as they're honest, the royalties will keep coming in from way back.

Generally, in popular TV these days, one doesn't even sync to picture. I haven't done it for years. I used to have a VHS all synced up with Unitor 8s/MTC, but now it's: "Yeah, just give us a bunch of 2' cues under different categories". You know: Tension, Drama, Comedy, Sad, Happy. Ugh. Unless it's drama, of course. Then get your stems at the ready.

Anyway, stems are not all about technical considerations, imo. I'm definitely with you about keeping shorts and longs, and whatever else, on the same track. I just don't share your computer-farm philosophy, lol!


----------



## Spid (Jul 13, 2022)

Yeah, I've got a 16" MBP with M1 Max, 64GB and 8TB as my main computer here, so I can already see how far I can go without any use of VEP. I messed around with VEP in the past, but as you mention, it's two additional layers of potential issues... I count two because of routing and hosting, two things that can go wrong, not just one.

On the other hand, I remember watching a video where Harry Gregson-Williams mentioned he got a $100k loan from Hans Zimmer to purchase 27 Roland S760 samplers with the 32MB expansion, so he could load the same orchestral samples Hans was using at the time... and by now, I'm sure Hans is using 20-ish computers.

We have this tendency that once we can do more with one computer, we start to "need" more to run the new, bigger samples. I'm old enough to have seen those evolutions, starting on Atari, moving to PC, and then to Mac with VEP, etc... So I tend to think that when we can get 1TB-RAM laptops, we will need 2 to 4 more VEP servers just to have 4x 1TB of RAM to load more libraries, bigger samples, maybe more CPU-hungry systems for a "Virtual Violinist running on AI" or whatever else the future brings...

Trust me, whether to use VEP, and whether to separate Short/Long tracks, were probably my two main questions when I started building my template... and a couple of weeks later, I'm still not sure I've found the perfect solution... all trial and error. I guess I need to make some mistakes to learn along the way; it's part of the journey.

I come from a rock/pop/urban music background, so no orchestral at all. Until recently, I only knew the "Sustain and Staccato" articulations... even legato was an obscure word for "mono" in my mind. So I'm slowly learning all the articulations and how they sound... for instance, I discovered the "Bartok" articulation, a sound I heard once and wanted to reproduce one day; I just didn't know at the time it was called Bartok. So I guess there's no shortcut to success; you need to go through that period of discovery and trial and error to figure out what you want and what works for _your_ preferences.

I'm just trying to gather as much information as I can... One sure thing is that I'm not gonna record my music with a real orchestra, and I won't compose for any commercial project, TV or such... I'm not interested in the "business side" of the music industry anymore. I'm just doing music as a hobby now, that's it. So I can already drop some considerations (like laying out the template in score order; instead I'll pick my favorite order based on General MIDI habits... -ish).


----------



## Spid (Jul 13, 2022)

William The Concurer said:


> And don't underestimate the new Mac Mini's either; incredible power and value.


Actually, if we could get 64GB, or even better 128GB, of RAM in a Mac Mini, even with the "basic" M1 CPU (no M1 Max or M1 Ultra), that would probably be enough for me. Right now, though, 16GB seems a little small for a VEP server...


----------



## William The Concurer (Jul 13, 2022)

Spid said:


> yeah, I got a MBP 16" with M1 Max 64GB and 8TB for my main computer here, so I can see already how far I could go without any use of VEP. I messed in the past with VEP, but as you're mentioning, it's 2 additional layers of potential issues... I count 2 because of routing and hosting, two things that can go wrong, not just one.
> 
> In the other hand, I remember I watched a video of Harry Gregson-Williams where he mentioned he got a loan from Hans Zimmer of $100k to purchase the 27 Roland S760 sampler with 32MB extension so he could load the same orchestral samples Hans was using at the time... and by now, I'm sure Hans is using 20-ish computers.
> 
> ...


A very interesting post. I remember that Gregson-Williams story. I think he might have got the idea from James Newton Howard, who also had a "farm" of Roland S-760s. It was Newton Howard who came up with the "trick" of layering sordino strings with normal bowing. It's something they do with real strings, and it's the most magical sound. Newton Howard was recognized as the best mockup artist of the time. I love his scores, too. How come he never won an Oscar?!

Bartok. You, me, or anyone living (even John Williams) will never be like the Bartoks or Stravinskys. Go listen to The Rite of Spring, or Stravinsky's neoclassical period stuff, like his Symphony in Three Movements. Or Bartok's The Miraculous Mandarin. It's like heavy metal, but scarier, yet prettier.

And while we're on this, go listen to Steve Reich, who is still alive. His Three Movements (no coincidence) is a mind-blowing example of live players performing rhythmic phasing. You'll go quiet, I'm certain, especially if you listen on good headphones. It's humbling to hear true genius, and orchestras that can play this stuff. You'll never go back to your computer and see it in the same way.

Btw, I remember that computer era too. The Atari (which I had for a while, before getting a Mac G3) was the first machine to ship with 1MB of RAM(!), and even the pros said the Atari's MIDI timing was tighter than the far more expensive Mac. I have to say the Atari was tight as a drum. I loved it. You could even get these little apps that would make it look like a Mac. RIP the Atari...

I'll stop now, because you got me on my passion. It sounds like you'll make it. Seriously, never give up on your big dream, because there are a lot of smaller dreamers being successful.

I wish you well.


----------



## William The Concurer (Jul 13, 2022)

Spid said:


> Actually, if we could get 64GB or even better 128GB RAM in a MacMini, even if it's with the "basic" M1 CPU (no M1 MAX or M1 Ultra), that would probably be enough for me. However, right now, 16GB seems a little bit small for a VEP server...


Yes, I always thought that. The Mac Studio, if you look at it, is a supercharged Mini, at a price. It even physically resembles it.


----------



## tc9000 (Jul 13, 2022)

You seem rather knowledgeable, so this may not be of value, or you may be well aware of it already, but can I recommend AKD:



https://www.youtube.com/c/AnneKathrinDernComposer/videos


----------



## quickbrownf0x (Jul 13, 2022)

Not sure if this is considered a cross-post and whether it's allowed or not, but the other day a bunch of us had a very similar discussion, so here are my two cents on building a template, how to handle reverb, etc.:

Waffle on about template design 1/2

Waffle on about template design 2/2

I can definitely see why you'd want to split out shorts and longs, but I'd say it's also about the type of sound you're trying to achieve. I tend to mix and match, use (super)dry and wet instruments at the same time, but if I want that live orchestral, more blurred sound then who cares, right? Unless you're being asked for specific stems, of course.


----------



## Spid (Jul 14, 2022)

No problem at all for any cross-posting here, I appreciate all feedback. So I’m gonna check all links provided.

The only downside I found so far with using only one track for all articulations is that the articulation map/set can become very crowded. For instance, if I check the OT Berlin Series and I want to make a track for Violins I, I can end up with close to 50 articulations if I add them all on the same track… so the articulation drop-down menu becomes very long and requires scrolling through articulations.

But then comes another issue I haven’t solved yet: I’m trying to figure out how I could change and select articulations. There are many choices out there, from a separate keyboard, to launchpads, to an iPad with TouchOSC, etc… Right now, I’m not sure. Ideally I would love to find one system that would work across all libraries. So when I want Sustain, I know the button/key/pad to press to get Sustain, regardless of whether it’s OT, or Spitfire, or NI, or VSL, or any future library I might add…

I thought about using an 8x8 pad at first, mainly if we could set a different color per pad, but I’m not sure we can have that much customization with off-the-shelf products. I know an iPad could be a great solution for that, and I’m already using an iPad Pro 11” beside my MacBook Pro… but it means I would have to spend a LOT of time customizing all the maps, layouts, MIDI CCs/notes, etc… I’m kind of a perfectionist, so I want things to be “perfect” (that’s often my biggest personal downfall, but a very great quality when you work in R&D…).

I’m trying to figure out if there’s a way to do it differently and make it easier… as usual, work in progress
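One way to prototype the "one system across all libraries" idea in software before committing to hardware is a canonical articulation list plus a per-library translation table, so the same pad always means the same articulation. A minimal Python sketch; the library names, keyswitch notes, and CC numbers here are invented placeholders, not actual values from OT, Spitfire, or VSL:

```python
# Canonical pad layout: each pad is one articulation name.
CANONICAL = ["sustain", "legato", "staccato", "spiccato", "pizzicato"]

# Per-library translation: articulation -> (trigger type, value).
# All keyswitch notes / CC values below are hypothetical examples.
LIBRARY_MAPS = {
    "LibA": {"sustain": ("note", 24), "staccato": ("note", 26)},
    "LibB": {"sustain": ("cc32", 0), "staccato": ("cc32", 3)},
}

def trigger_for(library: str, articulation: str):
    """Return the MIDI event this library expects for a canonical articulation,
    or None if the library doesn't offer it (the pad could light up red)."""
    if articulation not in CANONICAL:
        return None
    return LIBRARY_MAPS.get(library, {}).get(articulation)

# The same "staccato" pad resolves to a different MIDI event per library:
print(trigger_for("LibA", "staccato"))   # ('note', 26)
print(trigger_for("LibB", "staccato"))   # ('cc32', 3)
print(trigger_for("LibB", "pizzicato"))  # None
```

The point of the sketch is that only the translation tables grow as libraries are added; the pad layout, and your muscle memory, stay fixed.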


----------



## samphony (Jul 14, 2022)

May I suggest writing the music you want to achieve and deriving your template organically from the process, growing with your experience?


----------



## Spid (Jul 14, 2022)

William The Concurer said:


> Btw I remember that computer era, too. The Atari (which I had for a while, before getting a Mac G3) was the first machine to support 1GB of RAM(!), and even the pros said the Atari's MIDI timing was tighter than the far more expensive Mac. I have to say the Atari was tight as a drum. I loved it. You could even get these little apps that would make it look like a Mac. RIP the Atari....


When I started computer music, I had an Amiga with the old soundtrackers, and I quickly shifted to the Atari for its great reputation for MIDI. It was with Steinberg Pro24, and then I changed to the Atari Mega ST; I had the external 30MB hard drive… it was serious stuff at the time. I was using it with a Roland Sound Canvas SC-55 in General MIDI… that’s it. And it was my rig for the next 5 to 10 years, probably. So I didn’t have a lot of sounds and I was always using the same ones… that’s maybe why now I’m trying to get new sounds all the time. But somehow, because that’s all I had, I went super deep into SysEx editing of the sounds, and I was focused on making music… those were my most productive years. Once I moved to PC for Cubase Audio, and then Cubase VST, etc… sure, I could do so much more (audio, virtual instruments, etc…), but I lost a lot of productivity dealing with the complexity of the more evolved system. But god knows I loved my Atari; the MIDI was indeed great. I had Roland V-Drums, so I could record drums in MIDI, so tight MIDI was necessary, and I never felt any lag or latency or issues whatsoever.

This is pretty much what I’m trying to replicate now, but using a MacBook Pro for my DAW, with a template with “everything under the sun” in it so I can just pick a track and play/record… no setup required, no adjustments to make, no MIDI learn, no map to pick, etc… And instead of having a Sound Canvas, I’m gonna use my internal sounds for now, and later on I might get a Mac Studio/Mini with VEP to use as a big ROMpler, like I did with my Sound Canvas in the past. Except there are way more options today, and I’m trying to figure out the best choices to fit my needs and the workflow I’ve been chasing since I left the Atari. I hope that makes sense…


----------



## Spid (Jul 14, 2022)

samphony said:


> May I suggest writing the music you want to achieve and deriving your template organically from the process, growing with your experience?


I should, but somehow I’m a bit OCD, and in my mind I need everything to be done perfectly, or I can’t do it… I know, I’m just making excuses here. It’s a good thing it’s just a hobby now and I have no real obligations, no project to deliver on time, no real need to write an album or whatever… but also because of that, I have no pressure to go faster and write more music. Actually, I had a music theme I did a long time ago using just a Symphobia multi patch, no multitrack, not even recorded to a click, no editing, just a dictaphone-style recording to keep the idea for later… and I always thought, once I have my new computer and my template done with all the new sounds, I will redo this music… so now I’m trying to get everything ready… spending more time on technical problems than on musical writing… I never said I was perfect.


----------



## quickbrownf0x (Jul 14, 2022)

Spid said:


> The only downside I found so far with using only one track for all articulations is that the articulation map/set can become very crowded. For instance, if I check the OT Berlin Series and I want to make a track for Violins I, I can end up with close to 50 articulations if I add them all on the same track… so the articulation drop-down menu becomes very long and requires scrolling through articulations.


True - would be nice to have some sort of filtering/search option. 



> Ideally I would love to find one system that would work across all libraries. So when I want Sustain, I know the button/key/pad to press to get Sustain, regardless of whether it’s OT, or Spitfire, or NI, or VSL, or any future library I might add…


Yeah, I've been down that rabbit hole a few times. Got pretty close too, using Lemur. But then I found that in real use I actually never used my tablet to switch articulations; I just use my mouse a lot more than I thought I would. So after a while, I just gave up on this idea. 

Not saying you should give up too, but it might be smart to start small before you spend hours debugging custom software or learning how to program one specific bit of kit.



> I’m kind of a perfectionist, so I want things to be “perfect” (that’s often my biggest personal downfall, but a very great quality when you work in R&D…).
> 
> I’m trying to figure out if there’s a way to do it differently and make it easier… as usual, work in progress


Been there too. I guess a little lean and Agile thinking might help.


----------



## Akarin (Jul 14, 2022)

Different track delays, different reverb, different compression settings... I kinda treat longs and shorts like different instruments :-p


----------



## Snarf (Jul 14, 2022)

If you are splitting articulations for reverb reasons, you might be overthinking it - depending on your style of music. Consider the following example, which was done with just one hall reverb:



However, as you can see, Blakus still uses individual tracks for articulations for a much more important reason: layering articulations to stitch together a musical phrase. This has a much bigger impact on achieving a realistic/expressive sound than slight reverb differences between articulations.


----------



## William The Concurer (Jul 14, 2022)

quickbrownf0x said:


> Not sure if this is considered a cross-post and allowed or not, but the other day a bunch of us had a very similar discussion and so here are my two cents on building a template, how to handle reverb, etc;
> 
> Waffle on about template design 1/2
> 
> ...


Thank you kindly for those valuable links; a very valuable resource. I believe there is a distinction between the requirements of a music engineer for, say, TV drama, and everything else. The "everything else" meaning no requirement for stems, because one can use articulation sets/keyswitches on one region/track, then route things out to separate outputs that are non-TV-industry-standard, e.g. for an album release, or low-level reality TV. Seriously, who will want stems for Kim Kardashian?!


You don't need the kind of stems for an album release that the TV/film people will require.


Spid said:


> When I started computer music, I had an Amiga with the old soundtrackers, and I quickly shifted to the Atari for its great reputation for MIDI. It was with Steinberg Pro24, and then I changed to the Atari Mega ST; I had the external 30MB hard drive… it was serious stuff at the time. I was using it with a Roland Sound Canvas SC-55 in General MIDI… that’s it. And it was my rig for the next 5 to 10 years, probably. So I didn’t have a lot of sounds and I was always using the same ones… that’s maybe why now I’m trying to get new sounds all the time. But somehow, because that’s all I had, I went super deep into SysEx editing of the sounds, and I was focused on making music… those were my most productive years. Once I moved to PC for Cubase Audio, and then Cubase VST, etc… sure, I could do so much more (audio, virtual instruments, etc…), but I lost a lot of productivity dealing with the complexity of the more evolved system. But god knows I loved my Atari; the MIDI was indeed great. I had Roland V-Drums, so I could record drums in MIDI, so tight MIDI was necessary, and I never felt any lag or latency or issues whatsoever.
> 
> This is pretty much what I’m trying to replicate now, but using a MacBook Pro for my DAW, with a template with “everything under the sun” in it so I can just pick a track and play/record… no setup required, no adjustments to make, no MIDI learn, no map to pick, etc… And instead of having a Sound Canvas, I’m gonna use my internal sounds for now, and later on I might get a Mac Studio/Mini with VEP to use as a big ROMpler, like I did with my Sound Canvas in the past. Except there are way more options today, and I’m trying to figure out the best choices to fit my needs and the workflow I’ve been chasing since I left the Atari. I hope that makes sense…


That's pretty much my memory of it. I had the 1040, with a bunch of those C-Lab Unitors hanging off the end like black bricks. I went to Logic and never changed. I also remember starting with a small external drive, then I bought a 1GB Micropolis. It was an enormous grey breeze block of a thing and I think it cost 500 quid!! In those days I never dreamt of the way things would be now. Terabytes on a little fob! "Terabytes" was a non-existent word then.

I was acquiring a lot of synths around that time, and ended up getting the Mac G3, with Unitor 8's. I had 5 of them, in the end, and still have them in storage!! Beautiful blue things.

Nothing seems as exciting as those easy days of tech. I can't seem to get worked up over any computer now. Even a Mac Studio, which is monstrously powerful.

Thanks for sharing that great backstory. It's important stuff.


----------



## tack (Jul 14, 2022)

Snarf said:


> Blakus still uses individual tracks for articulations for a much more important reason: layering articulations to stitch together a musical phrase.


You can still do that with track-per-instrument, but the workflow is certainly different (layering across MIDI channels instead of tracks). Generally in practice I find, at least for orchestral mockups, you bounce a single line between articulations more often than you layer, except perhaps where divisi is involved.

When I find I want to layer for sonic reasons (i.e. not orchestration), I'll pull in another copy of the patch (or a patch from a different library) on a different channel and layer on the same track. I'll only go with a dedicated track if I want to postprocess or automate the VIs differently. For me that's the exception rather than the norm.

But these workflow preferences are also going to be skewed based on the features and limitations of your DAW. I use Reaper with Reaticulate. If this solution had weaker articulation management, I'd probably prefer track separation. If it had more user-friendly flexible signal chain processing on a given track, I'd probably _never_ add tracks for layering.
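The channel-layering idea above can be sketched concretely: each articulation patch listens on its own MIDI channel, and layering means cloning a note event onto a second channel so one recorded line drives two patches on the same track. A rough, DAW-agnostic Python sketch; the channel assignments are arbitrary examples:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class NoteEvent:
    # Minimal stand-in for a MIDI note event (time in ticks).
    type: str      # "note_on" or "note_off"
    note: int
    channel: int
    time: int

def layer_onto(events, src_ch, dst_ch):
    """Clone note events from one channel onto a second channel,
    so a single line triggers two patches on the same track."""
    out = []
    for e in events:
        out.append(e)
        if e.channel == src_ch:
            out.append(replace(e, channel=dst_ch))
    return out

# One sustained note on channel 0 (sustain patch), layered with channel 1
# (hypothetically a spiccato patch reinforcing the attack).
line = [NoteEvent("note_on", 60, 0, 0), NoteEvent("note_off", 60, 0, 480)]
layered = layer_onto(line, src_ch=0, dst_ch=1)
print([(e.type, e.channel) for e in layered])
# [('note_on', 0), ('note_on', 1), ('note_off', 0), ('note_off', 1)]
```

This mirrors the workflow trade-off in the post: the layering lives inside one track's event list rather than across tracks, at the cost of per-layer processing and automation.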


----------



## jbuhler (Jul 14, 2022)

Snarf said:


> If you are splitting articulations for reverb reasons, you might be overthinking it - depending on your style of music. Consider the following example, which was done with just one hall reverb:
> 
> 
> 
> However, as you can see, Blakus still uses individual tracks for articulations for a much more important reason: layering articulations to stitch together a musical phrase. This has a much bigger impact on achieving a realistic/expressive sound than slight reverb differences between articulations.



I agree with this for the most part, and it’s true that layering articulations is sometimes the only way to approximate a line given the way the libraries are built, but it is a convoluted solution: hard to scan from just looking at the MIDI, and so subject to error, and hard to go back to correct and adjust.

Treating the articulations as different instruments, as @Akarin suggests, is going to end up not with an integrated conception of the real instrument, but with a treatment that divides the instrument against itself. There can be good reasons for doing that, of course, and sometimes it is indeed the least bad solution; you can also write effectively if you write to that situation, but a lot of basic capabilities of the real instrument are lost in the process. So keep in mind that this is very much a special case. (It has, however, become quite ubiquitous in media music. I’m not disputing Akarin’s advice, which is sensible for media composition, but only pointing out that it comes with inherent limitations.)

If you write a lot of lines that include a mix of articulations, which is the case for most orchestral music not written for samples, you will have to work with the partial solutions that the current VIs allow. 

I think the basic difficulty is captured best in vocal libraries and choirs, where the effect of the reduction and parceling out of articulations is easiest to see and hear. Working with voices beyond vowels or random syllables will quickly yield a convoluted mess of interlocked tracks—so many tracks! (ETA: and the MIDI-channel variant that @tack helpfully suggested is only marginally useful in reducing the complexity here)—that can be mitigated somewhat with keyswitching and word builders, but only to a limited extent. The difficulty is not just in the writing but in keeping track of where everything is as you adjust the details during revision. Instrument tracks, especially for ensembles rather than solo instruments, are a bit less demanding (largely because ensemble playing is highly standardized), but less so than we usually want to admit.

Modeled instruments are a solution to this problem even if they are convoluted in their own way and still struggle with delivering an appealing tone that you want to work with. They do offer a different approach however and are often much more adept at handling lines with mixed articulations in a way that you can keep track of the whole. I don’t generally use modeled instruments myself because I can’t get past the tone but they offer the most idiomatic playing that VIs have to offer. 

These observations are not entirely off topic, in that they affect how you lay out your tracks and so also the routing. Then too, depending on the stems you are expected to deliver, you will have to conform your template to that.

But if you have no need to deliver stems for any purpose other than your own mixing, as is the case for most hobbyists, then perhaps you can work on conforming your mixing practice to what works best with your composing workflow. Or if you have hired someone to help with mixing, consult with them about what stemming is optimal for them.


----------



## quickbrownf0x (Jul 14, 2022)

Spid said:


> When I started computer music, I had an Amiga with the old soundtrackers, and I quickly shifted to the Atari for its great reputation for MIDI. It was with Steinberg Pro24, and then I changed to the Atari Mega ST; I had the external 30MB hard drive… it was serious stuff at the time. I was using it with a Roland Sound Canvas SC-55 in General MIDI… that’s it.


Ha, funny - I started out almost exactly the same.


----------



## IFM (Jul 14, 2022)

I went through this a while back and made a massive long/short template for the string section, only to decide later that it was a waste of time and slowed me down. I strive to find articulations that let me play it like it feels: a short staccato has an accent, and if I press lightly and hold, it’s a long, all on one articulation. Then I can always switch some of the notes to other articulations using the map I created. This allows a much more natural flow for me. So far I like how BBCSO and EWQL handle this.

And in the end, if I decide to hire a mixing engineer who wants the longs/shorts separate, I can route them out as multi-outs. The only argument I hear is about offsets, but often the long/short sets in the libraries I use are so close that I can use just one offset, or my old method of moving notes.


----------



## blaggins (Jul 14, 2022)

Spid said:


> I should, but somehow, I’m a bit OCD and somehow in my mind, I need everything to be done perfectly, if not I can’t do it…. I


I have a similar predisposition to wanting to get everything in my control perfect... like getting all my ducks in a row before moving on to the next step. Then again, I'm also a programmer and find building templates both satisfying and slightly therapeutic.

However, all that being said, I spent probably an entire day re-working what I have into split longs/shorts/legato articulations, with the idea that this gives me (1) control over long vs. short FX, reverb, etc., and (2) reasonably fine-grained control over track delays on the audio return channels (I still use just one MIDI channel per instrument, so I can't do it there). I figured most "shorts" have about the same delay, so do most "longs", and of course legato is its own massive delay, usually. Or at least I can pick a good middle-of-the-road value and get 95% of the way there with grid alignment. But... I just don't make use of any of it, and it's a day of my life that I won't get back, and I am really starting to question if there was any point to all that work.
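The "middle-of-the-road value" idea can be made concrete: measure or look up each patch's attack delay, group patches by family, and use the median as the single negative track delay for that family. A small Python sketch; the millisecond figures are made-up placeholders, not values from any real library:

```python
from statistics import median

# Hypothetical attack delays in ms, measured per patch (placeholder values).
PATCH_DELAYS_MS = {
    "spiccato": 55, "staccato": 60, "staccatissimo": 50,  # shorts
    "sustain": 150, "tremolo": 140,                       # longs
    "legato": 300,                                        # its own beast
}
FAMILIES = {
    "shorts": ["spiccato", "staccato", "staccatissimo"],
    "longs": ["sustain", "tremolo"],
    "legato": ["legato"],
}

def family_track_delay(family):
    """One negative track delay per family, taken from the median patch delay
    so most patches in the family land close to the grid."""
    return -median(PATCH_DELAYS_MS[p] for p in FAMILIES[family])

for fam in FAMILIES:
    print(fam, family_track_delay(fam), "ms")
# e.g. shorts -> -55, longs -> -145.0, legato -> -300
```

The median (rather than the mean) keeps one outlier patch from dragging the whole family's delay off, which matches the "95% of the way there" spirit of the post.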


----------



## JohnG (Jul 14, 2022)

One other reason to split longs and shorts that I didn’t see reading through is when you are going to layer live players on top of your samples. Depending on how many real players you have (and on their performance) it is very handy to have longs and shorts split in audio.

*Live Does Not Equal Perfect*

That “exciting chase scene” sometimes becomes a bit ragged when played live, especially if one is just learning and has inadvertently written something very hard / impossible to play. Unless you have the good fortune to be recording at Abbey Road or Sony or something similar, with the great (union) players available there, _and_ the time to go over anything rough, you may have to rely more on the samples. 

And if you’re _not_ at Abbey road, but instead are recording with a good, B-plus set of players, they still can add a tremendous amount to your pieces, but sometimes they can get uneven on the fast, intricate parts.

Separating samples into longs and shorts also helps if you don’t have very much time to edit the live recordings before mixing. If you run out of time editing the audio of that “hard charging” passage comprised of shorts at 155 bpm, you might want to lean in on the samples a bit more to keep things on point.

One more tip: if you are “sweetening” — not replacing your samples but layering some live players over them — the live players don’t have to play that loudly. They are often more accurate if they ease back on the dynamics by 10% or so. If you have longs and shorts separate, it’s easier to get to the finish line in that circumstance.

*Sample-Based Compositions — separate for Reverb Only?*

If you are only using samples, it’s a different story. I agree with whoever said that separating longs and shorts solely for reverb offers only a marginal advantage, if any, depending on the piece. Not sure it’s worth the hassle for just that. For full disclosure, I do apply different reverbs to longs and shorts in my demos, but I doubt it makes a meaningful difference. 

*What’s the Point?*

Goodness knows what kind of playback equipment is used to review demos — an iPhone on a desk? Someone else’s iPhone in a car? Buried under dialogue and SFX in a game /movie / TV show? Sometimes we spend a lot of time worrying about how the samples sound and then the way our music is ‘consumed’ is so imperfect it can feel that we wasted a lot of time.


----------



## Spid (Jul 14, 2022)

Hey @JohnG, thank you for this very enlightening post; I keep learning every day. I’m not directly concerned, because I’m just doing music as a hobby now and I want to keep it that way, so I won’t be recording real musicians, nor delivering my music to a mix engineer or a TV show/movie/game, etc… I’m just trying to learn from the pros and apply good habits, while focusing on the things that matter most for my situation. Still, all the posts here are great, because even where I’m not concerned, I learn and confirm why I’m not, and why I could be… and maybe someone else will read this thread and will be concerned. So it’s really a win-win situation. Thanks again for your input.


----------



## JohnG (Jul 14, 2022)

Thanks for the reply @Spid 

Given where you are, I would devote more energy to composing than to generating the World’s Best Template, or fiddling too much with the details of mixing. I’m not saying that anyone working with samples can ignore best practices, but there are a few things that I think take higher priority — if you already do them or know this, I apologise for stating what you may find obvious.

A few items that I think every great composer can do:

1. Play some instrument (or voice) at a professional or close-to-professional level.

2. Learn enough theory so that at least you can follow chord symbols (“changes”) and play along.

3. Learn to read music. I know there are a handful of geniuses who didn’t but they are rare.

4. Remember to enjoy yourself!

The last one is maybe the most important: enjoy yourself. I never urge people to stop doing what they love about music, or tell them they have to learn a lot of stuff they find deadly boring before they’re “allowed” to write. As one member wrote here some time ago, there is no substitute for the basic impulse to create, to play around with sound and music, so don’t be diverted from what you love by know-it-alls (myself included).

So above all keep having fun.

Kind regards,

John


----------



## Spid (Jul 14, 2022)

Don’t take it the wrong way, but I’m not interested at all in learning music theory or reading music… for me, that’s really not fun AT ALL. I find music theory boring… I tried a couple of times and could never get interested. Also, I have no desire to become a great composer, nor a great musician. I started playing drums, then moved to the keyboard to program sequences on the Atari, and since I was alone and couldn’t do audio, I then shifted to guitar… but I’m mediocre at best on any instrument now… I can noodle here and there, and I can generally record what I have in my mind, but that’s about it. I thought it would be a big issue, and somehow it never was… and once I heard Hans Zimmer saying he probably plays his computer better than he plays keyboard, I thought: oh, he’s like me then, so I guess I’m not a lost cause.

I’m a nerd, so doing templates can be fun, more fun than learning music theory… I know, to each his own, and to become a great composer it’s probably better to learn music theory than computer stuff… but it’s a matter of priority. My priority is just to learn things as a hobby now. I’ve done music at a relatively high level; I’m not interested anymore… Now music is a hobby and nothing more. I’ve learned that when you make your passion your day job, it can ruin your passion. So now I want to keep music at a hobby level, with no real expectations or goals. I’m learning this for the fun of learning it… just like I was learning MIDI SysEx programming 30+ years ago when I was using my Atari with the Sound Canvas. I only did it because I could, and I wanted to learn what I could do with it. I see templates the same way: programming articulation sets, etc… (I should also mention I’ve worked in R&D on electronic instruments, so I like the whole conception, development, and building phases of a new project.)

Now, having said that, your comment is a good recommendation for everyone who reads this thread, even if I don’t feel directly concerned, because I have no intention of even becoming a composer, let alone a great one…


----------



## Akarin (Jul 14, 2022)

jbuhler said:


> It has however become quite ubiquitous in media music. I’m not disputing Akarin’s advice, which is sensible for media composition


Oh it's definitely for media composition! I don't do anything else. I'm also always asked for separate stems for long and short strings anyway.


----------



## Akarin (Jul 14, 2022)

Spid said:


> I’m a nerd, so doing templates can be fun, more fun than learning music theory… I know, to each his own



You are not a lost cause... I make a living writing music and I can barely read it. Yet I have produced a course on template building :-p Still, the basics of music theory are, in my opinion, needed. They allow me to work much faster when developing a motif into a full-blown theme.


----------



## José Herring (Jul 14, 2022)

Spid said:


> Hey guys,
> 
> I’m still trying to figure out my first template, and I was wondering the benefit to have Short and Long folders. I understand that Shorts and Longs might have different reverb, so it could be good to have different reverb sends, but what would be the benefit of having separate folders/tracks for Short and Long?
> 
> ...


I have a track that's mostly legato, another that's mostly shorts, and another for various articulations. The reasoning is that shorts, legatos, and longs have really different offset values, so combining them all on one track can lead to timing problems. On the other hand, us classical dudes are used to switching articulations even within one line. So I figured I'd have one track that was mostly shorts, one that was mostly legato or slow-attack long notes, and one run by articulation maps for more traditional-style writing.
Until Cubase gets the ability to offset per articulation in an expression map, like DP has, that's the best way to keep the timing tight.
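The per-articulation offset feature being asked for here amounts to pre-shifting each note's start time by an amount keyed to its articulation. A sketch of that idea in Python; the offset values are invented placeholders, since real values would be measured per patch:

```python
# Hypothetical per-articulation negative offsets in ms (placeholders only).
ARTICULATION_OFFSET_MS = {"staccato": -50, "legato": -250, "sustain": -120}

def apply_offsets(notes):
    """Shift each note earlier by its articulation's offset so the perceived
    attack lands on the grid. notes: list of (start_ms, articulation)."""
    return [(max(0, start + ARTICULATION_OFFSET_MS.get(art, 0)), art)
            for start, art in notes]

# One line that switches articulations mid-phrase, all on one track:
line = [(0, "legato"), (1000, "staccato"), (2000, "sustain")]
print(apply_offsets(line))
# [(0, 'legato'), (950, 'staccato'), (1880, 'sustain')]
```

A single per-track delay can only apply one of these numbers to every note, which is exactly why mixed-articulation tracks drift in timing without per-slot offsets.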


----------



## Spid (Jul 14, 2022)

I fully agree @Akarin — over the years I’ve picked up some of the basics, like time signatures, and I can even follow a solo instrument: I can’t know exactly what note is being played, but I can listen to a song and know whether the next notes go up or down, and I can pretty much tell where the player is in the sheet… sometimes I get lost, but generally I can follow one specific instrument. But I wouldn’t say “I know how to read music”… that would be misleading. I would say I can read about as well as I can play an instrument: poorly, but well enough for my level and needs.


----------



## IFM (Jul 15, 2022)

The only request I ever had from a well-known mixer I was going to hire (until deadlines loomed) was, yes, separate stems for longs and shorts, unless it was a specific sound, which, in the case of the piece I was working on at the time, was how it was built. It was 95% shorts, but there were small, short, fast parts lasting 3 to 4 notes that sounded better with legato than staccato, and it made no logical sense to process a separate track with almost nothing on it and make it sound like it was played all at once.


----------



## Emanuel Fróes (Oct 31, 2022)

Spid said:


> Hey guys,
> 
> I’m still trying to figure out my first template, and I was wondering the benefit to have Short and Long folders. I understand that Shorts and Longs might have different reverb, so it could be good to have different reverb sends, but what would be the benefit of having separate folders/tracks for Short and Long?
> 
> ...


This is THE dilemma!

It has to do with how articulation maps relate to CC controllers and channel management. If you have different tracks, it's easier to just have the CCs on one channel for each. VEP allows you to solve this dilemma, but you still need to customize a lot of CC automation.

Basically, it is a PROBLEM! Because these DAWs are not made exclusively FOR film composers.

This is why we have this thread...



I spent many hours on this, since I am not satisfied with the common solutions. But I have no solution, beyond saying this: it has to be easier, not worse, than going to music paper and writing "subito piano, gradually going to ponticello, but staccato".


----------



## Emanuel Fróes (Oct 31, 2022)

Spid said:


> Hey guys,
> 
> I’m still trying to figure out my first template, and I was wondering the benefit to have Short and Long folders. I understand that Shorts and Longs might have different reverb, so it could be good to have different reverb sends, but what would be the benefit of having separate folders/tracks for Short and Long?
> 
> ...


The advantage of detailed tracks is being able to play them quickly. If mixing is the priority, you do that. But from a compositional standpoint it is POOR: you compose lines that are either short or long, so to speak.

The key is to be able to mix the articulations seamlessly in the same line, if you can.


So far as I know, only VEP can solve this problem extensively, by letting you send everything to CC faders. So you use one track.


The closer your DAW track resembles a staff in the score, the better.

Composing is a language, so it pays off to keep a well-known grammar, relating your composition to the written canon of orchestral works.


----------



## Emanuel Fróes (Oct 31, 2022)

William The Concurer said:


> Trust me, you don't need farms these days. Most people are ditching them because a single Mac Studio will demolish most of them terms of power.And don't underestimate the new Mac Mini's either; incredible power and value. Some people are even not even using VEP Pro either. It's one more layer of trouble you don't need.We're moving into a whole different era of power, in the coming five years. In the end, its the quality of the music that counts.RIght?
> 
> From what I see, you are definitely going in the right direction, and knowledgeable enough to accomplish it.


I see this with VEP. However, VEP is still a "macro" layer that allows a good workflow that looks more minimalistic. It is true, though, that it is one more variable for problems.

I opened a thread exactly for this. This is why it is SO important that they update and improve on stability and the "basics".

VEP allows rich macro automation, rich layering of VSTs, and channel management, but it can still be more complete on this, since as far as raw performance goes it will be useless very soon.


----------



## Emanuel Fróes (Oct 31, 2022)

JohnG said:


> One other reason to split longs and shorts that I didn’t see reading through is when you are going to layer live players on top of your samples. Depending on how many real players you have (and on their performance) it is very handy to have longs and shorts split in audio.
> 
> *Live Does Not Equal Perfect*
> 
> ...


Regarding short/long for reverb, I guess a pre-delay setting is enough, plus something like the "direct" feature, as in the IR-L reverb from Waves. It allows you to control the details of the attack without losing the color and realism of the reverb. I think one can get a similar result with sends, compression, and routing, allowing a reverb bypass only on the attack. But I haven't tried that one.

Actually i find that libraries are coming so good out of the box, just some instruments need EQ correction or some reverb. 

Not "sound as such" is the problem in my view, but the chaos of options and workflow conflicts, like this question of articulation tracks


----------



## Spid (Oct 31, 2022)

A couple of months later, here's where I am (for now).

I discovered that I would be limited by Logic Pro's 1,000-track limit, making it impossible to fit all my libraries into my master template, so I'm building my template in Cubase Pro now. Starting again from scratch…

Also, I don't have a classical background, so I don't really care about a layout that matches a real orchestra. I have a track group for my Strings Ensemble, another one for my Strings Solo, one for my Woodwinds Ensemble, another one for my WW Solo, etc…

So I tend to have separate tracks for ensemble instruments, because the phrasing is very likely to be either short or long, and I also want the ability to stack both for special effects (like reinforcing the attack of a long articulation with some Marcato or Staccato, that kind of stuff).

And finally, for solo instruments, I'm trying to get both shorts and longs on the same track with a big articulation map, so I can more easily go back and forth the way a soloist would…

That's maybe flawed logic, and it's still a work in progress, but so far that's how I've built it.

Also, I tried as much as I could to have only one track per instrument, but for some developers it was just not possible, because there would be something like 50 articulations… that's hardly manageable.


----------



## jcrosby (Oct 31, 2022)

Here's a real-world example of why separating them can make life easier: a publisher may request that you nudge up the level of shorts or longs, either as a whole or during specific sections.

I received notes to nudge up the level of my short strings in one section by 1-2 dB. Having them broken out into separate groups made this a simple one-step fix: automating the group level up 1.5 dB.

Lots of practical reasons for separating short and long strings.
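If you ever need to apply that kind of nudge outside the DAW (say, when batch-processing rendered stems), the conversion from a dB change to a linear amplitude factor is simply 10^(dB/20). A minimal sketch in plain Python, with no DAW API assumed:

```python
def db_to_gain(db: float) -> float:
    """Convert a decibel change to a linear amplitude multiplier."""
    return 10 ** (db / 20)

# A 1.5 dB boost multiplies the signal's amplitude by roughly 1.19;
# a 1.5 dB cut multiplies it by roughly 0.84 (the reciprocal).
print(round(db_to_gain(1.5), 3))   # 1.188
print(round(db_to_gain(-1.5), 3))  # 0.841
```

You would multiply each sample of the audio in the relevant section by this factor to reproduce the group-fader move.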


----------



## IFM (Oct 31, 2022)

I decided it really wasn't necessary and that it gets in the way of my writing. If I'm ever actually asked for this, I'll just print a separate output.


----------

