
To use expression maps, OR one articulation per track like A.K. Dern?

Not sure if it is madness or not, but I am currently creating a track for every articulation in BBCSO for Reaper. I will then create track templates for both instruments and sections, which can then be used in the master template. It is rather boring to do, but now that I have started I feel the need to complete it. Later on I might add in all the track delays as well.

I don't think I will be doing the same for EW Hollywood Orchestra, as I am sure the strings alone would drive me to madness!
Curious that you mention that one in particular because that's the library I was mainly thinking about, lol!
Even though I thought it would drive me to madness, and there are at least 3000+ tracks (probably lots more than that, but I don't fancy counting them all), I decided to build a complete EWHO Diamond set of track templates for Reaper, including divisi.

 
Out of curiosity, which company has so far made it easy for the customer to deal with negative track delay? No matter what patches from no matter what sample library I am using, they all contain different track delays. Sometimes I have to put in different track delays for the same articulations, depending on the arrangement, the track tempo, and the flow.

It is like a great drummer, who not only plays in time but can drag or rush and deal with micro-timing ... I find the same is necessary for the orchestra. Real orchestras perform with a conductor who makes them tight, or rush, or drag. A sample library company cannot implement conductor features on top of all these track delays. This is what we have to take care of ourselves.

Besides that, the first sample library company that takes care of the track delay problem, with either negative delay compensation or all articulations matching each other in real time, will also win the Nobel Prize, because they will have made time travel possible.
Well, I don't think any company has made that part of building a template easier, but I think that VSL and Spitfire have done a good job with building templates for their instruments, while OT libraries force you to deal with all that yourself. I would imagine that many customers would be very happy with Kontakt Multis holding ALL articulations of a single instrument, and corresponding articulation maps would be very welcome too.
And regarding the conductor analogy, I don't agree at all. In my life as a Musical Director I have never given, nor received, instructions regarding the timing of the music, understood as the way musicians should place their lines relative to the beat. As an arranger I have given instructions to the orchestra to "lighten" (con vivo) the tempo feel, but of course they will always follow the baton - or try to.
What I need from the sample library companies is an awareness of the problem, and that side could be improved, as my recent mail correspondence with one of the big names shows, where the supporter didn't quite understand my point. The dream scenario would be a defined sample delay of maybe 100ms, so a whole 1st violin track could play with precision regardless of which articulation was in play. This could perhaps be refined into a setting where you could switch this "uniform" delay on/off.
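The "uniform delay" idea boils down to simple arithmetic: pad every articulation's built-in delay up to one fixed target, then cancel that target with a single negative offset on the track. A minimal sketch, where all the per-articulation millisecond values are hypothetical examples, not measurements from any real library:

```python
# Sketch of the "uniform delay" scenario: pad every articulation so its
# total latency equals one fixed target, then cancel that target with a
# single negative offset on the track. All ms values are hypothetical.

TARGET_DELAY_MS = 100  # the library-defined uniform delay

builtin_delay_ms = {   # assumed attack delay per articulation (made up)
    "legato": 90,
    "sustain": 60,
    "staccato": 20,
    "pizzicato": 10,
}

def padding_for(articulation: str) -> int:
    """Extra positive delay so this articulation totals TARGET_DELAY_MS."""
    pad = TARGET_DELAY_MS - builtin_delay_ms[articulation]
    if pad < 0:
        raise ValueError("target must cover the slowest articulation")
    return pad

# One negative track offset cancels the now-uniform total delay:
track_offset_ms = -TARGET_DELAY_MS

for art in builtin_delay_ms:
    print(f"{art}: +{padding_for(art)} ms padding")
```

With every articulation padded to the same total, switching articulations no longer changes where the attack lands relative to the grid, which is exactly what the proposed on/off setting would toggle.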
I think one of the best moments in Anne-Kathrin Dern's YouTube video about brass libraries (14:40) was the spot where she talks about the imperfections of the libraries. I totally agree with her views on this: we need libraries with precision in both timing and tuning; any imperfection you want can easily be added by your programming.
 
Hopefully, in the not so distant future, all this underpinning AI tech could help achieve a more user-friendly workflow; until then, whatever works. Especially if you prefer your MIDI notes to be perfectly on the grid, there is no way around single-articulation tracks or middleware that deals with delay timings per articulation.

maybe one day these articulation maps or expression maps allow different delay timings.
 
the perceived timing changes with the microphone mix ...

When I need my shorts on the grid, I make a second instrument and give it a negative delay. That's good enough for me, because the RRs will have different timing anyway. E.g. SF has a tightness control; I'd need to dial that down to zero as well, and sooner or later everything sounds unrealistic. If I need my ostinatos dead on the grid, I put a synth under the shorts (to shape the attacks). Works well for me.
 
Would love to hear some examples of this with and without a synth added if you could upload something here?

Just wanna hear how well that works for you!
 
I'm not sure how many developers actually see this as a problem?

Sure, I totally understand the logic behind pre-defined track delays from a workflow perspective, especially if your final destination is a score > live orchestra. There are obvious advantages to working to the grid.

I guess it's a question of how far you go with it, though? My suspicion is that by the time you've got all the values to hand, made all the adjustments for different articulations, mic mixes, etc., and somehow wrangled it into a workable template, the time expended would dwarf the time otherwise spent just nudging things around on an ad-hoc basis. There's only so far you can go organising this stuff before you're a full-time template builder instead of writing music.

Perhaps the best approach is to get the negative delay values for the articulation/mic/library combos that you'll use frequently - a few "short ostinato" patches, for example, some key "longs" - and leave the rest to the sample gods.
 
maybe one day these articulation maps or expression maps allow different delay timings.
It more or less works in Ableton Live, using this amazing M4L device to handle your articulations.

If you use this with an instrument rack (which is only one of many ways you can use it), you can then add (positive) MIDI delays in each chain of the rack, so that in the end all articulations on the track have the same total delay (the highest built-in delay occurring among those articulations). You then set your track pre-delay to compensate for it.
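In other words, each chain's added delay is the highest built-in delay minus that articulation's own delay, and one track pre-delay cancels the common total. A small sketch of that arithmetic, using made-up delay values rather than figures from any actual library:

```python
# Sketch of the instrument-rack compensation described above: give each
# chain a positive MIDI delay so all articulations share the same total
# latency, then cancel it with one negative track pre-delay.
# Built-in delays are hypothetical values in milliseconds.

builtin_ms = {"legato": 250, "sustain": 120, "spiccato": 30}

max_delay = max(builtin_ms.values())  # the slowest articulation (250 ms here)

# Positive MIDI delay to add in each chain of the rack:
chain_delay_ms = {art: max_delay - d for art, d in builtin_ms.items()}

# Single pre-delay on the track cancels the now-uniform latency:
track_predelay_ms = -max_delay

print(chain_delay_ms)      # {'legato': 0, 'sustain': 130, 'spiccato': 220}
print(track_predelay_ms)   # -250
```

This is the same trick as a fixed "uniform delay", except the target is derived from the slowest articulation actually loaded in the rack rather than being library-defined.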
 
I’m going to feature-request that Steinberg add an option in the Expression Maps window for a negative track delay per articulation: an On/Off toggle, and a negative value for when On is selected. That’s where it should be - top right section.
That's a good idea. The whole Expression Map set-up REALLY needs an overhaul; it has needed one for years.
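A sketch of what such a per-articulation negative delay would do at playback: each note written on the grid is simply triggered earlier by its articulation's offset. The articulation names and millisecond values below are made up for illustration; this is not how Cubase currently behaves:

```python
# Sketch of a per-articulation negative delay, as the requested Expression
# Maps option might behave: each note-on is triggered earlier by the
# offset assigned to its articulation. All values are hypothetical.

neg_delay_ms = {"legato": 180, "staccato": 40}  # how early to trigger

def trigger_time_ms(grid_time_ms: float, articulation: str,
                    enabled: bool = True) -> float:
    """Actual trigger time for a note written on the grid.

    `enabled` mirrors the proposed per-articulation On/Off switch.
    Articulations with no entry are played exactly where written.
    """
    if not enabled:
        return grid_time_ms
    return grid_time_ms - neg_delay_ms.get(articulation, 0)

print(trigger_time_ms(1000, "legato"))     # 820
print(trigger_time_ms(1000, "staccato"))   # 960
```

The point of putting this in the expression map rather than on the track is visible in the function: the offset depends on the articulation of each note, not on a single per-track value.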
 
I just rebuilt my template and I used Anne's as a bit of a guide, along with what I saw in Lorne Balfe's Cubase projects he made available for download. I'm in the single articulation per track camp now based on what Anne mentions in her video.

When it comes to a template, a legato patch, a stacc, and a marcato/stacc combo work 95% of the time. How often do you really use portato, swells, doubles, etc.? Those can always be loaded when needed.

My first template had tons of tracks, instruments, and articulations, because I thought I needed to be prepared for every eventuality. The result was a huge file size and slow saving on every project file. Since paring the track count down, save times are down to 2 seconds, and that's without using VEPro.
 
Yes, I also looked at A.K. Dern. Funnily enough, I didn't listen to anything from her DAW, only the demos of the libraries, so it's a little uncertain how it actually sounds, right?
Where did you find information about Lorne Balfe?
Do you have any links?
 
I thought a lot of composers used one track per articulation but realised JunkieXL uses Cubase expression maps.

I am playing around with both methods at the moment. I don't like the idea of really large articulation maps, as they will be loading lots of articulations you don't need into memory. On the flip side, on a more melodic line, flipping between tracks for a few notes doesn't sound like much fun.
 
The new goal for everyone should be building the most efficient template for your project's needs, and sharing your findings. How many of you really need 2000 tracks in a project?

The most efficient would be a lean basic template, and then maybe one that has all strings, all brass, etc., so you can import tracks as needed?
 
The one track per articulation approach is not that smooth if you compose in Sibelius or Dorico.
 
However, this approach is actually really smooth if you are working in Studio One.

You only have to use Studio One's instrument preset organisation, creating a preset for each instrument set up with each single articulation.

I use an "empty" template (just bus groups and folders), and the real "template" is the folder structure of my presets. So when I need anything, I just navigate through the presets, and drag and drop the presets for the articulations I need from an instrument into the corresponding folder track, and Studio One automatically loads them creating a track for each articulation, while assigning the output to the bus of the folder.

So there is no need to have hundreds or thousands of tracks pre-loaded; just load each track when needed. Everything is always there on the side, organised as instrument presets.

I found this way of setting up a template somewhere here on VI-Control, and if I remember correctly, Alex ( @Waywyn ) also showed this way of working in one of his (excellent) videos.

After the introduction of the keyswitch lane functionality in Studio One 5, I only briefly tried working with it, but found it a bit cumbersome compared to the one track per articulation method.
 
Not to mention that different artics can call for different processing, which is another reason it doesn't make sense to me to have them all on one track using keyswitches/expression maps. Maybe the shorts need less reverb? Maybe one articulation has some mids you want to remove with EQ? Do you sacrifice all of the other articulations just to fix that one issue?
I'm staying out of this debate, but I've seen statements like this in several posts and I am curious, do people really do this? It seems awfully extreme. I understand different processing for different libraries, Cinematic Studio Strings requires different processing than CineStrings, for example.

Even within a library I can understand processing violins and violas differently.

But articulations? To what end?

My background is recording live players, although I've been enjoying the whole computer based studio thing since probably the early 90s. I have never seen anyone process individual articulations in a live recording.

I have enabled or disabled sends for a section of a composition which may, or may not correspond with different articulations, but that's the limit.

Can someone explain the reason for this level of tweaking? It can't be realism, can it?
 
The one track per articulation approach is not that smooth if you compose in Sibelius or Dorico.
It can be. The workaround for this in Dorico is to condense the separate articulation tracks into a single instrument. Most often you will have no overlaps, or it will show "a2" wherever you layered articulations. It takes a bit to set up, but if you have a template and a saved flow with all the condensing and formatting options configured, it works like a charm.

EDIT: I am referring to engraving, going DAW -> Dorico. Re-reading I think you might be talking about the inverse. Which yeah, it takes a bit to cut and paste every part on the correct track. But it's easier than playing everything in.
 
Maybe if I was using everything from one library, or at least all in the same hall - for example BBCSO or SSO - then I wouldn't have to tweak as much.

But my string legato patch consists of CSS + CSSS; my string longs, like sustains, are CSS, CS2, and Albion One; my spiccatos are a blend of CSS, CS2, Fluid Shorts, and SCS; my pizzicatos are SCS and Albion One. I'm never really too worried about how many libraries/halls I'm blending together, just as long as the end result sounds good. So as you can see, different articulations involve lots of different combinations of libraries for me, which really does call for different processing. CSS + CSSS legatos, for example, need a high-frequency EQ boost for more "air", which I might not use for libraries like SCS and Albion One strings that have lots of "air" already. I might use different reverb amounts because my short articulations sound more ambient due to the blend of libraries I use.

So there is a lot to it, I think - not always about realism, but about what sounds good, which is always just a subjective preference really.
 
I'm not sure I can see the application either in the context of "normal" orchestral work, but such comments may refer to using libraries for productions like trailer music etc or when you're layering up multiple libraries.

In those cases you might want to ultra-treat your shorts or layer longs for a "larger than life" sound. In which case, separate tracks would be more efficient.

It could also be that VIC members can't stop tweaking. 😉
 