
How to deal with orchestral sample timing (slower attack)?

dr.soundsmith

New Member
I recently bought Palette Symphonic Sketchpad and Palette Melodics from Red Room Audio as my entry point into orchestral virtual instruments. I'm currently working on a metal/rock song that has orchestral sections, and I'm having a hard time getting things to feel right with the tempo. The samples seem to take a moment to reach the attack of the note, so if everything is perfectly quantized, it sounds out of time. Palette lets you adjust the attack of individual articulations, but that doesn't really solve the problem, and it seems to be the case for basically every articulation.

If I drag the notes to the left a bit, I can get it to sound a little better, but something still seems off. I've found that physically playing the part in on the MIDI keyboard gives the best feel (probably because my brain can compensate a bit for the lag). The problem is, I'm not that great of a piano player, and it still isn't perfect. Then I have a hard time editing it into shape afterwards too. With VIs, I've usually liked to record with the keyboard and then use a partial quantize (iterative quantize in Cubase) to get things closer to perfect while still keeping a human feel. That doesn't work very well in this case.

I’m assuming this is something common to many libraries, so I’m looking for advice on how to handle this. I wrote a brief orchestral piece a month ago and it wasn’t so much of an issue because everything fit together pretty well, but for rock music in which locking to the grid is important, it’s difficult to work with. How do you deal with timing issues of samples? Do you have any tricks for getting things to be in time? Is there a way to quantize to an offbeat position for example? Or do you have a good way of figuring out how far the notes need to be dragged over?

I’m using Cubase. If anyone uses Palette and/or Cubase and has any insights, that would be great, but other input is most welcome.

Thanks for the help!
 
Sometimes they mention the true start times in the manual.
They will also suggest the track delay needed for each articulation.
It's usually around -60 ms for spiccatos and -120 ms for staccato and legato.
(-30 ms for the Scarbee Rickenbacker bass.)

As you said, it's not always consistent with different round robins.
So the best solution is to get the timing as close as possible with track delay, then print to audio and nudge the samples that are still a bit off.

It's a pain to do... but then you don't have to worry about it.
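To make the arithmetic concrete, here's a minimal sketch of what track delay (or manually dragging notes) is doing: shift each MIDI note earlier by the articulation's pre-attack time so the audible attack lands on the grid. The `PRE_DELAY_MS` values are the illustrative figures from this post, not measured values for any particular library:

```python
# Hypothetical pre-attack times (ms), per articulation; measure your own library.
PRE_DELAY_MS = {
    "spiccato": 60,
    "staccato": 120,
    "legato": 120,
}

def compensated_start(grid_time_ms: float, articulation: str) -> float:
    """Return the MIDI start time that makes the *audible* attack land on the grid."""
    return grid_time_ms - PRE_DELAY_MS[articulation]

# A staccato note meant to sound on beat 2 at 120 BPM (one beat = 500 ms)
print(compensated_start(500.0, "staccato"))  # → 380.0
```

Track delay does the same subtraction for you across the whole track, which is why one value per articulation works well until round robins vary.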
 
I don't see anything in Palette's manual about timing, but I didn't really know track delay was a thing, so I'll look into that. Thanks.
Also printing to audio to deal with round robins is a good idea.
 
Yes, that's what you do. When you have time to work on your template, just go through all your sample libraries and set this up. You'll only have to do it once, and it will save you a lot of time later.

best

ed
 
https://www.pendlebury.biz/using-negative-track-delay-to-pull-your-track-into-time/
I found this link which talks about this very issue (now that I know about track delay). I think it will be way easier to test it out when I can just drag a slider and listen for when it sounds right.
I guess the issue here will be to see if keyswitching will be a problem. If so, I guess I'll have to do different tracks instead.
Hi @dr.soundsmith . Looks like you've already discovered how to deal with the issue. I will say that this is a common topic with orchestral libraries. Most orchestral instruments perform notes (especially sustains) with a slight curve/crescendo on the attack which can give the illusion of sounding "late." Simply moving your MIDI data back a little (or using track delay) should help tighten the timing to your liking. Hope this helps and that you're enjoying Palette!
 
So why are there multiple libraries with keyswitching? What's the standard for dealing with different timings while also using keyswitching?
 
I don't know Palette or how important Legato is for you personally, but since everyone's talking about track delay, I'd like to make this addition.

When using track delay on true legato instruments, keep in mind that the first note in a passage (without legato from a previous note) often adheres to different delay times than legato notes (or has no delay whatsoever). I found it's often easier to set a track delay consistent with the legato delay, then push only the first note in a passage forward in time (as in after the beat). In my experience, moving a few notes around is easier than moving all legato notes around - but that's personal preference.

With true legato, track delay often doesn't solve all of your issues. Minor adjustments will always be necessary. It's a pain in the behind, but that's the state of things as of now.

Big thanks to the few developers who take this into account and add "air" to non-legato notes to be consistent with legato notes.
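A sketch of the adjustment described above, assuming a hypothetical 100 ms legato pre-delay and a first-note attack with no pre-roll; the track delay handles the legato notes, so only the first note of each phrase needs to be pushed later:

```python
# Hypothetical values: the whole track is pre-delayed for legato transitions,
# but the first note of a phrase has no transition sample and arrives early.
LEGATO_DELAY_MS = 100    # track delay applied to the whole track
FIRST_NOTE_DELAY_MS = 0  # the initial attack often has little or no pre-roll

def adjust_phrase(note_times_ms):
    """Push only the first note of a phrase later; legato notes stay put."""
    correction = LEGATO_DELAY_MS - FIRST_NOTE_DELAY_MS
    return [note_times_ms[0] + correction, *note_times_ms[1:]]

print(adjust_phrase([0, 500, 1000]))  # → [100, 500, 1000]
```

Moving one note per phrase this way is the "few notes" the post above prefers over dragging every legato note individually.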
 
So why are there multiple libraries with keyswitching? What's the standard for dealing with different timings while also using keyswitching?

In Logic I use a script which recognizes the articulation type and sets a corresponding positive delay.

Example:

Pizzicatos need to start 50 ms early (-50 ms)
Legatos need to start 100 ms early (-100 ms)
Staccatos need to start 60 ms early (-60 ms)

The largest offset is -100 ms, for legatos, so the track delay is set to -100 ms.

Then the script recognizes:

Legatos - add 0ms delay (so it goes through at -100)
Staccatos - add +40ms delay (so it goes through at net -60)
Pizzicatos - add +50ms delay (so it goes through at net -50)

But this is just because I like to work on a single track. IMO written articulation keyswitches are the way of the past in any case. Expression maps or ArticulationID are the best method.
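The arithmetic of this scheme can be sanity-checked in a few lines. The delays are the example figures from this post; the names are just for illustration (this is not the actual Logic script):

```python
# Track is pre-delayed by the largest per-articulation offset...
TRACK_DELAY_MS = -100  # legato, the earliest articulation

# ...and the script adds a positive delay per articulation so each one
# nets out to its own required value.
SCRIPT_OFFSET_MS = {
    "legato": 0,
    "staccato": 40,
    "pizzicato": 50,
}

def net_delay(articulation: str) -> int:
    """Effective delay heard for a note of this articulation."""
    return TRACK_DELAY_MS + SCRIPT_OFFSET_MS[articulation]

for art in SCRIPT_OFFSET_MS:
    print(art, net_delay(art))  # legato -100, staccato -60, pizzicato -50
```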
 
So why are there multiple libraries with keyswitching? What's the standard for dealing with different timings while also using keyswitching?

Well, it's just not a perfect world, and you've hit on the reason why so many people I know DON'T use keyswitching or program changes and instead use one MIDI track per articulation... it's just what you're used to, basically.

best

e
 
In Logic I use a script which recognizes the articulation type and sets a corresponding positive delay.
I did a quick search of ArticulationID and saw that in Logic it lets you set delay per articulation. I don't think Cubase has the equivalent in expression maps, but I'll ask around. Thanks.
 
Very nice summary of the problem. May I also add that even the same articulation, say legato, may require different delays depending on the transition speed. The legato transition speed and delay should be consistent in any case. I don't have Palette, but I computed the delay times of various legato transitions for HS from the audio print. The tiresome but ideal approach is to handle the delay note by note, not track-wise.
Best.
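A rough sketch of the note-by-note idea, with made-up per-transition delays standing in for values you would measure from an audio print of your own library:

```python
# Hypothetical delays (ms) per legato transition speed; in practice these
# come from measuring the library's audio output, as described above.
TRANSITION_DELAY_MS = {
    "slow": 150,
    "medium": 100,
    "fast": 60,
}

def shift_notes(notes):
    """notes: list of (start_ms, transition_speed) pairs.
    Shift each note earlier by its own delay instead of one track-wide value."""
    return [(start - TRANSITION_DELAY_MS[speed], speed) for start, speed in notes]

print(shift_notes([(500, "slow"), (1000, "fast")]))  # → [(350, 'slow'), (940, 'fast')]
```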


 
In Logic I use a script which recognizes the articulation type and sets a corresponding positive delay.



I started building a new template with one track per section, articulation switching via MIDI channel, one Kontakt instance holding all the single-articulation patches, a track delay of -200 ms, and individual MIDI delays per channel to make the different articulations consistent. What I already like about it is how easy it becomes to move MIDI data around between different sections.
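A sketch of the channel-to-offset bookkeeping this template implies; the channel assignments and offsets below are hypothetical, but the structure is the same as the single-track scheme earlier in the thread, just keyed by MIDI channel:

```python
# Whole track pre-delayed by the worst case in the template (-200 ms here),
# with a positive per-channel MIDI delay restoring each articulation's net timing.
TRACK_DELAY_MS = -200

CHANNEL_OFFSET_MS = {
    1: 100,  # legato patch    → net -100 ms
    2: 140,  # staccato patch  → net -60 ms
    3: 150,  # pizzicato patch → net -50 ms
}

def shift_events(events):
    """events: list of (channel, time_ms). Apply track delay + channel offset."""
    return [(ch, t + TRACK_DELAY_MS + CHANNEL_OFFSET_MS[ch]) for ch, t in events]

print(shift_events([(1, 500), (2, 500)]))  # → [(1, 400), (2, 440)]
```

Because the routing is by channel rather than keyswitch notes, moving a phrase to another section is just a channel change, which is the workflow win mentioned above.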
 
Some years ago, it took me a while to reach that "aha" moment of realizing my ear was still the most reliable tool in the midst of digitized workflows. You seem to have already zeroed in on that essential part of it. But with so much note entry to juggle, you're pointing to a common workflow challenge, though the individual approaches we all have will vary.

Trust your ear first for timing, but also see if Cubase is repositioning your note entry to a grid. In Logic, I use a smart function with predetermined values, not strict snapped placement.

Also, with wind and brass attacks for my more classical style, anticipation and/or lag is critical. Hence, I routinely play in and then nudge notes either manually, or using a nudge tool mapped to a key command, assigned to a physical button (Stream Deck) for moving attacks off the grid, so to speak. I’ll often hit those left and right nudge buttons without even thinking, repositioning sloppy notes, or those which are too tightly quantized, until they are sonically right.
 
Also, why did the discussion shift to legato timing (above) in the context of the genre of music the OP described? It seems to me that he/she is referring to the timing of attacks so that things sound appropriately natural, and not overly quantized and/or robotic.

Similarly, I’m not following how or why articulation keyswitching is relevant to timing of note entry. It seems some are solving problems not mentioned in the initial post. Or am I missing something here?
 
Simply moving your MIDI data back a little (or using track delay) should help tighten the timing to your liking. Hope this helps and that you're enjoying Palette!
You don't happen to know if there's a recommended delay in milliseconds for Palette (for each articulation), do you?
 
Also, why did the discussion shift to legato timing (above) in the context of the genre of music the OP described? It seems to me that he/she is referring to the timing of attacks so that things sound appropriately natural, and not overly quantized and/or robotic.

Similarly, I’m not following how or why articulation keyswitching is relevant to timing of note entry. It seems some are solving problems not mentioned in the initial post. Or am I missing something here?
I think the issue is just that different articulations are going to need to be offset different amounts in order for the attack to sound in time. So legato (with note transition) adds to that, and using keyswitching presents a certain problem because you couldn't use track delay to compensate for everything at once. Obviously how it sounds is most important, but so is workflow in editing, so being able to use the grid is very useful.
Thanks for your insights.
 