# Thoughts on Negative Delay Compensation?



## marclawsonmusic

Hi all,

I saw this video from John Powell and was honestly shocked at how super-neat and quantized his MIDI is:



In the comments, he mentions using 'Negative Delay Compensation on all tracks' to achieve this. Makes sense, but doesn't that mean the audio will play right on the beat? In his video, there is clearly an audio delay as the playhead moves through the notes...

Beyond that, any thoughts on using Negative Delay Compensation? It would be helpful to be able to quantize MIDI if it still sounded good... I also think this would make it much easier to go from DAW-to-score... and also easier to orchestrate in the DAW - copying and pasting parts becomes less of a problem with this.

Thanks in advance,
Marc


----------



## Alex Fraser

For some reason I can't play the video..
Like you say, it's probably a great and quick way to work if the final result is to be played by an orchestra. If you're writing a track with the final product being VI output, I'd bet that an "off grid" workflow would ultimately result in more realism. But I could be proven wrong - it's not for me to question JP!


----------



## marclawsonmusic

Alex Fraser said:


> For some reason I can't play the video..
> Like you say, it's probably a great and quick way to work if the final result is to be played by an orchestra. If you're writing a track with the final product being VI output, I'd bet that an "off grid" workflow would ultimately result in more realism. But I could be proven wrong - like I said, I can't play the video.
> 
> Interesting though!



John Powell's MIDI mockups sound better than any I have ever heard, and he is using most of the same tools that everyone on here is using (he publishes his template annually on social media and posts these MIDI mockups all the time).

So, it's not just about going DAW-to-score - it's about getting a great result inside the DAW too.

Here are some URLs in case they work better... EDIT: Looks like the forum converts these to media links too... oh well.


----------



## brek

One other great reason to quantize - conforming a large cue to a new picture edit when every note starts slightly ahead of the measure. 

This is one of those old "rules" that has some truth to it, but is a bit overplayed - particularly with modern VIs. Personally, I think note dynamics and durations/transitions are often more likely to give away the realism. 

@NoamL has some good demonstrations of realistic quantization using micro tempo adjustments that are very effective.


----------



## David Chappell

I always write entirely on grid using negative delay, 100% quantized, solo instruments being the only exception.

At first glance it seems like this is a recipe for robotic, lifeless playback, but you have to remember that midi notes are really just playing back audio files, so what the midi notes look like is fairly irrelevant. The key here is that humanisation already exists in the samples. An ensemble of players playing a short articulation, for example, won't be playing exactly on the beat. Some will be slightly before, some slightly after, by only a few milliseconds, and this will vary for each note, round robin, and velocity recorded. So when you press a MIDI note, what you're really playing back is an audio file where all of the players are slightly off beat each time. This is the humanisation. Having each midi note be slightly off beat means that you're taking a sample that already has variations in timing baked in, and giving it further variations in timing.

Since some players will play slightly before the beat, developers need to cut the samples a few milliseconds before the beat as well. CSS for example cuts the shorts 60ms before the beat. Then it's just a case of putting the note exactly on grid, and telling the DAW to actually play that note 60ms earlier, so that the midi note plays back the sample file with the "peak" landing right on the beat.
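To make the mechanics concrete, here's a minimal sketch of what a DAW's negative track delay does under the hood. This is an illustration, not anyone's actual implementation; the ~60 ms figure for CSS shorts comes from the post above, and the note-list format is made up for the example.

```python
# Sketch: negative track delay shifts every MIDI event earlier by a
# fixed amount, so the sample's perceived attack lands on the grid.
CSS_SHORTS_OFFSET_MS = 60.0  # per the post above; illustrative value

def apply_track_delay(notes, delay_ms):
    """Shift every note's start time by delay_ms (negative = earlier)."""
    return [{**n, "start_ms": n["start_ms"] + delay_ms} for n in notes]

# Notes quantized exactly on a 500 ms grid (quarter notes at 120 BPM):
quantized = [{"pitch": 60, "start_ms": 0.0},
             {"pitch": 62, "start_ms": 500.0},
             {"pitch": 64, "start_ms": 1000.0}]

# Fire each note 60 ms early so the baked-in pre-beat portion of the
# sample plays before the beat and the transient peaks on it:
played = apply_track_delay(quantized, -CSS_SHORTS_OFFSET_MS)
print([n["start_ms"] for n in played])  # → [-60.0, 440.0, 940.0]
```

The point is that the MIDI stays 100% on grid; only the playback engine fires early.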

Much more important for realism is good velocity/ CC management, and automation of the tempo track (still keeping everything 100% quantized).

At least, that's the reasoning I worked out for doing it this way.


----------



## marclawsonmusic

brek said:


> One other great reason to quantize - conforming a large cue to a new picture edit when every note starts slightly ahead of the measure.
> 
> This is one of those old "rules" that has some truth to it, but is a bit overplayed - particularly with modern VIs. Personally, I think note dynamics and durations/transitions are often more likely to give away the realism.
> 
> @NoamL has some good demonstrations of realistic quantization using micro tempo adjustments that are very effective.



FWIW, NoamL's Thanos script is what got me thinking about this in the first place!

And, I typically use a played-in piano guide to create a tempo track, so that helps with the music 'breathing'. Hmmm... good food for thought here.


----------



## goalie composer

brek said:


> One other great reason to quantize - conforming a large cue to a new picture edit when every note starts slightly ahead of the measure.
> 
> This is one of those old "rules" that has some truth to it, but is a bit overplayed - particularly with modern VIs. Personally, I think note dynamics and durations/transitions are often more likely to give away the realism.
> 
> @NoamL has some good demonstrations of realistic quantization using micro tempo adjustments that are very effective.


Would love to check these out. Do you have a link to a video of this handy?


----------



## marclawsonmusic

David Chappell said:


> I always write entirely on grid using negative delay, 100% quantized, solo instruments being the only exception.



I quit using quantisation back in the earlier days of EWQL. Maybe samples have changed enough to where this isn't so much of a problem any more. Certainly makes it easier to edit things.


----------



## brek

goalie composer said:


> Would love to check these out. Do you have a link to a video of this handy?





From this thread:
https://vi-control.net/community/threads/css-williams-magic.64674/


----------



## babylonwaves

David Chappell said:


> I always write entirely on grid using negative delay, 100% quantized, solo instruments being the only exception.


me too. can't think of doing it differently. i might loosen up the quantisation in certain cases but i always adjust every single instrument in my template with negative delay to make sure i can move around regions without having to adjust the timing.


----------



## Alex Fraser

David Chappell said:


> At first glance it seems like this is a recipe for robotic, lifeless playback, but you have to remember that midi notes are really just playing back audio files, so what the midi notes look like is fairly irrelevant. The key here is that humanisation already exists in the samples. An ensemble of players playing a short articulation, for example, won't be playing exactly on the beat. Some will be slightly before, some slightly after, by only a few milliseconds, and this will vary for each note, round robin, and velocity recorded. So when you press a MIDI note, what you're really playing back is an audio file where all of the players are slightly off beat each time. This is the humanisation. Having each midi note be slightly off beat means that you're taking a sample that already has variations in timing baked in, and giving it further variations in timing.


That's really interesting and I can't quite believe after so many years I hadn't really considered it that way. Old habits die hard I guess. I feel some experimentation coming. I guess it also depends on how consistent the library is with regard to sample editing. Maybe that's why everyone loves CSS..


----------



## brenneisen

marclawsonmusic said:


> John Powell's MIDI mockups sound better than any I have ever heard



those are not mockups - they're live performances.


----------



## shomynik

brenneisen said:


> those are not mockups - they're live performances.


Was thinking the same. Don't know as a fact, but strongly believe those aren't samples.


----------



## Vonk

babylonwaves said:


> me too. can't think of doing it differently. i might loosen up the quantisation in certain cases but i always adjust every single instrument in my template with negative delay to make sure i can move around regions without having to adjust the timing.


How are you calculating the negative delay? Does it vary between libraries? Between articulations? I find I'm having to use them, but am unsure of the best approach.


----------



## marclawsonmusic

brenneisen said:


> those are not mockups - they're live performances.



Are you sure about that?


----------



## brenneisen

shomynik said:


> Don't know as a fact



it is.


----------



## marclawsonmusic

brenneisen said:


> it is.



Well damn! No wonder it sounded so good. Can I delete this thread now? Hahah

PS - Thanks for clearing this up.


----------



## babylonwaves

Vonk said:


> How are you calculating the negative delay? Does it vary between libraries? Between articulations? I find I'm having to use them, but am unsure of the best approach.


i play a short sample, bounce it, and measure the delay in the sample editor. or, if i'm lazy, i just push the delay until it suits my needs. of course this all assumes the articulations within the instrument have the same delay, but it works pretty well in most cases.
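The "bounce and measure" step above can be automated. Here's a rough sketch, assuming you've exported the bounce and loaded its samples into a plain list: find where the audio first crosses an amplitude threshold and read that position off as the delay to dial in as negative track delay. The threshold value and buffer are illustrative, not from any real library.

```python
# Sketch: measure the gap between buffer start and the first audible
# transient, i.e. what you'd eyeball in a sample editor.
def onset_delay_ms(samples, sample_rate, threshold=0.1):
    """Return ms from buffer start to the first sample above threshold."""
    for i, s in enumerate(samples):
        if abs(s) >= threshold:
            return 1000.0 * i / sample_rate
    return None  # nothing above threshold found

# Fake bounce: 60 ms of near-silence, then the transient.
sr = 48000
buf = [0.0] * int(0.060 * sr) + [0.9, 0.8, 0.5]
print(onset_delay_ms(buf, sr))  # → 60.0
```

A simple threshold detector like this works well for shorts; slow legato attacks are fuzzier, since the "perceived" attack may be well after the first audible sample.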


----------



## NoamL

where is this comment about using negative delay? I quit facebook a while ago.

v busy day at work but I have some thoughts on this whole topic


----------



## marclawsonmusic

NoamL said:


> where is this comment about using negative delay? I quit facebook a while ago.
> 
> v busy day at work but I have some thoughts on this whole topic


----------



## chillbot

Sik brag! "Valued Commenter". So jealous.


----------



## marclawsonmusic

chillbot said:


> Sik brag! "Valued Commenter". So jealous.



AKA “fanboy”


----------



## dgburns

Delay settings in the inspector? That what we talking bout?

Yeah sure, I use those all the time, and I usually flatten the midi (even when I should hold off) because it's nice to be able to work to the grid but adjust for the delay (in either direction, +/-).


----------



## Rob Elliott

Great thread. I still usually play the parts in, but use iterative quantizing - and have noticed it works better 'much closer' to grid than in years past. Could I quantize exactly? Probably. Old habits...


----------



## marclawsonmusic

dgburns said:


> Delay settings in the inspector? That what we talking bout?
> 
> Yeah sure, I use those all the time, and I usually flatten the midi (even when I should hold off) because it's nice to be able to work to the grid but adjust for the delay (in either direction, +/-).



Yes, track delay which can be found in Inspector in Logic. I have not been using this but am reconsidering after this thread.

@Rob Elliott, I abandoned quantizing years ago, but damn my MIDI looks abysmal sometimes!

If there was some way to quantize / pseudo-quantize and still get a good (MIDI recording) result, it would save a lot of time and also look a lot nicer. John Powell inspires again.


----------



## JohnG

Vonk said:


> How are you calculating the negative delay? Does it vary between libraries? Between articulations? I find I'm having to use them, but am unsure of the best approach.



It not only varies between libraries, but _within_ the same library. So sometimes the amount you use for a short articulation with violas is different from the amount necessary for basses or violins or oboe.

And short articulations often require a different offset than sustained ones. It takes forever to get it right.


----------



## brenneisen

@marclawsonmusic

you might want to check this


----------



## marclawsonmusic

Hahaha, that was fantastic! His mockup was amazing but I heard the weaknesses of samples in a couple spots (mainly the WW runs). 

Just goes to show that it’s possible to make an outstanding mockup - even on the grid. What a gentleman and inspiration John Powell is.

Thanks for sharing


----------



## danbo

Trying to understand this. Check me on this, the pre-delay referenced is using these two settings in Logic (attached). According to the documentation they set a fixed positive or negative delay, just offsetting the track from the MIDI event tick. How does this humanize anything? All it does is turn an American orchestra into a German one (the Berlin Philharmonic famously plays a delayed beat to the conductor).

Playing off a comment above, samples do appear, to my ears, to have 'humanizing' built in because they are acoustic instruments played by people. When I play a note on the clarinet there is a <something> millisecond delay until the note comes out the bell, and another in the propagation of the sound waves.

I think robotic rhythms are more of a problem for electronic instruments, not acoustic ones. I compose by entering notes in the score editor, which inserts everything on the grid (I just write orchestral and ensemble music) - and it sounds human to me. Further, being an orchestral musician, I'll tell you that in professional orchestras there's an obsession with rhythmic perfection. There has to be with 100 musicians! The conductor will roast you for pushing or pulling the tempo while everybody else is a metronome. The music comes from how you play the line, not how you vary the tempo - that's the conductor's job.

I'm transitioning from being a paper and musicians composer to a VI composer so perhaps am missing something, but so far in my work keeping a strict beat sounds like a normal orchestra to me. What I do is obsess over the shape of the lines (velocity and expression), overall volume (both from individual dynamics and density) and I also rubato the overall tempo.

TL/DR
Thinking about all the orchestral literature I've played in concert, I think the only time a soloist (usually a woodwind or first-chair string player) gets to rubato the tempo is when the rest of the orchestra is subdued and they're the focus. Think of the concertmaster's violin in Scheherazade: the strings are subdued while the soloist gets to shape the line as they please. Note that it's basically written as a cadenza, so they get license to be free.

In media music all I hear people write is 'epic' (what's so great about epic by the way?) big ensemble stuff, which would be played on grid by any professional orchestra. I also hear them play at effectively one dynamic which I think is a fault, but that's just me.


----------



## Dewdman42

In my opinion the main time to use negative track delay is when a particular track is playing a particular articulation that has a slow attack. When you have legatos and other kinds of slow attacks, the sound in the sample starts out very early and takes a few milliseconds, or sometimes a lot of milliseconds, to actually reach its peak. This gives the perception in our ears that the moment of attack is well after the first sound in the sample starts playing. When we send midi to a sample player and tell it to start playing a note from a sample with a slow attack, it will sound late to us, because the real transient that we perceive as the attack moment will be some milliseconds later than the start of the sample.

Real players start their slow attacks early so that this perceived moment of attack lands on the beat, notwithstanding some human slop factor.

But in midi, if the midi notes are on the beat then the sample doesn't start playing until the beat, and the perceived moment of attack will sound late. It's like a plugin with latency, but the amount of latency depends on the actual sample; there is no consistency. One articulation could have a lot and another could have a little.

You can compensate for this by using negative track delay, but that really only works if you have one articulation per track. You can also use plugin delay compensation if you are in a DAW that doesn't have negative track delay. But if you mix more than one articulation sample on any given track, then the amount of negative track delay you set will only be right for one of them.

You can use scripting such as Thanos to create a larger amount of overall latency and make sure all articulations play exactly the right amount early to all sound on the beat, and then you can set one big negative track delay for that track. But Thanos is not really general purpose yet; as I understand it, it's hard coded for some particular popular libraries.
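The "one big negative delay" idea described above can be sketched like this. This is only the general technique, not Thanos's actual code, and the attack-latency numbers are made-up illustration values, not real library data: delay each note by the difference between the track's worst-case latency and its own articulation's attack, then set a single negative track delay equal to that worst case.

```python
# Sketch: normalize per-articulation latencies so one track-wide
# negative delay compensates them all. Offsets are illustrative only.
ATTACK_MS = {"spiccato": 60.0, "legato": 250.0, "sustain": 120.0}
MAX_LATENCY = max(ATTACK_MS.values())  # 250 ms → set track delay to -250 ms

def delayed_start(start_ms, articulation):
    """Delay each note so that, after the single -250 ms track delay,
    its perceived attack lands exactly on start_ms."""
    return start_ms + (MAX_LATENCY - ATTACK_MS[articulation])

# A legato note on beat 1 fires with no added delay (it needs the full
# 250 ms head start); a spiccato note is held back 190 ms so it
# doesn't speak early.
print(delayed_start(0.0, "legato"))    # → 0.0
print(delayed_start(0.0, "spiccato"))  # → 190.0
```

The trade-off is that the whole track now has a fixed 250 ms of input latency, which is why this suits playback-time scripting rather than live playing.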

Another trick is to use a plugin sub-host such as Plogue Bidule or PatchWork to host a set of different articulations on different channels. Inside there you could use some free latency-manipulation plugins to delay and report latency, feed it one MIDI track, and let PDC do the rest. Complicated to set up, though.

In general that’s what we are talking about with negative track delay.

Now if you use a "bit" of track delay on a single track that has different articulations in the instrument, then it could be somewhat humanizing, in that short-attack notes would sound a little early and slow-attack notes a little late, but not as late as they would have been otherwise. But that doesn't always make musical sense, nor mimic what real players are doing, so I do not think negative track delay should be thought of as easy humanizing. It's a latency adjuster for when latency in the sample is not reported by the plugin.


----------



## marclawsonmusic

danbo said:


> Trying to understand this. Check me on this, the pre-delay referenced is using these two settings in Logic (attached). According to the documentation they set a fixed positive or negative delay, just offsetting the track from the MIDI event tick. How does this humanize anything? All it does is turn an American orchestra into a German one (the Berlin Philharmonic famously plays a delayed beat to the conductor).



I think what everyone is trying to avoid is the 'Finale (or Sibelius) playback' scenario... where the music plays back with inferior-sounding samples exactly on the beat like a robot. As a pen-and-paper composer, you have probably used Finale or Sibelius and understand what that sounds like.

In the earlier days of samples, this is what MIDI playback sounded like in your DAW too... at least when everything was lined up exactly on the beat. But technology has changed and maybe putting everything 'on the grid' isn't as bad as it once was? At least, that's my takeaway after listening to John Powell's mockups.

One key you mentioned is using a rubato tempo... or at least a tempo that is humanized. I have found that helps when trying to make an orchestral track sound more authentic.

PS - The delay setting we are referring to is the one in your first example - the 'Delay' setting on the track Inspector in Logic. If you set this for every library / articulation, you could conceivably put all notes on the beat and your music would play back in time.


----------

