Logic Scripter—multiport track delay compensation?

I have had my doubts about sample accuracy when using environment instruments. I haven’t tested it though.

If the latency is consistent, then you can use Latency Fixer. It basically just sits in a mixer strip and reports latency without adding any. So if the environment is adding latency, you just report that amount with Latency Fixer and the PDC engine will do the rest.

I doubt, though, that the environment is adding significant latency; it's the instrument you are using that is doing it. It's possible that using the environment is blocking PDC from doing what it needs to do, in which case Latency Fixer would be blocked also.

Why do you feel you need to use the environment approach?
I seem to get the same issues when it's a new blank project with, for example, a Sampler piano—I'm consistently and evenly ahead of the beat unless I consciously react late. So I am not so sure it is a particular instrument that causes it.

The 'Environment approach' allows me to:
1. add MIDI transformers between the track and the instance so that I can control each instrument the exact same way on the front end
2. have a clear separation between MIDI out and audio in, which makes compartmentalizing creative tasks easier
3. take advantage of the default MIDI Volume option on the MIDI track inspector (don't need this for instruments I have articulation sets for, but it's useful for "simple" instruments like drum patches)
4. align in visual/functional style with the way I access my external synths
5. send more MIDI channels to the same VEP instance (if there is a way to do this in the "LPX Approach," I don't think I know about it)
6. avoid the kludgy feel of the "LPX Approach" (i.e., either add stuff through the New Track Wizard, which makes weird and confusing things happen in the Auxes, or make Auxes into track 'instruments,' which AFAIK don't operate too independently from each other). I'm honestly not sure whether those methods are full of bugs or just odd design choices, but I try to stay away from them if I can...
 
In the meantime....

1. add MIDI transformers between the track and the instance so that I can control each instrument the exact same way on the front end

I hear you. The alternative is to use Scripter without the environment.

5. send more MIDI channels to the same VEP instance (if there is a way to do this in the "LPX Approach," I don't think I know about it)

You definitely don't need the environment for this. With VePro.AU2 you can send up to 16 instrument tracks to one VEP instance. With VePro.AU3 you can send up to 127 source tracks to one VEP instance... Here are some templates I made for AU3 that you can try out:

https://gitlab.com/dewdman42/Logic-VEP-MultiPort-templates
Manual instructions for how to create a template like that are here:

https://www.logicprohelp.com/forum/viewtopic.php?f=9&t=143416


6. avoid the kludgy feel of the "LPX Approach" (i.e., either add stuff through the New Track Wizard, which makes weird and confusing things happen in the Auxes, or make Auxes into track 'instruments,' which AFAIK don't operate too independently from each other). I'm honestly not sure whether those methods are full of bugs or just odd design choices, but I try to stay away from them if I can...

LPX has 3 fundamental ways to handle multi-timbral instruments such as the VePro plugin, each with pros and cons. Here is a post I made on this subject a few years ago:

https://www.logicprohelp.com/forum/viewtopic.php?f=1&t=132635&p=697195&hilit=multi+timbral#p697195


I tend to find myself using the current Apple-endorsed method, which is basically the approach used by the New Tracks Wizard, though you don't need to use the Wizard to set it up. You do have to understand how it works, and once you do it won't seem that kludgey to you. One of the downsides is the linked volume slider, which is annoying to many people; it used to annoy me too, but it is that way for a sensible reason and is unlikely to change. I just disable the volume slider from my track headers, and mute/solo as well for the same reason, and use the Mixer for adjusting things like that instead.

See the above forum post about the pros and cons of the multi-timbral approaches; all of them have trade-offs, including the classic old school method you are using. It may be that you have to live with some PDC issues, but I don't want to say for sure that is actually happening without more testing. It may be that PDC is not 100% right in some way with the old school approach, either because playback is off or because the way the MIDI is recorded to the track is not quite right. If you have a simple project that demonstrates it, I'll check it out and try to confirm.
 
Well, I’m willing to re-examine how I do things, since my new rig is much more powerful than my old one and might like certain setups more than others.

For example, my old system hated Playback and Live mode... but my new system hates Playback mode (unless I constrain it to 8 or fewer threads, but what's the fun in having 20 then?)
 
There aren’t too many Scripter-specific resources, are there? It’s basically JS, right? But I’m sure there are some Logic-specific things worth knowing about. I wouldn’t mind having more granular control over things like articulation timings...
 
Any good resources you know of specifically for Logic Scripter, @NoamL? Or did you just create Thanos, etc. from your existing JS knowledge?
 
Yes, I just taught myself JavaScript. I recommend https://www.w3schools.com/js/default.asp, and look at some of the default scripts that come with Scripter to learn the things that are specific to the LogicX implementation of it. I found it much easier to understand than the Environment!!

There's no pro software engineering going on under the hood of Thanos; it only uses if-then statements, switches, very simple functions, a few arrays, and a bit of math.
 
The official JavaScript reference is good too. Start with functions, since the bare bones of what the LogicX Scripter does is take an incoming MIDI event and apply a function to it.
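
To make that concrete, here is a minimal sketch of that idea (the octave transpose is just an arbitrary example): Scripter calls HandleMIDI once per incoming event, and whatever you send() continues on to the instrument.

JavaScript:
function HandleMIDI(event) {
    // Note events get transposed up an octave; everything else
    // (CC, pitch bend, etc.) passes through untouched.
    if (event instanceof NoteOn || event instanceof NoteOff) {
        event.pitch = event.pitch + 12;
    }
    // Send the (possibly modified) event on to the instrument.
    event.send();
}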
 
There are a few resources on the web, but really most people just figure it out through trial and error. In the Logic Pro Help menu there is a user manual for effects, where you'll find a section on Scripter that explains it almost completely, except for a few hidden features that you don't need to know to get started. There are approaches that work which aren't explained in the manual, but for a lot of simple use cases that doesn't really matter.

I'm sure if you dig in, try some simple things step by step, and ask for help here, I will be happy to help, as will others I'm sure. I also find Scripter way more straightforward than the Environment. The Environment is handy for filtering things BEFORE the MIDI hits the sequencer, and for certain kinds of tasks, but I usually just avoid it.
 
Testing out the New Tracks Wizard method, delays in ms DO work... BUT it seems that the whole "Instrument" (or at least the whole Port—haven't tested further yet) takes whichever track delay is set last.

In other words, if I have Inst 1 Port 1 Ch1 (let's call this Flute) and Inst 1 Port 1 Ch2 (let's call this Oboe) and set Flute to -126.5ms, both Flute and Oboe play back at -126.5ms. If I then set Oboe to +79ms, even if I do not also change the Flute offset, both Flute and Oboe play back at +79ms.

Am I just misunderstanding how to properly use this method? It seems like this is a bigger problem than only being able to offset MIDI events by ticks...
 
You appear to be correct. Make sure to submit a bug report to Apple about this.

And by the way, if you use Expert Sleepers Latency Fixer, you will have the same problem.

So basically I can surmise that the old school MIDI tracks are the only kind that are truly treated as MIDI tracks by Logic Pro, and the track delay parameter in that case is a MIDI parameter (that's why it's only in MIDI ticks); but since it's an actual MIDI delay, it's separate for each source MIDI track.

With the New Tracks Wizard approach, that parameter looks like a track delay, but I guess it's actually a mixer channel attribute, which is why you can use ms: it is basically one setting for the whole mixer channel, for much the same reason that the track volume slider and mute/solo affect the whole channel.

(sigh)

There is still a way to get ms delay per track, but you're not gonna like it.

Basically, what you do is set the track delay to a large negative value (or use Latency Fixer to do that), which sets one global negative delay for all tracks hitting that one VEP instance. Then you use Scripter to add positive delay back to each MIDI port/channel as appropriate. In some cases you add the same amount of positive delay as your global negative delay setting, to net zero; in other cases you add less positive delay, which results in a net negative delay for just that one MIDI port/channel.

Note that Scripter itself is global for the whole VEP instance, so you have to write it all in one script, handling each port and channel as you wish.

Most likely it will just be easier to use old school tracks and fudge around with tick-based negative delay until it's close enough.

The advantage of the Scripter approach is that you can also look at articulationID and set the net negative delay on a per-articulation basis, in addition to per port/channel. But you have some stuff to learn about Scripter before you will get that done.

This brings us back to trying to figure out why your timing is off with the old school tracks. Do you have a simple LPX project that demonstrates that issue?
 
In case you're feeling adventurous... here is a simple Scripter example which illustrates the possible solution I mentioned above. You can expand or tweak it for your own purposes, or at least learn a little about Scripter in the process...

In this first example, port 1, channel 1 will be sent with a 23ms delay; port 1, channel 2 with a 54ms delay; and all other ports/channels with a 60ms delay.

JavaScript:
function HandleMIDI(event) {

    // port 1
    if (event.port == 1) {

        // channel 1
        if (event.channel == 1) {
            event.sendAfterMilliseconds(23);
            return;
        }

        // channel 2
        if (event.channel == 2) {
            event.sendAfterMilliseconds(54);
            return;
        }
    }

    // all other ports/channels
    event.sendAfterMilliseconds(60);
}

So in the above example, if you were to set that single track delay parameter to -60ms, then channel 1 would be sent 60 - 23 = 37ms early and channel 2 would be sent 60 - 54 = 6ms early. The other, unconfigured channels would come out net zero.

Obviously you can copy and paste the IF blocks to configure as many ports/channels as you want.


If you wanted to handle per articulation as well, that would be like this...

JavaScript:
function HandleMIDI(event) {

    // port 1
    if (event.port == 1) {

        // channel 1
        if (event.channel == 1) {
            event.sendAfterMilliseconds(23);
            return;
        }

        // channel 2
        if (event.channel == 2) {

            if (event.articulationID == 1) {
                event.sendAfterMilliseconds(40);
                return;
            }
            if (event.articulationID == 2) {
                event.sendAfterMilliseconds(30);
                return;
            }

            // any other articulation on channel 2
            event.sendAfterMilliseconds(60);
            return;
        }
    }

    // all other ports/channels
    event.sendAfterMilliseconds(60);
}

In the above example, channel 2 has articulationID handling: articulationID 1 ends up 20ms early, articulationID 2 ends up 30ms early, and any other articulations on channel 2 come out net zero (assuming the track delay is set to -60ms).

Hope that makes sense.

I should note that with the articulationID stuff there can be other complications, such as how to handle CC and pitch bend events when different articulations have different amounts of delay; non-note events generally don't have an articulationID, for example, so it can get a fair bit more complicated to manage all of that in Scripter. But it's all doable. And yes, this is just like how Thanos does it, except that Thanos takes care of those extra complexities and is specifically hard-coded with all the correct delay values for CSS, ready to roll.
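
For illustration, here is one possible way to handle that complication. This is only a sketch of the general idea (it is not how Thanos actually implements it, and the delay values are the assumed examples from above): remember the articulationID of the most recent note on each channel, and give subsequent non-note events the same delay.

JavaScript:
// Assumed example values: articulationID -> delay in ms.
var artDelays = { 1: 40, 2: 30 };
var DEFAULT_DELAY = 60;      // fallback delay in ms
var lastArtByChannel = {};   // channel -> articulationID of the last note seen

function HandleMIDI(event) {
    var delay = DEFAULT_DELAY;
    if (event instanceof NoteOn || event instanceof NoteOff) {
        // Notes carry articulationID; remember it for this channel.
        lastArtByChannel[event.channel] = event.articulationID;
        if (event.articulationID in artDelays) {
            delay = artDelays[event.articulationID];
        }
    } else {
        // CC, pitch bend, etc. inherit the delay of the channel's last note.
        var art = lastArtByChannel[event.channel];
        if (art in artDelays) {
            delay = artDelays[art];
        }
    }
    event.sendAfterMilliseconds(delay);
}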

There are other, more elaborate ways in JavaScript to avoid a humongous list of IF statements: you can take a more data-driven approach, which is easier to tweak and modify than copying and pasting. But it's more complicated, and the above approach is best for starting out with Scripter.
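
For the curious, here is a rough sketch of what that data-driven version might look like, reusing the example delay values from above (the "port.channel" table keys and the DEFAULT_DELAY fallback are just one possible scheme):

JavaScript:
// Delays in ms, keyed by "port.channel"; anything not listed
// falls back to DEFAULT_DELAY.
var DEFAULT_DELAY = 60;
var delayTable = {
    "1.1": 23,   // port 1, channel 1
    "1.2": 54    // port 1, channel 2
};

function HandleMIDI(event) {
    var key = event.port + "." + event.channel;
    var delay = (key in delayTable) ? delayTable[key] : DEFAULT_DELAY;
    event.sendAfterMilliseconds(delay);
}

To add or change a port/channel you then edit one line of the table instead of copying and pasting another IF block.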
 
You appear to be correct. Make sure to submit a bug report to Apple about this.

And by the way, if you use Expert Sleepers Latency Fixer, you will have the same problem.

So basically I can surmise that the old school MIDI tracks are the only kind that are truly treated as MIDI tracks by Logic Pro, and the track delay parameter in that case is a MIDI parameter (that's why it's only in MIDI ticks); but since it's an actual MIDI delay, it's separate for each source MIDI track.

With the New Tracks Wizard approach, that parameter looks like a track delay, but I guess it's actually a mixer channel attribute, which is why you can use ms: it is basically one setting for the whole mixer channel, for much the same reason that the track volume slider and mute/solo affect the whole channel.
Figured as much!

There is still a way to get ms delay per track, but you're not gonna like it.

Basically, what you do is set the track delay to a large negative value (or use Latency Fixer to do that), which sets one global negative delay for all tracks hitting that one VEP instance. Then you use Scripter to add positive delay back to each MIDI port/channel as appropriate. In some cases you add the same amount of positive delay as your global negative delay setting, to net zero; in other cases you add less positive delay, which results in a net negative delay for just that one MIDI port/channel.

Note that Scripter itself is global for the whole VEP instance, so you have to write it all in one script, handling each port and channel as you wish.

Most likely it will just be easier to use old school tracks and fudge around with tick-based negative delay until it's close enough.

I was actually thinking through this last night, before I discovered this "feature" this morning. I think this is the way I'd go...sort of like a meta-mega-Thanos ;)

This brings us back to trying to figure out why your timing is off with the old school tracks. Do you have a simple LPX project that demonstrates that issue?

So there are actually two issues here, and I'm pretty sure they are these:

1. I just naturally play ahead of the beat, probably due to years of playing a slower-speaking bass instrument in concerts and being further outfield in marching band—I feel like I'm lagging unless I'm ~20ms ahead of the beat. I do not think this is a computer issue. (Maybe I just need to get a giant control room with all the spare real estate and studio-building money I have lying around and sit 20 feet from my monitors? hah!)

2. I want to be able to adjust the negative track delay on instruments so they can be quantized to the grid but sound in time. Virtual instruments, by necessity, "play" slightly after the note on.

I do think that these two things are interacting, which is why I'm exploring potential solutions...but I really don't think my computer is "wrong." I'm just as ahead of the beat in a fresh empty project with an EXS Piano as I am in a project with 500+ MIDI instrument instances routed through four or five layers of auxes and effects.
 
See my above JavaScript examples in case you missed them.

Back to your present situation, though. I would really like to understand whether using the old school MIDI tracks is messing with the timing. There are generally two things to worry about, and believe me, LPX does have some timing-related bugs, especially when it comes to things like external MIDI, so I would not put it past it to have a bug here. But the big project you mentioned has a lot of other stuff going on which is probably clouding the issue; I will comment on that in a sec.

Anyway, the timing issues are twofold.

  1. Latency during playback: if Logic detects latency in plugins, it should use Plugin Delay Compensation (PDC) so that everything plays exactly on the click.

  2. Recording MIDI to tracks: LPX is supposed to be aware of your latency, both from audio devices and from plugins, and do the right thing to make sure that events end up on the timeline where you intended. I believe Logic Pro goes by the notion that where you heard it is where it should end up on the timeline. In other words, if plugin latency delayed the sound, it should register the event on the timeline at the point in time when you heard it. During record it calculates the correct position and attempts to do that, taking into account all the latency factors it knows about (knock on wood).

Can it get confused by environment routings in point #2? Entirely possible. And some bugs do actually exist when using external MIDI tracks that way.

On top of that, you mentioned 4-5 layers of AUX channels, and if you have plugin latency on those, it can start to get very confusing.

PDC works in two ways. If you have a latent plugin on an audio or instrument track, Logic will figure that out during playback and feed the source track to it early, so that the latency is compensated and you hear the audio when you are supposed to hear it. During record it can't feed it early, so you will hear it late.

However, AUX channels and the master output channel are handled differently. If you put plugins with latency on those channels, Logic basically assumes it's "live" audio that can't be fed early, so it adds delay to all the other channels to bring them into sync with the delayed AUX channel.

This second way can get confusing if you have a lot of sends going in different directions, and I have no idea how LPX sorts all that out, but that is basically what it's doing.

So when you record tracks through a lot of AUX channels, if any of those AUX channels or the master bus have plugins with latency, everything gets delayed, including the instrument you are trying to play. And see point #2 above: in theory Logic should correct for all that and register the MIDI event on the track where you HEARD it, but perhaps some use case is not doing that right... or perhaps you are just getting confused by all the PDC happening with your AUX channels.
 
Yes, but like I said, it's the exact same timing on an empty project with a basic instrument, so I'm more inclined to believe it's me rather than complexity.
 
Also, when I create a simple project with an old school MIDI track feeding into an environment instrument, which cables to EXS24, the track DOES allow negative delay in ms. Click on the word "Delay" and you'll see a popup menu to choose the type of delay to use.

[Attached screenshot: delay.jpg]

I see now that this was brought up earlier in the thread. What do you mean when you say the ms didn't work?

The Auto Compensation option is not what I think you need. It is designed to basically mimic what the External Instrument plugin does: it is for when you are connecting raw MIDI tracks directly to hardware MIDI ports, so that the sound coming from the external synth will line up with your LPX mixer. Typically the MIDI needs to be delayed in order to line up with the LPX mixer, which is itself delayed somewhat by plugin and audio card latency. So the Auto Compensation setting basically adds some delay to MIDI events to match whatever delay the LPX mixer is causing overall, so that in theory, if you were listening to your external hardware through an external hardware mixer (not coming back into Logic Pro), it would all sound in sync.

I don't think that's the option you want in this case; it will just confuse the issue.

The Delay in MS should be working though...??
 
Now tell me how to replicate the audio rendering timing problem you're getting, so that I can try to reproduce it here.
 
Okay:
1. Quantize a MIDI event or series of events to the grid
2. Using no "Delay" selection, print the audio output, via record, onto an audio track (via a bus from the audio return of the MIDI instrument)
3. Using "Delay in Ticks," -99 ticks, print the audio output, via record, onto an audio track (via a bus from the audio return of the MIDI instrument)
4. Using "Delay in Milliseconds," -500.0 ms, print the audio output, via record, onto an audio track (via a bus from the audio return of the MIDI instrument)
5. Compare the accuracy of the prints to the location on the grid

This is what I did here: https://vi-control.net/community/th...t-track-delay-compensation.97100/post-4617761
 