
Logic Scripter—multiport track delay compensation?

I tried that, but in VEPro, I have a lot of multis... so I can delay the multi, but not the channel within the multi. I might experiment with splitting out the multis in VEPro. Also going to try your script.

As always, thanks.
 
For the script, also note that it has a GUI menu item to enable/disable the delays on one channel while you record, leaving delay correction active on all the other channels. It might be possible to use automation to control that. I haven't done this yet, but in theory we could create MIDI switches to quickly choose a channel to record on without delay.

I agree the JSON editing is not optimal. Scripter's GUI capabilities are too limited to do better than that. But once you get used to it, I don't think it will bother you that much. The one catch is that if you set up a large template and then start moving channels around, that could be a hassle.
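For what it's worth, the configuration boils down to a per-channel delay table. Here is a minimal sketch of the idea in JavaScript; the actual JSON keys and units the script uses may differ, so treat the names here as hypothetical:

```javascript
// Hypothetical per-channel delay table, similar in spirit to the JSON
// the script asks you to edit (real key names and units may differ).
const config = {
  1: 150, // channel 1: legato patch, play 150 ms early
  2: 60,  // channel 2: longs
  3: 0    // channel 3: shorts, no correction
};

// Look up the negative-delay amount for an event's MIDI channel,
// falling back to 0 for unconfigured channels.
function delayForChannel(cfg, channel) {
  return cfg[channel] || 0;
}

console.log(delayForChannel(config, 1)); // 150
console.log(delayForChannel(config, 7)); // 0
```

Rearranging channels in a big template then means re-keying a table like this by hand, which is exactly the hassle in question.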

The reality is that most musicians are put off by having to edit any JSON whatsoever, which is why I have not made a big effort to promote this script. It does get the job done, though.

And yes, for the Expert Sleepers approach you'd have to use multiple audio outs from each plugin in order to isolate each articulation for latency compensation.
 
As a side note, I keep going back and forth between Logic and Cubase because of LogicPro problems like this. It's entirely unacceptable that in 2021 LogicPro's half-baked multi-timbral operation can't support, out of the box, 768 AU3 MIDI channels with the mute and solo buttons working properly, negative track delay working properly, etc. These problems should have been corrected years ago. I suspect they are difficult to fix because LogicPro is a prettier GUI layered on top of the ancient old-school Logic Environment object paradigm, and the complicated linking of those internal objects causes some of these problems. To really fix it would require a complete re-architecture of LogicPro, which is unlikely to happen.

On the other hand, Cubase has its own share of problems and hassles. I use Scripter a lot, and Cubase provides nothing comparable. When you are talking about channelizing articulations, for example, Cubase will not channelize expression events; neither does LogicPro, but in Logic I have a script for it, which I can't do in Cubase. There are all kinds of pros and cons on both sides, and it comes down to choosing your compromises.
 
Whoa, this issue made me do a quick test, and guess what: the track delay parameter doesn't just fail with external MIDI tracks through the Environment, it doesn't work with normal multi-timbral tracks either! Not even with AU2.

If you set up a 16-part multi-timbral track in Logic, using its very own wizard no less, then try to apply different track delays to each of those 16 parts, they don't work. They are all linked to the same delay setting, so whichever part you set the delay on will affect all the parts. Even though each part can show a different delay setting in the track inspector, it doesn't matter: internally it's a single linked value, and whichever one was set last is the delay value actually used.

That is DEFINITELY a bug... kind of related to the external MIDI track issue in some way, if you ask me. Anyway, I will report this to Apple again as a bug, just to turn up the heat... but it's been around a long time and I don't expect it to change.

:-(

What this means, bottom line, is that if you intend to use the track delay parameter in LogicPro, it is not possible to use multi-timbral instruments with different delay settings per part. There is no way to do it; it's just fundamentally broken, not with external MIDI tracks and not with Apple-approved multi-timbral tracks either.

Can be done with Scripter of course. ;)

And the expert sleepers PDC trick.
 
Yes, I discovered this as well. It's maddening, because at first it seems to work, but then you realize it was a false positive.

I am experimenting with adding expert sleepers latency plugin inside VEPro. Just need to figure out the best way to set up the signal chain.
 
I implemented the expert sleepers latency plugin in my VEPro instance and it seems to do the trick. I had to create separate outputs in Kontakt for legatos, longs and shorts (level of granularity for the delay settings), but once I did that I could add the plugin as an effect on each channel. I sum everything before sending back to Logic, so I didn't even touch the Logic side of things.

I did this for only one of my string libraries so there is still a lot of work to be done, but I'm optimistic. I'm going to try the remaining libraries over the coming days. Hopefully, I won't hit another brick wall.

Really appreciate your help (and moral support), @Dewdman42!
 
Wow, this is fantastic @Dewdman42. It gives you control at the MIDI channel level - which is much more granular than I have now with the audio outs in VEPro (which is still working by the way). I'll check this out later today. Great work!
 
It also does one special thing no other tool does: it makes the note attacks start early while note releases and CC, pitch bend, and aftertouch events are left alone.
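Conceptually it amounts to this (a sketch only, not the script's actual code; the event shapes are simplified):

```javascript
// Sketch of note-attack-only latency correction: shift NoteOn events
// earlier by the channel's delay (in beats here), and pass NoteOff,
// CC, pitch bend, and aftertouch through untouched.
function correctEvent(event, delayBeats) {
  if (event.type === "NoteOn") {
    // Negative offset: the attack is scheduled ahead of the written position.
    return { ...event, beatPos: event.beatPos - delayBeats };
  }
  return event; // releases and expression keep their original timing
}

const note = { type: "NoteOn", pitch: 60, beatPos: 4.0 };
const cc   = { type: "ControlChange", number: 1, beatPos: 4.0 };
console.log(correctEvent(note, 0.25).beatPos); // 3.75
console.log(correctEvent(cc, 0.25).beatPos);   // 4
```

The payoff versus a blanket track delay: note lengths and CC curves stay where you wrote them, only the attacks are advanced.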
 
Catching up on all of this as I am trying to set up a VEPro template making use of both Art Conductor articulation sets and heavy use of negative delay (can't live without it now), and I'm surprised that so many issues/problems are being discussed literally right now! I am shocked it's such a hassle with Logic. @marclawsonmusic which template method are you using the @Dewdman42 script with that's finding good results?

Obviously there have to be compromises, and I'm wondering how I'll have to route and organize things in order to get different delays for different articulations.

Is it possible to get this working with, for example, one Kontakt combined-articulations instrument (rather than a bunch of split patches)? I'm still learning all of this, but I was hoping to cram as many related instruments into one Kontakt multi as possible - say, every staple patch in Cinebrass Core/Pro, such that I don't exceed 16 instruments in that one Kontakt. My original thinking was every instrument gets its own stereo output, and the compromise is not having granular control over mics.

However, looking at this negative delay issue, I'm thinking I might need to route outputs more along the lines of "high brass shorts", "high brass longs", etc, because each instrument (e.g. horn solo) having 2 or 3 outputs is crazy, but the script depends on analyzing the audio outputs to determine delay compensation. Is that more or less correct? Thank you guys for working all this out!

Edit: Correction - I think I was conflating the latency fixer plugin and the script.. I need a nap...
 
Sounds like you figured this out, but the script operates entirely in the MIDI domain. The other approach discussed earlier assigns each articulation to a separate audio out, so that when it hits the VEPro mixer, each isolated audio channel can have Latency Fixer assigned to it. Latency Fixer does nothing to the timing; it just reports whatever latency you set it to to the host (VEPro), and VEPro then does whatever it does to perform plugin delay compensation on each articulation's audio channel in the VEPro mixer.
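The compensation arithmetic the host performs is simple in principle: every channel gets padded up to the worst-case reported latency, so a channel that reports more latency ends up sounding earlier relative to the others. A sketch of the idea (the sample counts are made up for illustration):

```javascript
// Sketch of host-side plugin delay compensation: each channel is
// delayed by (maxLatency - ownLatency) so all channels stay aligned.
// A channel whose LatencyFixer reports a LARGER latency therefore
// gets LESS added delay, i.e. its audio effectively plays earlier.
function pdcOffsets(reportedLatencies) {
  const max = Math.max(...reportedLatencies);
  return reportedLatencies.map(lat => max - lat);
}

// Example: legatos report 4410 samples, longs 2205, shorts 0.
console.log(pdcOffsets([4410, 2205, 0])); // [ 0, 2205, 4410 ]
```

So setting Latency Fixer highest on the legato channel is what makes the legato audio arrive earliest after compensation.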

There are pros and cons to either approach
 
What I would do if I were you is create your various busses in the VEPro mixer instead of in Kontakt. The final set of audio outputs from VEPro back to LogicPro can be whatever you want.
 
@marclawsonmusic which template method are you using the @Dewdman42 script with that's finding good results?
Hi @lettucehat,

Here's what I'm doing right now:

One VEP instance per library. So I have an instance called 'VEP CineBrass'.

In there, I have one Kontakt multi per instrument. So, I've got the following multis (each on its own track in VEPro) for CineBrass:
  • CB Solo Tpt (port 1)
  • CB Tpt Ens (port 2)
  • CB Solo Horn (port 3)
  • CB Hn Ens (port 4)
  • CB Solo Tbn (port 5)
  • CB Tbn Ens (port 6)
  • ... you get the idea
Each of those multis is on a separate port, so I can have up to 16 MIDI channels per Kontakt multi.

In Logic Environment, I have an Instrument channel strip with the VEP AU3 plugin. I clone this XX times, changing the port number for as many as I need. I think I have 9 ports for CineBrass. Each channel strip clone is set to a different port, which is mapped to the corresponding multi in VEP - CB Solo Tpt, CB Solo Hn, etc.

Also in the Environment, I have a MIDI Instrument for each articulation - e.g. CB Solo Tpt Legato, CB Solo Tpt Longs, CB Solo Tpt Shorts. These MIDI instruments point to the corresponding Instrument channel strip.

Finally, in Tracks view, I have a bunch of External MIDI tracks that I 'reassigned' to each of the MIDI Instruments. I use articulations on these to switch between long, marcato, trill, trem, etc. But this is where my plan fell to pieces... I was going to put negative delay on each of these MIDI tracks (one per articulation), but Logic simply doesn't recognize it. BIG bummer.

To get around this, in VEPro, I set up multiple outputs in each Kontakt instrument. So in CB Solo Tpt, I have legatos going to st.1 (1/2), longs going to st.2 (3/4), and shorts going to st.3 (5/6). In VEPro, I hit the plus button to create two additional audio outputs that correspond to the Kontakt outs. These 3 audio outs are where I put the Expert Sleeper Latency plugin. All audio in each VEPro instance still sums to a single stereo return that feeds back into Logic. You could obviously use multi-out if you wanted.

Still not sure this is all going to work... I just finished setting up all the instruments today and need to do some stress testing with an actual track. If it doesn't work, I'm seriously thinking of switching to Cubase.

PS - Haven't had a chance to test Dewdman's script yet. Need to establish a baseline first.

Hope this helps,
Marc
 
The actual LatencyFixer plugin takes almost no CPU... it does almost nothing other than report a value back to the host. So that particular aspect should not be a problem in a stress test.

The extra audio track splitting needed to accommodate it, though... that could be. Please let us know how that goes.
 
@marclawsonmusic said: (see the full setup post quoted above)
Wow, thank you for the detailed explanation. So I'm strongly assuming this is all using split patches in a one track per articulation setup, right? I think there's still a ton I can emulate in your approach, but unfortunately one of my main priorities is sticking with articulation sets / one instrument one track (as long as the sample library permits). Given how many problems there are already in getting negative track delay to work, I'm guessing there's no way one could make it work by cracking open the all-in-one patches (like Six Horns Articulations) and assigning different delays or outputs to the longs/shorts... right?
 
The approach marclawsonmusic is using can still be used with a single source track. You can specify in the articulation set to "channelize" the notes rather than sending keyswitches. Just set up your articulation set that way.

Then in Kontakt you load a separate instance of the instrument into each of Kontakt's 16 channel slots, pre-load the articulation in each of those, then use the purge-all-samples command to empty the RAM and save it as a Kontakt multi.

After that you will have one source track; the articulation set channelizes it, and from there it goes to Kontakt, which hosts each articulation on a separate channel, and so on...
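In other words, the articulation set's output stage stops sending keyswitches and instead stamps each note with a channel. A toy model of that output stage (the articulation IDs and channel numbers here are purely illustrative):

```javascript
// Toy model of a "channelize" output stage: instead of emitting a
// keyswitch note per articulation, rewrite the note's MIDI channel.
// The articulation-ID-to-channel table is illustrative only.
const artToChannel = { 1: 1, 2: 2, 3: 3 }; // e.g. legato, long, short

function channelize(note) {
  const ch = artToChannel[note.articulationID];
  return ch ? { ...note, channel: ch } : note; // unmapped notes pass through
}

const n = { pitch: 60, channel: 1, articulationID: 3 };
console.log(channelize(n).channel); // 3
```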

I still highly recommend you both consider my Scripter script though, and actually this is a good time to use one of my other two related scripts...EventChaser or Channelizer...

The reason is that LogicPro does not propagate CC and PitchBend expression across multiple channels, even when the articulation set is moving the notes to new channels as described.

These two scripts do that. EventChaser assumes the notes are already channelized and propagates the expression events to the other channels of the same port.
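The propagation itself amounts to cloning each expression event onto every channel of the port. A minimal sketch of that idea (EventChaser's real behavior is surely more nuanced, e.g. in which channels it targets):

```javascript
// Minimal sketch of expression propagation: clone a CC or pitch-bend
// event onto a set of target channels so every articulation channel
// of the port hears the same expression data.
function propagate(event, channels) {
  return channels.map(ch => ({ ...event, channel: ch }));
}

const cc = { type: "ControlChange", number: 1, value: 90, channel: 1 };
const copies = propagate(cc, [1, 2, 3]);
console.log(copies.map(e => e.channel)); // [ 1, 2, 3 ]
```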

Channelizer is interesting because with it it's possible to have more than 16 articulations, spread across multiple MIDI ports. If you have an instrument that needs more than 16 articulations, Channelizer is really useful.
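The arithmetic for spreading articulations across ports is straightforward, 16 channels per port. As a sketch (a Channelizer-style script presumably does something along these lines, though the exact mapping is an assumption):

```javascript
// Sketch of mapping a 1-based articulation ID onto a (port, channel)
// pair, with 16 MIDI channels per port.
function portAndChannel(artID) {
  const zero = artID - 1;
  return { port: Math.floor(zero / 16) + 1, channel: (zero % 16) + 1 };
}

console.log(portAndChannel(1));  // { port: 1, channel: 1 }
console.log(portAndChannel(17)); // { port: 2, channel: 1 }
console.log(portAndChannel(35)); // { port: 3, channel: 3 }
```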

I'm doing some updates to all of my scripts right now, but in the next few days I will post a more comprehensive tutorial about how to set things up with these scripts in combination to achieve both channelizing and expression propagation and latency correction... all in the midi domain.

As I was explaining earlier, the other thing about my latency correction scripts is that they correct the NoteOns while not correcting NoteOff, CC, and PitchBend, which is what you want! All the other tricks of using negative track delay or Latency Fixer, etc., are conceptually easy but act like a blunt hammer that only gets you closer: you are still left with notes ending sooner than you realize, and CC and PitchBend all coming in earlier than you realize too.
 
@Dewdman42 said: (see the channelizing post quoted above)
The channelizing route makes a lot of sense. I actually did a very inadequate job of explaining the articulation sets thing: what I'm trying to stick with is the very specific Art Conductor sets (since they are well standardized and I already paid for them, haha), which use keyswitches and, on occasion, channels for libraries that have a lot of articulations. As of now I'm resigned to having only one negative delay setting per library instrument for some libraries, unless there's a way of applying one negative delay (based on the longs) and somehow applying a positive delay inside Kontakt to the shorts.

Thankfully, many of my favorite libraries only come split out by articulation, so I can definitely try your scripts there. I'll use them on the keyswitch instruments too, but unless there's that miracle solution or some other workaround, I'll have to find some compromise between the delay needs of longs and shorts. Once I started working quantized with negative delays, it's been hard to go back! At least I'm thankfully not wedded to separating longs and shorts in audio routing.
 
Adjusting latency with keyswitched instruments is very difficult to do. I am also working on a very elaborate script for the long term, but nobody will want to use it because it's too complicated. There are many issues that come up when using keyswitched instruments...

With Kontakt it should be possible to do some things inside a Kontakt multi-script, but I don't know KSP well enough to attempt it, and there is really no general solution there.

Bottom line, I know of no general solution for latency correction with keyswitched instruments. There are a few very specific things that have been done, for CSS for example, but as far as a general approach, it's just not available anywhere... and I have been tweaking on this issue for a while. I can say it's not an easy problem to solve in a general way.

If you want to handle automatic latency correction, I strongly suggest you consider using a channelizing approach.

By the way, you can adapt Art Conductor for that. You just have to go into the output section, remove the keyswitches, and replace them with channel assignments. Haha, that's throwing away a lot of what you paid for, I get it. But Art Conductor also uses a consistent paradigm on the input-switch side (which you can still use, by the way!). You can use the input switches as defined by Art Conductor to assign articulation IDs to notes on the fly when you record them... then on playback, have the output section channelize instead of sending keyswitches...

Or even better, just completely blank out the output section and use my Channelizer script instead.

But it will mean you have to rework the articulation sets you are using in that regard, and it will mean setting up a lot of Kontakt multis to receive the channelized MIDI instead of keyswitches.

Even for libraries that don't have a separate instrument for each articulation, you can just load the same instrument 16 times, purge all samples, and then manually configure each one to a specific articulation, as if the keyswitch had already been sent. They will just be saved that way in the Kontakt multi.
 
I don't know enough about the articulation sets you are using, but I am using Logic's built-in articulations on my three MIDI tracks - legatos, longs, shorts. There is no reason I couldn't combine these into a single track - as long as each articulation can output to a separate MIDI channel.

You will still need to separate the instruments (inside Kontakt) into shorts, longs, etc. - basically anything you want the negative delay on. And then route those instruments to a separate output.

PS - Some of my articulations use keyswitches and I haven't had problems with those.
 
Sure, the keyswitches can be there, but the notes have to be channelized in order to do the latency correction; that's all I'm saying. Trying to determine how much latency correction to apply based on the keyswitches themselves becomes very difficult to do in a generic way.

You can use channel to determine latency and generally include keyswitches too, if they are still needed.
 