# Multiple Crossfades



## daringone (May 16, 2012)

Can anyone tell me how to create multiple articulation crossfades?

I have sustained and vibrato oboe samples going from pp to ff. I want to be able to crossfade from pp to ff, and between sustained and vibrato, using two different CCs.

Using Big Bob's Math Library I can get part of this working, but only with two notes. I've also looked at Nils's crossfade script, but I'm struggling to make the necessary modifications.

I have also tried just using the modulators in Kontakt, and this seems like it might be the simplest solution, but I haven't been able to get it to work successfully: I can get it to fade between dynamics, or between sustained and vibrato, but not both.

Any help very much appreciated!


----------



## Big Bob (May 16, 2012)

Hi daringone,

I'll say you are a daring one :lol: I'm afraid this is a rather 'tall order' as they say. 

I don't know how well this will work but here is the sort of thing I would try if I had to implement it.

I would try using a combination of AET morphing and Crossfading as follows.

1. Put each set of volume samples in their own groups. So if you have 6 volume layers from pp to ff, you would have 6 groups for the non-vibrato and 6 groups for the vibrato samples.

2. Build an articulation morph with the AET filter for each of the two sets of volume layers. Specifically, you would analyze each volume layer for the non-vibrato samples and insert the AET filter in the ff group of the non-vib samples. Then you would analyze each volume layer for the vibrato samples and insert an AET filter in the ff group of the vib samples.

3. Now you arrange it so that when any note plays, only the two ff groups are enabled to sound. You assign both Morph knobs to the CC you want to use for volume-timbre control.

4. Finally, you arrange to crossfade the two ff groups with the crossfade CC being used for morphing between vibrato and non-vibrato.

I don't think the AET can possibly handle the morph between vib and non-vib but it can do a pretty smooth job of handling the volume level morph. 

Since only the two ff groups are actually sounding, they are the only two groups that need to be crossfaded. Because the vib and non-vib samples may be fairly coherent, you may not need an equal-power crossfade (i.e. you probably won't need the sin/cos shaping), but you will have to take into account the cubic curvature of the group volume engine parameter. Since the ep has to be proportional to the cube root of the desired volume ratio, you can use the Math Library routine VR_to_ep.

For example, if your vib to non-vib control is a knob named Vib and declared something like:

```
declare ui_knob Vib (0, 10000, 1)
```

you could compute your two eps with:

```
VR_to_ep(Vib,ep1)
VR_to_ep(10000-Vib,ep2)
```
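The cube-root relationship can be sketched numerically. Here is a hypothetical Python model of the mapping (the 0..1000000 engine-parameter scale and the function body are illustrative assumptions; the actual VR_to_ep routine is KSP integer code in the Math Library):

```python
# Hypothetical model of the cube-root mapping described above (not the
# actual Math Library code). Kontakt's group-volume engine parameter is
# assumed to respond cubically, so a desired volume ratio VR is reached
# by driving the parameter with VR ** (1/3).

KNOB_MAX = 10000     # matches the knob declaration above
EP_MAX = 1000000     # assumed engine-parameter full scale

def vr_to_ep(knob):
    """Map a knob value (desired volume ratio = knob/KNOB_MAX) to an ep value."""
    vr = knob / KNOB_MAX
    return round(EP_MAX * vr ** (1.0 / 3.0))

# The two crossfade legs from the snippet above:
vib = 2500
ep1 = vr_to_ep(vib)              # vibrato leg
ep2 = vr_to_ep(KNOB_MAX - vib)   # non-vibrato leg
```

Note that it's the two volume ratios (not the eps themselves) that sum to 1; that amplitude-sum behavior is exactly the non-equal-power fade described as adequate for fairly coherent samples.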

As with all crossfading, there may be phasing problems depending on the samples. 

If you try this, please let me know how it works out for you.

Rejoice,

Bob

BTW: I have written a hands-on tutorial for using the AET Filter in collaboration with David Carpenter. Part 1, covering the basics, is complete, but Part 2, dealing with advanced issues, is still being written. If anyone would like a copy of Part 1, send me a PM with your email address and I'll try to send it to you.


----------



## daringone (May 16, 2012)

Thanks Big Bob, 

I have just had one of those revelation moments that occur to all programmers just before they are ready to tear their hair out. I had another look at Nils's script and your xfade demo and realised I'd been approaching the thing incorrectly. I may be able to script it; we shall see.

If not, your way sounds good  !


----------



## daringone (May 17, 2012)

Yippie! I managed to code the double crossfade. The first one was difficult; the second one was just a pain and I missed something obvious, but it works!! hehe

Now I need to look at legato intervals 

Thanks again.


----------



## Big Bob (May 17, 2012)

Congratulations o-[][]-o 

Just out of curiosity though, are you working within the constraints of staying at the initial velocity of the note once started? 

Or, are you actually able to play a note and sustain it, then vary the volume up and down (passing through the set of volume samples from pp to ff) and able at any point along the line to morph from vibrato to non-vibrato and then change volume again, etc? i.e. are the two crossfade controls able to work at all times and levels during the sustain?

If so, double congratulations are in order, a 'daring feat' indeed! :lol: 

Rejoice,

Bob


----------



## daringone (May 18, 2012)

Hi Bob

I have it set up so I have groups from pp to ff, and within the groups are both sets of samples, vibrato and sustain, at different velocity levels.

When a note is pressed, all the samples are played, and straight after, the volumes are set (as in your xfade demo): first they are set based on the mod wheel position, and then the dynamic layer that is currently playing is set based on the vibrato controller, CC77 for my purposes.

Whenever a controller is moved, the volumes are updated again in the same manner, so I can control both volume and vibrato depth at the same time.
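In gain terms, the scheme above amounts to multiplying two independent crossfades per voice. A hypothetical Python sketch of that structure (the CC ranges, layer count, and equal-power shaping are illustrative assumptions, not the actual KSP):

```python
import math

def equal_power(t):
    """Equal-power gain pair (a, b) for a crossfade position t in 0..1."""
    theta = t * math.pi / 2.0
    return math.cos(theta), math.sin(theta)

def voice_gains(cc1, cc77, num_layers=5):
    """Per-voice gains: gains[layer][0] is the non-vib voice, [1] the vib voice.

    cc1 (mod wheel) sweeps the dynamic layers pp..ff; cc77 crossfades
    non-vibrato <-> vibrato independently, as described above.
    """
    pos = (cc1 / 127.0) * (num_layers - 1)   # fractional dynamic-layer position
    lo = min(int(pos), num_layers - 2)       # lower of the two active layers
    g_lo, g_hi = equal_power(pos - lo)       # dynamic crossfade pair
    g_nv, g_v = equal_power(cc77 / 127.0)    # vibrato crossfade pair
    gains = [[0.0, 0.0] for _ in range(num_layers)]
    gains[lo][0] = g_lo * g_nv
    gains[lo][1] = g_lo * g_v
    gains[lo + 1][0] = g_hi * g_nv
    gains[lo + 1][1] = g_hi * g_v
    return gains
```

Because the two crossfades multiply, the summed power stays constant no matter where either controller sits, which is why both controls can be moved freely at any point during the sustain.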


----------



## Big Bob (May 18, 2012)

> I have it set up so I have groups from pp to ff, and within the groups are both sets of samples, vibrato and sustain, at different velocity levels.



But if the only way to select between the vib samples or the non-vib samples is to change the velocity (which Kontakt only reads at note start), how are you morphing back and forth between them during a note's sustain? Your explanation sounds more like my first statement:



> Just out of curiosity though, are you working within the constraints of staying at the initial velocity of the note once started?



Specifically, only the initial played velocity determines whether vib or non-vib samples play. You can then morph back and forth between the volume samples but you can't change the velocity anymore during the sustain of the note, ergo, you can't continuously morph back and forth between vib and non-vib. Or am I misunderstanding your explanation? :?


----------



## daringone (May 18, 2012)

Hi Bob,

When I play a note, both the sustain articulations and the vib articulations start playing at the same time, so I have 10 voices going: 5 non-vib, 5 vib. The vibs are silent until I raise the CC77 controller, at which point they crossfade with the non-vib.

The velocity only comes into play right at the beginning, to select the correct articulation; once the notes have started playing, we don't care about it any more.

My original intention was to use different groups for each articulation and the velocities within the groups for dynamic level, but that would mean using a lot more groups. 

I also wanted to be able to turn silent notes off until they were needed, and to use the note offset so that all the samples lined up (in order to save voices). But I found out that's only really viable in Sampler mode, and I want to use DFD. I could use S. Mod, but then I might as well use Sampler mode, since the RAM use is about the same if I enable S. Mod for the whole sample (which is what I would need to do).

That explain things better?


----------



## Big Bob (May 18, 2012)

> That explain things better?



Yes indeed, I understand. Sorry for being a little dense this morning (no coffee yet :lol: ). 



> I also wanted to be able to turn silent notes off until they were needed



To keep polyphony lower, I presume? That would be one advantage of what I suggested, because only two ff notes would have to sound at the same time.

But, out of curiosity, what do you perceive is the advantage of combining the vib/non-vib samples in the same group versus just using another parallel set of groups? Are you just trying to reduce the total groups or do you see some other advantage? 

In any case, congratulations again. o-[][]-o 

Rejoice,

Bob

BTW What kind of crossfade shaping are you using and are you experiencing any phasing issues?


----------



## daringone (May 18, 2012)

Yeah, it was the polyphony that was bothering me. I think in my instrument I might limit it to some degree anyway, because the oboe isn't exactly renowned for its polyphony.

My only reason for using velocity layers instead of groups was that I came across this article and thought it made a good point. I don't know if what it says about CPU usage is true, but I figured I'd go along with it anyway.
http://www.orangetreesamples.com/blog/2 ... of-groups/

Not sure what you mean by crossfade shaping. It's equal power. I haven't experienced any phase issues; I didn't make the samples, though, so maybe phase was accounted for by the people who did.


----------



## Big Bob (May 18, 2012)

> My only reason for using velocity layers instead of groups was that I came across this article and thought it made a good point. I don't know if what it says about CPU usage is true, but I figured I'd go along with it anyway.



Yes, I'm familiar with the article, but I don't agree with the premise it advances. Kontakt is _very_ group-organized and hopefully therefore very efficiently coded to reckon with lots of groups. Of course, for mono instrument situations it may not matter too much, but generally I think it's easier and more efficient to crossfade group volume than to have to crossfade each individual note sounded within a group.

At the time when Nils wrote the velocity crossfade script, the Math Library only had routines for crossfade shaping of individual notes (via change_vol). The group engine parameter functions such as VR_to_ep hadn't been added yet.



> Not sure what you mean by crossfade shaping. It's equal power.



That answers my question: you are using sin/cos shaping, then. Whether or not that is the most appropriate will depend on the amount (or lack thereof) of correlation between the two components of the crossfade. Do you experience any volume non-uniformity as you crossfade from one extreme to the other?
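How uniform sin/cos shaping stays does depend on that correlation, and the difference is easy to check numerically (a hypothetical Python illustration of the underlying math, not KSP):

```python
import math

def sin_cos_gains(t):
    """Equal-power gain pair at crossfade position t in 0..1."""
    return math.cos(t * math.pi / 2.0), math.sin(t * math.pi / 2.0)

positions = [t / 100.0 for t in range(101)]

# Uncorrelated sources: powers add, so perceived level tracks g1^2 + g2^2.
power_sum = [sum(g * g for g in sin_cos_gains(t)) for t in positions]

# Fully correlated (coherent) sources: amplitudes add, level tracks g1 + g2.
amp_sum = [sum(sin_cos_gains(t)) for t in positions]

ripple = max(power_sum) - min(power_sum)    # ~0: perfectly uniform
bulge_db = 20.0 * math.log10(max(amp_sum))  # ~+3 dB mid-fade for coherent pairs
```

So sin/cos is exactly uniform for uncorrelated material, while coherent samples can bulge by up to about 3 dB mid-fade; that bulge is the kind of non-uniformity being asked about, and it is why an amplitude-sum fade was suggested earlier for fairly coherent vib/non-vib pairs.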



> I haven't experienced any phase issues, I didn't make the samples though so maybe phase was accounted for by the people who did.



Oftentimes when crossfading, phase differences in the sample pair involved cause a flanging or phasing type of sound in the crossover region. With some samples it's more notable than with others. I'm glad yours seem to be well suited for crossfading. This problem, plus the polyphony used per note, was the main reason I suggested using the AET filter for the volume layer morphing rather than crossfading. But if you have polyphony to spare and you don't have any phasing problems, then what you are doing should be just fine.
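The mechanism is easy to see numerically: two equal-level copies of a partial mixed with a phase offset p sum to an amplitude of 2·|cos(p/2)|, so partials that drift toward half a cycle apart nearly notch out during the crossover. A quick Python illustration (just the trig identity, nothing Kontakt-specific):

```python
import math

def mixed_amplitude(phase_offset):
    """Peak amplitude of sin(x) + sin(x + phase_offset), both unit amplitude.

    Follows from sin(x) + sin(x + p) = 2 * sin(x + p/2) * cos(p/2).
    """
    return 2.0 * abs(math.cos(phase_offset / 2.0))

in_phase = mixed_amplitude(0.0)            # 2.0: the partial reinforces (+6 dB)
quadrature = mixed_amplitude(math.pi / 2)  # ~1.414: mild comb coloration
opposed = mixed_amplitude(math.pi)         # ~0.0: the partial notches out
```

Different partials of the two samples sit at different offsets, so some reinforce while others cancel, and sweeping the gains through the crossover moves those notches around; that moving comb is what reads to the ear as flanging.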

Again, my congratulations on your achievement o-[][]-o 

Rejoice,

Bob


----------



## daringone (May 18, 2012)

Thanks Bob,

What's this VR_to_ep thingy? I think I may be using an old version of the Math Library because I can't see it in there.

What's the latest version?


----------



## Big Bob (May 18, 2012)

The very latest is V406. I'll attach the source code.

V405 was the last full package but there was a minor fix made recently and thus V406. The package with docs, etc is too large to attach but if you want it, just send me a PM with your email address and I'll send it to you.

You may also want to look at the source code I posted in this thread, which uses VR_to_ep:

http://www.vi-control.net/forum/viewtopic.php?t=25980

In my spare time I'm working on V450 which will take advantage of some of the new KSE features like return-value functions. But, I have no idea when I'll get this finished. :lol: 

Rejoice,

Bob


----------



## daringone (May 18, 2012)

Ah, thank you, I've been using v105


----------



## lee (May 18, 2012)

Sorry for hijacking.. (you were talking about crossfading and Nils's script  )

Do you guys know if Nils's crossfade script works in K4?

/Johnny


----------



## Big Bob (May 18, 2012)

Hi Johnny,

I don't know of any reason why it shouldn't work with K4 as well as with K2 but I'm not personally using it. Maybe daringone can tell you because he has been experimenting with it I think. Why don't you just try it? :lol: 

Rejoice,

Bob


----------



## daringone (May 18, 2012)

Yes it works with K4.


----------



## lee (May 18, 2012)

thanx a bunch guys! You may now continue your ûberintelligent discussions and mystical code language. o-[][]-o


----------



## daringone (May 18, 2012)

lee @ Fri May 18 said:


> thanx a bunch guys! You may now continue your ûberintelligent discussions and mystical code language. o-[][]-o



hehe. 8) 

Bob (or anyone else)

Moving slightly away from crossfading, but not entirely: I'm getting into legato interval transitions.

So I have my sustained note playing; then, if another key is played within a certain amount of time of the previous one, I need to play the legato transition and then the new sustain note.
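The note-tracking part of that can be sketched in plain Python (hypothetical names and threshold; the real thing would live in KSP's note callback):

```python
# Hypothetical sketch of the legato decision described above: a new note
# arriving within LEGATO_WINDOW_MS of the previous one triggers an interval
# transition before its sustain; otherwise it is a normal attack.

LEGATO_WINDOW_MS = 100  # assumed threshold, tuned by ear

class LegatoTracker:
    def __init__(self):
        self.last_note = None
        self.last_time_ms = None

    def on_note(self, note, now_ms):
        """Classify a note-on: ('legato', interval) or ('normal', None)."""
        result = ('normal', None)
        if (self.last_note is not None
                and now_ms - self.last_time_ms <= LEGATO_WINDOW_MS):
            result = ('legato', note - self.last_note)
        self.last_note, self.last_time_ms = note, now_ms
        return result
```

For example, C4 followed 50 ms later by E4 would classify the second note as a legato interval of +4 semitones, which selects the major-third transition sample.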

OK, so far I can find the correct interval note to play, but what I need (or think I need) is a method of queueing the notes so I can control the fades between them. I could just use the fade_in/out methods, but I'd like to use equal power again.

Any advice?


----------



## Big Bob (May 18, 2012)

> OK, so far I can find the correct interval note to play, but what I need (or think I need) is a method of queueing the notes so I can control the fades between them. I could just use the fade_in/out methods, but I'd like to use equal power again.



Since the crossfades can usually be rather short (I presume you are talking about sampled legato as opposed to synthesized legato such as SIPS), I think you may get better overall results (and with a lot less CPU demand) if you just use the fade-in/out functions. 

For short crossfades, it is very doubtful that you will hear much difference between linear and sin/cos shaping. What is more likely is that you will hear a difference due to having to use a time-driven while loop for one and not the other (so the sin/cos shaping won't execute as smoothly). However, I must admit that I have never scripted a sampled legato instrument, so maybe someone else who has should chime in here.

But, if you want to be sure, why not try it both ways? :roll: 

On the other hand, if you want to avoid the extra work and you want to go one way or the other but not both, I would vote for using the fade functions. But, please don't shoot me if I'm wrong (I often am). :lol: 

God Bless,

Bob


----------



## daringone (May 19, 2012)

Thanks Bob, 

I think I'll try with the fade functions first, if all goes well I'll report back later


----------



## daringone (May 19, 2012)

Yippie, it worked, very simple and effective, much easier to do than I expected. Thanks again Bob.


----------



## daringone (May 23, 2012)

Bob, I'm going to be recording some samples soon which I intend to crossfade. What should I look out for to prevent phase problems?


----------



## Big Bob (May 23, 2012)

You know, I really don't know if you can do much of anything during the recording process to minimize the crossfade phasing problem. I think most developers try to align the harmonics as much as they can, but more or less after the fact, by post-processing the audio.

Maybe someone else will have better words of wisdom for you on this topic. All I really know is that the problem exists and seems incredibly difficult to eliminate. Remember that Sample Modeling is actually seeking a patent to protect their process, which they have dubbed 'Harmonic Alignment'. I generally find it much easier to use dynamic EQ techniques, because they result in zero phasing and are very smooth across the volume band. The AET is in essence a super form of dynamic equalization, and I think it has been underutilized due to a lot of misunderstanding of how to use it.

If you don't have it, you may want to read my Virtual Wind Instrument Design Guidelines for a discussion of some of these issues. I have also written a tutorial for the AET in collaboration with David Carpenter. Dave is still writing Part 2 of the AET Guide, but Part 1 is currently available. So here are a couple of links you might want to utilize.

http://dl.dropbox.com/u/80404485/Kontakt/Docs/Guides/VirtualWindIntruments.pdf

http://dl.dropbox.com/u/80404485/Kontakt/Docs/Guides/AETGuidePart1.pdf

How about it guys, do any of you know some actual sample recording techniques for minimizing the phasing problem when the volume layers are later crossfaded? If so, please chime in.

Rejoice,

Bob


----------



## daringone (May 23, 2012)

Thanks Bob, i'll have a read through them now.


----------



## Tod (May 24, 2012)

Big Bob @ Wed May 23 said:


> If you don't have it, you may want to read my Virtual Wind Instrument Design Guidelines for a discussion of some of these issues. I have also written a tutorial for the AET in collaboration with David Carpenter. Dave is still writing part 2 of the AET Guide but Part 1 is currently available. So here is a couple of links you might want to utilize.
> 
> http://dl.dropbox.com/u/80404485/Kontakt/Docs/Guides/VirtualWindIntruments.pdf
> 
> http://dl.dropbox.com/u/80404485/Kontakt/Docs/Guides/AETGuidePart1.pdf



Hi Bob,

I went through your AET tutorial and it was great, thank you. :D I've got a couple of questions regarding setting up the AET in the auto Velocity mode.

1> Based on what you mentioned I assume the unused samples/layers cannot be deleted?

2> What about purging them, I tried this and it seemed to work?

Another unrelated question: when I purged all the MP, MF, and F samples, they didn't change color like they used to in K2. I know that they were purged, because the memory use dropped by nearly 75%. Do purged samples not change color anymore?

Thanks again for all your tremendous work and contribution, and God Bless you Bob.


----------



## Big Bob (May 24, 2012)

Hi Tod,



> 1> Based on what you mentioned I assume the unused samples/layers cannot be deleted?
> 
> 2> What about purging them, I tried this and it seemed to work?



You may be able to purge the unused samples; for that matter, you may be able to delete them. However, I think the groups they are in must be retained, probably because NI stores their 'sonic fingerprint' with the group somewhere. But you may not need the samples (unless, of course, you need to re-analyze them).

I don't know anything about color changes when you purge, because I have not used purge for anything yet. :oops: But I'm sure someone else can tell you :wink:

God Bless you too, Tod,

Bob


----------



## DynamicK (Jun 29, 2012)

Big Bob @ Wed May 23 said:


> If you don't have it, you may want to read my Virtual Wind Instrument Design Guidelines for a discussion of some of these issues. I have also written a tutorial for the AET in collaboration with David Carpenter. Dave is still writing part 2 of the AET Guide but Part 1 is currently available. So here is a couple of links you might want to utilize.
> http://dl.dropbox.com/u/80404485/Kontakt/Docs/Guides/VirtualWindIntruments.pdf
> http://dl.dropbox.com/u/80404485/Kontakt/Docs/Guides/AETGuidePart1.pdf
> Rejoice,
> Bob



Just a note of thanks for these articles. I should now be able to do some basic articulation morphs :D 
Any news of AET Guide Part 2?


----------



## Big Bob (Jun 29, 2012)

Sorry, but I guess David has become swamped with other things. The last time I checked with him, he had not gotten very far with it yet, but he said he would work on it a little harder :lol: That was some time ago, though, so I guess he must still be bogged down :roll:

Rejoice,

Bob


----------

