# Where to start?



## arcy (Jul 30, 2018)

Hi all! I'm a flutist, sound designer, and developer, and I'd like to create my first custom flute library. I own both Falcon and Kontakt, but I want to start with Kontakt. I don't want to waste time studying unneeded or overkill info. I prefer to start by getting the sample recording right (I'm working out the best way to do it)... I think well-recorded samples are 60% of the work. Anyway, about scripting: I read on this forum that some good (and sometimes better) things can be done directly from Kontakt's graphical interface, whereas others are only possible with a script. Is that true? Which features do you recommend I concentrate on with scripting, and which with the Kontakt GUI instead?
Thanks!


----------



## Tod (Jul 30, 2018)

Ha ha, that's kind of a can of worms. There are a few features that work okay without scripting, round robins and vibrato being two of them, with one big caveat: Kontakt's RR system does not work well with drums, or with any instrument that has parts requiring individual RR counting.

I would imagine a good flute instrument would have a lot of parameters that will require scripting to get the most out of them. You'll also have several groups for different articulations, and then different levels for the articulations, like pp, p, mf, f, etc.

The biggest question for me is how simple or complex the instrument will be.

Good luck with it arcy, sounds like a good project.


----------



## Kyle Preston (Jul 30, 2018)

I encourage you to start simply. For example, focus on something small and easy to achieve, like a staccato flute patch or some other articulation that you can capture in the sample itself. Building a legato flute or something of that caliber for your first instrument will be much, _much_ harder and require a lot of scripting wizardry that isn't easy to learn. If I could do it over again, I would master my understanding of the Kontakt GUI _before_ scripting my own functions. If you're an extremely mediocre programmer (like me) and you learn the Kontakt GUI first, then you'll benefit from the fact that the coders at NI have built something very stable and reliable, more so than most of us could on our own.


----------



## Paul Grymaud (Jul 30, 2018)

I'm afraid it takes a long time to obtain good results. 
Good luck !


----------



## kitekrazy (Jul 30, 2018)

I can do my own oil changes but I prefer to take it and have it done by someone else.


----------



## Olfirf (Jul 31, 2018)

I think starting with a simple thing, like a shorts patch, sounds like a good plan. But if you play the instrument yourself and don't have to pay for every session, you can really use that to your advantage. With sample libraries (as opposed to modelled instruments not created from samples), time has shown that no single patch can do everything the real instrument can. Good sample libraries mostly offer a certain sound and do it very well. That is why I think the most important work would be to record some musical passages that you want to be able to replicate with the library. That could be the starting point for observations and ideas about how to record samples that can reproduce those musical situations. You could start by connecting bits and pieces in an audio track, so you don't have to think about the technical stuff in advance. Once you know what you actually want, everything gets a lot easier and you can concentrate on the technical things like scripting.


----------



## arcy (Jul 31, 2018)

Tod said:


> Good luck with it arcy, sounds like a good project.


Thanks Tod 
Yep, I play and record flute myself, so I can take all the time needed to do a good job. I want to record trills, vibrato, legato and so on... no virtual articulations. I work as a developer, I'm not a hobbyist, so scripting doesn't bother me.
I started with a single sustained note across 3 crossfaded dynamic layers and 3 round robins. The next step will be recording the legato to the next note... and that's where the pain will start!!


----------



## arcy (Jul 31, 2018)

How should I organise my tracks for sample recording? One track per note? One track per articulation? One track per velocity? What do you suggest?


----------



## EvilDragon (Jul 31, 2018)

That is entirely up to you...


----------



## arcy (Jul 31, 2018)

no best practice at all?


----------



## EvilDragon (Jul 31, 2018)

Really depends on project, so yeah, it's on a case by case basis. Use whatever you think will give you the best overview.


----------



## arcy (Jul 31, 2018)

EvilDragon said:


> Really depends on project, so yeah, it's on a case by case basis. Use whatever you think will give you the best overview.



ah ok, thanks


----------



## d.healey (Jul 31, 2018)

arcy said:


> How should I organise my tracks for sample recording? One track per note? One track per articulation? One track per velocity? What do you suggest?


I use one track per mic position and divide articulations and dynamics over time. I record everything in a single project but you could record different articulations in different projects if you prefer.


----------



## arcy (Jul 31, 2018)

d.healey said:


> I use one track per mic position and divide articulations and dynamics over time. I record everything in a single project but you could record different articulations in different projects if you prefer.



Sounds good. I think I'll work in a single project; it's probably the better way to maintain visual and sonic consistency, especially for automation, volumes, etc.


----------



## arcy (Jul 31, 2018)

And what about tuning? A flute's tuning drifts while it's being played. How can I keep the tuning consistent across the recorded samples? Somewhere I read about using a pitch-correction tool before exporting the samples...


----------



## EvilDragon (Jul 31, 2018)

That would be the only way, yeah.


----------



## arcy (Jul 31, 2018)

Tod said:


> round robins and vibrato being two of them


I read that even crossfading sounds better done natively than via script. Right? Maybe d.healey said that in another post on this forum.


----------



## d.healey (Jul 31, 2018)

arcy said:


> I read that even crossfading sounds better done natively than via script. Right? Maybe d.healey said that in another post on this forum.


There is usually no need to script crossfading, just use modulators.
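d.healey's point stands: in Kontakt this is just a modulator on group volume. Still, it can help to see the arithmetic such a crossfade performs. Below is a minimal Python sketch of an equal-power crossfade across dynamic layers driven by a 0-127 CC; the layer count, triangular overlap, and sine shaping are illustrative assumptions, not Kontakt's actual implementation:

```python
import math

def layer_gains(cc, n_layers=3):
    """Equal-power crossfade gains for n dynamic layers driven by a
    0-127 CC (e.g. CC1). Adjacent layers overlap; at most two are
    audible at once, and their combined power stays constant."""
    pos = (cc / 127.0) * (n_layers - 1)          # position along the layer axis
    gains = []
    for i in range(n_layers):
        d = max(0.0, 1.0 - abs(pos - i))         # triangular overlap, 0..1
        gains.append(math.sin(d * math.pi / 2))  # equal-power shaping
    return gains
```

At CC 0 only the softest layer sounds, at CC 127 only the loudest, and in between two neighbours blend with constant total power, which is why equal-power curves avoid the volume dip a plain linear crossfade produces.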


----------



## Erick - BVA (Jul 31, 2018)

arcy said:


> And what about tuning? A flute's tuning drifts while it's being played. How can I keep the tuning consistent across the recorded samples? Somewhere I read about using a pitch-correction tool before exporting the samples...


I have ReaTune open on the master bus when I am editing samples, before I export them. I usually try to organize the samples so that each pitch is on a single track, so I can solo it and tune them as I go. I try to have them tuned before I put them into Kontakt (because, as far as I know, Kontakt has no visual tuner?). It's a pain using the test tone for tuning purposes. In my recording sessions I take audio notes by saying something like "C note vibrato," and then I just play the note and even vary the velocity. So I keep a separate recording for each note and articulation.


----------



## Erick - BVA (Jul 31, 2018)

I've tried broader recording techniques, but I've found that separating recordings by pitch and articulation works best for my workflow. It's a pain to sit there and try to extract every separate note from a recording that's several minutes long, but you also don't want to break things down too far, or it becomes harder to manage in a different way. So I guess just do whatever works best for you.


----------



## arcy (Jul 31, 2018)

d.healey said:


> There is usually no need to script crossfading, just use modulators.


Ok thanks David 



Sibelius19 said:


> I have ReaTune open on the master bus when I am editing samples, before I export them. I usually try to organize the samples so that each pitch is on a single track, so I can solo it and tune them as I go. I try to have them tuned before I put them into Kontakt (because, as far as I know, Kontakt has no visual tuner?). It's a pain using the test tone for tuning purposes. In my recording sessions I take audio notes by saying something like "C note vibrato," and then I just play the note and even vary the velocity. So I keep a separate recording for each note and articulation.



ReaTune in auto or manual mode?


----------



## Erick - BVA (Jul 31, 2018)

arcy said:


> Ok thanks David
> 
> 
> 
> ReaTune in auto or manual mode?


I actually use ReaTune for detecting the pitch, and then I tune via track properties. I'm not sure if it uses the same algorithms or whatever, but I seem to remember hearing that the built-in repitching (in track properties) is pretty advanced. Not sure if it's the same thing as ReaTune. So I literally use ReaTune as a tuner, just to detect the pitch so that I know how much to change it by (if it needs it). 
But if ReaTune's pitch-correction capabilities are basically the same, then I could save myself a lot of time.
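The "how much to change it by" step is simple arithmetic once you have a detected frequency: convert it to the nearest equal-tempered note and a cent offset. A small Python sketch of that conversion (assumes A4 = 440 Hz and standard note naming; pitch *detection* itself is the hard part that ReaTune does for you):

```python
import math

A4 = 440.0  # reference tuning

def cents_from_nearest_note(freq_hz):
    """Return (note_name, cents_offset) for a detected frequency.
    A positive offset means the sample is sharp, negative means flat."""
    names = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
    midi = 69 + 12 * math.log2(freq_hz / A4)  # MIDI note number as a float
    nearest = round(midi)
    cents = (midi - nearest) * 100            # distance from the nearest note
    name = names[nearest % 12] + str(nearest // 12 - 1)
    return name, cents
```

For example, a note detected at 445 Hz comes back as A4 about 20 cents sharp, so you would retune it down by that amount in the track properties.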


----------



## arcy (Jul 31, 2018)

Sibelius19 said:


> But if ReaTune's pitch-correction capabilities are basically the same, then I could save myself a lot of time



Mmh, this is an interesting topic...


----------



## d.healey (Jul 31, 2018)

If you have a single mic position it's easy, just stick ReaTune on it (increase the window size for low instruments) and put it in auto-mode (you may need manual mode for some notes though). Reaper uses the same tuning algorithm whether you're using ReaTune or pitch envelopes, you can set the default algorithm in the project properties. I find Elastique efficient gives the best results, I get artifacts with nearly all the others. Sometimes a note doesn't need any tuning, when you have this lucky situation you can bypass ReaTune via automation. If you have a sample that has a steady pitch but it's the wrong pitch (a little bit flat or sharp perhaps) then rather than using ReaTune you could adjust the media item's pitch using the pitch item down/up actions, then you can use the convert pitch to rate script to get higher quality tuning (this script is available in ReaPack I think, it doesn't come with Reaper).
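The pitch-to-rate conversion mentioned at the end is plain exponential math: a shift of n cents corresponds to a playback-rate factor of 2^(n/1200). A tiny Python sketch of that relationship (this is just the underlying formula, not the ReaPack script itself):

```python
def pitch_to_rate(cents):
    """Playback-rate factor equivalent to a pitch shift in cents
    (100 cents = 1 semitone, 1200 cents = 1 octave).
    Resampling by this factor retunes without FFT artifacts,
    at the cost of also changing the duration."""
    return 2.0 ** (cents / 1200.0)
```

This is why converting a pitch adjustment to a rate change gives higher quality: it is pure resampling, with no time-stretching stage to introduce artifacts.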

Now multi-mic samples are a different, evil, horrible beast. You can't just stick an autotune plugin on each track, because they all work via FFT and will screw up the phase relationship of the samples, so you'll get some nasty chorusing between the mics. Apparently there's a version of Melodyne that can work with multi-mic recordings, but I haven't been able to fit Melodyne into my workflow and I don't really want more proprietary software than necessary. So there are only two solutions I have found, and they both require some manual work. X-Raym made an awesome script that is basically auto-tune using item pitch envelopes, and there is a script to copy pitch envelopes between items. So you put the script on one mic, tune the samples, and copy the envelope to the other mics. It still uses FFT, but because all of the tuning points are the same it doesn't create any noticeable chorusing. Although the script is automatic and can be used on multiple media items at once, it isn't 100% accurate and you will need to go through the samples one by one and adjust the pitch envelope as necessary (many won't need any adjustment, or very little). I tuned a few thousand samples this way last month. The script isn't free; you can get it from here - https://www.extremraym.com/en/downloads/reascripts-envelope-based-pitch-corrector/.

Now the other method, that I have yet to try in production: Paolo from Fluffy Audio informed me that you can tune a mic position with Melodyne, then rename the file so Melodyne can't find it, and then point Melodyne to a different file of the next mic position. Melodyne will then apply the tuning it applied to the first file to the second file. So I figured why not do the same thing with ReaTune, and in my brief tests it seems to work. You have to use manual mode, tune one media item (best to do this before cutting the samples) then swap the media item for the next mic position and ReaTune should apply the same pitch adjustment. I only did some quick tests with this but if it does work I think it will be faster than Raymond's script for sustain samples but perhaps not for short samples.


----------



## arcy (Jul 31, 2018)

Wow, David! Thanks for the great reply!


d.healey said:


> I find Elastique efficient gives the best results


Yep! I use Elastique too.


d.healey said:


> you can tune a mic position with Melodyne, then rename the file so Melodyne can't find it, and then point Melodyne to a different file of the next mic position


Eh, I know... but Melodyne adds a slightly synthy sound... I prefer Waves Tune or Cubase VariAudio. ReaTune seems to be a good solution.
Is it a good idea to normalize samples before editing?


----------



## d.healey (Jul 31, 2018)

arcy said:


> Is it a good idea to normalize samples before editing?


You should normalize your samples at the very end, before exporting. Within Reaper normalization is non-destructive, so you could normalize at any point; I do it at the very end so that while I'm editing, the samples are at roughly the volume they will be when played back by the sampler. This is important so I don't waste time cleaning up little noises that I'd hear in a normalized sample but not at playback.


----------



## arcy (Aug 1, 2018)

OK, I've changed my mind and I'm starting by sampling my C Bansuri flute 
I have one that I bought in India. Today I finished recording staccatos with 4 RRs and 3 dynamic layers


----------



## Lindon (Aug 2, 2018)

OK well, it might be worth mentioning there are a number of scripted approaches (and by this I mean KSP scripts) that will offer you an alternative to RRs and velocity layers.

Essentially, RRs can be seen as the same note with minor timing, EQ and pitch differences (what else could they be?). It is possible to script these changes in Kontakt at playback, so you don't need to do RRs at all. 

Almost the same for dynamics: here you can use the "synth" approach of changing the gain of a single sample and rolling off the high end as the velocity drops. Of course, there can be serious differences between velocity layers that can't be emulated this way; the "chiff" part of your flute sounds might be a good example. But there are workarounds for this too. Both of these approaches are ways to reduce the amount of sampling (and associated editing) that you need to do. If you want to be a purist then sure, "all power to your elbow", and dig in for that long sample-editing process. Just thought I'd mention it.


----------



## d.healey (Aug 2, 2018)

Lindon said:


> OK well, it might be worth mentioning there are a number of scripted approaches (and by this I mean KSP scripts) that will offer you an alternative to RRs and velocity layers.
> 
> Essentially, RRs can be seen as the same note with minor timing, EQ and pitch differences (what else could they be?). It is possible to script these changes in Kontakt at playback, so you don't need to do RRs at all.


Very good points. I just spent ages editing 3RR samples for an instrument, and once I got it in the player I decided the effect was so subtle I might as well just use the borrowed-sample approach: no one can tell the difference.


----------



## Tod (Aug 2, 2018)

d.healey said:


> Very good points. I just spent ages editing 3RR samples for an instrument, and once I got it in the player I decided the effect was so subtle I might as well just use the borrowed-sample approach: no one can tell the difference.



Were the samples recorded in half steps or whole steps?


----------



## d.healey (Aug 2, 2018)

Tod said:


> Were the samples recorded in half steps or whole steps?


Half, I always record chromatically


----------



## Light and Sound (Aug 2, 2018)

d.healey said:


> If you have a single mic position it's easy, just stick ReaTune on it (increase the window size for low instruments) and put it in auto-mode (you may need manual mode for some notes though). Reaper uses the same tuning algorithm whether you're using ReaTune or pitch envelopes, you can set the default algorithm in the project properties. I find Elastique efficient gives the best results, I get artifacts with nearly all the others. Sometimes a note doesn't need any tuning, when you have this lucky situation you can bypass ReaTune via automation. If you have a sample that has a steady pitch but it's the wrong pitch (a little bit flat or sharp perhaps) then rather than using ReaTune you could adjust the media item's pitch using the pitch item down/up actions, then you can use the convert pitch to rate script to get higher quality tuning (this script is available in ReaPack I think, it doesn't come with Reaper).
> 
> Now multi-mic samples are a different, evil, horrible beast  You can't just stick an autotune plugin on each track because they all work via FFT and will screw up the phase relationship of all the samples so you'll get some nasty chorusing between the mics. Apparently there's a version of Melodyne that can work with multi-mic recordings but I haven't been able to fit Melodyne into my workflow and I don't really want more proprietary software than necessary. So there are only two solutions I have found and they both require some manual work. X-Raym made an awesome script that is basically auto-tune using item pitch envelopes, and there is a script to copy pitch envelopes between items. So you put the script on one mic, tune the samples, and copy the envelope to the other mics. It still uses FFT but because all of the tuning points will be the same it doesn't create any noticeable chorusing. Although the script is automatic and can be used on multiple media items at once, it isn't 100% accurate and you will need to go through the samples one by one and adjust the pitch envelope as necessary (many won't need any adjustment, or very little). I tuned a few thousand samples this way last month. The script isn't free, you can get it from here - https://www.extremraym.com/en/downloads/reascripts-envelope-based-pitch-corrector/.
> 
> Now the other method, that I have yet to try in production: Paolo from Fluffy Audio informed me that you can tune a mic position with Melodyne, then rename the file so Melodyne can't find it, and then point Melodyne to a different file of the next mic position. Melodyne will then apply the tuning it applied to the first file to the second file. So I figured why not do the same thing with ReaTune, and in my brief tests it seems to work. You have to use manual mode, tune one media item (best to do this before cutting the samples) then swap the media item for the next mic position and ReaTune should apply the same pitch adjustment. I only did some quick tests with this but if it does work I think it will be faster than Raymond's script for sustain samples but perhaps not for short samples.



Melodyne tunes stereo tracks based on the left signal alone, meaning you can do a multi-mic batch tune based on a single recording as follows:

Track 1: Close mic
Track 2: Decca left
Track 3: Close mic (track 1)
Track 4: Decca center
Track 5: Close mic (track 1)
Track 6: Decca right
etc etc

Then simply use the right audio channel from the stereo tracks, along with a single track of left audio for that mic, and you're done; it's all kept in the original phase too.


----------



## Tod (Aug 2, 2018)

Light and Sound said:


> Melodyne tunes stereo tracks based on the left signal alone, meaning you can do a multi-mic batch tune based on a single recording as follows:
> 
> Track 1: Close mic
> Track 2: Decca left
> ...



I think the same would be true with Reaper's ReaTune if you put all the samples in a multi-channel file. From there you could choose which mic channel you wanted to use for tuning. You could even tune more than one channel, actually all of them if you wanted to, though I'm not sure how practical that would be.


----------



## d.healey (Aug 2, 2018)

Tod said:


> I think the same would be true with Reaper's ReaTune if you put all the samples in a multi-channel file. From there you could choose which mic channel you wanted to use for tuning. You could even tune more than one channel, actually all of them if you wanted to, though I'm not sure how practical that would be.


I did actually do this, you have to play around with the routing. I found it a bit messy though.


----------



## Tod (Aug 2, 2018)

d.healey said:


> I did actually do this, you have to play around with the routing. I found it a bit messy though.



Yeah, I've never actually tried it David, but I know if you route all the channels to their own separate bus, logic tells me it can be done. 

I also think ReaTune in its manual mode would work well for it. Unless of course things are way out of tune.


----------



## Erick - BVA (Aug 2, 2018)

d.healey said:


> Very good points. I just spent ages editing 3RR samples for an instrument, and once I got it in the player I decided the effect was so subtle I might as well just use the borrowed-sample approach: no one can tell the difference.


I'm a huge fan of borrowed RRs. It's all about suspending disbelief; it almost creates an auditory illusion. I think there's probably a good balance to be reached with that method though. If the notes are stretched too high, those samples will have a quicker attack due to the faster play rate, and conversely, the notes stretched down will have a slower attack. So all in all I think it depends on the type of samples. I did a quick little chromatically sampled ukulele about 2 years ago. For fun I tried a 7RR experiment by creating groups up to 3 semitones up and 3 semitones down. It actually worked really well. But I think at a certain point the illusion would be lost and our brains would know we are being fooled. Nothing wrong with experimentation though.


----------



## d.healey (Aug 2, 2018)

Sibelius19 said:


> I'm a huge fan of borrowed RRs. It's all about suspending disbelief; it almost creates an auditory illusion. I think there's probably a good balance to be reached with that method though. If the notes are stretched too high, those samples will have a quicker attack due to the faster play rate, and conversely, the notes stretched down will have a slower attack. So all in all I think it depends on the type of samples. I did a quick little chromatically sampled ukulele about 2 years ago. For fun I tried a 7RR experiment by creating groups up to 3 semitones up and 3 semitones down. It actually worked really well. But I think at a certain point the illusion would be lost and our brains would know we are being fooled. Nothing wrong with experimentation though.


I always just do 1 up and 1 down. If I do decide to use multiple RR recordings, then I do the pseudo RRs as well, to create even more variation.


----------



## Erick - BVA (Aug 2, 2018)

d.healey said:


> I always just do 1 up and 1 down. If I do decide to use multiple RR recordings, then I do the pseudo RRs as well, to create even more variation.


That's probably the best practice.  But if you're in a pinch, or if you want to create variation in a smaller-sized library, I say whatever works lol. BTW, I first learned about this technique by watching one of your YouTube videos


----------



## d.healey (Aug 3, 2018)

Prompted by @Light and Sound and @Tod, I decided to revisit the process of routing ReaTune for multi-mic tuning, and actually it seems like it might be a really good method; I can't remember why I stopped using it (maybe there is a hidden pitfall I'll stumble into). Anyway, here is a picture of the way I have everything routed. This is set up for 5 mics but will of course work with more.


----------



## arcy (Aug 3, 2018)

Lindon said:


> the "chiff" part of your flute sounds might be a good example


But I think I would spend more time creating a very realistic synth-modeled RR that perfectly reproduces the "chiff" or the tongue sound than I would recording a new one and applying some cuts and fades, IMHO...


----------



## Tod (Aug 3, 2018)

d.healey said:


> Prompted by @Light and Sound and @Tod, I decided to revisit the process of routing ReaTune for multi-mic tuning, and actually it seems like it might be a really good method; I can't remember why I stopped using it (maybe there is a hidden pitfall I'll stumble into).



Yeah, that looks good David. You kept all the mics on their own tracks, where I would probably have made a multi-channel track out of them for editing purposes. You also made good use of the folders; in all honesty I'm prone to use bus tracks rather than folders, but that's just me.



arcy said:


> But I think I would spend more time creating a very realistic synth-modeled RR that perfectly reproduces the "chiff" or the tongue sound than I would recording a new one and applying some cuts and fades, IMHO...



Humm, what do you mean by "realistic synth-modeled RR" arcy? Since you're talking about creating a realistic flute library, I guess I don't understand the "synth-modeled" part of it?


----------



## arcy (Aug 3, 2018)

Tod said:


> Humm, what do you mean by "realistic synth-modeled RR" arcy? Since you're talking about creating a realistic flute library, I guess I don't understand the "synth-modeled" part of it?


I'm referring to Lindon's post suggesting that I reproduce RRs via script to avoid editing too many samples... maybe I've misunderstood


----------



## Tod (Aug 3, 2018)

arcy said:


> I'm referring to Lindon's post suggesting that I reproduce RRs via script to avoid editing too many samples... maybe I've misunderstood



Aah, okay, I understand. I went back and reread Lindon's post; it is a good way to cut down on editing time and size, and I think it works best if you record in half steps.

May I ask how you are doing this, arcy? By this I mean: are you recording your samples by yourself? Are you recording with 2 mics for stereo, or more mics for mixing?

If I'm being too nosey, don't worry, you don't have to answer.  I've been recording samples for many years and I'm always interested in what other folks are doing.


----------



## arcy (Aug 4, 2018)

Tod said:


> Aah, okay, I understand. I went back and reread Lindon's post; it is a good way to cut down on editing time and size, and I think it works best if you record in half steps.
> 
> May I ask how you are doing this, arcy? By this I mean: are you recording your samples by yourself? Are you recording with 2 mics for stereo, or more mics for mixing?
> 
> If I'm being too nosey, don't worry, you don't have to answer.  I've been recording samples for many years and I'm always interested in what other folks are doing.



Yeah, Tod. I'm happy to share what I'm doing. I'm a flutist and I have a project studio, so I record the instrument myself. Right now I'm sampling the Bansuri flute, which is totally different from the classical flute. The Bansuri has no mechanical keys to play sharp and flat notes, so I also have to record glide articulations to reproduce the legato between half tones; this is only possible by opening the hole about 50%. Maybe I could do this tuning by script, but the sound changes when I play with half holes... it is more... airy, so I prefer to sample the original, characteristic sound. 
Anyway, I record 4 samples in 3 dynamic layers for each note (both staccato and sustain), which I will use for RRs and for velocity mapping. The result is a lot of samples, I know. There is probably a better way to do it; I'm experienced in recording and mixing, but not in sampling. This is my first experience, so any advice is welcome.


----------



## Tod (Aug 4, 2018)

Thanks a lot for sharing, arcy. The Bansuri flute sounds difficult.

Yeah, you can make slides with Kontakt's scripting that are very precise and slide at any speed. Unless there are some real nuances to the way the Bansuri slides, it might work, especially if the airy sound is part of the sample. I'm currently working on a Steel Guitar library and I've got it set up to do all kinds of slides.

Something that may interest you: I'm using Reaper's "take system" for recording my steel samples and it's working pretty well. You mentioned 4 samples in 3 dynamic layers. I'm basically doing 4 samples per note at a time, and since I'm not a steel guitar player, I find the take system works very well.

Here's a quick little video showing how it works. Due to some meds I'm taking I couldn't do a voice over, so I'll have to explain here how it works. 

1> I use a kick drum as a count-in, for a total of 2 measures and 8 counts. This is so I can record the notes at the right time.

2> As the take loop wraps around and the count starts again, I continue to hold the note until it dies out or gets close to where I play the next note.

3> When I have all the notes recorded, I've got a custom action that lengthens the take item so the notes are all revealed right to their ends. Also, the front of each note in the item is automatically zoomed in close to where I will cut it.

4> I have another custom action that takes the item to the end of the project, adds a marker, and explodes the item in the order of the notes on the same track.

5> I have custom actions that first find a point 10ms in front of the transient of each note, where I locate the right place to cut. Then, when the fronts are cut, I have another action to zoom in close to the end of each note, where again I can cut it. _(You probably will not be able to use the transient of the flute, but there are other ways to easily distinguish the fronts)_

6> Then I select all 4 notes and have an action to name them from 01 to 04. I use these numbers in the file names to tell them apart. I also have icons for creating regions so I can quickly check the samples out. 



I first used the take system when I sampled my acoustic guitar, only with that I used it to record velocity layers, up to 64 of them. It worked well for that and is also working well with my steel. 

Incidentally, I've been working on my steel for about a month now, and finally got enough samples together to check it out and see how it will work. I found a YouTube video of a steel guitar playing "Crazy Arms", so I downloaded it and used it to check out my steel. First I threw a quick backing track together and then proceeded to program my steel to it. It actually turned out pretty good so I uploaded it to YouTube. Here you can hear how my slides are working.


----------



## arcy (Aug 5, 2018)

Thanks Tod! Very sophisticated workflow


----------



## midi-et-quart (Feb 17, 2019)

d.healey said:


> Prompted by @Light and Sound and @Tod, I decided to revisit the process of routing ReaTune for multi-mic tuning, and actually it seems like it might be a really good method; I can't remember why I stopped using it (maybe there is a hidden pitfall I'll stumble into). Anyway, here is a picture of the way I have everything routed. This is set up for 5 mics but will of course work with more.



Hi @d.healey , I'm just about to make my own small libraries with 2 (or at most 3) different mic positions. What method would you recommend now: is ReaTune still that effective with the multi-mic routing, or is Melodyne about the same in terms of results?
I know Melodyne very well and have already made a few good tests with this multi-mic technique.


----------



## d.healey (Feb 17, 2019)

midi-et-quart said:


> Hi @d.healey , I'm just about to make my own small libraries with 2 (or at most 3) different mic positions. What method would you recommend now: is ReaTune still that effective with the multi-mic routing, or is Melodyne about the same in terms of results?
> I know Melodyne very well and have already made a few good tests with this multi-mic technique.


If you're happy to use Melodyne then that's the way to go.

I still use ReaTune although I found that the multi-mic routing I was using only really works well when all mics are fairly close to each other and adds new problems when they are spaced. I also found it works better with short samples (percussion, plucked strings, etc) than with sustaining samples. As well as ReaTune I also use Reaper's pitch envelopes and playback rate adjustment of individual samples - whatever is needed for the project I'm working on.


----------

