# Resampling Samples? JunkieXL Latest Video



## ryanstrong (Mar 23, 2018)

At 12:00 in the video, Tom mentions that to achieve the sound he gets with Cinematic Strings, he has resampled the samples... uhh, what? What does that mean? Is anyone else doing this?


----------



## MatFluor (Mar 24, 2018)

From what I can tell, he means that he recorded the processed samples.

As an example: let's say you EQ and process a string library to make it sound the way you want it. Then you resample:
- Record the samples
- Build your own Kontakt (or other sampler) instrument from these recorded (aka resampled) sounds
- ?
- Profit (don't sell them obviously)!

The advantage would be saving some processing power (especially in large templates) once you know how your stuff should sound. If, say, "XY-Samples violins always need these settings", why not resample, "bake" the effects in, and rebuild it in your own sampler? That saves processing power, since you no longer have real-time effects going on.
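The "bake the effects in" idea can be sketched in a few lines. This is a toy illustration, not anyone's actual chain: the fixed gain and one-pole low-pass stand in for whatever EQ/processing you would bake into the audio before re-recording the samples.

```python
# Toy sketch of "baking" a processing chain into samples offline, so
# no real-time effects are needed at playback. Gain + one-pole low-pass
# stand in for the real EQ/processing chain.

def bake(samples, gain=0.8, tone=0.5):
    """Apply gain plus a one-pole low-pass and return new samples."""
    out, prev = [], 0.0
    for s in samples:
        prev = prev + tone * (s * gain - prev)  # simple smoothing filter
        out.append(prev)
    return out

# A constant (DC) input settles toward gain * input as the filter charges up.
baked = bake([1.0] * 1000)
```

You would run something like this (with real plugins, of course) over every sample, then rebuild the instrument from the processed files.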


----------



## dhlkid (Mar 24, 2018)

I think he mentioned he mixed the original Cinematic Strings


----------



## erica-grace (Mar 24, 2018)

dhlkid said:


> I think he mentioned he mixed the original Cinematic Strings



How do you "mix" a sample library?


----------



## JC_ (Mar 24, 2018)

erica-grace said:


> How do you "mix" a sample library?



I believe he processed the library's individual samples (EQ, compression, etc.) and fed them back into Kontakt.


----------



## ryanstrong (Mar 24, 2018)

MatFluor said:


> As far as it seems, he means that he recorded the processed samples.
> 
> As example: Let's say you EQ and process a string library to make it sound the way you want it. Then you resample:
> - Record the samples
> ...


Seems like a ridiculous way to save on processing power. Knowing his setup, I wouldn't think processing power would be a concern. Rather than re-engineering an entire commercial library, you would think it would be easier just to add a new slave computer.


----------



## jononotbono (Mar 24, 2018)

ryanstrong said:


> Seems like a ridiculous way to save on processing power. Knowing his setup, I wouldn't think processing power would be a concern. Rather than re-engineering an entire commercial library, you would think it would be easier just to add a new slave computer.



I think it's probably to do with making the library "his" and more unique so it doesn't sound like everyone else's. But I could be wrong.


----------



## ryanstrong (Mar 24, 2018)

jononotbono said:


> I think it's probably to do with making the library "his" and more unique so it doesn't sound like everyone else's. But I could be wrong.


Right, yeah I get that. I would just have to think there has got to be some functional reason why. Maybe resampling allowed him to get better legato transitions, or he knows developers who use different processing recipes, or something. He may have even reamped them. Just a guess...


----------



## MatFluor (Mar 24, 2018)

ryanstrong said:


> Right, yeah I get that. I would just have to think there has got to be some functional reason why. Maybe resampling allowed him to get better legato transitions, or he knows developers who use different processing recipes, or something. He may have even reamped them. Just a guess...



Yeah, processing power was only one example of a possible reason, paired with the EQ and whatnot "to make it his". Make it more his style, layer it, adjust transition times and attacks, integrate it with his touchscreen system: there are a lot of reasons why he wouldn't take the library "verbatim". Resampling is a tad more than some fine-tuning, but the basic idea is the same: make it your own. There are nice legato scripts around, and with some cash in hand you can easily hire a Kontakt (or other sampler) developer to really make something custom. How the legato transitions are triggered, mixing in custom samples, changing release tails, etc. etc. Tons of possibilities.


----------



## muziksculp (Mar 24, 2018)

Hi,

So how would one go about re-sampling a library to make it more customized to his/her needs?

i.e. play single notes of a specific articulation and re-sample them at various dynamics through some plug-ins to give them a new character? Or...? This could be quite time-consuming and labor-heavy.

Maybe using an automatic sampling application such as *SampleRobot* could help in customizing a sample library?

*Sample Library Customization* is an interesting topic; maybe starting a thread about it is a good idea.

Cheers,
Muziksculp


----------



## ChristopherDoucet (Mar 24, 2018)

That's so funny! I was JUUUUST about to post this topic. 

It's essentially the same idea as Hans Zimmer Percussion, right? Where the same raw recordings were processed differently by Zimmer, by JXL, by Alan Meyerson, etc.?

I was really curious to try this. It seems to be as mysterious to others as it is to me.


Also, I don't have Cinematic Strings, but am I correct that his "keyswitches" are NOT the ones in the library? Did he create 3 different vibrato presets and then map them to KS's? Or is that actually how CS2 is laid out?


----------



## muziksculp (Mar 24, 2018)

ChristopherDoucet said:


> That's so funny! I was JUUUUST about to post this topic.



Please feel free to start this topic.


----------



## Nite Sun (Mar 24, 2018)

> Also, I don't have Cinematic Strings, but am I correct that his "keyswitches" are NOT the ones in the library? Did he create 3 different vibrato presets and then map them to KS's? Or is that actually how CS2 is laid out?



I'm wondering this too. Maybe he took the lower velocity layers (which have a less intense vibrato), amped them up, denoised them, etc., to create new vibrato variation patches that each have a full, simulated dynamic range.


----------



## ryanstrong (Mar 24, 2018)

muziksculp said:


> Please feel free to start this topic.


Isn't this the topic?


----------



## ryanstrong (Mar 24, 2018)

Nite Sun said:


> I'm wondering this too. Maybe he took the lower velocity layers (which have a less intense vibrato), amped them up, denoised them, etc., to create new vibrato variation patches that each have a full, simulated dynamic range.


Sheesh, with all of what we are suggesting, it seems like it would just be easier to record and develop his own library.

I wonder WHY Cinematic Strings? Maybe he knows them and this was a bespoke project for them?

Also, it raises the question... is it legal to do this? Maybe it just depends on the developer's license?


----------



## erica-grace (Mar 24, 2018)

JC_ said:


> I believe he processed the libraries individual samples (eq, compression etc) and fed them back into Kontakt.



Well, that's not really _mixing_. Processing maybe.


----------



## Nite Sun (Mar 24, 2018)

Because of the beautiful legato transitions. Probably not illegal if you don't sell it on


----------



## erica-grace (Mar 24, 2018)

ryanstrong said:


> Isn't this the topic?



What topic?


----------



## erica-grace (Mar 24, 2018)

ryanstrong said:


> Also, it raises the question... is it legal to do this?



Wouldn't be illegal - if all you are doing is re-processing the samples that you already have a license to use, and if it's for your own use, and as long as you don't re-distribute them.


----------



## Nite Sun (Mar 24, 2018)

Definitely easier than recording a whole new sample library. Just a case of batch processing the lower velocities, matching the loudness of the original samples, and replacing the samples in the container.
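The loudness-matching step might look something like this sketch, using RMS as a rough loudness proxy (the numbers and function names are made up for illustration):

```python
import math

# Sketch: loudness-matching a boosted low-velocity sample to the
# layer it replaces. RMS is used as the loudness proxy here.

def rms(samples):
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def match_loudness(quiet, reference):
    """Scale `quiet` so its RMS equals the RMS of `reference`."""
    gain = rms(reference) / rms(quiet)
    return [s * gain for s in quiet]

loud = [0.5, -0.5, 0.5, -0.5]   # original ff layer
soft = [0.1, -0.1, 0.1, -0.1]   # low-velocity layer to be promoted
promoted = match_loudness(soft, loud)
```

In practice you'd batch this over every sample in the velocity layer before swapping the files back into the container.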


----------



## muziksculp (Mar 24, 2018)

ryanstrong said:


> Isn't this the topic?


 
Yes, kind of, but a bit tied to Junkie XL's video.

I was thinking of a more general topic, which discusses the *Customization of Sample Libraries* via _Re-Sampling_ and _Re-Editing_ in a broader sense. That would serve as a good info source for this forum on the topic.


----------



## ryanstrong (Mar 24, 2018)

If Junkie resampled them to have baked-in EQ (and other processing), it's interesting that he is then adding this much more EQ on top.


----------



## ryanstrong (Mar 24, 2018)

erica-grace said:


> Wouldn't be illegal - if all you are doing is re-processing the samples that you already have a license to use, and if it's for your own use, and as long as you don't re-distribute them.


Sure, that would be my assumption as well. Maybe this is why newer developers are disallowing, or putting locks on, going under the hood of some Kontakt libraries.

Probably also why Spitfire may be moving to their own sample engine. To prevent stuff like this.


----------



## Nite Sun (Mar 24, 2018)

Why would you want to prevent people from getting the most out of their sample libraries? It only becomes a problem if you're stealing the Kontakt scripting, re-packaging, and profiting commercially... which I'm sure Junkie isn't doing


----------



## ryanstrong (Mar 24, 2018)

Nite Sun said:


> Why would you want to prevent people from getting the most out of their sample libraries? It only becomes a problem if you're stealing the Kontakt scripting, re-packaging, and profiting commercially... which I'm sure Junkie isn't doing


So they buy newer libraries...


----------



## Nite Sun (Mar 24, 2018)

That's going a bit far. If that were the case, sample library devs would try to prevent you from processing the sound of their libraries at all with EQ and reverbs etc.

My guess is that he does the re-sampling because real-time denoising of amped-up low velocity samples would be too much of a CPU hog for any system to handle, and would introduce too much latency, etc.

One theory at least


----------



## muziksculp (Mar 24, 2018)

Nite Sun said:


> That's going a bit far. If that were the case, sample library devs would try to prevent you from processing the sound of their libraries at all with EQ and reverbs etc.
> 
> My guess is that he does the re-sampling because real-time denoising of amped-up low velocity samples would be too much of a CPU hog for any system to handle, and would introduce too much latency, etc.
> 
> One theory at least



I don't think Junkie XL needs to worry about his computing power; he most likely has a nice lineup of slave machines that can easily handle demanding tasks, so he wouldn't go through the burden of re-sampling just for that. I think it would be more for the sonic aspect of the library, customization of its sound, or maybe real-time playability, etc.


----------



## ryanstrong (Mar 24, 2018)

muziksculp said:


> I don't think Junkie XL needs to worry about his computing power; he most likely has a nice lineup of slave machines that can easily handle demanding tasks, so he wouldn't go through the burden of re-sampling just for that. I think it would be more for the sonic aspect of the library, customization of its sound, or maybe real-time playability, etc.


Yeah, that's what I think. More so the playability coupled with the sonic aspect.

I wish someone would go in and do that with LASS... man if that library was re-sampled it'd be pretty killer.


----------



## givemenoughrope (Mar 24, 2018)

Are youz guys saying resampling the raw waveforms and then plopping them back into the same kontakt patch with all the same scripting, etc? Seems so, right?


----------



## Nite Sun (Mar 24, 2018)

I hear you, but no computer can handle complex denoising in realtime, so you'd have to do some re-sampling.

I'm going off the fact that CSS only has x-fading between non-vib and vib, and Junkie clearly has several different vibrato patches. The only way I can think of achieving this is replacing the high velocity layers with amped-up mid and low velocity layers, which have less vib. This would definitely require denoising and thus re-sampling. It would certainly change the sonics of the library and give it a much fatter sound. It's the same principle behind the Tundra and Metropolis Ark 2 libraries: very low velocity samples amped up.


----------



## ryanstrong (Mar 24, 2018)

givemenoughrope said:


> Are youz guys saying resampling the raw waveforms and then plopping them back into the same kontakt patch with all the same scripting, etc? Seems so, right?


I don't think anyone knows, or at least can speak with any sort of certainty. Hopefully Tom/Junkie will address it. It appears he even references resampling CineBrass.


----------



## ryanstrong (Mar 24, 2018)

Nite Sun said:


> The only way I can think of achieving this is replacing the high velocity layers with amped-up mid and low velocity layers, which have less vib. This would definitely require denoising and thus re-sampling. This would certainly change the sonics of the library and give it a much fatter sound. It's the same principle behind the Tundra and Metropolis Ark 2 libraries: very low velocity samples amped up.


That's cool. Again I wish someone would do that with LA Scoring Strings!


----------



## Nite Sun (Mar 24, 2018)

> Are youz guys saying resampling the raw waveforms and then plopping them back into the same kontakt patch with all the same scripting, etc? Seems so, right?



Exactly. I'm sure that's what he means. Junkie has a big EDM past, and re-sampling is a big part of that sound: applying effects, bouncing out to audio (re-sampling), applying more effects, re-sampling again. I think it's unlikely that he changes anything at the script level, as the scripts are locked and obfuscated, whereas the samples can be unpacked, edited and re-packed.


----------



## Nite Sun (Mar 24, 2018)

ryanstrong said:


> That's cool. Again I wish someone would do that with LA Scoring Strings!



I bet he's re-sampled LASS. You often see it quite prominently in his Cubase sessions


----------



## givemenoughrope (Mar 24, 2018)

I guess in the case of Charlie C/Christian H using EXS, that's impossible. And when JXL played and used crossfades, it still sounded like a tightly scripted commercial library. I've often thought about EQing and mono-izing my old Giga-turned-Kontakt VSL patches but just never got around to it. I'm not sure how LASS would really be improved this way, in the same way that I don't understand why JXL doesn't just use an EQ on these patches. 12 dB in the lows sounds quite drastic.


----------



## NoamL (Mar 24, 2018)

@Nite Sun Wait, how? Aren't NCW files encrypted? I agree that the fact he has multiple types of legato vibrato points to re-amping softer legato samples.


----------



## Nite Sun (Mar 24, 2018)

I know that there are several "unlocked" versions of commercial libraries on torrent sites (don't shoot me!) where someone has evidently managed to decrypt the NCW files, remove mic positions, cut off unnecessarily long tails, etc.


----------



## ChristopherDoucet (Mar 24, 2018)

ryanstrong said:


> If Junkie resampled them to have baked-in EQ (and other processing) it's interesting that he is then adding this much more EQ to them.



I'm guessing the "processing" he does when resampling is to make the strings sound good and playable on their own, and then I'm willing to bet the additional EQ is for blending with other libraries and spatially placing them.


----------



## NoamL (Mar 24, 2018)

Actually my bad - NCW files are not encrypted, you can load them into Kontakt one by one and play them back. So presumably there's a way to batch convert them to WAVs.

It's NKX/NKC files that are encrypted I think.


----------



## Nite Sun (Mar 24, 2018)

Apparently there is a program called inNKX for Total Commander that allows you to do this. Not sure if it's legal


----------



## Nite Sun (Mar 24, 2018)

The lawsuit against Junkie XL begins...


----------



## NoamL (Mar 24, 2018)

Actually @Nite Sun you might be wrong. This is Cinematic Strings 2 (not CSS). I don't own CS2 but with some googling I was able to find that CS2 has a vibrato control mode that can be assigned to CC2. There are already low and high vibrato samples in the stock library.

I'm *back to square one* wondering why resampling would be necessary.

At least in Logic, you have the MIDI Scripter which can send any kind of MIDI message you desire, based on any kind of input. Using the Scripter it would be trivial to, for instance, make it so that one track was pegged to continually send a constant CC2 message no matter what input it received. Or, you could have the CC2 vary in any kind of mathematical relationship to an input CC1 message (for example, "When the instrument gets a CC1 message, also send a CC2 message of 2x+12 the value." or literally any other mathematical relation.)


----------



## Nite Sun (Mar 24, 2018)

Yeah you're right, my bad


----------



## ryanstrong (Mar 24, 2018)

NoamL said:


> Actually @Nite Sun you might be wrong. This is Cinematic Strings 2 (not CSS). I don't own CS2 but with some googling I was able to find that CS2 has a vibrato control mode. There are already low and high vibrato samples in the stock library.
> 
> I'm *back to square one* wondering why resampling would be necessary. At least in Logic, you have the MIDI Scripter which can send any kind of MIDI message you desire, based on any kind of input. Using the Scripter it would be trivial to, for instance, make it so that one track was pegged to continually send a constant CC2 message no matter what input it received. Or, you could have the CC2 vary in any kind of mathematical relationship to an input CC1 message (for example, "When the instrument gets a CC1 message, also send a CC2 message of 2x+12 the value." or literally any other mathematical relation.)


That’s interesting.
I’ve still been sitting at square one... it just seems crazy to me to resample at the sample level. Why not perform, print, then apply?

I guess because.... he can. He’s Junkie XL!


----------



## NoamL (Mar 24, 2018)

Not only that, but with a little bit of a deeper dive into JavaScript, I could create fake keyswitches, so that I could switch between four constant vibrato levels on a single track like Junkie is doing.

The pseudocode looks like:

declare variable vib
function When (event) and event is note on and MIDI value is C0, set vib = 0
function When (event) and event is note on and MIDI value is C#0, set vib = 42
function When (event) and event is note on and MIDI value is D0, set vib = 84
function When (event) and event is note on and MIDI value is D#0, set vib = 127
function When (event) and event is a CC1 message, send CC2 = vib and passthru CC1

Really simple. Now you don't need four different "vibrato level" tracks, you can do it all in one track.
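For illustration, the fake-keyswitch logic in the pseudocode above can be sketched as a small state machine; here it is in plain Python, with hypothetical `(kind, value)` tuples standing in for Scripter's actual MIDI event objects:

```python
# Sketch of the fake-keyswitch idea: certain note-ons just set a vibrato
# level, and every CC1 message gets a matching CC2 sent alongside it.

VIB_KEYS = {"C0": 0, "C#0": 42, "D0": 84, "D#0": 127}  # keyswitch -> CC2 value

class VibratoSwitcher:
    def __init__(self):
        self.vib = 0

    def handle(self, event):
        """Return the list of MIDI messages to pass downstream."""
        kind, value = event
        if kind == "note_on" and value in VIB_KEYS:
            self.vib = VIB_KEYS[value]      # update state, swallow the keyswitch
            return []
        if kind == "cc1":
            # send CC2 = vib alongside the incoming CC1, as in the pseudocode
            return [("cc2", self.vib), ("cc1", value)]
        return [event]

sw = VibratoSwitcher()
sw.handle(("note_on", "D0"))        # select vibrato level 84
out = sw.handle(("cc1", 100))       # -> [("cc2", 84), ("cc1", 100)]
```

In Logic's Scripter you'd express the same thing with its `HandleMIDI` callback and NoteOn/ControlChange objects, but the routing logic is identical.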


----------



## germancomponist (Mar 24, 2018)

Nite Sun said:


> I hear you but no computer can handle complex denoising in realtime so youd have to do some re-sampling.
> 
> I'm going off the fact that CSS only has x-fading between non-vib and vib and Junkie clearly has several different vibrato patches. Only way I can think of achieving this is replacing the high velocity layers with amped up mid and low velocity layers which have less vib. This would definitely require denoising and thus re-sampling. This would certainly change the sonics of the library and give it a much fatter sound. It's the same principle behind the Tundra and Metropolis ark 2 libraries - very low level velocity samples amped up.


I did exactly this thousands of times with so many libraries. It makes later work on other projects so much easier and faster....


----------



## jononotbono (Mar 24, 2018)

germancomponist said:


> I did exactly this thousands of times with so many libraries. It makes later work on other projects so much easier and faster....



How long does something like this take?


----------



## Nite Sun (Mar 24, 2018)

germancomponist said:


> I did exactly this thousands of times with so many libraries. It makes later work on other projects so much easier and faster....



What was your process? I might try out the low velocity re-sampling trick on Cinematic Studio Solo Strings, as I love the sound of the low velocity layers boosted up 10 dB, but there's loads of noise. I find the vibrato too intense in the higher velocity layers. Would be cool to run it all through RX6 and recompile a new low velocity, emotional version of the library.


----------



## germancomponist (Mar 24, 2018)

jononotbono said:


> How long does something like this take?


It depends on what you are doing.

@Nite Sun : I very often deleted the fff samples and stretched the others velocity-wise, but also often built new instruments using only one velocity recording, let's say the mp or f samples. Then I stretched it from 0 to 127 and faked the volume slider with an EQ or a filter.
In drum libraries I often built new instruments as well, because there is often a conspicuous difference, sound-wise, between the different velocity recordings and round robin samples.
Bass guitar libraries are another good example; I never used any out of the box. And so on.....
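Faking a dynamic range from a single velocity layer, as described above, boils down to mapping incoming velocity to a gain and a filter cutoff, so soft notes get quieter and darker. A toy sketch (the curve shapes and numbers are illustrative guesses, not anyone's actual settings):

```python
# Sketch: one sample layer stretched across velocities 0-127, with
# velocity driving both a volume curve and a low-pass cutoff.

def dynamics_for_velocity(velocity):
    """Map MIDI velocity (0-127) to (gain, cutoff_hz) for a single layer."""
    v = max(0, min(127, velocity)) / 127.0
    gain = v ** 1.5                  # perceptual-ish volume curve
    cutoff_hz = 200 + 7800 * v       # darker at low velocities
    return gain, cutoff_hz

full = dynamics_for_velocity(127)    # -> (1.0, 8000.0)
soft = dynamics_for_velocity(20)     # quiet and filtered
```

In a sampler you'd wire these two curves to the amp and filter modulation instead of computing them in code, but the idea is the same.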


----------



## Nite Sun (Mar 24, 2018)

Very interesting, Thanks for sharing!


----------



## JC_ (Mar 24, 2018)

@NoamL 

I'm not sure if it changes your findings at all but I thought JunkieXL said it was the old Cinematic Strings (as in Cinematic Strings 1).


----------



## AdamKmusic (Mar 25, 2018)

Tempted to do this with HZ Perc; could make some good ensembles and mangle them a bit. I assume, since JXL is doing this, it isn't illegal as long as you don't sell them?


----------



## robgb (Mar 25, 2018)

ryanstrong said:


> Sure, that would be my assumption as well. Maybe this is why newer developers are disallowing, or putting locks on, going under the hood of some Kontakt libraries.
> 
> Probably also why Spitfire may be moving to their own sample engine. To prevent stuff like this.


It wouldn't prevent anything. If you play the sounds at various velocities and save them to wav files, you can import them into Kontakt and create your own library. It doesn't matter what sample engine you use to produce the sounds.


----------



## ryanstrong (Mar 25, 2018)

robgb said:


> It wouldn't prevent anything. If you play the sounds at various velocities and save them to wav files, you can import them into Kontakt and create your own library. It doesn't matter what sample engine you use to produce the sounds.


Sure, then let’s say... make it harder.


----------



## Kent (Mar 25, 2018)

ryanstrong said:


> Sure, then let’s say... make it harder.


It would mean a few seconds’ more work. A few minutes’ at most.


----------



## kavinsky (Mar 26, 2018)

I'll put it this way: most of the people who work professionally have a really different workflow from what consumer products have to offer.
It's essential to redo most of the scripting to suit their needs and to have consistency in the template.
I've been doing the same thing for ages and thought that I was going nuts, until I just recently found out that the guys at RCP go through the exact same process (it's quite handy when you have assistants to do the dirty work though).


----------



## ryanstrong (Mar 26, 2018)

kmaster said:


> It would mean a few seconds’ more work. A few minutes’ at most.


Maybe for some. I don't even know how to modify a Kontakt instrument when the hood is capable of being "opened", let alone with restrictions. So for me and your 'average' user, it can be a barrier for sure.

Either way I don't think sample developers are locking their Kontakt instruments for fun. There is a definite reason.


----------



## jononotbono (Mar 26, 2018)

ryanstrong said:


> Maybe for some. I don't even know how to modify a Kontakt instrument when the hood is capable of being "opened", let alone with restrictions. So for me and your 'average' user, it can be a barrier for sure.
> 
> Either way I don't think sample developers are locking their Kontakt instruments for fun. There is a definite reason.



Yes, I think it's to protect their scripting. Imagine how much hard work goes into, for example, Spitfire creating the Performance Legato patches. Then someone lifts the hood, steals the scripting, and uses it in their own libraries.


----------



## robgb (Mar 26, 2018)

ryanstrong said:


> Sure, then let’s say... make it harder.


Well, not really. Especially if you use a tool like Samplit.


----------



## ryanstrong (Mar 26, 2018)

robgb said:


> Well, not really. Especially if you use a tool like Samplit.


Cool


----------



## NoamL (Mar 26, 2018)

kavinsky said:


> It's essential to redo most of the scripting to suit their needs and to have consistency in the template.
> I've been doing the same thing for ages and thought that I was going nuts, until just recently found out that guys at RCP go through the exact same process (its quite handy when you have assistants to do the dirty work though).



Could you explain what kind of resampling you've been doing?

I assume you don't mean just changing the keyswitches to be consistent across libraries because there are way easier ways to do that than resampling the entire library...


----------



## charlieclouser (Mar 26, 2018)

While I don't re-sample in the way that people suggest RCP and JXL do, I often convert / extract the wav files from Kontakt and remap a subset of them into simpler, more useful EXS instruments, which can then be turned into small Kontakt instruments that load very, very quickly. I might find one aleatoric cluster sustain sample in a "menu" style patch of orchestral effects that was originally mapped to just one key, but I want it spread across all 88 keys because I like how it sounds way down low, or something like that.

I also find that some libraries do not normalize their samples, so the ppp samples are at like minus a zillion dB, and this can make it very difficult, if not impossible, to get the quiet layers to play loudly, or to re-adjust the velocity curve the way I want. In cases like that, you'd use zero velocity>volume modulation to play back the samples with their original volume curve, but this means that you'd have to use "upside down" velocity>volume modulation if you wanted the quiet stuff to be louder. I don't like this. I prefer to have it more like "every sample layer is normalized, and velocity>volume is used to determine the response curve".

So here's my very basic list of things I do to extracted wav files before re-mapping and building new instruments:

- Better Sample Names. If I'm not going to attempt to convert the original Kontakt instrument file using Chicken Systems Translator (which can be hit or miss), but am instead going to just build new instruments from scratch using Redmatica KeyMap (which you'll have to pry from my cold, dead hands!), then I have the freedom to fix the absolutely awful sample naming conventions used by most sample developers. My sample naming format is:

instrument - library title - articulation - dynamic layer - root key - round robin group

So a sample might have a name like:

cellos-SAW2-SPICCATO-ff-A#2-rr2

Where the instrument is a cello section (solo cello would be named cello [singular] instead of cellos [plural], etc.), the library title or recording session was from SAW2, the articulation is spiccato, the dynamic layer is ff, the root key is A#2 (I always notate these as sharps instead of flats so things will alphabetize better), and the round robin layer is #2. (Yes, I know it's supposed to be "celli" and not "cellos", but for consistency I just use the plural - like violins vs. violin for section vs. solo, etc.) When samples are all named this way, they sort into logical groups in an alphabetized list in the MacOS Finder, making it much easier to grab the ones you want for drag-n-dropping into Sample Manager or KeyMap or whatever. Each dynamic layer will sort as a group, then by root key, then by round robin layer. This puts rr1 through rr4 right next to each other in the list for easy and quick comparison. I use A Better Finder Rename as my batch renaming utility, but there are simpler yet usable renamers in AudioFinder, Sample Manager, and Myriad.
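The naming scheme above can be sketched as a pair of helpers, just to show how the fields concatenate and sort (these are hypothetical functions, not part of any renaming tool):

```python
# Sketch of the naming convention described above: fields joined with
# hyphens so files alphabetize by dynamic layer, root key, round robin.

def sample_name(instrument, library, articulation, dynamic, root, rr):
    return f"{instrument}-{library}-{articulation}-{dynamic}-{root}-rr{rr}"

name = sample_name("cellos", "SAW2", "SPICCATO", "ff", "A#2", 2)
# -> "cellos-SAW2-SPICCATO-ff-A#2-rr2"

def parse_sample_name(name):
    instrument, library, articulation, dynamic, root, rr = name.split("-")
    return {"instrument": instrument, "library": library,
            "articulation": articulation, "dynamic": dynamic,
            "root": root, "rr": int(rr.lstrip("r"))}
```

Because sharps are used instead of flats and every field is fixed-position, a plain alphabetical sort in the Finder groups rr1-rr4 of each root key together, as described.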

- Normalize by group. I still use the old Sample Manager batch-processing program by AudioFile Engineering (but I will switch to Myriad by Aurchitect, which is an improved version, once I update beyond Yosemite), and both of these have a process which will scan a group of samples, find the loudest one, and then raise the gain of all of them by the same amount until that loudest one hits a desired ceiling. This preserves the relative levels between all of the selected samples but allows for more healthy waveforms. If the quiet layers are extremely soft, I might normalize each velocity layer in this manner, but preserve relative levels for all samples within a layer. Sometimes I will just group normalize all samples in an instrument, and then boost the layers in 3 dB intervals, so FFF = normalized to -1 dB, MF = boost by 3 dB, MP = boost by 6 dB, PP = boost by 9 dB, PPP = boost by 12 dB, etc. This approach gives me a little more room when it comes to adjusting the levels of each layer within the instrument, because EXS maxes out at 12 dB of boost, I think.
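Group normalization as described (find the hottest peak in the group, then raise everything by the same gain) is simple to sketch. This illustrates the principle only; it is not Sample Manager's actual algorithm:

```python
# Sketch of group normalization: one shared gain so the hottest peak
# hits the ceiling while relative levels between samples are preserved.

def normalize_group(group, ceiling=1.0):
    """`group` is a list of sample buffers (lists of floats)."""
    peak = max(abs(s) for buf in group for s in buf)
    gain = ceiling / peak
    return [[s * gain for s in buf] for buf in group]

group = [[0.5, -0.25], [0.1, 0.05]]
normalized = normalize_group(group)   # hottest peak (0.5) now hits 1.0
```

For a -1 dB ceiling as in the post, you'd pass `ceiling=10 ** (-1 / 20)` instead of full scale.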

- Remove Dead Air. Some developers don't truncate the tops and tails very tightly, preferring to do the final tweaking within the Kontakt instrument. I don't like this, as these tweaks often get lost when converting formats, and obviously if I'm building new instruments from scratch then these tweaks would have to be done manually for every sample. Some developers also prefer to leave a fixed amount of dead air at the top of short articulations or percussion samples so that one can get the "pre-lap" if it's wanted - capturing the swoosh of the drum stick approaching the drum or whatever. I prefer tight, perfectly truncated start points and a minimum of excessive decay time. In some cases, like with very tight spiccato samples or drums, I get pretty manual and truncate the starts by hand, one sample at a time. What a pain! But Sample Manager makes things a little easier with its auto-play function - when a sample is selected it will instantly play, so you can use the down arrow key to scroll through a list of samples and trigger them as you go, and playback is very quick, so it's almost as though you're playing the samples from a MIDI controller. This makes it fairly easy to "play" the samples as you edit the starts and be able to hear how they'll respond when played in a rhythmic passage. As to the sample ends, when you look at the raw wav files, some of these developers leave wayyyyy too much on the tail of the sample for my taste. To me it's just a waste of disc storage and voice count to have all of the samples still playing when they've faded past minus a zillion dB. So I use Sample Manager's "Trim Start / End Below Threshold" to trim the tops and tails as best I can. In the case of some libraries that have a fixed amount of dead air at the top, Sample Manager has a "Trim Start / End To" process which will remove a fixed amount (in milliseconds) from the top / tail of each sample, and I use this sometimes.
I set these processes to NOT search zero-crossings as this can wildly affect where the truncate point is detected (if there is significant room tone and/or low frequency rumble present), but might result in tiny clicks when a sample starts. For this reason, I always apply a tiny fade at the start - see next process.
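The threshold trim can be sketched like this (illustrative only, not the actual Sample Manager process; note it doesn't search zero-crossings, matching the setting described, so a tiny fade would be applied afterwards to avoid start clicks):

```python
# Sketch of "Trim Start / End Below Threshold": drop leading and
# trailing samples whose absolute value stays under the threshold.

def trim_below_threshold(samples, threshold=0.001):
    start = 0
    while start < len(samples) and abs(samples[start]) < threshold:
        start += 1
    end = len(samples)
    while end > start and abs(samples[end - 1]) < threshold:
        end -= 1
    return samples[start:end]

trimmed = trim_below_threshold([0.0, 0.0, 0.5, 0.3, 0.0])  # -> [0.5, 0.3]
```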

- Fade Starts / Ends. I always apply a fade to the start of samples - it might be 1 millisecond for short files, maybe as much as 50 or more on long, slow-starting samples. I also apply a fade-out at the end of every sample, with the length and curve depending on the content - it might be 10ms for short stuff that already fades well, or as long as 500ms when I want to build my own fade-out. Applying these fades means that I don't have to rely on the volume ADSR in the instrument to repair any clicks so much. The only exception to this is when I'm dealing with percussion samples with very sharp, hard starts. If it's just an instant jump to full scale then I don't apply a fade-in. I'm very careful not to mess with hard starts on percussion, but if we're talking about spiccato strings that have a bit of a mushy start, a 1msec fade-in won't be damaging anything.
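The start/end fades might be sketched like this (linear ramps; lengths are given in samples here for simplicity, where the millisecond values in the post would be converted via the sample rate):

```python
# Sketch of applying a fade-in and fade-out to a sample buffer so the
# instrument's volume ADSR doesn't have to repair clicks.

def apply_fades(samples, fade_in=0, fade_out=0):
    out = list(samples)
    for i in range(min(fade_in, len(out))):
        out[i] *= i / fade_in               # ramp up from silence
    for i in range(min(fade_out, len(out))):
        out[-1 - i] *= i / fade_out         # ramp down to silence
    return out

faded = apply_fades([1.0] * 8, fade_in=2, fade_out=2)
# first and last samples are now 0.0
```

Passing `fade_in=0` for hard percussion starts leaves the attack untouched, matching the exception described above.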

- Noise Reduction / EQ out the "room rumble". Some libraries have very pronounced low-frequency rumble recorded into the samples, most noticeable on the quietest layers. Fair enough, the players are trying their best to play super-quiet, and the room might have a bit of rumble, and the engineers did not want to change the settings on the recording chain once recording commenced - for good reason. But the result can be significant rumble when the softest samples are boosted in level. Since I often like to use just the quietest sample layers while boosting their level by large amounts, something must be done. Since Sample Manager allows you to batch-process through any AU plugin, I might high-pass the samples at 150 Hz in the case of high violins con sord or similar samples, or apply Waves or iZotope noise reduction plugins in an effort to clean things up a bit. I try to do this process before trimming and fading starts and ends so that the rumble will have less of an effect on the threshold detection. I also routinely perform a DC Offset Removal at the very start of any sample processing operation.
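The trim, fade, and rumble-cleanup steps above can be sketched in plain Python. This is a minimal sketch, not Sample Manager's actual processing: it assumes mono audio already decoded to a list of floats in -1.0..1.0, and the function names and default values are illustrative only.

```python
import math

def trim_below_threshold(samples, threshold=0.001):
    """Trim leading/trailing samples whose absolute amplitude stays
    at or below `threshold` (no zero-crossing search, as noted above)."""
    start = 0
    while start < len(samples) and abs(samples[start]) <= threshold:
        start += 1
    end = len(samples)
    while end > start and abs(samples[end - 1]) <= threshold:
        end -= 1
    return samples[start:end]

def apply_fades(samples, sample_rate, fade_in_ms=1.0, fade_out_ms=10.0):
    """Apply linear fade-in / fade-out ramps of the given lengths (ms),
    so the instrument's volume ADSR doesn't have to repair clicks."""
    out = list(samples)
    n_in = min(int(sample_rate * fade_in_ms / 1000.0), len(out))
    for i in range(n_in):
        out[i] *= i / n_in
    n_out = min(int(sample_rate * fade_out_ms / 1000.0), len(out))
    for i in range(n_out):
        out[len(out) - 1 - i] *= i / n_out
    return out

def remove_dc_offset(samples):
    """Subtract the mean so the waveform is centred on zero."""
    if not samples:
        return samples
    mean = sum(samples) / len(samples)
    return [s - mean for s in samples]

def high_pass(samples, sample_rate, cutoff_hz=150.0):
    """First-order (RC-style) high-pass filter to tame low-end rumble."""
    if not samples:
        return samples
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out = [samples[0]]
    for i in range(1, len(samples)):
        out.append(alpha * (out[-1] + samples[i] - samples[i - 1]))
    return out
```

As in the workflow described above, you'd run the DC offset removal and high-pass first, then trim against the threshold, then apply the fades - otherwise the rumble skews where the trim points land.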

But I don't get into fully processing the raw samples as I might process the final instrument in a mix - that's a little too much commitment for me! I just want to clean up the raw content as best as I can to provide a firm foundation for further processing at mix time. So lots of what people are suspecting JXL might be doing is not what I'm talking about here - I'm not willing to compress or eq the raw samples because then there's no going back. I just save a Channel Strip preset that includes the desired mix processing so I don't have to re-do it every time I use the Instrument.

Of course, if one does not mess with the sample file names or the lengths of the files (and is working with unlocked Kontakt libraries like 8dio), then it's certainly possible to batch process all of the raw files through plugins or whatever, and then replace the original files with the processed versions, and move on from there. Even if the original library uses .ncw files, you just use the Batch Export / Convert Format function in Kontakt to make a new copy of the library in .wav format, process the .wav files, and then Batch Export back into .ncw format. Works fine.
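The middle leg of that round-trip - running every exported .wav through some processing and writing it back under the same filename so the patches still find their samples - can be sketched with Python's stdlib alone. This is an illustrative sketch, not a Kontakt or Sample Manager feature: it assumes mono 16-bit files, and `process_wav_dir` and its arguments are my own names.

```python
import wave, array
from pathlib import Path

def process_wav_dir(src_dir, dst_dir, process):
    """Run every 16-bit .wav in src_dir through `process` (a function on a
    list of floats in -1.0..1.0) and write the result to dst_dir,
    preserving filenames so existing patches still resolve their samples."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for path in sorted(Path(src_dir).glob("*.wav")):
        with wave.open(str(path), "rb") as f:
            params = f.getparams()
            raw = array.array("h", f.readframes(params.nframes))
        floats = [s / 32768.0 for s in raw]
        processed = process(floats)
        # Clamp back into 16-bit range before re-encoding.
        out = array.array(
            "h",
            [max(-32768, min(32767, round(s * 32768.0))) for s in processed],
        )
        with wave.open(str(dst / path.name), "wb") as f:
            f.setparams(params)
            f.writeframes(out.tobytes())
```

For example, `process_wav_dir("raw", "processed", lambda xs: [x * 0.5 for x in xs])` would batch a 6 db-ish gain reduction across a folder; in practice the `process` function is where your trims, fades, or filtering go.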


----------



## Vadium (Mar 27, 2018)

I think resampling "slow" libraries like CSS and HS with Samplit or SampleRobot could make real-time performance more comfortable, because the resampled instrument won't carry the slow legato scripting engine, etc. If the simplified resampled instrument has lower latency than the original CSS (around 300ms), I would use it to record MIDI into a Cubase track, then switch back to the original instrument for the audio export.


----------



## StatKsn (Mar 27, 2018)

IL's Direct Wave can automate the resampling process (other than that, there is SampleRobot, which I have no experience with).

I never resampled a library in the way the JXL video explains or charlieclouser kindly shared, but here are a few things I did before to give an idea of what can be done:

- Resampling an old single-velocity-layer piano patch (proprietary format) into Kontakt, adding a velocity-modulated mild low-pass filter to simulate a softer touch
- Layering EQ/stereo-adjusted release samples from other libraries (especially piano and guitar)
- Making a pp/ff dynamic-layer-only patch that switches from 1 to 127 rather than dynamically cross-fading
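That last idea - hard velocity switching between dynamic layers instead of cross-fading - can be shown with a toy gain function. This is purely illustrative Python, not IL Direct Wave or Kontakt scripting, and `split=64` is an arbitrary example value:

```python
def layer_gains_switch(velocity, split=64):
    """Hard switch: below the split only the pp layer sounds, at or
    above it only the ff layer - there is no cross-fade zone."""
    return (1.0, 0.0) if velocity < split else (0.0, 1.0)

def layer_gains_crossfade(velocity):
    """For contrast: a linear pp/ff cross-fade over MIDI velocities 1-127."""
    t = (velocity - 1) / 126.0
    return (1.0 - t, t)
```

The switch version keeps each layer's recorded timbre intact at every velocity, at the cost of an audible jump at the split point; the cross-fade trades that jump for phasey blending between the layers.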


----------



## chris massa (Mar 28, 2018)

I can understand re-sampling libraries, building new instruments from others and experimenting. It does set you apart from the pack with designed sounds that are uniquely you. Like layering string libraries or others to get what you hear in your head. Or, instead of buying library after library looking for that sound, you open the one you have and "fix" it to taste.


----------



## meradium (Mar 28, 2018)

Funny to see this topic popping up here. I was wondering the same because I often felt it would have been great to make my own mix of an instrument rather than having to load all mic positions individually.

Maybe sample companies could reconsider opening up at least the samples again.


----------



## ryanstrong (Mar 28, 2018)

Would anyone be kind enough to bullet-point the third-party tools that would aid in re-sampling? Like programs to open the Kontakt samples, or programs and processes to batch-process multiple WAVs, etc.?


----------



## AlexRuger (Mar 28, 2018)

Hey guys, I work for Tom.

The term "resampling" as he uses it means he's just printing the MIDI to audio and effecting it (separately from the MIDI instrument's return bus). 

It's just quick and dirty parallel processing, that's all, as well as a nice springboard for creative sound design.


----------



## gsilbers (Mar 28, 2018)

AlexRuger said:


> Hey guys, I work for Tom.
> 
> The term "resampling" as he uses it means he's just printing the MIDI to audio and effecting it (separately from the MIDI instrument's return bus).
> 
> It's just quick and dirty parallel processing, that's all, as well as a nice springboard for creative sound design.



so basically saying resampling instead of rendering?


----------



## ryanstrong (Mar 28, 2018)

AlexRuger said:


> Hey guys, I work for Tom.
> 
> The term "resampling" as he uses it means he's just printing the MIDI to audio and effecting it (separately from the MIDI instrument's return bus).
> 
> It's just quick and dirty parallel processing, that's all, as well as a nice springboard for creative sound design.


That makes WAY more sense! Thank you.


----------



## gsilbers (Mar 28, 2018)

charlieclouser said:


> While I don't re-sample in the way that people suggest RCP and JXL do, I often convert / extract the wav files from Kontakt and remap a subset of them into simpler, more useful EXS instruments, which can then be turned into small Kontakt instruments that load very very quickly. I might find one aleatoric cluster sustain sample in a "menu" style patch of orchestral effects that was originally mapped to just one key, but I want it spread across all 88 keys because I like how it sounds way down low or something like that. I also find that some libraries do not normalize their samples, so the ppp samples are at like minus a zillion db, and this can make it very difficult, if not impossible, to get the quiet layers to play loudly, or to re-adjust the velocity curve the way I want. In cases like that, you'd use zero velocity>volume modulation to play back the samples with their original volume curve, but this means that you'd have to use "upside down" velocity>volume modulation if you wanted the quiet stuff to be louder. I don't like this. I prefer to have it more like "every sample layer is normalized, and velocity>volume is used to determine the response curve". So here's my very basic list of things I do to extracted wav files before re-mapping and building new instruments:
> 
> - Better Sample Names. If I'm not going to attempt to convert the original Kontakt instrument file using Chicken Systems Translator (which can be hit or miss), but am instead going to just build new instruments from scratch using Redmatica KeyMap (which you'll have to pry from my cold, dead hands!), then I have the freedom to fix the absolutely awful sample naming conventions used by most sample developers. My sample naming format is:
> 
> ...




Did you get Redmatica to work on Mac Sierra / High Sierra?
I really miss those apps :(


----------



## charlieclouser (Mar 29, 2018)

gsilbers said:


> did you get redmatica to work on mac sierra.high sierra?
> i really miss those apps :(



Nope. Redmatica compatibility is capped at El Cap. Beyond that is a no-go zone. So I keep clones of my Mac Pro Cylinder Yosemite boot drive on spare 1tb SSDs so I can jam 'em in the MultiDock and boot from there. I also have various silver towers lying around that are still on Snow Leopard so I can fire one of those up if needed.

The issue with APFS drives not being mountable on pre-High Sierra OS versions could prove to be a problem when I eventually go to High Sierra. The pressure is on to update the cylinder beyond Yosemite, but if I go to High Sierra I will want to reformat all of my drives to APFS to take advantage of better SSD support, yadda yadda yadda - which means that when I want to use Redmatica I will need to not only switch-boot to a Yosemite drive, but also copy any samples I'm trying to map over to a non-APFS scratch drive... it's gonna be messy. 

Another potential problem with this scenario that I haven't investigated is this: If my cylinder boot drive is APFS, and I switch-boot to a Yosemite OS on an external, when I am booted in Yosemite and I go to System Preferences > Startup Disc, will it show the APFS internal High Sierra boot drive? Or will it not show up, requiring me to shut down, remove the Yosemite boot drive from the MultiDock, and then power up again, hoping that I don't get the flashing question mark?

Ugh.


----------



## gsilbers (Mar 29, 2018)

charlieclouser said:


> Nope. Redmatica compatibility is capped at El Cap. Beyond that is a no-go zone. So I keep clones of my Mac Pro Cylinder Yosemite boot drive on spare 1tb SSDs in so I can jam 'em in the MultiDock and boot from there. I also have various silver towers lying around that are still on Snow Leopard so I can fire one of those up if needed.
> 
> The issue with APFS drives not being mountable on pre-High Sierra OS versions could prove to be a problem when I eventually go to High Sierra. The pressure is on to update the cylinder beyond Yosemite, but if I go to High Sierra I will want to reformat all of my drives to APFS to take advantage of better SSD support, yadda yadda yadda - which means that when I want to use Redmatica I will need to not only switch-boot to a Yosemite drive, but also copy any samples I'm trying to map over to a non-APFS scratch drive... it's gonna be messy.
> 
> ...



That's a great idea, keeping a Yosemite boot drive elsewhere to use with Redmatica and other old software.

Hmmmm... about APFS... I have to catch up on the reading material about that.
Thx


----------



## charlieclouser (Mar 29, 2018)

Yeah, the APFS issue could jump up and bite us all. As far as I've read, you don't NEED to re-format all of your drives to APFS in order to use High Sierra, BUT - the boot drive that High Sierra is installed to IS converted to APFS, even if you're not doing a total wipe-and-install.

I've also read that APFS drives will NOT be mountable by pre-High Sierra versions of MacOS - so it appears that when you are switch-booted to an older OS version on an external drive, that your High Sierra boot drive will NOT be mountable. This could get ugly.


----------



## nuyo (Oct 12, 2020)

AlexRuger said:


> Hey guys, I work for Tom.
> The term "resampling" as he uses it means he's just printing the MIDI to audio and effecting it (separately from the MIDI instrument's return bus).
> It's just quick and dirty parallel processing, that's all, as well as a nice springboard for creative sound design.



I think the original post was talking about his resampled versions of Cinematic Strings 2 and Cinebrass, like he says in this livestream (timecode 1:01:45).


----------



## jonathanwright (Oct 12, 2020)

Funnily enough I've been doing this a fair bit lately, ever since Autosampler arrived in Logic.

I mainly use it to combine string libraries that I would normally layer. It saves a bit of time and CPU.


----------



## robgb (Oct 12, 2020)

I've been resampling lately, too. One of my libraries doesn't come with a short patch other than spiccato, so I resampled the long Marcato patch and manipulated the samples to create new shorts. Worked like a charm.


----------

