Kontakt 5 internal headroom

Poll: Max Headroom?

  • I can see sky! — 1 vote (33.3%)
  • Bang your head — 2 votes (66.7%)
  • Float — 0 votes (0.0%)
  • Sink — 0 votes (0.0%)

Total voters: 3
First, I don't know about the internal processing in Kontakt, which means I'm on tangents. Sorry. :)

Second:

Your results are irrelevant unless you did a controlled double blind test which showed you could tell the difference in bit depth with an accuracy better than chance

This is a big pet peeve in my bonnet of mixed metaphors, something I argue about all the time. I would never presume to tell someone they didn't hear something they say they did.

As I post all the time, double-blind tests are extremely useful, but they aren't the only thing that's useful. It's much easier to hear the difference between things - and there's pretty much always a difference - when you're in control of A and B. Sometimes it takes a while to train yourself what the differences are (after which you could often hear them in a double-blind test).

There's also a difference between listening sessions; we're not always equally sharp. Plus I've found there's usually a very short period when you can hear extremely subtle differences, after which your brain seems to have a mind of its own and you aren't sure anymore.

My experience is that people aren't usually fooling themselves when they hear something. Our ears are incredibly sensitive over a huge logarithmic range of amplitude and 1000X range of frequency - and that's before the brain software in between those ears comes into play, which is another huge thing.

So if Norbz heard something, my reaction is to ask what he heard and why that is, not to tell him reflexively he's delusional. :)

(I know, EvilDragon didn't actually say that.)

***

Now.

There are several things going on here. One is simple "speed the tape up/down" sample rate variation. I can't see how the bit depth would matter, because you're not changing amplitude. But in theory the sample rate would matter if you create frequencies above half the sample rate (fs/2) and the filtering does audibly bad stuff.
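A quick sketch of that point (a toy Python example of my own, nothing to do with Kontakt's actual engine): a naive "vari-speed" resample just re-indexes the existing sample values, so the peak level can't change, whatever the bit depth.

```python
import numpy as np

# A toy "speed up the tape" resample: play every other sample (2x speed).
# This only re-indexes existing sample values, so the peak amplitude
# cannot exceed the original. Bit depth is irrelevant to level here;
# the real-world concern is aliasing/filtering, a sample-rate issue.
sr = 48000
t = np.arange(sr) / sr
signal = 0.8 * np.sin(2 * np.pi * 440 * t)   # 440 Hz tone at about -1.9 dBFS

sped_up = signal[::2]                        # crude 2x speed-up (no filtering)

print(np.max(np.abs(signal)))                # peak of original
print(np.max(np.abs(sped_up)))               # peak after "speed-up": no higher
```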

Time-stretching/compression and pitch shifting with formants are a whole other kettle of fish, of course, because they have to do some fancy processing. That's going to depend on the algorithms, the material, and the context (for example, if you speed up a full orchestra, the hall reverb is going to shorten along with everything else).

***
None of this answers shapednoise's original question about gain. However, there's a common misconception that more bits = more headroom. More bits means you can record at a lower level and keep the same low-level detail, so in that sense it's sort of true, but 0 dBFS is 0 dBFS no matter how many bits you're using to represent the peak.

(Edit: and it is true that 32-bit internal processing of 24-bit recordings does give you more bits, so maybe I'm just being surly.)

I think you just lower the levels of the samples if they distort and leave it at that. :)
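A small illustration of the distinction (generic fixed-point audio arithmetic, not anything Kontakt-specific): extra bits push the noise floor down rather than raising the 0 dBFS ceiling.

```python
import numpy as np

# Full scale is full scale: the largest value a fixed-point format can hold
# is the same 0 dBFS ceiling regardless of word length. Extra bits extend
# the format downward (finer quantization), not upward.
def dynamic_range_db(bits):
    # Approximate dynamic range of an N-bit fixed-point format.
    return 20 * np.log10(2 ** (bits - 1))

print(round(dynamic_range_db(16), 1))  # ~90.3 dB
print(round(dynamic_range_db(24), 1))  # ~138.5 dB

# Quantizing the same -60 dBFS sine: the 24-bit version keeps far more
# low-level detail, but both formats clip identically above full scale.
x = 0.001 * np.sin(2 * np.pi * np.arange(1000) / 100)
q16 = np.round(x * 2**15) / 2**15
q24 = np.round(x * 2**23) / 2**23
err16 = np.max(np.abs(x - q16))
err24 = np.max(np.abs(x - q24))
print(err16 > err24)   # True: more bits = lower quantization error
```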
 
I liken it to pixels in a camera: more pixels = more data = better editing capabilities/more pixels to work with.

Not a bad analogy, but it's not complete. There are more pixels to work with depending on the nature of the processing. If you're changing pitch, then bit depth does NOT matter. AT ALL. Why? Because amplitude is absolutely not affected when resampling! That is all I'm saying, and I've proven it's true. QED.
 
Cheers,

I understand I cannot argue maths; all I can do is test and max out the tools available to me through my own means.

Perhaps the grey area starts when you begin applying more than a pitch change, like doing it while preserving length, and/or effects/tuning while applying LFOs/other processing, where you start noticing all samples being mangled a bit more at lower bit depths/sample rates. Simple testing might show one thing, yet a typical Kontakt session and the various sample setups might show another. Certain samples, depending on their frequency range, also acted quite differently in both scenarios when processing/pitching them, so things might show more on bass tones vs. highs, etc.

EDIT: OP - terribly sorry for throwing this thread off :P. I don't think anyone has access to Kontakt deep enough to truly know, but Nick's advice above is pretty much what I do too.
 
Perhaps the grey area starts when you begin applying more than a pitch change, like doing it while preserving length, and/or effects/tuning while applying LFOs/other processing, where you start noticing all samples being mangled a bit more at lower bit depths/sample rates.

OK, so let's see:

"preserving length" implies time stretching. Yes bit depth might have a bit of an effect here (pun intended). LFOs/envelopes? If they modulate pitch, then no effect, because that's basically resampling where speed of playback is modulated at LFO rate, so bit depth doesn't matter here. If they modulate amplitude, again no, because LFOs have their own internal resolution (which is 32-bit floating point), it doesn't care about bit depth, it just scales whatever the amplitude value is.

Frequencies present in the signal also don't matter - they are a function of sample rate, not bit depth.
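That last point can be sketched in a few lines (a toy illustration assuming the 32-bit float internal processing mentioned above; this is not Kontakt's actual code):

```python
import numpy as np

# Once decoded, an 8-bit and a 24-bit sample are both just float32 streams.
# An amplitude LFO is a float multiply; its resolution comes from float32,
# not from the sample's original word length.
sr = 48000
n = np.arange(sr, dtype=np.float32)

# 5 Hz tremolo LFO, ranging 0..1, computed at float32 precision
lfo = np.float32(0.5) * (1 + np.sin(2 * np.pi * 5 * n / sr)).astype(np.float32)

# A sine quantized to 8-bit resolution, then loaded as float32
sample_8bit = (np.round(np.sin(2 * np.pi * 440 * n / sr) * 127) / 127).astype(np.float32)

modulated = sample_8bit * lfo   # float32 multiply: smooth gain, coarse source

print(modulated.dtype)          # float32 throughout the modulation path
```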
 
Back to the OP question. After searching, you are correct: there is little information out there. I was going to start making a signal flow chart for Kontakt (couldn't find one), but then came across this video:



If we think of the signal flow as we would in a mixer or studio patch bay, establishing the listening volume of the room and then backtracking through Kontakt (main or individual outputs, inserts, group outs, then down to sample volume), you can figure out a gain structure that will work for you. You may want to pull out other percussion Kontakt instruments you have to compare, since there are many out there that don't distort yet have a healthy output volume (even the Kontakt factory library might be useful for this). Start with your loudest instruments first, since they will be most likely to distort, then work your way through the kit to the quieter stuff.

In the end it should all play "evenly", which is subjective, but thinking about what would sound balanced in an orchestral setting might be a good place to start. When setting up drum kits or percussion, I try to make them well balanced so they play naturally out of the box (as opposed to each piece being "as loud as possible") on an e-kit OR keyboard. That's a balancing act (adjusting velocity layering or curve) because drum pads have to be hit much harder than a keyboard to get top velocities. Comparing mine to NI Studio Drummer and a few other well-known Kontakt kits, I was able to make sure they would impress a first-time user on any controller without going into distortion. Hope this helps. :)
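The backtracking approach boils down to simple dB arithmetic. Here's a sketch with made-up stage names and trim values (they only mirror the path described above; they aren't Kontakt's real internals):

```python
# Illustrative gain staging: trims at each stage sum in dB.
# Stage names mirror the path described above (sample -> group -> insert ->
# output); the numbers are invented for the example.
stages = {
    "sample volume":   -6.0,
    "group output":    -3.0,
    "insert makeup":   +2.0,
    "instrument main": -1.5,
}

sample_peak_dbfs = -4.0                       # peak level of the raw sample
total_gain = sum(stages.values())             # -8.5 dB of net trim
output_peak = sample_peak_dbfs + total_gain   # -12.5 dBFS at the output

print(total_gain, output_peak)
headroom = 0.0 - output_peak                  # distance to 0 dBFS
print(headroom)                               # 12.5 dB of headroom left
```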
 
Meanwhile, nobody actually has a definitive answer re headroom in Kontakt.
Since there are very few ways to monitor the signal chain, but it's easy to have hundreds of voices streaming through a buss, the issue seems relevant.
 
I'm pretty good with gain structure in general (I'm old and grew up with hardware); it's just the fact that we all use Kontakt, but there seems to be no way to check gain through the path.
 
I'm pretty good with gain structure in general (I'm old and grew up with hardware); it's just the fact that we all use Kontakt, but there seems to be no way to check gain through the path.

Back when NI introduced scripting meters, they said a meter could only be attached to an output, so far. A future version may be planned where we can place meters at different points in the signal path; until that happens, you are correct.
 
Back when NI introduced scripting meters, they said a meter could only be attached to an output, so far. A future version may be planned where we can place meters at different points in the signal path; until that happens, you are correct.

While I'm developing instruments, I will put a limiter in the chain as a way to meter and see if I'm peaking, then remove it, but this is obviously both tedious and perhaps not 'optimal'.

The main thing is that NOBODY seems to actually know the answer.
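The limiter trick works because a limiter's gain-reduction display is essentially a peak detector. A sketch of the information the inserted limiter is effectively showing (plain Python, not KSP):

```python
import math

# A limiter used only as a meter is really just a peak detector. This sketch
# computes the same information: the block peak in dBFS and whether it clips.
def peak_dbfs(buffer):
    peak = max(abs(s) for s in buffer)
    if peak == 0:
        return float("-inf")   # digital silence
    return 20 * math.log10(peak)

block = [0.1, -0.5, 0.25, -0.9, 0.3]
peak = peak_dbfs(block)
print(round(peak, 2))   # -0.92 dBFS
print(peak > 0)         # False: under 0 dBFS, not clipping
```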
 
I think what Norbz is hearing is a degrading signal from various effects (compressors, resampling, etc.) which alter the 'vertical' aspect of the waveform, and therefore bit depth matters.

It's a bit theoretical anyhow. Who would edit an 8-bit, 96 kHz recording because they never use compression, EQ, saturation, etc., and only time-stretch their material? :P
 
It's the usual data type you would use for audio processing, so I'm very doubtful it's anything else. Overloading an effect can be a factor of the algorithm itself, not the data type used for calculation.
 
It's the usual data type you would use for audio processing, so I'm very doubtful it's anything else. Overloading an effect can be a factor of the algorithm itself, not the data type used for calculation.
Yeah, so routing may have tons of headroom, but some inserts less? Yikes!
Anyway, wish we could get the ear of an NI coder on this.
 
No, not like that. Effects are all using the same variables for sure. The algorithm itself might have been created in such a way that it doesn't respond well to extreme amounts of gain. That's different.
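A toy contrast between the two cases (made-up effects, nothing from Kontakt): 32-bit float carries a signal well above full scale without damage, while a nonlinear algorithm saturates by design.

```python
import math

# 32-bit float happily carries values far above "full scale"; whether an
# effect overloads depends on its algorithm, not the data type. Two toy
# effects fed the same hot (+12 dB over full scale) signal:
hot = 4.0                       # a float sample at about +12 dBFS; no overflow

def clean_gain(x, gain):
    return x * gain             # linear: never "overloads" internally

def vintage_drive(x):
    return math.tanh(x)         # nonlinear: squashes hot signals by design

print(clean_gain(hot, 0.25))         # 1.0: the hot level was fully recoverable
print(round(vintage_drive(hot), 3))  # 0.999: the algorithm itself saturated
```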
 
Yeah, I understand re extreme gain, but isn't it weird that nobody can say definitively?
 