Is there a worthwhile benefit to 24-bit samples over 16?

Note that a good part of what gets captured with 24-bit recording is the ends of reverb tails and room sound. You can often hear that stuff into the noise floor, especially if you raise the level way up.

Now, many if not most sample libraries are either normalized or raised so they're playable. And there's also a good chance you're going to compress them, which will raise the low-level details (even if your goal is to smooth out the sound and increase the density).

In that case it almost certainly doesn't matter whether you use 16- or 24-bit samples. That was Andrew K's conclusion when he released LASS, for instance. He included 24-bit programs as well as the 16-bit ones, but the first release was a few years ago when memory and storage were both scarce resources.
 
If I may foolishly wade into this with a simple, but more apt analogy:
24-bit vs 16-bit audio is like 8-bit vs 16-bit color depth in a picture. It is sample resolution (e.g. 44.1, 96 kHz) that is like image resolution and resulting pixelation.
Sample rate is much more important than bits per sample, in my opinion, although there are diminishing returns with that as well.
 
Note that a good part of what gets captured with 24-bit recording is the ends of reverb tails and room sound. You can often hear that stuff into the noise floor, especially if you raise the level way up.

Now, many if not most sample libraries are either normalized or raised so they're playable. And there's also a good chance you're going to compress them, which will raise the low-level details (even if your goal is to smooth out the sound and increase the density).

In that case it almost certainly doesn't matter whether you use 16- or 24-bit samples. That was Andrew K's conclusion when he released LASS, for instance. He included 24-bit programs as well as the 16-bit ones, but the first release was a few years ago when memory and storage were both scarce resources.

Yes, if you've recorded at 24-bit, and then done some normalizing / level changing to bring up the samples to a healthy level before playing them in a sampler, then it's much more acceptable to use a version of the samples that have been truncated to 16-bit.

I often prefer to jack up the level of softer samples, and then use the sampler's velocity>volume functions to recreate the dynamic scale - this allows me to lessen that effect, which "compresses" the dynamic range. This sounds different to applying an audio compressor plugin on the sampler's output, and is more of a "playing" effect rather than a "mixing" effect.

So in those situations, as with LASS, storing and playing the samples in 16-bit format isn't a high crime or misdemeanor - but when it comes to the original recording process, doing it at 16-bit means you've got to be very careful about recording levels to avoid accidentally capturing only 9 bits worth of signal on the softer sounds. That's why I'm sure LASS was recorded at 24-bit, then edited (and possibly normalized / gain changed), and then output in both 16 and 24 bit formats. This makes sense.

But many sample libraries do not raise the gain of softer samples - they may "normalize as a group" but not get into the weeds of normalizing every sample. One reason a developer would do this is that it's easier to accurately replicate the playing dynamics of the original instrument, by ensuring that the sampler's velocity>volume amount is set to zero (which is sadly not always the case). Then a soft sample plays with the same gain applied as the loud samples, and everything should play back as it was recorded.

Although this approach is the easiest way to replicate the original dynamics of the samples as they were recorded, it can make it difficult to do the "compress dynamics" thing I described above - to do so you'd need to apply an inverse (negative) amount of velocity>volume, so that the softer samples get played back louder than they were recorded while the louder samples get played back normally (basically). It's this type of library where the 24-bit-ness of the source samples is more important - if they're stored at 16-bit then those soft samples will be at an unacceptably reduced bit depth, and will possibly then be boosted in level when you muck about with the velocity>volume amounts, perhaps revealing their truncated depth even more.
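To make the inverse velocity>volume idea concrete, here's a hypothetical sketch in Python - the curve shape and the function name are my own invention for illustration, not any particular sampler's actual implementation:

```python
import math

def velocity_gain_db(velocity: int, amount: float) -> float:
    """Hypothetical velocity>volume curve: amount=1.0 tracks velocity fully,
    amount=0.0 plays every sample at its recorded level, and a negative
    amount boosts soft samples relative to loud ones (the 'inverse' trick)."""
    if not 1 <= velocity <= 127:
        raise ValueError("MIDI velocity must be 1-127")
    # Full velocity (127) is always the 0 dB reference point.
    return amount * 20 * math.log10(velocity / 127)

# amount = 0: soft and loud samples both play back exactly as recorded
print(velocity_gain_db(32, 0.0))            # 0.0 dB

# amount = -0.5: a soft note (vel 32) gets boosted above its recorded level
print(round(velocity_gain_db(32, -0.5), 1))  # ~ +6.0 dB
```

With a negative amount, the quiet samples are exactly the ones that get pushed up in level at playback - which is where any 16-bit truncation in those samples becomes most audible.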

If we're talking about a drum library where velocity usually controls volume, this analogy holds mostly true - but when we're talking about a string, brass, or winds library where you're crossfading between the quiet and loud samples from the mod wheel or whatever, then it becomes even more likely that a quiet sample will be played loud - and if you're in 16-bit mode then it's even more likely that you'll reveal a truncated / low-depth sample.

In this age of cheaper memory and storage, and fast SSDs, just staying at 24 bits all the time is not as much of a performance penalty as it was when LASS was released, when a 500 GB spinning drive was state-of-the-art.

TL;DR part 2 = Always record and edit audio at 24-bit (unless maybe it's a low-dynamic-range source like metal guitar or something like that where 16-bit might be more acceptable... but still). Don't fear using 16-bit versions of sample libraries if they're provided, especially if the samples within the libraries have been normalized - but keep an eye/ear out for any situations where quiet 16-bit samples get played loudly.
 
@charlieclouser we practice classical music live on VI pianos. Most piano VIs are 16 bit; is there a benefit for us playing live to try 24 bit options? Or is this a waste of time?

Those might include PianoTeq, Production Voices gold, EastWest platinum, Galaxy II, etc.
 
The Wikipedia article has some good additional information, including an explanation of the 1 bit "equals" 6 dB statement made by @charlieclouser previously.
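For the curious, that figure falls straight out of base-2 math - each extra bit doubles the number of amplitude steps, and doubling amplitude is 20*log10(2) ≈ 6.02 dB. A quick sketch (my own illustration, not from the article):

```python
import math

# Each extra bit doubles the number of amplitude steps;
# doubling amplitude corresponds to +20*log10(2) dB.
DB_PER_BIT = 20 * math.log10(2)          # ≈ 6.02 dB

def dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range of n-bit linear PCM."""
    return bits * DB_PER_BIT

print(round(dynamic_range_db(16), 1))    # 96.3 dB
print(round(dynamic_range_db(24), 1))    # 144.5 dB
```

Which is where the commonly quoted "96 dB for 16-bit, 144 dB for 24-bit" numbers come from.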

I always thought of the 24 vs 16 question as a basic issue of resolution, but it was interesting to hear how it relates directly to accurate representation of low-level recorded data (which makes sense, of course). I had not previously given that second perspective much thought, even though it's just as relevant.
 
That's why I'm sure LASS was recorded at 24-bit, then edited (and possibly normalized / gain changed), and then output in both 16 and 24 bit formats. This makes sense.

Oh yes. Again, he included both. I believe they're now all 24-bit, but I could be wrong (I don't know how to tell).
 
If I may foolishly wade into this with a simple, but more apt analogy:
24-bit vs 16-bit audio is like 8-bit vs 16-bit color depth in a picture. It is sample resolution (e.g. 44.1, 96 kHz) that is like image resolution and resulting pixelation.
Sample rate is much more important than bits per sample, in my opinion, although there are diminishing returns with that as well.

The problem with analogies is that they always break down at some point, and when you're talking about digital audio - which is unintuitive - they can be misleading.

Intuitively, you'd think that increasing the sample rate means you have more points to represent the waveform, therefore it's more like the original - in the same way that increasing the color bit depth gives you more colors to represent the original picture.

But that's not at all how it works. Sound is all sine waves (picture a speaker moving in and out). So increasing the sample rate only allows you to record higher frequencies. Music that doesn't go above 4kHz sounds identical sampled at 8kHz and at 96kHz.
 
I did a bit of testing yesterday, and I can hear a difference between 16- and 24-bit samples in scenarios where there's a lot of gain in the signal path (compression with +36 dB of gain total, or a heavily distorted amp sim), and where a sample rings out completely and fades to silence with no other sounds audible.

So, let's say a quiet but compressed to hell pizz note is the last sound before a complete silence in a piece of music, and the sample itself fades away to silence, using a volume envelope applied to the WAV file before it goes into the sampler. The noise in the tail fades out audibly smoother with 24 bits, and with 16 bits it hisses for a bit, then ends suddenly.

So, there's some benefit in at least that one scenario, but is it worth using up 50% more disk space and loading time, and twice the RAM? Or should I just convert everything to 16-bit after I'm done with all the editing?
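That "hisses for a bit, then ends suddenly" behavior is easy to reproduce in a toy simulation - a hedged sketch, where the sample rate, frequency, and decay rate are arbitrary demo values (not from the actual test above): a fading tone truncated to 16 bits collapses to digital silence well before the same tone at 24 bits does.

```python
import math

FS = 8000        # low sample rate just to keep the demo fast
F = 440.0        # test tone frequency in Hz
DECAY = 20.0     # exponential decay rate (1/s); arbitrary demo value

def last_audible_sample(bits: int) -> int:
    """Index of the last nonzero sample after quantizing a fading tone."""
    scale = 2 ** (bits - 1) - 1            # e.g. 32767 for 16-bit
    last = -1
    for n in range(FS):                    # one second of audio
        t = n / FS
        x = math.sin(2 * math.pi * F * t) * math.exp(-DECAY * t)
        if round(x * scale) != 0:          # still above half an LSB
            last = n
    return last

# The 24-bit tail keeps fading long after the 16-bit one has
# dropped below half an LSB and gone to digital silence.
print(last_audible_sample(16), last_audible_sample(24))
```

The extra 8 bits buy roughly 48 dB of additional headroom at the bottom, so the quantized tail tracks the "real" fade much further down before hitting zero.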

Often all dynamic levels of an instrument (p, mf, f, ff) are recorded with the same microphone gain, so if you pull up the quieter articulations you get way better sound with 24 bit.

For example I play a soft gong hit, and pull it up in volume a lot, additionally compressing the tail so it lasts much longer. I had terrible noise when using ISW Orchestral Percussions in 16 bit, then installed 24 bit, and then it was practically noise free.
 
@charlieclouser we practice classical music live on VI pianos. Most piano VIs are 16 bit; is there a benefit for us playing live to try 24 bit options? Or is this a waste of time?

Those might include PianoTeq, Production Voices gold, EastWest platinum, Galaxy II, etc.

In any kind of live performance situation, you probably will struggle to hear the difference between 16- and 24-bit versions of the piano sample libraries you're using. All of those were most probably originally sampled and edited at 24 bits, and then reduced to 16-bit after the fact - mainly so that companies like East-West can offer a "lite" version with only the 16-bit samples and sell the full version with 24-bit samples and more mic positions / velocity ranges / etc. for a higher price. So in all likelihood the benefits of 24-bit-ness have been baked in from the start, and much like my example above with LASS, the fact that the developer has reduced them after the fact to 16-bit for a "lite" version does not damage the sound quality to any great degree.

The benefit of using any of those libraries in their "lite" version is that the files are much smaller and will be much less of a load on the computers used to play them back.

The benefits of using the full, "pro" version, with 24-bit samples, would mostly be when using them in a recording situation. Say you're doing a movie trailer in the style that we hear so much these days: A single, lonely piano note going "ding" every ten seconds against a backdrop of absolute silence, and you want to play very lightly because you love the way the quieter note velocities sound as opposed to the loud ones, but then you're going to boost the level of those quiet notes so they're stupidly loud in the theater. In that situation, where the listener will be hearing a single, naked, high piano note that's completely exposed, played through a massive THX theater sound system.... you might be able to hear the difference between the 16 and 24 bit versions, but probably only as the note has decayed almost to silence.

In reality? Nah. 16 vs 24 probably makes no difference, since the original samples were almost assuredly done at 24-bit depth and then truncated after the fact. So it's not something to lose sleep over.

Most of my excruciatingly long-winded explanations refer more to the process of recording samples (or whatever audio you're recording). In those situations, using 24-bit is definitely recommended, since sounds which are quiet when they're recorded will not suffer if they are boosted in level at some later point - or more accurately - quiet sounds that are originally recorded at 16-bit may suffer more than those originally recorded at 24-bit if they are boosted in level at some later point in the process.
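That "boosted later" point can be quantified with a small simulation - an illustrative sketch, where the -40 dBFS source level and test tone are my own assumptions. Boosting afterwards raises the signal and the quantization noise by the same amount, so the signal-to-noise ratio set at capture time is what you're stuck with:

```python
import math

FS, F = 8000, 440.0
PEAK = 0.01          # a quiet source, about -40 dBFS

def snr_after_capture_db(bits: int) -> float:
    """SNR of a quiet tone after quantization. Any later boost raises
    signal and quantization noise equally, so this ratio is locked in."""
    scale = 2 ** (bits - 1) - 1
    sig = err = 0.0
    for n in range(FS):
        x = PEAK * math.sin(2 * math.pi * F * n / FS)
        q = round(x * scale) / scale       # the value actually stored
        sig += x * x
        err += (q - x) ** 2
    return 10 * math.log10(sig / err)

# 16-bit capture of a -40 dBFS signal leaves far less SNR in reserve
# than a 24-bit capture of the same signal.
print(round(snr_after_capture_db(16)), round(snr_after_capture_db(24)))
```

The 24-bit capture comes out roughly 48 dB (8 bits' worth) ahead, which is exactly the margin you're spending when you later crank a quiet sample up to a healthy level.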

Clear as mud, right?
 
Often all dynamic levels of an instrument (p, mf, f, ff) are recorded with the same microphone gain, so if you pull up the quieter articulations you get way better sound with 24 bit.

For example I play a soft gong hit, and pull it up in volume a lot, additionally compressing the tail so it lasts much longer. I had terrible noise when using ISW Orchestral Percussions in 16 bit, then installed 24 bit, and then it was practically noise free.

Yes, this is a real-world example of exactly what I was trying to explain.
 
Yeah, I'm not at all convinced it'll make any difference for quiet sounds played quietly, but for quiet sounds that are brought up in volume, there's a definite difference. And I've done quite a bit of barely-audible stuff like hi-hat return noises where it would have been a good idea. I'm now pretty convinced it's worth going 24-bit from now on, and also including a 16-bit version for things that are huge and likely to be played back without a ton of gain (say, piano for classical practice, where you want the dynamics to be natural).

I'm not at all convinced that quiet sounds played back quietly sound any worse at 16 bits, and all the double blind testing I can find seems to agree, but with samples that's rarely the case these days.

Even if the noise floor in the recordings is well above -96 dB, I get tails that fade to silence smoother with 24 bits, because I can apply an artificial fade to the "real" noise floor at higher resolution. Seems dumb when you read it written like that, but makes things sound much better when things end.
If you're recording loud, blasting sounds with very little range between the loudest and softest passages (heavy metal guitar etc.), and which are always peaking near the top of the meters, then you may never hear the difference between 16 and 24 bit recordings.
Recording metal guitar coming out of a distorted amp, yeah, extra bits would pretty much be going to waste. But if you're recording your metal guitar as a DI signal to be distorted later, then 24 bits are also worth it IMO.
 
Often all dynamic levels of an instrument (p, mf, f, ff) are recorded with the same microphone gain, so if you pull up the quieter articulations you get way better sound with 24 bit.

For example I play a soft gong hit, and pull it up in volume a lot, additionally compressing the tail so it lasts much longer. I had terrible noise when using ISW Orchestral Percussions in 16 bit, then installed 24 bit, and then it was practically noise free.

I would think it would be better to change the mic gain when recording different dynamics, but even if you're not going to do that, record at 24-bit; then, to make the 16-bit version of the library, normalize each sample to full volume, render to 16-bit, and apply inverse gain at sample playback. That way every sample, regardless of dynamic level, gets a full 16 bits of dynamic range. I bet you'd be hard pressed to hear any issues with a library made like that.
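A quick sanity check of that normalize / render / inverse-gain scheme, as a toy Python simulation (the -26 dBFS sample level and test tone are my own assumptions): the round-trip error shrinks by roughly the normalization gain factor.

```python
import math

FS, F = 8000, 440.0
PEAK = 0.05          # a quiet pp sample, about -26 dBFS

def rms_error_16bit(normalize: bool) -> float:
    """RMS error vs. the original after a 16-bit round trip, optionally
    normalizing before render and undoing it at playback."""
    gain = 1.0 / PEAK if normalize else 1.0
    scale = 2 ** 15 - 1
    total = 0.0
    for n in range(FS):
        x = PEAK * math.sin(2 * math.pi * F * n / FS)
        stored = round(x * gain * scale) / scale   # 16-bit file contents
        q = stored / gain                          # inverse gain at playback
        total += (q - x) ** 2
    return math.sqrt(total / FS)

plain = rms_error_16bit(normalize=False)
normed = rms_error_16bit(normalize=True)
print(plain / normed)   # error shrinks by roughly the gain factor (20x here)
```

In effect, the normalization spends the full 16 bits on the sample itself and pushes the quantization floor down by however much gain was applied - which is why a library built this way is hard to fault.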
 
I would think it would be better to change the mic gain when recording different dynamics, but even if you're not going to do that, record at 24-bit; then, to make the 16-bit version of the library, normalize each sample to full volume, render to 16-bit, and apply inverse gain at sample playback. That way every sample, regardless of dynamic level, gets a full 16 bits of dynamic range. I bet you'd be hard pressed to hear any issues with a library made like that.

This is sort of what you'll find if you scrutinize the samples in old 90's era ROMplers like Korg and Roland workstations. Even the pp samples in a piano multisample will often be normalized and full-on in level. That may be due to the various data compression schemes in use in that era - I think they probably weren't using fancy digital data compression algorithms like we have today, but were just simple loudness "companding" schemes like you'd find in a DBX noise reduction unit or an Emax sampler. So it's not exactly the same as what you're talking about, but... still cool. It might have been that they DID use some digital data compression scheme that performed poorly on low-level signals and therefore wanted to normalize the samples before encoding to reduce any additional artifacts coming from the compression algorithm. I can't say.

The trick of changing mic preamp gain during recording is one way to get the clearest signal and maximize the performance of your A>D converters, but can possibly lead to trouble when you attempt to edit and reassemble the samples into a realistic-sounding sampler instrument because the absolute loudness relationships between quiet and loud samples must be reproduced by the sampler's engine. Not a big deal if you keep track of what you're doing though.

One thing that makes it easier is to use a preamp with 6 dB detents on the mic preamp gains, so you can just go "up one click" on the middle samples and "up two clicks" on the soft samples or whatever - and in fact I've done this recently on some drum samples because my CraneSong Spider has exactly that feature. Then you just have to keep track of how many clicks you've boosted each batch of samples, and use the per-sample gain functions in the sampler to reverse the process - or not, depending on how you want the resulting sampler instrument to respond.
 
The trick of changing mic preamp gain during recording is one way to get the clearest signal and maximize the performance of your A>D converters, but can possibly lead to trouble when you attempt to edit and reassemble the samples into a realistic-sounding sampler instrument because the absolute loudness relationships between quiet and loud samples must be reproduced by the sampler's engine. Not a big deal if you keep track of what you're doing though.

That's what most samplers do pretty well (automatically, or done by hand/ear). I think this is the best way to produce a great-sounding sample library.
 
If I may foolishly wade into this with a simple, but more apt analogy:
24-bit vs 16-bit audio is like 8-bit vs 16-bit color depth in a picture. It is sample resolution (e.g. 44.1, 96 kHz) that is like image resolution and resulting pixelation.
Sample rate is much more important than bits per sample, in my opinion, although there are diminishing returns with that as well.

Have you not learned from my earlier post here, and how humiliatingly I was bashed for it? :laugh:
 