
Is there a worthwhile benefit to 24-bit samples over 16?

DSmolken

Senior Member
I did a bit of testing yesterday, and I can hear a difference between 16 and 24 bit samples in scenarios where there's a lot of gain in the signal path (compression with +36 dB of gain total, or a heavily distorted amp sim), and where a sample rings out completely and fades to silence with no other sounds audible.

So, let's say a quiet but compressed-to-hell pizz note is the last sound before complete silence in a piece of music, and the sample itself fades away to silence, using a volume envelope applied to the WAV file before it goes into the sampler. The noise in the tail fades out audibly more smoothly with 24 bits; with 16 bits it hisses for a bit, then ends suddenly.

So, there's some benefit in at least that one scenario, but is it worth using up 50% more disk space and loading time, and twice the RAM? Or should I just convert everything to 16 bit after I'm done with all the editing?
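If anyone wants to poke at this themselves, here's a rough Python sketch of the scenario (NumPy assumed; the tone, fade, and gain values are made up just to mirror what I'm describing):

import numpy as np

sr = 44100
t = np.arange(0, 4.0, 1 / sr)
# Made-up example: a 220 Hz tone starting around -20 dBFS and fading to silence.
signal = 10 ** (-20 / 20) * np.sin(2 * np.pi * 220 * t) * np.exp(-3 * t)

def quantize(x, bits):
    # Round to the nearest step on a signed integer grid (no dither, for simplicity).
    steps = 2 ** (bits - 1)
    return np.round(x * steps) / steps

gain = 10 ** (36 / 20)  # +36 dB of make-up gain, as in the compressed-pizz example
for bits in (16, 24):
    boosted = quantize(signal, bits) * gain
    tail = boosted[-sr:]  # the last second, where the tone has faded to near silence
    rms_db = 20 * np.log10(np.sqrt(np.mean(tail ** 2)) + 1e-30)
    # A huge negative number here means the undithered 16-bit tail has collapsed
    # to pure digital silence, i.e. the "ends suddenly" behaviour.
    print(f"{bits}-bit tail level after +36 dB gain: {rms_db:.1f} dBFS")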
 
I'm totally ignorant about sound engineering stuff, but the way I look at it is that audio bits are like visual pixels. So the more your production is aimed at big pictures (louder sound), the more you would need a higher pixel rate (bit rate), so as to avoid getting a distorted image (sound).
Could be total bullshit and definitely not an answer to the OP, but I'm interested to know the answer as well.
 
That is ultimately your choice. A lot of listeners would never scrutinize a piece of music for its lack of resolution at 16 bit. Some content creators might, but they're not the intended audience for the content. That said, I can hear an audible difference in bit depth far more readily than in sample rate, and I would prefer to work with 24-bit samples. This makes sense because going from 16 bit to 24 bit is not a 50% increase in resolution but a 256-fold one: every bit added doubles the number of amplitude steps, so 8 extra bits means 256 times as many. The creators I work with require 24-bit/44.1k. This is more of an issue for keeping the noise floor as low as possible and avoiding artifacts while mixing media in post.

I hope that helps.
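(A quick sanity check on that factor, as a Python sketch:)

# Each extra bit doubles the number of amplitude levels, so 8 extra bits = 2**8 = 256x.
print(2 ** 16, 2 ** 24, 2 ** 24 // 2 ** 16)  # 65536 16777216 256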
 
I'm frankly shocked that Kontakt doesn't have a low-res mode. I would love to be able to store the 24-bit version of a lib on an external drive, and keep only a low-bitrate (even lossy-compressed) lib on my laptop. I really don't need it to sound pristine most of the time. Let me render it out at full quality only when needed.

This is how literally every 3D graphics program (also resource heavy) works.
 
I often make the comparison of 16bit vs 24bit audio, to 8bit per channel (RGB) vs 16bit per channel image files. I'll always work with 16bit image files (where possible) for colour correction, given I'm working with 281 trillion colours in 16bit per channel mode instead of 16.8 million colors in 8bit per channel mode – it has far more latitude for fine adjustments & accurate colour correction.

I believe the equivalent benefits apply to mixing/processing with 24bit audio. For solo piano, I can certainly hear the quality & detail difference (especially in the upper mids & treble area - it sounds more 'open' and 3 dimensional to my ears) even with mid-range quality monitoring.
 
This is why I hang on to both EW Hollywood Gold and Diamond. Gold requires less space and CPU but, if needed, Diamond offers higher resolution (and more articulations or mic settings).
 
Ultimately, most consumer playback devices are 16 bit, so at some point everything needs to be down-converted. I would think that should happen as late as possible in the engineering process. I would certainly prefer running all FX etc. on higher-resolution data and going to 16 bits as part of the mastering chain.

A somewhat related issue for me is that my audio interface is natively 96 kHz/24 bit, and running my projects at that resolution simply gives me lower latency, even though it requires more memory.
 
Thanks, everyone. While I'm pretty convinced that it's not possible to distinguish between 16 and 24 bit audio in blind tests of just plain playback, it's not hard to hear a difference in artifacts in quiet tails when there's a lot of gain in a signal chain. And while 36 dB of gain (where that -96 dB noise floor becomes just -60 dB) might seem like a ton in orchestral music, it's probably not rare in a lot of less natural-sounding styles.

So I'm thinking that 24 bits might actually be worth it, though I'm going to try getting around the problems by applying a fadeout inside the sampler... If that doesn't work, I'll go 24 bit. If it does work, well, I'm not sure if that will really take care of all the downsides of 16 bits, or just one particular scenario, so I'd have to think about that some more.
 
Just FYI... the REAL answer to this question is - 'Where the noise floor occurs'. That's it - the only difference between a (properly dithered) 16 bit and 24 bit audio file is where the noise floor occurs. So around -96 dB for 16 bit and -144 dB for 24 bit.

There's no extra 'resolution' to be had (the camera/pixel analogy is just wrong because audio doesn't work the same way) - it's simply about the noise floor!
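If you want the back-of-envelope numbers behind that, here's a rough Python sketch (using the roughly 6.02 dB-per-bit rule):

# Quantization noise floor relative to full scale, approximately 20*log10(2**-bits).
from math import log10

for bits in (16, 24):
    print(f"{bits}-bit: roughly {20 * log10(2 ** -bits):.0f} dBFS noise floor")
# -> 16-bit: roughly -96 dBFS, 24-bit: roughly -144 dBFS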

This may be of interest, it explains sample rate and bit depth -
https://www.xiph.org/video/vid2.shtml
 
I'm totally ignorant about sound engineering stuff, but the way I look at it is that audio bits are like visual pixels. So the more your production is aimed at big pictures (louder sound), the more you would need a higher pixel rate (bit rate), so as to avoid getting a distorted image (sound).
Could be total bullshit and definitely not an answer to the OP, but I'm interested to know the answer as well.

Actually it is bullshit. :)

But not in a bad way, just in a "digital theory isn't always intuitive" way. :)

Basically, having more bits allows you to capture and keep more low-level detail.

We've had lots of explanations of digital theory here, so I'm going to be a weenie and refer people to the search function.
 
Just FYI... the REAL answer to this question is - 'Where the noise floor occurs'. That's it - the only difference between a (properly dithered) 16 bit and 24 bit audio file is where the noise floor occurs. So around -96 dB for 16 bit and -144 dB for 24 bit.

There's no extra 'resolution' to be had (the camera/pixel analogy is just wrong because audio doesn't work the same way) - it's simply about the noise floor!

This may be of interest, it explains sample rate and bit depth -
https://www.xiph.org/video/vid2.shtml

You may want to revisit your understanding of this topic. The resolution explanation and image analogies couldn’t be more spot on.

Digital sampling has two types of “errors” or “values that are above or below the actual intended signal value”.

There is sampling error, which occurs over the duration of the sampling period (t = 1/sample frequency): the sampled value is held fixed while the real signal keeps moving, and the difference between those two values is the sampling error.

The second type of error is called "quantization error". This is the difference between the actual signal value and the closest value that can be represented digitally at the given bit depth. The signal level has to be rounded up or down to the nearest step, which imparts a discrepancy between the intended signal and what you get back from the digitized waveform.

Audio resolution is quantified in terms of both bit depth and sample frequency, which is why it is specified this way. You are correct that it affects the noise floor, but to infer that it has little to no effect on dynamic and frequency response is simply untrue. Now, with that said, there is a whole other topic on how humans perceive these inaccuracies, but the math doesn't lie.
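If you want to put numbers on the quantization error, here's a rough Python sketch (NumPy assumed; the test tone is made up, and no dither is applied):

import numpy as np

sr = 48000
t = np.arange(0, 1.0, 1 / sr)
x = 0.5 * np.sin(2 * np.pi * 1000 * t)  # made-up 1 kHz tone at about -6 dBFS

for bits in (8, 16, 24):
    step = 2 ** -(bits - 1)                      # one quantization step on a signed grid
    err = np.round(x / step) * step - x          # quantization error vs. the true signal
    err_db = 20 * np.log10(np.sqrt(np.mean(err ** 2)))
    print(f"{bits}-bit quantization error: {err_db:.1f} dBFS RMS")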
 
There's no extra 'resolution' to be had (the camera/pixel analogy is just wrong because audio doesn't work the same way) - it's simply about the noise floor!

My analogy is appropriate here. FYI:
Definition of Analogy
1a: a comparison of two otherwise unlike things based on resemblance of a particular aspect.
 
You are correct that it affects the noise floor. But to infer that it has little to no effect on dynamic and frequency response is crudely untrue.

What does bit depth have to do with frequency response? If I remember correctly, bit depth is simply the number of possible values a signal's amplitude can be quantized to. These values represent volume, and have nothing to do with frequency.

Say you take a PCM sound file and increase the value of each sample by one. With 16-bit audio, you increased its amplitude by 0.0015% of the entire range of possible dynamic values. Increasing bit depth simply means that you can make volume increases/decreases at an even finer resolution. 24-bit lets you increment amplitude by 0.000006% of the entire possible dynamic range. What I'm saying is, if you couldn't already hear the difference in a 0.0015% volume increase (and you can't), then the ability to make a 0.000006% increment seems completely unnecessary.

The only relevant thing that bit depth changes is the noise floor. Sure, the pixel/image analogy is apt, but we're dealing with resolutions so high that you really can't tell the difference. I can't tell the difference between a 16-bit and a 24-bit version of the same recording. They just sound the same to me. I could only tell which is which if I had to do any work where the noise floor became relevant.

That being said, when summing signals together in mixing, it helps to have as low a noise floor as possible, so that there's less build-up.
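(For reference, those percentages are just one step out of 2^16 or 2^24 of the full range; a quick Python sketch:)

# One LSB as a fraction of the full-scale range.
for bits in (16, 24):
    print(f"{bits}-bit step: {100 / 2 ** bits:.7f}% of full scale")
# -> 16-bit: 0.0015259%, 24-bit: 0.0000060%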
 
What does bit depth have to do with frequency response? If I remember correctly, bit depth is simply the number of possible values a signal's amplitude can be quantized to. These values represent volume, and have nothing to do with frequency.

Say you take a PCM sound file and increase the value of each sample by one. With 16-bit audio, you increased its amplitude by 0.0015% of the entire range of possible dynamic values. Increasing bit depth simply means that you can make volume increases/decreases at an even finer resolution. 24-bit lets you increment amplitude by 0.000006% of the entire possible dynamic range. What I'm saying is, if you couldn't already hear the difference in a 0.0015% volume increase (and you can't), then the ability to make a 0.000006% increment seems completely unnecessary.

The only relevant thing that bit depth changes is the noise floor. Sure, the pixel/image analogy is apt, but we're dealing with resolutions so high that you really can't tell the difference. I can't tell the difference between a 16-bit and a 24-bit version of the same recording. They just sound the same to me. I could only tell which is which if I had to do any work where the noise floor became relevant.

That being said, when summing signals together in mixing, it helps to have as low a noise floor as possible, so that there's less build-up.

The reason why I claimed bit depth affects frequency response is based on what happens if you play back a recording at 8 bit or 4 bit. The frequency response shows a high-end rolloff because the format can no longer render the harmonic components (timbral overtones) up there, which, per the Fourier series, get lower in amplitude the higher the harmonic number. The same happens between bit depths of 24 and 32, but those frequency response anomalies occur well beyond audible thresholds, so the difference is far less noticeable, if at all; still, there is a theoretical frequency response change with bit depth.

The reason why we notice a more dramatic difference at lower dynamic levels is the logarithmic way in which we hear sound. The dynamic grid that digital waveforms are quantized to is linear in amplitude, not logarithmic. So softer sounds come under a greater degree of digital scrutiny, so to speak.
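A quick way to see that last point: the same one-step change is a much bigger jump in dB near the bottom of the scale than near the top. A rough Python sketch (the two test levels are arbitrary):

from math import log10

step = 2 ** -15  # one 16-bit LSB as a fraction of full scale
for level in (0.5, 0.001):  # roughly -6 dBFS vs -60 dBFS
    jump_db = 20 * log10((level + step) / level)
    print(f"at {20 * log10(level):.0f} dBFS, one LSB is a {jump_db:.3f} dB jump")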
 
I'm totally ignorant about sound engineering stuff, but the way I look at it is that audio bits are like visual pixels. So the more your production is aimed at big pictures (louder sound), the more you would need a higher pixel rate (bit rate), so as to avoid getting a distorted image (sound).
Could be total bullshit and definitely not an answer to the OP, but I'm interested to know the answer as well.

Your example is pretty much the opposite of what's true.

If you're always going to capture loud sounds - that is, you're recording sounds that always hit near the top few db on your meters, then it's more acceptable to use only 16 bits.

But you want to use 24 bits if you're going to capture quiet sounds - those that barely flicker the bottom few blocks on your meters.... or, to put it more correctly, those sounds that have a very wide dynamic range between their quietest and loudest extremes.

Say you want to record bashing on trash cans, and you will not be playing any quiet or soft passages - it's all going to be triple-forte kabooms. In that case, you might not notice a difference between 16 and 24 bit signals.

But if you want to record brushes on a frame drum, and you want to play the lightest little scrapes followed by the loudest smacks, and you set your levels so the loudest smacks are near zero - then the lightest little scrapes will be a zillion db below those loud levels, and only be lighting the bottom couple of blocks on your meters. That's an example of a source with a wide dynamic range, and which would benefit from being captured (and stored) at 24 bits.

So a typical orchestral library, which might have triple-pianissimo and triple-forte string section samples in the same patch, will definitely benefit from being stored and played back at 24 bits.

Think of it this way - each additional bit added to a digital signal doubles the number of vertical steps in the waveform that can be represented. That translates to 6db of "vertical" resolution per bit, so a 16 bit signal can have 96 db of range between the loudest full-level signal and zero, while a 24 bit signal can have 144 db of range.

So if you're recording/storing/playing back a 16 bit signal, and the quietest pp samples are 48db below the loudest ff samples (which is a realistic scenario), then those pp samples will be using up only half of the range - and will thus effectively be 8-bit samples. You would probably hear that as a grainy, noisy, or low-resolution sound.

But if you're recording/storing/playing back a 24 bit signal, that quietest sample which is 48db below the loudest sample will still be using up 16 bits (24 bits minus 8 bits = 144db minus 48db), and will still have 96db of range. So even the quiet sounds in a 24 bit recording can have the same resolution as the loudest sounds in a 16 bit recording.

(Note that none of this has anything to do with sampling rate (48 vs 96 etc) and only refers to the "vertical" resolution of the audio being captured, which audibly translates into "signal to noise ratio" or "noise floor" issues.)

If you're recording loud, blasting sounds with very little range between the loudest and softest passages (heavy metal guitar etc.), and which are always peaking near the top of the meters, then you may never hear the difference between 16 and 24 bit recordings.

But orchestral sounds ain't like that. So 24 bit capture is a must, and 24 bit storage really does help.
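As a quick sanity check on the math above, here's a rough Python sketch of the bits-left-over idea (assuming the ~6.02 dB-per-bit rule of thumb; the helper name is just for illustration):

# Effective bit depth of a sample sitting some number of dB below full scale.
def effective_bits(container_bits, db_below_full_scale):
    return container_bits - db_below_full_scale / 6.02

for container in (16, 24):
    print(f"{container}-bit file, sample at -48 dBFS: "
          f"about {effective_bits(container, 48):.0f} bits effective")
# -> 16-bit container: about 8 bits, 24-bit container: about 16 bits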
 
+1 Charlie!
You spoke the truth, the way it is... Samples with a lot of built-in dynamics are crying out for 24-bit recordings. ;)
 
+1 Charlie!
You spoke the truth, the way it is... Samples with a lot of built-in dynamics are crying out for 24-bit recordings. ;)

s'truth! Last week I was doing some drum sampling in a big, quiet space, and I was capturing single hits of exactly the type of thing I mentioned above - frame drums played with brushes. The loudest smacks were quite loud, and when I set the levels so they were within 6db of clipping, the quietest pp samples were as much as 30db below the loudest ones.

Even after pretty heavy compression of the recorded files inside the DAW, even the soft hits were clean and clear, with no grain or noise. A little thumbnail math tells me that those quiet hits, which were recorded at between 42db and 36db below full-scale, still had around 17-18 bits of dynamic range. If that session had been captured at 16 bits, then those quiet samples would be around 9-10 bits deep - and that's starting to push the limits of what I'd consider workable when the resulting samples will be loaded into Kontakt/EXS and then further mixed and processed in the DAW.

Those 17-18 bit samples might be played from a sampler that is further manipulating the levels via velocity and whatever else, and then mixed into a complete composition where they're even quieter than they were before, so even though the DAW's mix engine might be operating at 32 or 64 bit float (so it won't permanently truncate the bit depth if the level is reduced by the channel fader and then increased by compression/limiting on the mix bus), the samples might wind up at an even lower level than they were originally recorded at. (This will usually be the case in a finished mix, no matter what the samples are.)

So having a 24-bit source is the way to go for sure.

At the moment, I don't mess with "32-bit float" storage of audio files or samples, since I'm usually well in control of levels when recording and editing. But I can see how it could be useful for sound editors / fx designers / mixers on the dub stage, or wherever absolutely healthy levels cannot be maintained:

Say you're preparing some pre-dubs of dialog, walla, city noise backgrounds, etc. In that situation you might pre-mix a stem containing those elements, and you might want to put them "in perspective", level-wise, and then print that stem at those levels for delivery to the dub stage. Now those stems will have teeny-tiny levels so that the mixers don't have to pull the faders way the hell down to get them in perspective. That's a situation where storing the stems as 32-bit float would prevent those stems from being permanently bit-truncated when you print them at the low levels the mixers will want. They'd retain a massive dynamic range so that if the levels were boosted heavily the signal would still have a large bit depth. Similarly, recording an "Icelandic strings" session (like Spitfire Tundra) you might want to use 32-bit float storage so that those teeny-tiny signals don't get turned into 7-bit files at some point in the digital pathway.

But for most sampling, composition, and music recording needs 32-bit float storage can be overkill. And, of course, there's really no such thing as a 32-bit A>D converter for audio recording. (Yes, I know such things exist in the lab, for various non-analog-audio applications like analysis or whatever, but it's not like you can get an Avid 32-bit converter for ProTools.)

TL;DR = If you can remember to check your levels and not accidentally record your frame drums 70db below full-scale, 24-bit recording and storage is just fine. But 32-bit float storage is a nice option if you need to print stems with microscopic levels, and might need to boost them at a later point.
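To put numbers on the "microscopic stems" point, here's a rough Python sketch (NumPy assumed; the stem signal and levels are made up): print a very quiet stem as 16-bit fixed point vs 32-bit float, then boost it back up and compare the damage.

import numpy as np

sr = 44100
t = np.arange(0, 1.0, 1 / sr)
stem = 10 ** (-70 / 20) * np.sin(2 * np.pi * 440 * t)  # made-up stem printed around -70 dBFS

as_int16 = np.round(stem * 32767) / 32767   # what a 16-bit fixed-point print preserves
as_float32 = stem.astype(np.float32)        # what a 32-bit float print preserves

gain = 10 ** (60 / 20)  # pull the stem back up by 60 dB later in the chain
for name, printed in (("16-bit int", as_int16), ("32-bit float", as_float32)):
    err = printed.astype(np.float64) * gain - stem * gain
    err_db = 20 * np.log10(np.sqrt(np.mean(err ** 2)) + 1e-30)
    print(f"{name}: error after the +60 dB boost = {err_db:.1f} dBFS RMS")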
 
@charlieclouser are the numbers in standard 16-bit WAV files floats or ints?

Edit: I guess based on your descriptions they must be integers. I just wonder why 16-bit int and not 16-bit float. Would the bits lost to the "floating point" functionality of that data type make the louder volumes too low-res in bit depth compared to ints?
 