# Is there a worthwhile benefit to 24-bit samples over 16?



## DSmolken (Jul 9, 2019)

I've done a bit of testing yesterday, and I can hear a difference between 16 and 24 bit samples in scenarios where there's a lot of gain in the signal path (compression with +36 dB gain total, or a heavily distorted amp sim), and where a sample rings out completely and fades to silence with no other sounds audible.

So, let's say a quiet but compressed to hell pizz note is the last sound before a complete silence in a piece of music, and the sample itself fades away to silence, using a volume envelope applied to the WAV file before it goes into the sampler. The noise in the tail fades out audibly smoother with 24 bits, and with 16 bits it hisses for a bit, then ends suddenly.

So, there's some benefit in at least that one scenario, but is it worth using up 50% more disk space and loading time, and twice the RAM? Or should I just convert everything to 16 bit after I'm done with all the editing?
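A quick numpy sketch (not from the thread, just an illustration of the scenario above) quantizes a fading sine to 16- and 24-bit grids, applies the +36 dB of make-up gain, and measures the quantization noise left in the faded-out tail. The test tone, fade shape, and tail window are arbitrary stand-ins for the pizz example:

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr                                  # one second
signal = np.sin(2 * np.pi * 440 * t) * np.exp(-6 * t)   # 440 Hz, fading to near silence

def quantize(x, bits):
    """Round to the nearest step of a signed fixed-point grid."""
    step = 2.0 ** -(bits - 1)
    return np.round(x / step) * step

gain = 10 ** (36 / 20)            # the +36 dB make-up gain from the scenario
tail = slice(int(0.8 * sr), sr)   # the last 0.2 s, where the sample has nearly faded out

results = {}
for bits in (16, 24):
    noise = (quantize(signal, bits) - signal)[tail] * gain
    results[bits] = 20 * np.log10(np.sqrt(np.mean(noise ** 2)))

for bits, level in results.items():
    print(f"{bits}-bit: boosted tail noise ~ {level:.1f} dBFS")
```

The 16-bit tail noise lands tens of dB above the 24-bit tail noise once the gain is applied, which matches the audible hiss-then-stop described above.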


----------



## Batrawi (Jul 9, 2019)

I'm totally ignorant about sound engineering stuff, but the way I look at it is that audio bits are like visual pixels. So the more your production is aimed at big pictures (louder sound), the more you need a higher pixel rate (bit rate) to avoid getting a distorted image (sound).
Could be total bullshit and definitely not an answer to the OP, but I'm interested to know the answer as well.


----------



## K. Johnston (Jul 9, 2019)

That is ultimately your choice. A lot of listeners would never scrutinize a piece of music for its lack of resolution at 16bit. Some content creators perhaps could, but not the intended audience for the content. That said, I can hear an audible difference in bit accuracy much more so than sample rate differences, and would prefer to work with 24bit samples. This makes sense because going from 16bit to 24bit is not a 50% increase in resolution but a 25600% increase in resolution. Remember, every bit added doubles your resolution. The creators I work with require 24bit/44.1k. This is more of an issue for keeping the noise floor as low as possible and avoiding artifacts while mixing media in post.

I hope that helps.


----------



## MaxOctane (Jul 9, 2019)

I'm frankly shocked that Kontakt doesn't have a low-res mode. I would *love* to be able to store the 24bit version of a lib on an external drive, and keep only a low-bitrate (even _lossy-compressed_) lib on my laptop. I really don't need it to sound pristine most of the time. Let me render it out in full high quality only when needed.

This is how literally every 3D graphics program (also resource heavy) works.


----------



## CGR (Jul 9, 2019)

I often compare 16bit vs 24bit audio to 8bit-per-channel (RGB) vs 16bit-per-channel image files. I'll always work with 16bit image files (where possible) for colour correction, given I'm working with 281 _trillion_ colours in 16bit-per-channel mode instead of 16.8 million colours in 8bit-per-channel mode – it has far more latitude for fine adjustments & accurate colour correction.

I believe the equivalent benefits apply to mixing/processing with 24bit audio. For solo piano, I can certainly hear the quality & detail difference (especially in the upper mids & treble area – it sounds more 'open' and three-dimensional to my ears), even with mid-range quality monitoring.


----------



## Fleer (Jul 9, 2019)

This is why I hang on to both EW Hollywood Gold and Diamond. Gold requires less space and CPU but, if needed, Diamond offers higher resolution (and more articulations or mic settings).


----------



## AllanH (Jul 9, 2019)

Ultimately, most consumer playback devices are 16 bit, so at some point everything needs to be down-converted. I would think that should happen as late as possible in the engineering process. I would certainly prefer to run all FX etc. on higher-resolution data and go to 16 bits as part of the mastering chain.

A somewhat related issue for me is that my audio interface is natively 96 kHz / 24 bit, and running my projects at that resolution simply gives me less latency, even though it requires more memory.


----------



## DSmolken (Jul 9, 2019)

Thanks, everyone. While I'm pretty convinced that it's not possible to distinguish between 16 and 24 bit audio in blind tests of just plain playback, it's not hard to hear a difference in artifacts in quiet tails when there's a lot of gain in a signal chain. And while 36 dB of gain (where that -96 dB noise floor becomes just -60 dB) might seem like a ton in orchestral music, it's probably not rare in a lot of less natural-sounding styles.

So I'm thinking that 24 bits might actually be worth it, though I'm going to try getting around the problems by applying a fadeout inside the sampler... If that doesn't work, I'll go 24 bit. If it does work, well, I'm not sure if that will really take care of all the downsides of 16 bits, or just one particular scenario, so I'd have to think about that some more.


----------



## JeeTee (Jul 9, 2019)

Just FYI... the REAL answer to this question is - 'Where the noise floor occurs'. That's it - the only difference between a (properly dithered) 16 bit and 24 bit audio file is where the noise floor occurs. So around -96 dB for 16 bit and -144 dB for 24 bit.

There's no extra 'resolution' to be had (the camera/pixel analogy is just wrong because audio doesn't work the same way) - it's simply about the noise floor!

This may be of interest, it explains sample rate and bit depth -
https://www.xiph.org/video/vid2.shtml
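For reference, the figures quoted above follow the standard rule of thumb of roughly 6.02 dB per bit (plus about 1.76 dB for a dithered full-scale sine, per the usual SQNR formula), which lands slightly below the round -96/-144 numbers:

```python
def noise_floor_db(bits):
    """Theoretical quantization noise floor for a dithered full-scale sine."""
    return -(6.02 * bits + 1.76)

for bits in (16, 24):
    print(f"{bits}-bit noise floor: {noise_floor_db(bits):.1f} dBFS")
```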


----------



## Nick Batzdorf (Jul 9, 2019)

Batrawi said:


> I'm totally ignorant about sound engineering stuff, but the way I look at it is that audio bits are like visual pixels. So the more your production is aimed at big pictures (louder sound), the more you need a higher pixel rate (bit rate) to avoid getting a distorted image (sound).
> Could be total bullshit and definitely not an answer to the OP, but I'm interested to know the answer as well.



Actually it is bullshit. 

But not in a bad way, just in a "digital theory isn't always intuitive" way. 

Basically, having more bits allows you to capture and keep more low-level detail.

We've had lots of explanations of digital theory here, so I'm going to be a weenie and refer people to the search function.


----------



## K. Johnston (Jul 9, 2019)

JeeTee said:


> Just FYI... the REAL answer to this question is - 'Where the noise floor occurs'. That's it - the only difference between a (properly dithered) 16 bit and 24 bit audio file is where the noise floor occurs. So around -96 dB for 16 bit and -144 dB for 24 bit.
> 
> There's no extra 'resolution' to be had (the camera/pixel analogy is just wrong because audio doesn't work the same way) - it's simply about the noise floor!
> 
> ...



You may want to revisit your understanding of this topic. The resolution explanation and image analogies couldn’t be more spot on.

Digital sampling has two types of “errors” or “values that are above or below the actual intended signal value”. 

There is sampling error, which occurs because the sampled value is held fixed for the duration of the sampling period (t = 1/sample frequency) while the real value is still moving. The difference between these two values is sampling error.

The second type of error is called “quantization error”. This is the difference between the actual true signal value and the closest value that can be represented digitally given the bit depth. The digital signal level has to be quantized up or down and imparts a discrepancy between the intended signal and what you get from the digitized waveform.

Audio resolution is quantified by both bit depth and sample frequency, which is why it is specified this way. You are correct that it affects the noise floor. But to infer that it has little to no effect on dynamic and frequency response is crudely untrue. Now with that said, there is a whole other topic on how humans perceive these inaccuracies, but the math doesn't lie.


----------



## CGR (Jul 9, 2019)

JeeTee said:


> There's no extra 'resolution' to be had (the camera/pixel analogy is just wrong because audio doesn't work the same way) - it's simply about the noise floor!



My analogy is appropriate here. FYI:
_*Definition of Analogy*
1a: a comparison of two otherwise unlike things based on resemblance of a particular aspect._


----------



## Geocranium (Jul 9, 2019)

K. Johnston said:


> You are correct that it affects the noise floor. But to infer that it has little to no effect on dynamic and frequency response is crudely untrue.



What does bit depth have to do with frequency response? If I remember correctly, bit depth is simply the number of possible values a signal's amplitude can be quantized to. These values represent volume, and have nothing to do with frequency.

Say you take a PCM sound file and increase the value of each sample by one. With 16-bit audio, you increased its amplitude by 0.0015% of the entire range of possible dynamic values. Increasing bit depth simply means that you can make volume increases/decreases at an even finer resolution. 24-bit lets you increment amplitude by 0.000006% of the entire possible dynamic range. What I'm saying is, if you couldn't already hear the difference in a 0.0015% volume increase (and you can't), then the ability to make a 0.000006% increment seems completely unnecessary.

The only relevant thing that bit depth changes is the noise floor. Sure, the pixel/image analogy is apt, but we're dealing with resolutions so high that you really can't tell the difference. I can't tell the difference between a 16-bit and a 24-bit version of the same recording. They just sound the same to me. I could only tell which is which if I had to do any work where the noise floor became relevant.

That being said, when summing signals together in mixing, it helps to have as low a noise floor as possible, so that there's less build-up.
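The percentages in the post above check out; one quantization step (LSB) as a share of the full code range at each bit depth:

```python
def step_percent(bits):
    """One quantization step (LSB) as a percentage of the full code range."""
    return 100.0 / 2 ** bits

print(f"16-bit: {step_percent(16):.4f}% per step")    # ~0.0015%
print(f"24-bit: {step_percent(24):.6f}% per step")    # ~0.000006%
```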


----------



## Nick Batzdorf (Jul 9, 2019)

K. Johnston... well... not quite. But never mind.


----------



## Batrawi (Jul 9, 2019)

Nick Batzdorf said:


> Actually it is bullshit.


Thanks for leading me out of my dark thoughts


----------



## K. Johnston (Jul 9, 2019)

Geocranium said:


> What does bit depth have to do with frequency response? If I remember correctly, bit depth is simply the number of possible values a signal's amplitude can be quantized to. These values represent volume, and have nothing to do with frequency.
> 
> Say you take a PCM sound file and increase the value of each sample by one. With 16-bit audio, you increased its amplitude by 0.0015% of the entire range of possible dynamic values. Increasing bit depth simply means that you can make volume increases/decreases at an even finer resolution. 24-bit lets you increment amplitude by 0.000006% of the entire possible dynamic range. What I'm saying is, if you couldn't already hear the difference in a 0.0015% volume increase (and you can't), then the ability to make a 0.000006% increment seems completely unnecessary.
> 
> ...



The reason I claimed bit depth affects frequency response is based on what happens if you play back a recording at 8bit or 4bit. The frequency response features a high-end rolloff due to the inability to render audible harmonic components (timbral overtones) that high in frequency, which, per the Fourier series, are lower in amplitude the higher the harmonic number. The same happens between bit depths of 24 and 32, but the major frequency response anomalies occur well beyond sonic thresholds, so the difference is far less noticeable, if at all - but there is still a theoretical frequency response change with bit depth.

The reason we notice a more dramatic difference at lower dynamic levels is the logarithmic way in which we hear sound. The dynamic grid that digital waveforms are quantized to is linear in power, not logarithmic. So softer sounds are subjected to a greater degree of digital scrutiny, per se.
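The linear-grid point can be made concrete with a rough sketch (the exact figures depend on rounding): the dB distance between two adjacent 16-bit codes is vanishingly small near full scale but large enough to matter near -60 dBFS.

```python
import math

def db_step(level_db, bits=16):
    """dB difference between two adjacent integer codes near a given level."""
    code = round(10 ** (level_db / 20) * (2 ** (bits - 1) - 1))
    return 20 * math.log10((code + 1) / code)

print(f"near -1 dBFS:  {db_step(-1):.4f} dB per code")   # a tiny fraction of a dB
print(f"near -60 dBFS: {db_step(-60):.2f} dB per code")  # a coarse, potentially audible step
```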


----------



## charlieclouser (Jul 9, 2019)

Batrawi said:


> I'm totally ignorant about sound engineering stuff, but the way I look at it is that audio bits are like visual pixels. So the more your production is aimed at big pictures (louder sound), the more you need a higher pixel rate (bit rate) to avoid getting a distorted image (sound).
> Could be total bullshit and definitely not an answer to the OP, but I'm interested to know the answer as well.



Your example is pretty much the opposite of what's true.

If you're always going to capture loud sounds - that is, you're recording sounds that *always* hit near the top few db on your meters, then it's more acceptable to use only 16 bits.

But you want to use 24 bits if you're going to capture quiet sounds - those that barely flicker the bottom few blocks on your meters.... or, to put it more correctly, those sounds that have a very wide dynamic range between their quietest and loudest extremes.

Say you want to record bashing on trash cans, and you will *not* be playing any quiet or soft passages - it's all going to be triple-forte kabooms. In that case, you might not notice a difference between 16 and 24 bit signals.

But if you want to record brushes on a frame drum, and you want to play the lightest little scrapes followed by the loudest smacks, and you set your levels so the loudest smacks are near zero - then the lightest little scrapes will be a zillion db below those loud levels, and only be lighting the bottom couple of blocks on your meters. *That's* an example of a source with a wide dynamic range, and which would benefit from being captured (and stored) at 24 bits.

So a typical orchestral library, which might have triple-pianissimo and triple-forte string section samples in the same patch, will definitely benefit from being stored and played back at 24 bits.

Think of it this way - each additional bit added to a digital signal doubles the number of vertical steps in the waveform that can be represented. That translates to 6db of "vertical" resolution per bit, so a 16 bit signal can have 96 db of range between the loudest full-level signal and zero, while a 24 bit signal can have 144 db of range.

So if you're recording/storing/playing back a 16 bit signal, and the quietest pp samples are 48db below the loudest ff samples (which is a realistic scenario), then those pp samples will be using up only half of the range - and will thus effectively be 8-bit samples. You would probably hear that as a grainy, noisy, or low-resolution sound.

But if you're recording/storing/playing back a 24 bit signal, that quietest sample which is 48db below the loudest sample will still be using up 16 bits (24 bits minus 8 bits = 144db minus 48db), and will still have 96db of range. So even the quiet sounds in a 24 bit recording can have the same resolution as the loudest sounds in a 16 bit recording.
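The arithmetic in the last few paragraphs reduces to a one-line rule of thumb (illustrative only, using the ~6 dB-per-bit approximation):

```python
def effective_bits(stored_bits, db_below_full_scale):
    """Rule-of-thumb effective bit depth of a sample sitting below full scale."""
    return stored_bits - db_below_full_scale / 6.0

print(effective_bits(16, 48))   # pp sample 48 dB down in a 16-bit file -> ~8 bits
print(effective_bits(24, 48))   # the same sample in a 24-bit file -> ~16 bits
```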

(Note that none of this has anything to do with sampling rate (48 vs 96 etc) and only refers to the "vertical" resolution of the audio being captured, which audibly translates into "signal to noise ratio" or "noise floor" issues.)

If you're recording loud, blasting sounds with very little range between the loudest and softest passages (heavy metal guitar etc.), and which are always peaking near the top of the meters, then you may never hear the difference between 16 and 24 bit recordings.

But orchestral sounds ain't like that. So 24 bit capture is a must, and 24 bit storage really does help.


----------



## germancomponist (Jul 10, 2019)

+1 Charlie!
You spoke the truth, the way it is. Samples with a lot of "built-in" dynamics are crying out for 24-bit recordings.


----------



## charlieclouser (Jul 10, 2019)

germancomponist said:


> +1 Charlie!
> You spoke the truth, the way it is. Samples with a lot of "built-in" dynamics are crying out for 24-bit recordings.



s'truth! Last week I was doing some drum sampling in a big, quiet space, and I was capturing single hits of exactly the type of thing I mentioned above - frame drums played with brushes. The loudest smacks were quite loud, and when I set the levels so they were within 6db of clipping, the quietest pp samples were as much as 30db below the loudest ones.

Even after pretty heavy compression of the recorded files inside the DAW, even the soft hits were clean and clear, with no grain or noise. A little thumbnail math tells me that those quiet hits, which were recorded at between 42db and 36db below full-scale, still had around 17-18 bits of dynamic range. If that session had been captured at 16 bits, then those quiet samples would be around 9-10 bits deep - and that's starting to push the limits of what I'd consider workable when the resulting samples will be loaded into Kontakt/EXS and then further mixed and processed in the DAW.

Those 17-18 bit samples might be played from a sampler that is further manipulating the levels via velocity and whatever else, and then mixed into a complete composition where they're even quieter than they were before, so even though the DAW's mix engine might be operating at 32 or 64 bit float (so it won't permanently truncate the bit depth if the level is reduced by the channel fader and then increased by compression/limiting on the mix bus), the samples might wind up at an even lower level than they were originally recorded at. (This will usually be the case in a finished mix, no matter what the samples are.)

So having a 24-bit source is the way to go for sure.

At the moment, I don't mess with "32-bit float" storage of audio files or samples, since I'm usually well in control of levels when recording and editing. But I can see how it could be useful for sound editors / fx designers / mixers on the dub stage, or wherever absolutely healthy levels cannot be maintained:

Say you're preparing some pre-dubs of dialog, walla, city noise backgrounds, etc. In that situation you might pre-mix a stem containing those elements, and you might want to put them "in perspective", level-wise, and then print that stem at those levels for delivery to the dub stage. Now those stems will have teeny-tiny levels so that the mixers don't have to pull the faders way the hell down to get them in perspective. That's a situation where storing the stems as 32-bit float would prevent those stems from being permanently bit-truncated when you print them at the low levels the mixers will want. They'd retain a massive dynamic range so that if the levels were boosted heavily the signal would still have a large bit depth. Similarly, recording an "Icelandic strings" session (like Spitfire Tundra) you might want to use 32-bit float storage so that those teeny-tiny signals don't get turned into 7-bit files at some point in the digital pathway.
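A small numpy sketch of the float-vs-fixed point above: a stem printed around -60 dBFS survives a later +48 dB boost when stored as 32-bit float, but picks up measurable quantization error when stored as 16-bit integers. The level and boost figures are arbitrary examples, not anything from a real dub stage:

```python
import numpy as np

sr = 8000
t = np.arange(sr) / sr
quiet = np.sin(2 * np.pi * 200 * t) * 10 ** (-60 / 20)        # a stem printed ~60 dB down

stored_int16 = np.round(quiet * 32767) / 32767.0              # 16-bit integer storage
stored_float32 = quiet.astype(np.float32).astype(np.float64)  # 32-bit float storage

boost = 10 ** (48 / 20)                                       # a later +48 dB fader ride
err_int = np.max(np.abs(stored_int16 - quiet)) * boost
err_float = np.max(np.abs(stored_float32 - quiet)) * boost

print(f"peak error after boost: int16 {20 * np.log10(err_int):.0f} dBFS, "
      f"float32 {20 * np.log10(err_float):.0f} dBFS")
```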

But for most sampling, composition, and music recording needs 32-bit float storage can be overkill. And, of course, there's really no such thing as a 32-bit A>D converter for audio recording. (Yes, I know such things exist in the lab, for various non-analog-audio applications like analysis or whatever, but it's not like you can get an Avid 32-bit converter for ProTools.)

TL;DR = If you can remember to check your levels and not accidentally record your frame drums 70db below full-scale, 24-bit recording and storage is just fine. But 32-bit float storage is a nice option if you need to print stems with microscopic levels, and might need to boost them at a later point.


----------



## MartinH. (Jul 10, 2019)

@charlieclouser are the numbers in the standard 16 bit wav files floats or ints?

Edit: I guess based on your descriptions they must be integers. I just wonder why 16-bit int and not 16-bit float. Would the bits lost to the "floating point" functionality of that data type make the louder volumes too low-res in bit depth compared to ints?
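For what it's worth, a quick check suggests why a hypothetical 16-bit float would be a poor trade: IEEE half precision spends bits on the exponent, leaving only an 11-bit significand, so near full scale it rounds more coarsely than a 16-bit integer does. (numpy's `float16` is used here purely as an illustration; standard WAV float formats are 32- or 64-bit.)

```python
import numpy as np

x = 0.987654321   # an arbitrary near-full-scale sample value

int16_err = abs(round(x * 32767) / 32767 - x)   # 16-bit integer rounding error
half_err = abs(float(np.float16(x)) - x)        # IEEE half-precision rounding error

print(f"int16 error:   {int16_err:.1e}")
print(f"float16 error: {half_err:.1e}")
```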


----------



## Nick Batzdorf (Jul 10, 2019)

Note that a good part of what gets captured with 24-bit recording is the ends of reverb tails and room sound. You can often hear that stuff into the noise floor, especially if you raise the level way up.

Now, many if not most sample libraries are either normalized or raised so they're playable. And there's also a good chance you're going to compress them, which will raise the low-level details (even if your goal is to smooth out the sound and increase the density).

In that case it almost certainly doesn't matter whether you use 16- or 24-bit samples. That was Andrew K's conclusion when he released LASS, for instance. He included 24-bit programs as well as the 16-bit ones, but the first release was a few years ago when memory and storage were both scarce resources.


----------



## dsblais (Jul 10, 2019)

If I may foolishly wade into this with a simple, but more apt analogy:
24-bit vs 16-bit audio is like 8-bit vs 16-bit color depth in a picture. It is sample resolution (e.g. 44.1, 96 kHz) that is like image resolution and resulting pixelation.
Sample rate is much more important than bits per sample, in my opinion, although there are diminishing returns with that as well.


----------



## lumcas (Jul 10, 2019)

I haven't seen a better video on this subject than this and the guy knows his stuff. Totally worth the time...


----------



## charlieclouser (Jul 10, 2019)

Nick Batzdorf said:


> Note that a good part of what gets captured with 24-bit recording is the ends of reverb tails and room sound. You can often hear that stuff into the noise floor, especially if you raise the level way up.
> 
> Now, many if not most sample libraries are either normalized or raised so they're playable. And there's also a good chance you're going to compress them, which will raise the low-level details (even if your goal is to smooth out the sound and increase the density).
> 
> In that case it almost certainly doesn't matter whether you use 16- or 24-bit samples. That was Andrew K's conclusion when he released LASS, for instance. He included 24-bit programs as well as the 16-bit ones, but the first release was a few years ago when memory and storage were both scarce resources.



Yes, if you've recorded at 24-bit, and then done some normalizing / level changing to bring up the samples to a healthy level before playing them in a sampler, then it's much more acceptable to use a version of the samples that have been truncated to 16-bit.

I often prefer to jack up the level of softer samples, and then use the sampler's velocity>volume functions to recreate the dynamic scale - that also lets me lessen the effect, which "compresses" the dynamic range. This sounds different from applying an audio compressor plugin on the sampler's output, and is more of a "playing" effect than a "mixing" effect.

So in those situations, as with LASS, storing and playing the samples in 16-bit format isn't a high crime or misdemeanor - but when it comes to the original recording process, doing it at 16-bit means you've got to be very careful about recording levels to avoid accidentally capturing only 9 bits worth of signal on the softer sounds. That's why I'm sure LASS was recorded at 24-bit, then edited (and possibly normalized / gain changed), and then output in both 16 and 24 bit formats. This makes sense.

But many sample libraries do not raise the gain of softer samples - they may "normalize as a group" but not get into the weeds with trying to normalize every sample. One reason a developer would do this is that it's easier to accurately replicate the original playing dynamics of the instrument by ensuring that the sampler's velocity>volume amount is set to zero (which is sadly not always the case). Then a soft sample plays with the same gain applied as the loud samples, and everything should play back as it was recorded. Although this approach is the easiest way to replicate the original dynamics of the samples as they were recorded, it can make it difficult to do the "compress dynamics" thing I described above - in order to do so you'd need to apply an inverse (negative) amount of velocity>volume, so that the softer samples get played back louder than they were recorded while the louder samples get played back normally (basically). It's this type of library where the 24-bit-ness of the source samples is more important - if they're stored at 16-bit then those soft samples will be at an unacceptably reduced bit depth, and will possibly then be boosted in level when you muck about with the velocity>volume amounts, perhaps revealing their truncated depth even more.

If we're talking about a drum library where velocity usually controls volume, this analogy holds mostly true - but when we're talking about a string, brass, or winds library where you're crossfading between the quiet and loud samples from the mod wheel or whatever, then it becomes even more likely that a quiet sample will be played loud - and if you're in 16-bit mode then it's even more likely that you'll reveal a truncated / low-depth sample.

In this age of cheaper memory and storage, and fast SSDs, just staying at 24-bits all the time is not as much of a performance penalty as it was when LASS was released, when a 500gb spinning drive was state-of-the-art.

TL;DR part 2 = Always record and edit audio at 24-bit (unless maybe it's a low-dynamic-range source like metal guitar or something like that where 16-bit might be more acceptable... but still). Don't fear using 16-bit versions of sample libraries if they're provided, especially if the samples within the libraries have been normalized - but keep an eye/ear out for any situations where quiet 16-bit samples get played loudly.


----------



## newman (Jul 10, 2019)

@charlieclouser we practice classical music live on VI pianos. Most piano VIs are 16 bit; is there a benefit for us playing live to try 24 bit options? Or is this a waste of time?

Those might include PianoTeq, Production Voices gold, EastWest platinum, Galaxy II, etc.


----------



## AllanH (Jul 10, 2019)

The Wikipedia article has some good additional information, including an explanation of the 1 bit "equals" 6 db statement made by @charlieclouser previously. 

I always thought of 24 vs 16 as a basic issue of resolution, but it was interesting to hear how it relates directly to accurate representation of low-level / low-volume recorded data (which makes sense, of course). I had not previously given that second perspective much thought, even though it's just as relevant.


----------



## Nick Batzdorf (Jul 10, 2019)

charlieclouser said:


> That's why I'm sure LASS was recorded at 24-bit, then edited (and possibly normalized / gain changed), and then output in both 16 and 24 bit formats. This makes sense.



Oh yes. Again, he included both. I believe they're now all 24-bit, but I could be wrong (I don't know how to tell).


----------



## Nick Batzdorf (Jul 10, 2019)

dsblais said:


> If I may foolishly wade into this with a simple, but more apt analogy:
> 24-bit vs 16-bit audio is like 8-bit vs 16-bit color depth in a picture. It is sample resolution (e.g. 44.1, 96 kHz) that is like image resolution and resulting pixelation.
> Sample rate is much more important than bits per sample, in my opinion, although there are diminishing returns with that as well.



The problem with analogies is that they always break down at some point, and when you're talking about digital audio - which is unintuitive - they can be misleading.

Intuitively, you'd think that increasing the sample rate means you have more points to represent the waveform, therefore it's more like the original - in the same way that increasing the color bit depth gives you more colors to represent the original picture.

But that's not at all how it works. Sound is all sine waves (picture a speaker moving in and out). So increasing the sample rate only allows you to record higher frequencies. Music that doesn't go above 4kHz sounds identical sampled at 8kHz and at 96kHz.
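The point can be demonstrated with a few samples (an editor's illustration, not from the thread): at an 8 kHz sample rate, a 5 kHz sine, which sits above the 4 kHz Nyquist limit, produces exactly the same samples as a phase-inverted 3 kHz sine. Raising the sample rate buys you higher representable frequencies, nothing else:

```python
import numpy as np

sr = 8000
n = np.arange(64)
five_khz = np.sin(2 * np.pi * 5000 * n / sr)   # above the Nyquist limit (sr/2 = 4 kHz)
aliased = -np.sin(2 * np.pi * 3000 * n / sr)   # the in-band tone it folds down to

print(np.allclose(five_khz, aliased))
```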


----------



## ProtectedRights (Jul 10, 2019)

DSmolken said:


> I've done a bit of testing yesterday, and I can hear a difference between 16 and 24 bit samples in scenarios where there's a lot of gain in the signal path (compression with +36 dB gain total, or a heavily distorted amp sim), and where a sample rings out completely and fades to silence with no other sounds audible.
> 
> So, let's say a quiet but compressed to hell pizz note is the last sound before a complete silence in a piece of music, and the sample itself fades away to silence, using a volume envelope applied to the WAV file before it goes into the sampler. The noise in the tail fades out audibly smoother with 24 bits, and with 16 bits it hisses for a bit, then ends suddenly.
> 
> So, there's some benefit in at least that one scenario, but is it worth using up 50% more disk space and loading time, and twice the RAM? Or should I just convert everything to 16 bit after I'm done with all the editing?



Often all dynamic levels of an instrument (p, mf, f, ff) are recorded with the same microphone gain, so if you pull up the quieter articulations you get way better sound with 24 bit. 

For example I play a soft gong hit, and pull it up in volume a lot, additionally compressing the tail so it lasts much longer. I had terrible noise when using ISW Orchestral Percussions in 16 bit, then installed 24 bit, and then it was practically noise free.


----------



## charlieclouser (Jul 10, 2019)

newman said:


> @charlieclouser we practice classical music live on VI pianos. Most piano VIs are 16 bit; is there a benefit for us playing live to try 24 bit options? Or is this a waste of time?
> 
> Those might include PianoTeq, Production Voices gold, EastWest platinum, Galaxy II, etc.



In any kind of live performance situation, you'll probably struggle to hear the difference between the 16 and 24 bit versions of the piano sample libraries you're using. All of those were most probably originally sampled and edited at 24 bits, and then reduced to 16 bit after the fact - mainly so that companies like East-West can offer a "lite" version with only the 16-bit samples and sell the full version with 24-bit samples and more mic positions / velocity ranges / etc. for a higher price. So in all likelihood the benefits of 24-bit-ness have been baked in from the start, and much like my example above with LASS, the fact that the developer has reduced them to 16-bit after the fact for a "lite" version does not damage the sound quality to any great degree.

The benefit of using any of those libraries in their "lite" version is that the files are much smaller and will be much less of a load on the computers used to play them back.

The benefits of using the full, "pro" version, with 24-bit samples, would mostly be when using them in a recording situation. Say you're doing a movie trailer in the style that we hear so much these days: A single, lonely piano note going "ding" every ten seconds against a backdrop of absolute silence, and you want to play very lightly because you love the way the quieter note velocities sound as opposed to the loud ones, but then you're going to boost the level of those quiet notes so they're stupidly loud in the theater. In that situation, where the listener will be hearing a single, naked, high piano note that's completely exposed, played through a massive THX theater sound system.... you *might* be able to hear the difference between the 16 and 24 bit versions, but probably only as the note has decayed almost to silence.

In reality? Nah. 16 vs 24 probably makes no difference, since the original samples were almost assuredly done at 24-bit depth and then truncated after the fact. So it's not something to lose sleep over.

Most of my excruciatingly long-winded explanations refer more to the process of *recording* samples (or whatever audio you're recording). In those situations, using 24-bit is definitely recommended, since sounds which are quiet when they're recorded will not suffer if they are boosted in level at some later point - or, more accurately, quiet sounds originally recorded at 16-bit *may* suffer more than those originally recorded at 24-bit if they are boosted in level at some later point in the process.

Clear as mud, right?


----------



## charlieclouser (Jul 10, 2019)

ProtectedRights said:


> Often all dynamic levels of an instrument (p, mf, f, ff) are recorded with the same microphone gain, so if you pull up the quieter articulations you get way better sound with 24 bit.
> 
> For example I play a soft gong hit, and pull it up in volume a lot, additionally compressing the tail so it lasts much longer. I had terrible noise when using ISW Orchestral Percussions in 16 bit, then installed 24 bit, and then it was practically noise free.



Yes, this is a real-world example of exactly what I was trying to explain.


----------



## DSmolken (Jul 10, 2019)

Yeah, I'm not at all convinced it'll make any difference for quiet sounds played quietly, but for quiet sounds that are brought up in volume, there's a definite difference. And I've done quite a bit of barely-audible stuff like hi-hat return noises where it would have been a good idea. I'm now pretty convinced it's worth going 24-bit from now on, and also including a 16-bit version for things that are huge and likely to be played back without a ton of gain (say, piano for classical practice, where you want the dynamics to be natural).

I'm not at all convinced that quiet sounds played back quietly sound any worse at 16 bits, and all the double blind testing I can find seems to agree, but with samples that's rarely the case these days.

Even if the noise floor in the recordings is well above -96 dB, I get tails that fade to silence more smoothly with 24 bits, because I can apply an artificial fade to the "real" noise floor at higher resolution. Seems dumb when you read it written out like that, but it makes things sound much better when they end.
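
That tail behavior is easy to reproduce numerically. Here's a minimal NumPy sketch (illustrative only; `quantize` is a hypothetical helper that just rounds to a signed fixed-point grid, not any real sampler's code) showing a faded tail surviving longer on a 24-bit grid than on a 16-bit one:

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr  # one second

# A quiet tone with a deep fade-out, simulating a ringing tail.
tail = 0.01 * np.sin(2 * np.pi * 440 * t) * np.exp(-8 * t)

def quantize(x, bits):
    """Round to the nearest step of a signed fixed-point grid."""
    step = 2.0 ** -(bits - 1)
    return np.round(x / step) * step

q16 = quantize(tail, 16)
q24 = quantize(tail, 24)

# See how far into the fade each bit depth keeps any signal at all.
print("last nonzero sample, 16-bit:", np.flatnonzero(q16)[-1] / sr, "s")
print("last nonzero sample, 24-bit:", np.flatnonzero(q24)[-1] / sr, "s")
```

The 16-bit version hits all-zeros well before the fade is over - the "hisses for a bit, then ends suddenly" effect - while the 24-bit version tracks the fade much further down.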


charlieclouser said:


> If you're recording loud, blasting sounds with very little range between the loudest and softest passages (heavy metal guitar etc.), and which are always peaking near the top of the meters, then you may never hear the difference between 16 and 24 bit recordings.


Recording metal guitar coming out of a distorted amp, yeah, extra bits would pretty much be going to waste. But if you're recording your metal guitar as a DI signal to be distorted later, then 24 bits are also worth it IMO.


----------



## Dex (Jul 10, 2019)

ProtectedRights said:


> Often all dynamic levels of an instrument (p, mf, f, ff) are recorded with the same microphone gain, so if you pull up the quieter articulations you get way better sound with 24 bit.
> 
> For example I play a soft gong hit, and pull it up in volume a lot, additionally compressing the tail so it lasts much longer. I had terrible noise when using ISW Orchestral Percussions in 16 bit, then installed 24 bit, and then it was practically noise free.



I would think it would be better to change the mic gain when recording different dynamics, but even if you're not going to do that: record at 24-bit, then to make the 16-bit version of the library, normalize each sample to full volume, render to 16-bit, and apply inverse gain at sample playback. That way every sample, regardless of dynamic level, gets a full 16 bits of dynamic range. I bet you'd be hard pressed to hear any issues with a library made like that.
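
That scheme can be sketched in a few lines of NumPy (the function names are made up for illustration; this isn't any sampler's actual API):

```python
import numpy as np

def prepare_16bit(sample_24):
    """Normalize to full scale, then quantize to 16-bit.

    Returns the 16-bit data plus the gain that playback must undo.
    """
    peak = np.max(np.abs(sample_24))
    norm_gain = 1.0 / peak           # boost applied before rendering
    normalized = sample_24 * norm_gain
    q16 = np.round(normalized * 32767).astype(np.int16)
    return q16, norm_gain

def playback(q16, norm_gain):
    """Apply the inverse gain so the sample sits at its original level."""
    return (q16.astype(np.float64) / 32767) / norm_gain

# A pp sample recorded ~36 dB below full scale at the mic preamp.
quiet = 10 ** (-36 / 20) * np.sin(2 * np.pi * 220 * np.arange(4800) / 48000)

q16, g = prepare_16bit(quiet)
out = playback(q16, g)

# Round-trip error is one 16-bit step of the *normalized* signal,
# i.e. ~36 dB smaller than quantizing the quiet sample directly.
print("round-trip peak error:", np.max(np.abs(out - quiet)))
```

The point being exactly what the post says: the quantization grid is applied to the normalized signal, so every sample gets the full 16 bits regardless of how quietly it was recorded.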


----------



## OleJoergensen (Jul 11, 2019)

lumcas said:


> I haven't seen a better video on this subject than this and the guy knows his stuff. Totally worth the time...



This was interesting! Thank you for sharing.


----------



## charlieclouser (Jul 11, 2019)

Dex said:


> I would think it would be better to change the mic gain when recording different dynamics, but even if you're not going to do that: record at 24-bit, then to make the 16-bit version of the library, normalize each sample to full volume, render to 16-bit, and apply inverse gain at sample playback. That way every sample, regardless of dynamic level, gets a full 16 bits of dynamic range. I bet you'd be hard pressed to hear any issues with a library made like that.



This is sort of what you'll find if you scrutinize the samples in old 90's era ROMplers like Korg and Roland workstations. Even the pp samples in a piano multisample will often be normalized and full-on in level. That may be due to the various data compression schemes in use in that era - I think they probably weren't using fancy digital data compression algorithms like we have today, but were just simple loudness "companding" schemes like you'd find in a DBX noise reduction unit or an Emax sampler. So it's not exactly the same as what you're talking about, but... still cool. It might have been that they DID use some digital data compression scheme that performed poorly on low-level signals and therefore wanted to normalize the samples before encoding to reduce any additional artifacts coming from the compression algorithm. I can't say.
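
A companding scheme along those lines can be sketched with the classic mu-law curve (a guess at the general idea only, not Korg's or Roland's actual algorithm): the curve boosts quiet material before quantizing and undoes the boost on playback, so low-level detail survives a coarse grid.

```python
import numpy as np

MU = 255.0

def compress(x):
    """Mu-law compand: expands the resolution of quiet signals."""
    return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

def expand(y):
    """Inverse of compress()."""
    return np.sign(y) * ((1 + MU) ** np.abs(y) - 1) / MU

def quantize(x, bits):
    """Round to the nearest step of a signed fixed-point grid."""
    step = 2.0 ** -(bits - 1)
    return np.round(x / step) * step

# A very quiet signal, far below an 8-bit grid's smallest step.
quiet = 0.003 * np.sin(np.linspace(0, 20, 1000))

plain = quantize(quiet, 8)                       # plain 8-bit: signal is lost
companded = expand(quantize(compress(quiet), 8)) # companded 8-bit: survives

print("plain 8-bit error:    ", np.max(np.abs(plain - quiet)))
print("companded 8-bit error:", np.max(np.abs(companded - quiet)))
```

Normalizing samples before encoding, as described above, would serve a similar goal: keep the material up in the part of the curve (or grid) where the encoder behaves well.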

The trick of changing mic preamp gain during recording is one way to get the clearest signal and maximize the performance of your A>D converters, but can possibly lead to trouble when you attempt to edit and reassemble the samples into a realistic-sounding sampler instrument because the absolute loudness relationships between quiet and loud samples must be reproduced by the sampler's engine. Not a big deal if you keep track of what you're doing though.

One thing that makes it easier is to use a preamp with 6 dB detents on the mic preamp gains, so you can just go "up one click" on the middle samples and "up two clicks" on the soft samples or whatever - and in fact I've done this recently on some drum samples because my CraneSong Spider has exactly that feature. Then you just have to keep track of how many clicks you've boosted each batch of samples, and use the per-sample gain functions in the sampler to reverse the process - or not, depending on how you want the resulting sampler instrument to respond.


----------



## germancomponist (Jul 11, 2019)

charlieclouser said:


> The trick of changing mic preamp gain during recording is one way to get the clearest signal and maximize the performance of your A>D converters, but can possibly lead to trouble when you attempt to edit and reassemble the samples into a realistic-sounding sampler instrument because the absolute loudness relationships between quiet and loud samples must be reproduced by the sampler's engine. Not a big deal if you keep track of what you're doing though.



That's what most samplers do pretty well (automatically, or done by hand/ear). I think this is the best way to produce a great-sounding sample library.


----------



## Batrawi (Jul 11, 2019)

dsblais said:


> If I may foolishly wade into this with a simple, but more apt analogy:
> 24-bit vs 16-bit audio is like 8-bit vs 16-bit color depth in a picture. It is sample resolution (e.g. 44.1, 96 kHz) that is like image resolution and resulting pixelation.
> Sample rate is much more important than bits per sample, in my opinion, although there are diminishing returns with that as well.



Have you not learned from my earlier post here and how humiliatingly I was bashed for it


----------



## dsblais (Jul 11, 2019)

Batrawi said:


> Have you not learned from my earlier post here and how humiliatingly I was bashed for it


I normally have to pay extra to be humiliatingly bashed.


----------



## Fleer (Jul 11, 2019)

Nothing beats some humiliating bashing. Except maybe the Spanish Inquisition.


----------



## Nick Batzdorf (Jul 11, 2019)

dsblais said:


> I normally have to pay extra to be humiliatingly bashed.


----------



## Mike Greene (Jul 11, 2019)

I've asked myself this same question about whether 24 bits is worthwhile for a sample library. For the reasons Charlie stated really well, my opinion is that it isn't. We (Realitone) record at 24-bit, but we then normalize all samples, so at that point, my opinion is there's no need for the extra dynamic range that 24-bit offers.

For example, a quiet guitar sample when normalized turns into a very loud guitar sample, so in the mapping editor (or in the scripting), that sample gets volume-reduced by 30 or 40 dB so that it will sound right. So the _effective_ dynamic range of that guitar zone is 30 or 40 dB _plus_ whatever the dynamic range of 16 bits would be. That's plenty.
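
The arithmetic behind that, as a quick sanity check (assuming the usual rule of thumb of roughly 6.02 dB of dynamic range per bit):

```python
# Back-of-envelope numbers for the "effective dynamic range" point.
bits = 16
range_16 = 6.02 * bits              # ~96.3 dB for plain 16-bit

# A normalized pp guitar zone pulled back down 40 dB in the mapping:
mapping_attenuation = 40.0
effective = range_16 + mapping_attenuation

print(f"effective range of that zone: {effective:.1f} dB")  # about 136 dB
```

In other words, the attenuation applied in the mapping editor stacks on top of the format's own range, which is why 16 bits is plenty for a normalized library.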

So if I were making libraries for myself, where I record 24 bit and then normalize samples individually, I'd make the final samples 16 bit. Partly to save hard drive space (minor issue) and partly for the instrument's RAM footprint (bigger issue). Plus I assume 16-bit is easier on the processor, although I'm not sure about that part.

But ... I'm not making libraries for myself anymore, so not all my decisions can be based on cold hard facts. I'm trying to _sell_ these things, and many potential customers have a lot of preconceived notions. In many people's minds, bigger is always better, whether it's in total gigabytes, or whether it's number of round robins, or whether it's bit depth. So for that handful of sales I might lose to the guys who think our quality isn't up to snuff if we use 16-bits, I stay at 24-bit.

That's all just my opinion, mind you, and by no means have I cracked the mystery of how to run a successful sample library company, so take it for what it's worth. In fact ... having said all that, I'm considering switching (quietly) to 16 bit for an upcoming library where the RAM footprint and processor load will be a major issue.


----------



## Fleer (Jul 11, 2019)

I think you’re right about that, Mike. Otherwise, 48-bit should be next. Better to focus on mic options and the like. Those are the options that matter, apart from good source material and recording quality.


----------



## GtrString (Jul 11, 2019)

If you do decide on 16-bit, please write it in the specs. I read the specs for everything, and will not buy a library with 16-bit samples (except maybe for cheapo, retro-sounding stuff).

I even prefer 24-bit, 48kHz. Nobody is even close to the perfect sample library, so skimping on specs is the kiss of death, imho.


----------



## Jeremy Spencer (Jul 11, 2019)

Mike Greene said:


> I've asked myself this same question about whether 24 bits is worthwhile for a sample library. For the reasons Charlie stated really well, my opinion is that it isn't. We (Realitone) record at 24-bit, but we then normalize all samples, so at that point, my opinion is there's no need for the extra dynamic range that 24-bit offers.



+1 to this. I never use the 24-bit samples in a VI even when they're available; it always seems like overkill. 24/48 for recording and stem delivery? Absolutely, but not for the instruments themselves.


----------



## charlieclouser (Jul 11, 2019)

Yes, I agree with Mike Greene's view. As long as the original signal acquisition was done at 24-bit, and the samples have been normalized during the editing process (while still at 24-bit), and *then* reduced to 16-bit, all is well - and your cpu and storage system will thank you!

I certainly don't bother with attempting to down-convert Kontakt libraries to 16-bit, or take any other tedious data-saving measures like that, but I have no qualms with using 16-bit sample libraries. 

But I generally keep my own samples at 24-bit all the time, since more than once I've zeroed in on some tiny little squeak or squonk at the end of a sample and decided that's the bit I want to normalize and turn into a featured sound, so it makes sense to keep everything at maximum resolution just in case.


----------



## DSmolken (Jul 11, 2019)

Mike Greene said:


> In many people's minds, bigger is always better, whether it's in total gigabytes, or whether it's number of round robins, or whether it's bit depth. So for that handful of sales I might lose to the guys who think our quality isn't up to snuff if we use 16-bits, I stay at 24-bit.


Ha, this is what it really comes down to! Staying at 24 bits lets me skip the effort of normalizing and then compensating for the normalization gain of each sample in the sampler. Sell a few more, and _move on to making the next instrument faster_. I think I'll do that with any instruments that aren't really resource-heavy or intentionally lo-fi.


----------



## Dex (Jul 12, 2019)

GtrString said:


> If you do decide on 16-bit, please write it in the specs. I read the specs for everything, and will not buy a library with 16-bit samples (except maybe for cheapo, retro-sounding stuff).
> 
> I even prefer 24-bit, 48kHz. Nobody is even close to the perfect sample library, so skimping on specs is the kiss of death, imho.



On the other hand, for most things I prefer 16 bit libraries (assuming they’re done right, as above) for the ram, cpu, and disk space savings. I deleted my 24 bit sonokinetic libraries and just use the 16 bit versions now.


----------



## GtrString (Jul 12, 2019)

Dex said:


> On the other hand, for most things I prefer 16 bit libraries (assuming they’re done right, as above) for the ram, cpu, and disk space savings. I deleted my 24 bit sonokinetic libraries and just use the 16 bit versions now.



Ok, but for commercial projects I just can't justify that. The competition is so fierce that every small inch counts, and I can hear a difference in depth and dimension between 16 and 24 bit samples, as well as a difference in the top end between 44khz and 48khz (with acoustic instruments). Also, developers don't account for the processing you might do at the mix stage, and I need to be able to take a little hit from DA/AD conversion as well (running outboard gear). So I won't deal with samples that just barely cut it for the intended use.

Often I'm on the fence about using samples at all, so while I understand the reasons for smaller sizes etc. (although computers get more powerful all the time), I feel that is more of an alternative route. If the customer base accepts it, of course it is fine, but then I'm not in it. But it does matter whether we're talking acoustic-type samples or just electronic sounds. For electronic sounds, the differences might not be as noticeable.


----------



## Nick Batzdorf (Jul 12, 2019)

GtrString said:


> The competition is so fierce that every small inch counts, and I can hear a difference in depth and dimension between 16 and 24 bit samples



Um...



> as well as a difference in the top end between 44khz and 48khz (with acoustic instruments).



Um...


----------



## Jeremy Spencer (Jul 12, 2019)

Nick Batzdorf said:


> Um...
> 
> 
> 
> Um...



Yeah....not going there


----------



## Voider (Jul 12, 2019)

We did a little blind test here in the forum around a year ago, me and a few other users included, and we all failed to tell the 16-bit and 24-bit files apart. But beforehand, when we compared them side by side, we felt that 24-bit sounded so much bigger, wider and more three-dimensional. The blind test proved us wrong.


----------



## Nick Batzdorf (Jul 12, 2019)

Voider said:


> We did a little blind test here in the forum around a year ago, me and a few other users included, and we all failed to tell the 16-bit and 24-bit files apart. But beforehand, when we compared them side by side, we felt that 24-bit sounded so much bigger, wider and more three-dimensional. The blind test proved us wrong.



Whether you hear a difference is highly dependent on many things.


----------



## Voider (Jul 13, 2019)

Nick Batzdorf said:


> Whether you hear a difference is highly dependent on many things.



Or it is simply not that huge of a difference. What you said, professional violin players said about the Stradivari, but in blind tests most of the top players couldn't tell them apart from other violins. The same goes for wine experts. We hear what we believe and expect. If the $2 wine is in the $2000 bottle it might taste like a $2000 wine, and if we read that a file is 24-bit instead of 16-bit, we might hear more than there actually is. Our brain tries to satisfy our high expectations if we have them, and the same goes in reverse: experts claimed the $2000 wine tasted cheap and bad because they drank it out of a $2 package without knowing it was the $2000 one.


----------



## AllanH (Jul 13, 2019)

Even if I'm looking at a 24bit vs. 16 bit recording, how do I really know what I'm hearing? How did it get to 16 bit from 24? Was it recorded in true 16 bit or down-converted by the audio interface from 24 during recording? How about the DACs used during playback? I'm not really sure there is one "best answer" as there are too many steps involved from source to ears.


----------



## JohnG (Jul 13, 2019)

AllanH said:


> I'm not really sure there is one "best answer" as there are too many steps involved from source to ears.



Exactly.

Apropos the "blind tests" some alluded to, if you're playing back on Soundcloud or something like that, it's mighty hard to know what you're really listening to at all. So I don't even think the tests are very valid, unless conducted in a pristine manner. 

I don't know how one could arrange a pristine test online unless people all are downloading the original files _and_ listening back on very good equipment, with good D/A and all that. As Allan says, "too many steps" to be all that confident about what people are reporting.


----------



## Dex (Jul 13, 2019)

I figure you could just make a simple Kontakt VI where, for instance, C4 is either the 16- or 24-bit version of a well-recorded 24-bit sample and C5 is the other version, and have people report back which they think is which.

If you want to make sure they're not cheating by looking at file sizes, re-encode the 16-bit file as a 24-bit file so both versions of the sample have the same file size.


----------



## Nick Batzdorf (Jul 13, 2019)

Voider said:


> Or it is simply not that huge of a difference



Sometimes it's all but inaudible, sometimes it's very audible. If you record something really complex, say a piano or a ride cymbal, and let it ring out in a room... yeah, you'll hear the difference.

16 bits is great as a release format, but it does often make an audible difference when you use 24 bits as a production format and then dither down to 16 bits at the last stage.



AllanH said:


> How did it get to 16 bit from 24?



Hopefully using dither.

Now, I'm going to be honest and say that 20 years ago, when I listened to different kinds of dither, I couldn't hear a difference. And while my hearing is still very good, touch wood, I'm sure I'd hear no difference even better now.

But truncating from 24 bits down to 16 without dither can sound harsh - again, sometimes.
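
The difference between truncating and dithering can be sketched in NumPy (a toy TPDF dither for illustration, not any mastering tool's actual algorithm). With a signal whose peak sits below half a 16-bit step, plain rounding erases it entirely, while dither keeps it alive as (signal-correlated) noise:

```python
import numpy as np

rng = np.random.default_rng(0)
LSB = 1.0 / 32767

def reduce_to_16bit(x, dither=True):
    """Reduce float audio (±1.0 full scale) to the 16-bit grid.

    With dither=True, TPDF dither (the difference of two uniform
    randoms, spanning ±1 LSB) is added before rounding; with
    dither=False the signal is simply rounded onto the grid.
    """
    if dither:
        x = x + (rng.random(x.shape) - rng.random(x.shape)) * LSB
    return np.clip(np.round(x * 32767), -32768, 32767) / 32767

# A sine whose peak is only 0.4 LSB: well below the 16-bit grid.
n = 48000
sine = 0.4 * LSB * np.sin(2 * np.pi * 440 * np.arange(n) / 48000)

plain = reduce_to_16bit(sine, dither=False)
dithered = reduce_to_16bit(sine, dither=True)

print("undithered samples surviving:", np.count_nonzero(plain))    # 0 - gone
print("dithered samples surviving:  ", np.count_nonzero(dithered))
print("dithered output still tracks the sine:",
      np.dot(dithered, sine) > 0)
```

Truncation's error is correlated with the signal (which is what reads as harshness); dither trades that for a benign, steady noise floor.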


----------



## GtrString (Jul 13, 2019)

General online blind tests are not dependable. What matters is what you hear in your own listening environment, because that's where you react to the music and make mix decisions. Of course, this can always be contested, and there are a myriad of variables. But general consensus online doesn't mean shi*, really.


----------



## tack (Jul 13, 2019)

GtrString said:


> What matters is what you hear in your own listening environment


... when blinded.

Unblinded listening tests don't mean shit either.


----------



## Nick Batzdorf (Jul 13, 2019)

tack said:


> ... when blinded.
> 
> Unblinded listening tests don't mean shit either.



I think they mean a lot, they just tell you something different.

https://vi-control.net/community/threads/kontakt-5-internal-headroom.62043/page-2#post-4088608


----------



## taldesce (Oct 18, 2019)

Resurrecting this old thread. There are some very knowledgeable people in here and I hope you're willing to let me pick your brains for a moment! I'm not a musician or producer myself, I just love music and am an audio enthusiast in general. I have spent more on my sound systems than my (admittedly mediocre) car, because I just absolutely love great sound. So, I know more than your average joe for sure, but there's a _lot_ I don't know too. A question I've had regarding 24-bit vs 16-bit is what effect this would have on the bass -- especially sub-bass -- frequencies. Specifically when it comes to music or movie soundtracks. I'm a huge basshead and to my ear, the biggest difference I've noticed (or maybe just imagine I notice) seems to be a cleaner (i.e. higher 'resolution'?) bass response. Does it make sense that I would notice a difference there, or am I just placebo-ing myself into thinking that? I've searched around Google many times in search of the answer to this but haven't been able to find a good answer yet, so apologies if this seems like a dumb question. The discussion in this thread about 24-bit vs 16-bit in general has definitely been enlightening to me, so thank you all!


----------



## Willowtree (Oct 18, 2019)

taldesce said:


> Resurrecting this old thread. There are some very knowledgeable people in here and I hope you're willing to let me pick your brains for a moment! I'm not a musician or producer myself, I just love music and am an audio enthusiast in general. I have spent more on my sound systems than my (admittedly mediocre) car, because I just absolutely love great sound. So, I know more than your average joe for sure, but there's a _lot_ I don't know too. A question I've had regarding 24-bit vs 16-bit is what effect this would have on the bass -- especially sub-bass -- frequencies. Specifically when it comes to music or movie soundtracks. I'm a huge basshead and to my ear, the biggest difference I've noticed (or maybe just imagine I notice) seems to be a cleaner (i.e. higher 'resolution'?) bass response. Does it make sense that I would notice a difference there, or am I just placebo-ing myself into thinking that? I've searched around Google many times in search of the answer to this but haven't been able to find a good answer yet, so apologies if this seems like a dumb question. The discussion in this thread about 24-bit vs 16-bit in general has definitely been enlightening to me, so thank you all!


No, as far as playback is concerned, there'll be no audible difference between 24-bit and 16-bit, no matter what people say. Any difference is imaginary, unless you plan on listening above 120 dB, which for your ears' sake, I hope you're not.

It's not a dumb question at all, I might add. However, we're strictly talking playback here. Mixing, where you might use a lot of gain, is a completely different matter and here 24-bit is clearly superior.

However, depending on what you're doing, 16-bit is in many cases going to be enough too.


----------



## tcb (Oct 18, 2019)

I'd like to quote my own reply from another topic:
"
The difference between 16- and 24-bit audio is all about quantization error. 16-bit audio has an intrinsic -98 dBFS noise floor. But my experience tells me that analog noise is much higher in magnitude than 16-bit quantization noise, so a 16-bit sample library is enough.
I didn't say 24-bit (or 32-bit float) processing is useless. My opinion is that, for an audio file or sample library, 16-bit is enough. I often hear low-frequency hum or chair noise etc. in sample libraries, but I've never heard 16-bit quantization noise."

---
I don't think a 24-bit sample library is necessary, because the difference is only a -98 dBFS noise floor (if properly dithered). Compared with analog and environmental noise, this is too small. The DAW converts all samples anyway and processes them at 32- or 64-bit float, depending on your settings.
I usually tend to use the 16-bit library when possible, and spend the CPU, RAM and hard drive on something else.
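
For what it's worth, those figures can be checked empirically (illustrative code, with a hypothetical helper name). One nuance: the often-quoted ~98 dB number is the SNR for a full-scale sine (6.02 × N + 1.76 dB); the raw quantization-error floor itself sits a few dB lower, around -101 dBFS at 16-bit.

```python
import numpy as np

def quantization_noise_floor_db(bits, n=1_000_000, seed=0):
    """Empirically measure the RMS quantization error, in dBFS,
    of a full-scale random signal rounded to a `bits`-deep grid."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, n)               # full-scale test signal
    step = 2.0 ** -(bits - 1)
    err = np.round(x / step) * step - x     # quantization error
    return 20 * np.log10(np.sqrt(np.mean(err ** 2)))

# Theory: error RMS = step / sqrt(12), and SNR ≈ 6.02*N + 1.76 dB
# for a full-scale sine.
print("16-bit noise floor:", quantization_noise_floor_db(16))  # ≈ -101 dBFS
print("24-bit noise floor:", quantization_noise_floor_db(24))  # ≈ -149 dBFS
```

Either way, the point stands: both floors are far below the hum and chair noise in any real recording.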


----------



## Willowtree (Oct 18, 2019)

tcb said:


> I'd like to quote my own reply from another topic:
> "
> The difference between 16- and 24-bit audio is all about quantization error. 16-bit audio has an intrinsic -98 dBFS noise floor. But my experience tells me that analog noise is much higher in magnitude than 16-bit quantization noise, so a 16-bit sample library is enough.
> I didn't say 24-bit (or 32-bit float) processing is useless. My opinion is that, for an audio file or sample library, 16-bit is enough. I often hear low-frequency hum or chair noise etc. in sample libraries, but I've never heard 16-bit quantization noise."
> ...


You're probably right, but I still end up using 24-bit all the time, even though in most cases I'm just wasting RAM.

Didn't say it was rational!


----------



## tcb (Oct 18, 2019)

Willowtree said:


> You're probably right, but I still end up using 24-bit all the time, even though in most cases I'm just wasting RAM.
> 
> Didn't say it was rational!


I'm sorry, my expression may have been inaccurate. It's not a waste, because 24-bit is indeed better than 16-bit in audio quality. But I think the difference is too small.


----------



## AllanH (Oct 19, 2019)

A related issue is the audio interface. Mine is native 24 bits, and in my experience, it seems as if Cubase adjusts to the native expected format of the interface and converts/stores everything in 24/32 bits no matter the original format. I am guessing that modern 64 bit CPUs really work best on 32 or 64 bit blocks of data and the 16 bit shorts get converted anyway.


----------



## vitocorleone123 (Oct 19, 2019)

If you’re adding effects to the library, especially 3rd party, I’d still prefer and recommend 24bit. Well-recorded samples you aren’t really manipulating but, rather, are composing? 16bit should be fine, I’d think.


----------



## fixxer49 (Oct 19, 2019)

Mike Greene said:


> I've asked myself this same question about whether 24 bits is worthwhile for a sample library. For the reasons Charlie stated really well, my opinion is that it isn't. We (Realitone) record at 24-bit, but we then normalize all samples, so at that point, my opinion is there's no need for the extra dynamic range that 24-bit offers.
> 
> For example, a quiet guitar sample when normalized turns into a very loud guitar sample, so in the mapping editor (or in the scripting), that sample gets volume-reduced by 30 or 40 dB so that it will sound right. So the _effective_ dynamic range of that guitar zone is 30 or 40 dB _plus_ whatever the dynamic range of 16 bits would be. That's plenty.
> 
> ...



In the case of *time-stretched* audio, would the bit depth make any difference in the sound? Would there be a difference if the sample rate is always the same, i.e. 24/44.1 vs. 16/44.1?


----------



## Willowtree (Oct 19, 2019)

vitocorleone123 said:


> If you’re adding effects to the library, especially 3rd party, I’d still prefer and recommend 24bit. Well-recorded samples you aren’t really manipulating but, rather, are composing? 16bit should be fine, I’d think.


I agree, though I still sometimes get an irrational, uncomfortable feeling working with 16-bit recordings. Bias, preconceptions and assumptions ... They're commonplace in the audio world, and we're all vulnerable to them.

Even then, I would argue that for 90% of work (when working with sample libraries), the benefit of 24-bit is questionable, if it exists at all. For anything that involves significant gain or needs enormous headroom, 24-bit is clearly superior, even if the benefit may not be as significant as we'd like it to be.



fixxer49 said:


> In the case of *time-stretched* audio, would the bit depth make any difference in the sound? Would there be a difference if the sample rate is always the same, i.e. 24/44.1 vs. 16/44.1?


I'm not Mike Greene (obviously), but I don't think there would be any difference. Sample rate would be more important for this. If bit depth is vertical (the bits used to describe each sample), sample rate is horizontal (the playback rate of the samples). So the information in each sample isn't going to matter much, but the _rate_ of those samples will if you're time-stretching.

But someone correct me if I'm wrong.


----------



## Tanuj Tiku (Oct 19, 2019)

DSmolken said:


> I've done a bit of testing yesterday, and I can hear a difference between 16 and 24 bit samples in scenarios where there's a lot of gain in the signal path (compression with +36 dB gain total, or a heavily distorted amp sim), and where a sample rings out completely and fades to silence with no other sounds audible.
> 
> So, let's say a quiet but compressed to hell pizz note is the last sound before a complete silence in a piece of music, and the sample itself fades away to silence, using a volume envelope applied to the WAV file before it goes into the sampler. The noise in the tail fades out audibly smoother with 24 bits, and with 16 bits it hisses for a bit, then ends suddenly.
> 
> So, there's some benefit in at least that one scenario, but is it worth using up 50% more diskspace and loading time, and twice the RAM? Or should I just convert everything to 16 bit after I'm done with all the editing?



My understanding is this:

Any audio process should either make an immediate sonic difference (to the ears, not just on paper) that can actually be heard in the outside world to some degree, or contribute to the cumulative effect of a technique overall. So, in that sense, things like 24-bit sources are great, and recording at 24-bit is also great.

But you really do need a good acoustic environment with blind tests to make a decision on most of these types of things. Technically, many things make a difference. 

It also very much depends on the dominant playback system for your material. You do need to keep that in mind.

For example, I have a speaker system that is very high resolution and goes down to 20Hz. The transparency is great, and I did a piece of music for a film which had this very low C. The sub sounded amazing in my room, but at the dub stage we had quite a surprise: the system just wasn't able to reproduce those extremely low frequencies at the volume and distance required. In effect, every time the low C kicked in, it sounded like a black hole had sucked the life out of it.

And we are talking cinema mixing! And so, the final mixing engineer had to make some adjustments. Maybe some cinemas would be able to reproduce it, but leaving it in would have been an unwise and risky move, as the audible difference between the low frequencies of the other notes and the low C was so great. Many cinemas would never be able to achieve that. Forget Netflix, laptops etc.!

So, yeah that amazing bass patch I made in Diva was indeed very powerful but if most of the world cannot hear it, then I better make some adjustments!


----------

