# Composing in 96k, Summing at 192k, Bouncing at 48k?



## Prockamanisc (Apr 24, 2018)

Does anyone know much about sample rates? 

Here's my setup: 

Tons of VI's running in Cubase, the project is set at 96k. All tracks get grouped down into pairs of stereo outputs, which go out externally into a summing unit. That summing unit goes back into a Burl B2, which is converting at 192k, and then gets recorded back as an audio track at 96k. Then I bounce out what I recorded as the deliverable, and I bounce it out at 48k. 

Is this a good way to work? I figure that capturing the analog summing back in at 192k would be beneficial, even though it's being recorded back in at a lower rate. Am I correct in this assumption, or am I missing something?


----------



## robgb (Apr 24, 2018)

Snarky comment removed...
Just record in 48k. You'll be fine. Most of your targets will likely be listening in lower resolution anyway.


----------



## Alex Fraser (Apr 24, 2018)

First time I scored a short film, I sat in the theatre on opening night anxiously awaiting my "moment" in the score - an amazing string crescendo as the camera rose above the trees.
When the moment came, the dubbing guy had buried all my hard work under some tractor engine sound fx.
There's probably a lesson in there somewhere.

You're obviously working at a higher skill level than I am.. but seeing as you're asking.. it all sounds like a lot of work. The next logical step is to bounce straight to 48k within the DAW, bounce another version via your mastering chain, and see if you can hear the difference.

I smell a 30 page argument incoming..


----------



## Jeremy Spencer (Apr 24, 2018)

robgb said:


> Snarky comment removed...
> Just record in 48k. You'll be fine. Most of your targets will likely be listening in lower resolution anyway.



I agree, to me it seems pointless to work in anything higher...let alone 96k.

@Alex Fraser that is a hilarious story! Almost exactly what happened to me once. My music never seems as "epic" as it does when I'm scoring it, and in the final cut, it's usually at such a low level that it's hardly noticeable in most scenes.


----------



## LinusW (Apr 24, 2018)

Prockamanisc said:


> ...into a Burl B2 which is converting at 192k, which then gets recorded back as an audio track in 96k


No, you would either set the Burl ADC sample rate to 192 kHz and record that 192 kHz signal - or set the Burl ADC to 96 kHz and record that. 

There is no point in using a 192 kHz ADC and then resampling to 96 kHz on your computer. First, it adds another time-consuming step. Second, sample-rate conversion in your DAW may add aliasing, and you could end up with worse audio than if your ADC had used 96 kHz in the first place. 
Also, your 96 kHz score would probably end up in a 48 kHz project anyway, so it would be resampled twice. 

Finally, which interface is being used by Cubase? Were you thinking of recording the 2 ch mixdown (AES/EBU signal from Burl) into the same interface+computer+Cubase? Or using another interface?
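The aliasing risk from a sloppy sample-rate conversion can be sketched in a few lines of numpy. This is a toy illustration, not anything Cubase- or Burl-specific; the 60 kHz test tone and the 255-tap windowed-sinc filter are my own assumptions for the demo:

```python
import numpy as np

fs_high, fs_low = 192_000, 96_000
t = np.arange(fs_high) / fs_high            # 1 second at 192 kHz
tone = np.sin(2 * np.pi * 60_000 * t)       # 60 kHz: inaudible, but above 96k's 48 kHz Nyquist

naive = tone[::2]                           # 2:1 decimation with no anti-alias filter

# Proper SRC: low-pass at the new Nyquist first (windowed-sinc FIR), then decimate.
taps = 255
m = np.arange(taps) - (taps - 1) / 2
h = np.sinc(m * (fs_low / fs_high)) * np.hamming(taps) * (fs_low / fs_high)
proper = np.convolve(tone, h, mode="same")[::2]

def peak_hz(x, fs):
    spec = np.abs(np.fft.rfft(x))
    return np.argmax(spec) * fs / len(x)

print(peak_hz(naive, fs_low))               # 36000.0 -> the 60 kHz tone folded to 96k - 60k
ratio = np.abs(np.fft.rfft(proper)).max() / np.abs(np.fft.rfft(naive)).max()
print(ratio < 0.05)                         # True -> the filtered path leaves no strong alias
```

The point: a careless conversion stage doesn't just lose ultrasonics, it manufactures a brand-new in-band tone that was never in the signal. A good converter (or good SRC code) always low-passes before decimating.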


----------



## JohnG (Apr 24, 2018)

Unless you have some original source material recorded at a resolution above 48k, I would think there is no benefit, and certainly no audible benefit, of working at a higher resolution.

I have heard people debate this for a long time. Here's a white paper on it by someone more knowledgeable than most: http://www.lavryengineering.com/pdfs/lavry-white-paper-the_optimal_sample_rate_for_quality_audio.pdf

It reads, in part:

"...high quality audio converters operating at sample rates no higher than 96 KHz offer results that are very close to the desired theoretical limits. Yet, there are many who subscribe to the false notion that operating above the optimal sample rate can improve the audio. The truth is that there is an optimal sample rate, and that operating above that optimal sample rate compromises the accuracy of audio. To some, this may seem counterintuitive, but is completely proven; whereas most supporters of higher than optimum sample rates offer only subjective results in support." Dan Lavry et al.


----------



## Tfis (Apr 24, 2018)

192k is so 90s.

Everything below 384k is lofi.


----------



## gsilbers (Apr 24, 2018)

Prockamanisc said:


> Does anyone know much about sample rates?
> 
> Here's my setup:
> 
> ...



You won't get any benefit, and it's overcomplicating things. In audio post production, the last thing you want is sample rate issues. Keep it at 48k. The only time you might want anything more "hi-fi" would be when you're recording a string session or real instruments, and once recorded you convert to 48k/24-bit. And THAT'S even too much. 
48k is more than enough.


----------



## Divico (Apr 24, 2018)

gsilbers said:


> You won't get any benefit, and it's overcomplicating things. In audio post production, the last thing you want is sample rate issues. Keep it at 48k. The only time you might want anything more "hi-fi" would be when you're recording a string session or real instruments, and once recorded you convert to 48k/24-bit. And THAT'S even too much.
> 48k is more than enough.




Not necessarily. Some people set their project rate higher for upsampling plugins (although most processes vulnerable to aliasing should upsample internally anyway). Some also claim that reverbs rendered at higher sample rates sound better. I haven't A/B'd that, though. 
I don't think high sample rates are necessary. And if you go up, don't go down again until the final export.


----------



## Light and Sound (Apr 24, 2018)

I'd ask why you're using 96k, or 192k? Are you doing heavy sound design work along the way? It might be worth taking a look at why 48k is a standard, even though we can output at 96k very easily and without issue.

48k was landed on (after 44.1, of course) because of the Nyquist frequency (https://en.wikipedia.org/wiki/Nyquist_frequency): sampling at 48 kHz captures everything up to a 24 kHz Nyquist frequency, which already covers everything a human can hear. At 96k the extra bandwidth you gain sits WAY outside human hearing. It's handy to keep that information before summing *if* you're planning heavy editing of the source audio (ie massive detuning, denoising or whatever for sound effects or sampling, common for those big low "booms"), but you only need it on the _source_ audio. The final output doesn't need that information: you detune it or do whatever you need to do, then sum it down to 48k, and the once-ultrasonic content now sits within human hearing.

This is just *one* small look at why 48k is considered "optimal" (and I use that term very loosely, as it's always up for debate among audiophiles), but it's a good fundamental look at one of the basic principles of how 48k became a standard.
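The folding behaviour behind the Nyquist limit can be captured in a tiny helper. `alias_of` is a hypothetical function written just for illustration: it reports where an unfiltered tone lands after sampling.

```python
def alias_of(f_in: float, fs: float) -> float:
    """Where a tone at f_in lands after sampling at rate fs with no anti-alias filter."""
    f = f_in % fs
    return f if f <= fs / 2 else fs - f    # frequencies past Nyquist fold back down

print(alias_of(21_000, 44_100))  # 21000 -> below the 22.05 kHz Nyquist, captured as-is
print(alias_of(30_000, 48_000))  # 18000 -> above Nyquist, folds down to 48000 - 30000
```

This is exactly why detuning high-rate source material works: content parked above 24 kHz in a 96k recording becomes legitimate sub-Nyquist content once it's been pitched down into range.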


----------



## Gerhard Westphalen (Apr 24, 2018)

If your VI's are all 48k then why do this? A lot of scores are mixed at 96k (because the studio requires it) but then everything was recorded at 96k and the synth masters are still at 48k.

Why take the huge performance hit? I would just use oversampling on select plugins. Theoretically if you're going out and back in that could help removing any artifacts introduced by the antialiasing filters.

Why not try out the setup at various sample rates and decide for yourself whether its worth doing?


----------



## Prockamanisc (Apr 24, 2018)

Spitfire is recorded at 96k, plugins supposedly work better at 96k, and this video sounds viscerally better at 96k. Having both played in and conducted real orchestras, it's hard to be completely happy with the sound I'm getting out of my DAW, and this is my latest attempt to close the gap.


----------



## Jeremy Spencer (Apr 24, 2018)

Sorry man, there's no way you can hear the difference between 48 and 96.


----------



## Light and Sound (Apr 24, 2018)

Prockamanisc said:


> Spitfire is recorded at 96k, plugins supposedly work better at 96k, and this video sounds viscerally better at 96k. Having both played in and conducted real orchestras, it's hard to be completely happy with the sound I'm getting out of my DAW, and this is my latest attempt to close the gap.



Their output is 48k though. They record at 96k (just like us) but deliver in 48k.


----------



## SillyMidOn (Apr 24, 2018)

Alex Fraser said:


> First time I scored a short film, I sat in the theatre on opening night anxiously awaiting my "moment" in the score - an amazing string crescendo as the camera rose above the trees.
> When the moment came, the dubbing guy had buried all my hard work under some tractor engine sound fx.
> There's probably a lesson in there somewhere.
> 
> ...


I had a placement in a US TV show, where my music is playing on the radio as an elderly lady and her son are driving in a convertible with the roof down. They're having a conversation, but after about 10 seconds the mum says: 

"Can we turn the radio off, please?"


----------



## Divico (Apr 24, 2018)

Interesting opinion about distortion introduced by ultrasound when using high sample rates
http://productionadvice.co.uk/high-sample-rates-make-your-music-sound-worse/
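The linked article's core claim (ultrasonic content intermodulating down into the audible band through a nonlinear playback chain) can be sketched in numpy. The tone frequencies and the quadratic nonlinearity below are arbitrary stand-ins, not measurements of any real amp:

```python
import numpy as np

fs = 192_000
t = np.arange(fs) / fs                     # 1 second, so rfft bins land on whole Hz
ultra = np.sin(2 * np.pi * 24_000 * t) + np.sin(2 * np.pi * 27_000 * t)  # both ultrasonic

# An asymmetric (even-order) nonlinearity, standing in for an amp/tweeter driven
# by content it was never designed to reproduce:
driven = ultra + 0.2 * ultra ** 2

spec = np.abs(np.fft.rfft(driven))
print(spec[3_000] > 1_000 * spec[2_500])   # True: a 27k - 24k = 3 kHz difference tone appears
```

Neither input tone is audible, but their intermodulation product at 3 kHz lands squarely in the most sensitive part of our hearing, which is the article's argument for low-passing ultrasonics out before playback.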


----------



## AlexRuger (Apr 24, 2018)

There is no scientific reason to work above 44.1. Film is 48 due to convention (easily divisible by 24 frames per second), but going above 48 makes zero sense unless you're going to be pitching it down later, in which case, yes, 96 or whatnot is a good idea.

You're also causing your rig to work *much* harder, so you're losing out on tons of CPU headroom.

Don't work how you are. Work at 48 if you're in film, 44.1 if you're not. End of story.
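The two concrete claims above (48k's film-friendly arithmetic, and the pitch-down exception) reduce to a few lines of plain Python; the numbers here are just the standard rates from the post:

```python
fps = 24
for fs in (44_100, 48_000):
    print(fs / fps)          # 1837.5 then 2000.0 -> only 48k gives whole samples per film frame

# Pitching down one octave halves every frequency, so ultrasonic content captured
# at 96 kHz (up to its 48 kHz Nyquist) slides down into the audible band:
print((96_000 / 2) / 2)      # 24000.0 -> the former 48 kHz ceiling is now 24 kHz
```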


----------



## Divico (Apr 24, 2018)

AlexRuger said:


> There is no scientific reason to work above 44.1. Film is 48 due to convention (easily divisible by 24 frames per second), but going above 48 makes zero sense unless you're going to be pitching it down later, in which case, yes, 96 or whatnot is a good idea.
> 
> You're also causing your rig to work *much* harder, so you're losing out on tons of CPU headroom.
> 
> Don't work how you are. Work at 48 if you're in film, 44.1 if you're not. End of story.



Well, when it comes to recording and non-linear processing, that's not true. The idea behind higher sample rates is to allow less steep anti-alias filters and thus get fewer artifacts out of them. 

44.1 kHz is definitely enough to grab the whole audible spectrum (unless you are a super infant and hear even higher than 22k). That's why it's our common standard. It's just the arbitrarily chosen sample rate that covers our whole frequency spectrum; we need more than twice the sample rate to reproduce a given frequency.
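The filter-steepness argument can be made concrete with fred harris's rule-of-thumb estimate for FIR filter length (taps ≈ rate divided by transition width, times attenuation over 22). The 90 dB attenuation target and the 20 kHz protected passband are assumptions chosen for illustration:

```python
def fir_taps_estimate(fs: float, f_pass: float, f_stop: float, atten_db: float = 90.0) -> int:
    """fred harris rule of thumb: taps ~ (fs / transition width) * (atten_db / 22)."""
    return round(fs / (f_stop - f_pass) * (atten_db / 22))

# Protect audio up to 20 kHz, stop everything at the respective Nyquist:
print(fir_taps_estimate(44_100, 20_000, 22_050))  # 88 -> a brutally steep filter
print(fir_taps_estimate(96_000, 20_000, 48_000))  # 14 -> a short, gentle, forgiving filter
```

Roughly an order of magnitude less filtering work at 96k for the same audible-band protection, which is exactly why the anti-alias stage gets easier (and less artifact-prone) at higher rates.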


----------



## NoamL (Apr 24, 2018)

The only time I ever worked with 96k was on a film where we had sampled rhythmic elements and the composer wanted to radically timestretch & tune them to match different tempos and keys. Even for that, we created a 96k resampling session to bounce out material at the demanded tempo & pitch, and the actual VI sessions were still in 48k.


----------



## robgb (Apr 24, 2018)

Divico said:


> Some also claim that reverbs rendered at higher sample rates sound better.


Some claim that Monster cables with gold tips are far superior to regular cables, too...


----------



## Divico (Apr 24, 2018)

NoamL said:


> The only time I ever worked with 96k was on a film where we had sampled rhythmic elements and the composer wanted to radically timestretch & tune them to match different tempos and keys. Even for that, we created a 96k resampling session to bounce out material at the demanded tempo & pitch, and the actual VI sessions were still in 48k.


I doubt that resampling from 48 kHz to 96 kHz will give any benefit when it comes to post-processing.


----------



## JohnG (Apr 24, 2018)

robgb said:


> Some claim that Monster cables with gold tips are far superior to regular cables, too...



lol but they're shiny!

Actually -- for full confession -- I used to scoff at people who spent more than $2.50 on speaker (and most other) cables. Then I got Mogami and Canare and...

...became a believer. I practically fell off my chair at what a difference it made.

Mind you, this was after substantial upgrades to everything else in the chain -- D/A converter, amp and speakers.


----------



## AlexRuger (Apr 24, 2018)

Divico said:


> I doubt that resampling from 48 kHz to 96 kHz will give any benefit when it comes to post-processing.


Try reading his post again.


----------



## AlexRuger (Apr 24, 2018)

Divico said:


> Well, when it comes to recording and non-linear processing, that's not true. The idea behind higher sample rates is to allow less steep anti-alias filters and thus get fewer artifacts out of them.
> 
> 44.1 kHz is definitely enough to grab the whole audible spectrum (unless you are a super infant and hear even higher than 22k). That's why it's our common standard. It's just the arbitrarily chosen sample rate that covers our whole frequency spectrum; we need more than twice the sample rate to reproduce a given frequency.



*sigh*

This thread is just going to be filled with pedantic people agreeing with each other, but with a tone of disagreement so that they can show how much they gleaned from the one YouTube video they saw on the subject, right? In that case, forget I was here, I'm out.


----------



## NoamL (Apr 24, 2018)

Divico said:


> I doubt that resampling from 48 kHz to 96 kHz will give any benefit when it comes to post-processing.



I probably explained poorly - the original acoustic rhythmic elements were recorded in 96k, we timestretched & pitched them in a 96k session, then bounced "down" to 48k so we could use them in 48k scoring sessions.


----------



## jcrosby (Apr 25, 2018)

Divico said:


> I doubt that resampling from 48 kHz to 96 kHz will give any benefit when it comes to post-processing.


It won't; all it does is add twice as many data points of empty information. Just like you can't turn a low-res mp3/lossy file into a 24-bit uncompressed audio file, you can't magically add information back if it wasn't there in the first place... Certainly not with technology as we currently know it...


----------



## Divico (Apr 25, 2018)

jcrosby said:


> It won't, all it does is add twice as many data points of empty information. Just like you can't turn a low res mp3/lossy file into a 24 bit uncompressed audio file you can't magically add information back that's either been thrown away or wasn't there in the first place... At least not with technology as we currently know it...


I know that. I misunderstood Noam's comment; that's why I wrote this.


----------



## VinRice (Apr 28, 2018)

AlexRuger said:


> There is no scientific reason to work above 44.1


 
Wrong. Can you really not hear a difference between 48 and 44.1? I certainly can and my hearing stops at 11.5kHz. It's not just about bandwidth.


----------



## Gerhard Westphalen (Apr 28, 2018)

VinRice said:


> Wrong. Can you really not hear a difference between 48 and 44.1? I certainly can and my hearing stops at 11.5kHz. It's not just about bandwidth.


I'm not commenting on whether there is an audible difference, but it is only about bandwidth. People who say that there is higher temporal resolution are wrong; it's an easy-to-misunderstand part of sampling. Keep in mind that converters, clocks, and other gear behave differently at different sample rates, so it's pretty difficult to properly compare.
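The temporal-resolution misconception is easy to test: a band-limited impulse delayed by a fraction of a sample period is still perfectly representable, because the sub-sample delay changes every sample value. A small numpy sketch (the 512-sample window and the 0.1-sample shift are arbitrary choices for the demo):

```python
import numpy as np

n = np.arange(-256, 256)

# Two band-limited impulses, the second delayed by a tenth of a sample period
# (about 2.3 microseconds at 44.1 kHz). Both are exactly representable:
a = np.sinc(n)
b = np.sinc(n - 0.1)

print(np.max(np.abs(a - b)) > 0.05)   # True: the tiny shift plainly alters the sample values
```

Timing precision in a sampled system is limited by bandwidth and noise, not by the spacing of the sample grid, so raising the rate doesn't buy "finer timing" for in-band signals.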


----------



## AlexRuger (Apr 28, 2018)

VinRice said:


> Wrong. Can you really not hear a difference between 48 and 44.1? I certainly can and my hearing stops at 11.5kHz. It's not just about bandwidth.



I'm not going to argue about the color of the sky with someone who's colorblind.


----------



## yhomas (Apr 28, 2018)

I work as an EE, and having dealt with some of these issues first-hand (not in audio), there certainly is a benefit to enormous oversampling, because it makes analog/digital filter design trivially easy (and less likely to mess up). For example, in one project the digital data rate that we store is 500 kHz, but we sample at 20 MHz. We sample at the higher rate to accurately capture high-frequency noise sources that the analog filter isn't good enough to remove.

If everything is perfectly done, just stay at 24-bit/48k. Few if any ears on the planet can actually tell the difference between (properly dithered) 16-bit/44.1k and 24-bit/96k.

However, any DSP math _will_ degrade the quality of the data. Even a simple gain change causes a loss of information (except in special cases). This is why audio engines/plugins generally do internal processing at higher bit depths and sample rates before a final conversion back to the appropriate resolution. 

By forcing the project to a higher sample rate, you are buying insurance against defective/subpar plugin/synth math. So there can be no doubt that in real-world usage there really are instances where 96 kHz/384 kHz sounds better than 48 kHz; but in the typical case, using respectable equipment/software, I'd also imagine that one would struggle to contrive a situation where any difference was discernible in blind listening. 

Bottom line: it doesn't matter a lot, but it's an interesting conversation for nerds.
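The "even a simple gain change loses information" point can be demonstrated with a toy fixed-point round trip. The 16-bit quantizer and the 12 dB gain figure are illustrative assumptions, not a model of any particular DSP engine:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 10_000)         # a random test signal

def q16(v):
    """Round-trip a float signal through 16-bit integer sample values."""
    return np.round(v * 32767) / 32767

g = 10 ** (-12 / 20)                       # a 12 dB gain reduction
y = q16(q16(x * g) / g)                    # fixed-point style: quantize after every step

err = np.max(np.abs(y - q16(x)))
print(err > 1 / 32767)                     # True: low-order bits were destroyed for good
```

Run the same gain down/up in plain 64-bit floats and the signal comes back essentially bit-exact, which is why modern engines process at high internal precision and only quantize once, at the end.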


----------



## VinRice (Apr 29, 2018)

AlexRuger said:


> I'm not going to argue about the color of the sky with someone who's colorblind.


 ...but you could still discuss the shape of the clouds and the sharpness of their outline...


----------



## blougui (Apr 29, 2018)

Prockamanisc said:


> Does anyone know much about sample rates?
> 
> Here's my setup:
> 
> ...



Yeah: you're missing a big chunk of money you could have spent well somewhere else.


----------



## Prockamanisc (Apr 29, 2018)

I...just don't know where this thread went. This is the argument that most people seem to be making in this thread: 

Premise 1) The human hearing range stops at ~20kHz. 
Premise 2) Sampling above 44.1kHz is pointless because we can't hear anything beyond that.
Premise 3) The only two premises that matter are premises 1 and 2. 
Premise 4) Anything that would show me otherwise is invalid because premise 3 says that it doesn't.
Premise 5) This video doesn't prove anything, even if it does (it does). 

The question is: what do we do with this information? What's happening within our audible range that's somehow affected by recording in 96kHz and above?


----------



## Erick - BVA (Apr 29, 2018)

Prockamanisc said:


> Does anyone know much about sample rates?
> 
> Here's my setup:
> 
> ...


Seems like far more work than it's worth. 
When I'm sampling for a library, that is the only time I'll go above 48 kHz. I'll record at 96 kHz and downsample to 48 kHz or 44.1 kHz by the time the instrument is done. 
When recording songs or music tracks I never go above 44.1 kHz or 48 kHz. It simply causes too much latency, and the supposed benefits in sound quality are either marginally noticeable or probably due to some kind of placebo effect.
I'd like to see a comprehensive blind study on this with hundreds of participants. Without a high number of participants, you could easily attribute the differences to subjectivity or chance.


----------



## Erick - BVA (Apr 29, 2018)

Prockamanisc said:


> I...just don't know where this thread went. This is the argument that most people seem to be making in this thread:
> 
> Premise 1) The human hearing range stops at ~20kHz.
> Premise 2) Sampling above 44.1kHz is pointless because we can't hear anything beyond that.
> ...




The video doesn't prove anything. There is no way to legitimately test the difference in sound quality this way. It's too subjective and leading -- we know which sound is which, and we expect to hear certain things. You need a blind test with lots of people.


----------



## Erick - BVA (Apr 29, 2018)

My hunch is that simply recording at a higher sample rate changes some of the harmonic characteristics, which then gives a different "quality" to the sound -- not necessarily better or worse. The higher frequencies (not audible to us) are interacting with lower frequencies (which are audible to us). So the higher the rate you record at, the more changes in harmonic interactions (however subtle they may be). But that could be total bs. I'm just going on a hunch based on my limited knowledge of auditory illusions.


----------



## Prockamanisc (Apr 29, 2018)

The second half of that video is so much richer than the first. Do you hear it? I wouldn't even call it subjective. Objectivity is whether it's there or not. Subjectivity is whether we like it or not. 

Objectively, it's there. Subjectively, I love it.


----------



## Prockamanisc (Apr 29, 2018)

Sibelius19 said:


> My hunch is that simply recording at a higher sample rate changes some of the harmonic characteristics, which then gives a different "quality" to the sound -- not necessarily better or worse. The higher frequencies (not audible to us) are interacting with lower frequencies (which are audible to us). So the higher the rate you record at, the more changes in harmonic interactions (however subtle they may be). But that could be total bs. I'm just going on a hunch based on my limited knowledge of auditory illusions.


That's exactly what my hunch is, but I don't have any science to back it up.


----------



## Prockamanisc (Apr 29, 2018)

As a (probably) completely false thought experiment, I'd also put this forward (just for fun): Speakers can only handle so much, so they distribute the freq. spectrum equally across all ranges. By inserting freqs above the normal cutoff range, it re-balances the speaker output to capture more of the high end. Or something like that.


----------



## Prockamanisc (Apr 29, 2018)

Light and Sound said:


> Their output is 48k though. They record at 96k (just like us) but deliver in 48k.


Yeah, that's fine. I deliver in 48k. And that video, which was recorded in 96k, sounds ostensibly better than the one in 48k, even though it's delivered at 48k. Even downsampling will keep some of the extra that's in there. But what's in there that we're hearing?


----------



## yhomas (Apr 29, 2018)

Prockamanisc said:


> I...just don't know where this thread went. This is the argument that most people seem to be making in this thread:
> 
> Premise 1) The human hearing range stops at ~20kHz.
> Premise 2) Sampling above 44.1kHz is pointless because we can't hear anything beyond that.
> ...




Math errors are the most reasonable explanation. People tend to assume that everything is “perfect”, but it is quite easy to degrade audio with improper processing—but this was much more the case in the days of 32-bit processing.


----------



## Prockamanisc (Apr 29, 2018)

That video doesn't sound random, though (as I assume the math errors would be). It sounds much richer and fatter, the way I know real synths to be. To me, that video is showing that recording at a higher sample rate gets the sound closer to reality.


----------



## jcrosby (Apr 29, 2018)

Prockamanisc said:


> The second half of that video is so much richer than the first. Do you hear it? I wouldn't even call it subjective. Objectivity is whether it's there or not. Subjectivity is whether we like it or not.
> 
> Objectively, it's there. Subjectively, I love it.



You realize you're listening to a downsampled version that has also been converted to a lossy format, no?
This reminds me of optical illusions that trick your brain into thinking one object is larger than the other when in fact they are identical in size...

It also brings to mind the McGurk effect which illustrates that our auditory perception can be completely altered by what we see. Regardless of whether you "think" it is or isn't true, visual bias is evolutionarily hardwired into us... Vision pretty much always wins in fMRI research...




Downsampling also creates artifacts in the audible range, and not all DAWs are equal. Ironically, Ableton Live (which I have a love-hate relationship with) has SRC on par with iZotope's, whereas Sadie 6 (a mastering DAW with a hefty price tag) shows severely audible garbage when downsampling. Other DAWs -- DAWs that people swear up and down have a supremely better sound engine -- produce visible and/or audible artifacts downsampling 96k to 44.1.

Link below, see how your DAW stacks up in terms of downsampling: http://src.infinitewave.ca/


----------



## Alex Fraser (Apr 30, 2018)

jcrosby said:


> It also brings to mind the McGurk effect which illustrates that our auditory perception can be completely altered by what we see. Regardless of whether you "think" it is or isn't true, visual bias is evolutionarily hardwired into us... Vision pretty much always wins in fMRI research...


Really interesting. More than once I've started working with a plugin which is placed on a different track to the one I believe it to be on. And it takes me about 30 secs to realise the mistake, as at the time, I'm "hearing" the effects.
Glad I'm not going mad anyway..


----------



## jcrosby (Apr 30, 2018)

Alex Fraser said:


> Really interesting. More than once I've started working with a plugin which is placed on a different track to the one I believe it to be on. And it takes me about 30 secs to realise the mistake, as at the time, I'm "hearing" the effects.
> Glad I'm not going mad anyway..



We've all done this


----------



## Erick - BVA (Apr 30, 2018)

jcrosby said:


> We've all done this


And I thought I was the only one!


----------



## Prockamanisc (Apr 30, 2018)

Alex Fraser said:


> Really interesting. More than once I've started working with a plugin which is placed on a different track to the one I believe it to be on. And it takes me about 30 secs to realise the mistake, as at the time, I'm "hearing" the effects.
> Glad I'm not going mad anyway..


That used to happen to me, until I realized that was the thing, so now I always let my ears be the guide. So now the reverse happens: I'll put a plugin on the wrong track and tweak it. When I don't hear anything changing, I start to say to myself, "I spent all that money on this plugin and it doesn't do anything?" Then I realize my mistake and adjust.

For instance, when I was just watching that McGurk video, I kept saying to myself, "That's weird, that doesn't really sound like an F. It just sounds like what I was just hearing... I wonder if they used a bad mic that didn't pick up the air that escapes between the lip and the teeth just before the plosive." And then they explained it, and it made sense what they were doing.


----------



## robgb (Apr 30, 2018)

jcrosby said:


> It also brings to mind the McGurk effect which illustrates that our auditory perception can be completely altered by what we see


I wonder if this comes into play when we're watching sample library demos. What if I were to do a video showing a Spitfire instrument in action in Kontakt, but replace the sound with what many consider an inferior product (pick your poison)? Would people be influenced by the Spitfire name and think it's the greatest thing since sliced bread? Or would they be able to recognize the sound they had called "synthetic" the month before, when it had a different developer's label on it?


----------



## yhomas (Apr 30, 2018)

Prockamanisc said:


> That video doesn't sound random, though (as I assume the math errors would be). It sounds much richer and fatter, the way I know real synths to be. To me, that video is showing that recording at a higher sample rate gets the sound closer to reality.



There are all kinds of math errors that don't sound random. But IMO, a single pass of a good-quality (properly implemented) 48k vs 96k recording (of anything), played back at 48 kHz, isn't going to show differences that anyone can hear in blind testing. So any audible differences represent some gross error in the test methodology or in the hardware/software implementation. But IMO, errors are much more common than people think.


----------



## Josh Richman (Apr 30, 2018)

Just to get some clarity. So are you guys typically working at 44.1k or 48k?


----------



## Nick Batzdorf (Apr 30, 2018)

Prockamanisc said:


> Tons of VI's



Whoa whoa whoa - let's stop right there.

Can you hear any difference working at 96K - sacrificing half your computer system's performance - rather than 48 or 44.1? These are VIs, not acoustic recordings.

My very, very strong hunch is no.

And I'm not an audio skeptic. (For example, I've heard the difference with some tweak audio cables, and I do believe they can be a legitimate improvement when you're carrying analog signals.)


----------



## Nick Batzdorf (Apr 30, 2018)

Divico said:


> Not necessarily. Some people set their project rate higher for upsampling plugins (although most processes vulnerable to aliasing should upsample internally anyway). Some also claim that reverbs rendered at higher sample rates sound better. I haven't A/B'd that, though.
> I don't think high sample rates are necessary. And if you go up, don't go down again until the final export.



^ Divico knows wassup. That right there is the answer.


----------



## Nick Batzdorf (Apr 30, 2018)

AlexRuger said:


> *sigh*
> 
> This thread is just going to be filled with pedantic people agreeing with each other, but with a tone of disagreement so that they can show how much they gleaned from the one YouTube video they saw on the subject, right? In that case, forget I was here, I'm out.



It's also worth pointing out that Divico is absolutely right!

But look, this is a question for your ears rather than opinions. Prockman, you need to listen yourself.


----------



## jcrosby (Apr 30, 2018)

robgb said:


> I wonder if this comes into play when we're watching sample library demos. What if I were to do a video showing a Spitfire instrument in action in Kontakt, but replace the sound with what many consider an inferior product (pick your poison)? Would people be influenced by the Spitfire name and think it's the greatest thing since sliced bread? Or would they be able to recognize the sound they had called "synthetic" the month before, when it had a different developer's label on it?



Although I don't know for sure, it certainly wouldn't surprise me if this were the case...

To me the McGurk Effect suggests a few things...
A. Our brains evolved to prioritize visual cues as a means of survival, and probably as an adaptation for social bonding.
B. It also seems to suggest that expectations alone might have the capability to modify our perception of what we hear...
I.e. if something as simple as seeing the lip curled under to form an "f" hacks your brain into 'hearing' an f, I'd imagine it's just as easy to be led by any number of other factors: the room and equipment in the room, the user interface, etc.
I'd imagine even what someone is saying in a walkthrough can influence your perception of sound.

Speaking of which there's a great EDM mastering series where they basically discuss and demonstrate all of these things... Not only is it a solid, no frills mastering series, it's great fun as well... (And useful across all genres despite being EDM centric.)... The series is actually where I picked up on this video... If anyone's interested post back and I'd be happy to paste a link...


----------



## Divico (Apr 30, 2018)

And don't forget to do proper blind testing. 
I once had an argument with my dad about high-res mp3s being worse than wavs. Like audibly worse.
He blind-tested me. In the end he played me the same file three times, and I was like, "Yeah, this one's definitely the wav, and here we have the mp3." Our ears can be fooled so easily.
And yes, I can hear the difference on a good system.


----------



## JohnG (Apr 30, 2018)

Josh Richman said:


> Just to get some clarity. So are you guys typically working at 44.1k or 48k?



I've met some Real Famous Composers who work at 44.1 for "regular" composing (not live orchestra -- when they are doing stuff based on samples) and upsample at the end to whatever delivery format is needed.

Most guys seem to record live orchestra for films at 96k and work in that until final delivery (if final delivery is 48). I don't know anyone who does full orchestra for TV but I would be surprised if they bother with 96 -- maybe they do so I'm not condemning it.

I work at 48k all the time, ignoring the fact that nearly all my samples were recorded at 44.1. Some people have told me that's crazy, but plenty of guys do it.

So how's that for a fuzzy answer?


----------



## Divico (Apr 30, 2018)

Unless you're rendering and doing destructive edits, there is no wrong choice. Keep in mind to render at 48 for film and you're good to go. I doubt that the difference between 44.1 and 48 is crucial for composing, and probably also for mixing.
Some claim that reverbs sound better at higher sample rates, so if you can hear the difference, go with a higher one for mixing and rendering and downsample afterwards.
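If you do mix at a higher rate and deliver at 48k, the downsample itself is cheap and clean, especially when the ratio is a simple integer like 96k to 48k. A sketch using SciPy's polyphase resampler (assumes numpy/scipy are installed; the sine stands in for a mixed cue):

```python
import numpy as np
from scipy.signal import resample_poly

sr_mix, sr_out = 96_000, 48_000
t = np.arange(sr_mix) / sr_mix              # one second at 96k
audio = np.sin(2 * np.pi * 440.0 * t)       # stand-in for a mixed cue

# 96k -> 48k is exactly 2:1 (up=1, down=2). resample_poly applies its
# own anti-alias low-pass before decimating, so nothing folds back.
delivered = resample_poly(audio, up=1, down=2)
```

In practice your DAW's export dialog does this same kind of filtered decimation when you bounce to 48k.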


----------



## robgb (Apr 30, 2018)

jcrosby said:


> I'd imagine even what someone is saying in a walkthrough can influence your perception of sound.


You may be onto something there. I'm reminded of 8Dio and Spitfire walkthroughs where they often talk about the "beauty" of the sound or describe its "silkiness" or whatever. These things may or may not be true, but we tend to want to believe them.


----------



## Divico (May 1, 2018)

omiroad said:


> That video has its audio encoded with the opus codec, which only encodes frequencies up to 20kHz.


I highly doubt that frequencies above 20k make a difference for us adults. Maybe for young Mozarts they do 
As I've written earlier, the point of those high sample rates is not to capture ultrasound but a technical one: mainly the anti-alias filtering.


----------



## Nick Batzdorf (May 1, 2018)

As Mr. or Ms. Divico says, the argument for higher sample rates is that they put the ringing from the brick wall filter out of range of human hearing, not that they capture higher frequencies.

Here's the basic explanation.

All audio, once you zoom way in, is a combination of sine waves. Picture a speaker: it can only go backward or forward.

Because of that, you only need to sample at twice the rate of a frequency - to capture the positive and negative sine wave oscillations - to reproduce it; any higher frequencies produce garbage sound (the technical term is aliasing). That's what the brick wall filter does: remove any frequency higher than half the sampling rate.

But all filters "ring" - distort, basically - around their cutoff frequencies. That ringing can be exploited for its color, but like all distortion it can also sound like arse. You don't want to hear it in a brick wall filter.

Ergo the argument for higher SRs: my first sentence in this post.
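That aliasing is easy to demonstrate numerically. A small sketch (assumes numpy; not part of the post): sample a 30 kHz tone at 48 kHz with no brick wall filter in front of the "converter", and it folds back to 18 kHz.

```python
import numpy as np

fs = 48_000                 # sample rate (Nyquist = 24 kHz)
f_in = 30_000               # tone above Nyquist
t = np.arange(fs) / fs      # one second of sample instants

# With no anti-alias filter, the converter just takes these values:
captured = np.sin(2 * np.pi * f_in * t)

# The spectrum shows the tone folded down to fs - f_in = 18 kHz.
peak_hz = int(np.argmax(np.abs(np.fft.rfft(captured))))  # 1 Hz bins
```

That folded 18 kHz tone is the "garbage sound": it has no harmonic relationship to the program material, which is why everything above Nyquist has to be filtered out before sampling.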


----------



## Piano Pete (May 1, 2018)

Unless I am doing some form of crazy sound design and manipulation, I typically submit most of my stuff at 48k as that is what has been asked of me.


----------



## PaulieDC (May 1, 2018)

Remember the megapixel war? People thought a 6MP camera provided better quality than a 3MP camera, when that has nothing to do with quality. It only dictates how big a print you can make before upsampling. For Facebook it's irrelevant; a 2MP image is more than enough, lol. The real quality issue is the size and spacing of the sensors - that determines how much light you capture - which is why night imagery from an Android phone or iPhone looks like a photo in a newspaper when you blow it up on screen or try to edit it... totally falls apart. The low-light imagery coming off today's full-frame DSLRs is nothing short of phenomenal, but you need lenses that can resolve light well enough to feed the sensor. My three lenses ran me $4500, but the image quality is off the chart and they have paid for themselves 5 times over.

Point is, if you start with great libraries and add a good quality interface, proper signal routing, high-end preamps if you're grabbing anything analog, and proper gain staging in the mix, you'll get fantastic output... 48 or 96 or 192 doesn't really factor in most cases. Especially when today's listener sets the iPhone on the table, because that's how they've gotten used to listening (that one drives me nuts; we have to tell people to use decent headphones ).

Go with 48, your CPU will love you. I had to be convinced, took almost a year.

Exception to the rule: if you plan on iTunes distribution and want the Mastered for iTunes logo, that means a 24-bit master, ideally at 96k. JSYK.


----------



## Jeremy Spencer (May 1, 2018)

PaulieDC said:


> Especially when today's listener sets the iPhone on the table and that's how they have gotten used to listening (that one drives me nuts, we have to tell people to use decent headphones )



That reminds me of a production meeting I went to last year. The director was stoked about my music...and wanted to play a few of the cues through one of those little "ball" speakers that plugged into his iPhone. I was unsuccessful in talking him out of it, and to my terror, the cues were played. The rest of the people at the meeting said they were stoked about the music and that it sounded fantastic; but it was all high end, and you could barely hear any of the main elements (brutal for a composer!!). But like you said, John Q. Public is accustomed to that level of quality now.


----------



## PaulieDC (May 1, 2018)

Wolfie2112 said:


> That reminds me of a production meeting I went to last year. The director was stoked about my music...and wanted to play a few of the cues through one of those little "ball" speakers that plugged into his iPhone. I was unsuccessful in talking him out of it, and to my terror, the cues were played. The rest of the people at the meeting said they were stoked about the music and that it sounded fantastic; but it was all high end, and you could barely hear any of the main elements (brutal for a composer!!). But like you said, John Q. Public is accustomed to that level of quality now.


Boy, you said it! Well, I guess if a listener is quite familiar with that output quality, then they can TELL if something works. I know one mix engineer who does that very thing: after listening on a couple of different near-fields and headphones, he plays it on the iPhone last to assess the mix. Might as well get all scenarios in! Makes me want to pick up a '67 Buick with a pushbutton radio and an 8-Track player, and whip that Tone control back and forth to see how the mix sounds. 

So, 48kHz probably won't be the weak link in our workflow any time soon...


----------



## Nick Batzdorf (May 1, 2018)

omiroad said:


> So you were able to pass ABX double blind tests where you distinguish 48kHz and 96kHz+ audio?
> 
> Oh, you didn't? Then ssshhh...



Are you being facetious, or is that related to the discussion about the price of pajamas in Uruguay?


----------



## Tfis (May 2, 2018)

Nick Batzdorf said:


> Ergo the argument for higher SRs: my



Not higher sample rates but oversampling.

https://people.xiph.org/~xiphmont/demo/neil-young.html#toc_o


----------



## Nick Batzdorf (May 2, 2018)

Tfis said:


> Not higher sample rates but oversampling.
> 
> https://people.xiph.org/~xiphmont/demo/neil-young.html#toc_o




Actually both higher SRs and oversampling filters.


----------



## germancomponist (May 2, 2018)

I mostly work in 44.1 and/or 48 kHz. Most sample libraries are recorded or matched to this. I think this is much more important:


----------



## Divico (May 2, 2018)

Nick Batzdorf said:


> As Mr. or Ms. Divico says, the argument for higher sample rates is that they put the ringing from the brick wall filter out of range of human hearing, not that they capture higher frequencies.
> 
> Here's the basic explanation.
> 
> ...



Mr. 
Good explanation, though there is a little mistake. Not important in real-life usage though 
You need *more* than twice the sample rate to reproduce a frequency. 
Higher sample rates are good because the filter doesn't have to be as steep.
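Divico's correction can be seen numerically (a quick numpy sketch, not from the post): a tone at exactly half the sample rate can land precisely on its zero crossings, so it gets sampled as pure silence - which is why reproduction needs strictly more than two samples per cycle.

```python
import numpy as np

fs = 48_000
f = fs / 2                  # exactly the Nyquist frequency
t = np.arange(fs) / fs      # one second of sample instants

# sin(pi * n) hits a zero crossing at every sample instant, so the
# captured signal is zero apart from floating-point noise.
samples = np.sin(2 * np.pi * f * t)
residual = float(np.max(np.abs(samples)))   # effectively silence
```

Shift the tone's phase and you instead get a constant-looking sequence whose amplitude depends on where the samples land - either way, the frequency isn't faithfully captured at exactly 2x.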


----------



## Nick Batzdorf (May 2, 2018)

Divico said:


> Mr.
> Good explanation, though there is a little mistake. Not important in real-life usage though
> You need *more* than twice the sample rate to reproduce a frequency.
> Higher sample rates are good because the filter doesn't have to be as steep.



Right, you need a little more than 2xfs. This was a perfunctory explanation. Not a mistake, just avoiding unimportant details.


----------

