# What audio buffer setting can your system handle?



## composerguy78 (Jan 4, 2019)

I have a 5-year-old machine (hackintosh) with an Intel i7-3770K chip in it - this is my DAW rig. The lowest audio buffer setting (in Reaper or Logic) that I can work with is 256. I have all my sample libraries loaded on a connected slave machine running Vienna Ensemble Pro.

256 feels pretty slow and I would like to get it down to 128 if possible. I am considering buying a new computer, not a hackintosh, and I would be curious to know if people who have newer machines are easily able to achieve lower audio buffer settings.

What is your audio buffer normally set to?


----------



## PeterBaumann (Jan 4, 2019)

I've never been able to get mine lower than 1024 without introducing pops and clicks! I've got used to the latency now but no idea how others manage to get anything close to 128 if running large orchestral templates. How many tracks do you have running at once?


----------



## Giscard Rasquin (Jan 4, 2019)

i7-8700K
1000-track template with most samples hosted on a slave PC
I start with 256 and at some point during a project I have to switch to 512 because of pops and clicks


----------



## AdamKmusic (Jan 4, 2019)

i7-7820X
64 GB RAM
Template is currently around 200+ instances of Kontakt & Zebra, all in one box, plus plugins etc. I run at 256 with no problems.


----------



## Studio E (Jan 4, 2019)

GuitarG said:


> I7 8700k
> 1000 track template with most samples hosted on slave PC
> I start with 256 and at some point during a project I have to switch to 512 because of pops and clicks



I don't have such a large template but I do have about the same settings. I can usually work at 128 or 256 for a while, and it feels fine really, but then I end up at 512 while continuing to add parts and eventually way more for mixing, like 2048. My machine is an older i7-6700. I'm not sure how much of this has to do with the machine vs. your audio interface.


----------



## PeterBaumann (Jan 4, 2019)

GuitarG said:


> I7 8700k
> 1000 track template with most samples hosted on slave PC
> I start with 256 and at some point during a project I have to switch to 512 because of pops and clicks


Sheesh - wonder how much difference the 8700k would make here or if it's something else in the setup which is causing the issues :/ Although I've got it working and I've got used to the latency, sure would be nice to be running at lower buffer settings!

Audio interface is Focusrite Clarett 8Pre (previously used the Steinberg UR22)


----------



## Studio E (Jan 4, 2019)

PeterBaumann said:


> Sheesh - wonder how much difference the 8700k would make here or if it's something else in the setup which is causing the issues :/ Although I've got it working and I've got used to the latency, sure would be nice to be running at lower buffer settings!
> 
> Audio interface is Focusrite Clarett 8Pre (previously used the Steinberg UR22)



I'm not slamming Focusrite as I really like some of their products, but there has been much discussion online about the stability of their drivers for the Clarett 8Pre. I actually ordered one and had to return it after even Sweetwater's techs couldn't make it work on a brand-new ThinkPad. Just saying, if you get the opportunity to try out an RME or an Orion (I use one of each), maybe you could see if it helps.


----------



## Giscard Rasquin (Jan 4, 2019)

PeterBaumann said:


> Sheesh - wonder how much difference the 8700k would make here or if it's something else in the setup which is causing the issues :/ Although I've got it working and I've got used to the latency, sure would be nice to be running at lower buffer settings!
> 
> Audio interface is Focusrite Clarett 8Pre (previously used the Steinberg UR22)



Focusrite Scarlett 18i20 on this side.
About to change my NVIDIA graphics card, as LatencyMon says it's causing extra DPC latency.
Hope to be able to work at 256 for the whole project then.


----------



## will_m (Jan 4, 2019)

GuitarG said:


> About to change my NVIDIA graphics card as Latencymon says it’s causing extra DPC latency
> Hope to be able to work at 256 for the whole project then.



If you go into the Nvidia control panel, under power management mode change it to 'prefer maximum performance.' I believe the default setting of 'adaptive' can be the cause of additional DPC latency.


----------



## Giscard Rasquin (Jan 4, 2019)

will_m said:


> If you go into the Nvidia control panel, under power management mode change it to 'prefer maximum performance.' I believe the default setting of 'adaptive' can be the cause of additional DPC latency.



Thanks, read that somewhere as well, but can't find that option in my NVIDIA control panel.

EDIT: found it now under 3D settings. Let's see if that changes anything.


----------



## rgames (Jan 4, 2019)

Three important points about latency when using VIs:

First, minimum latency is a function of a lot of things. One of them is the type of music you write - large orchestral templates with dense orchestrations and hundreds of VIs will require higher latency (i.e. larger buffer settings) than a track with only audio and some synths. So type of music is the first consideration - that narrows it down a bunch. Playing a Mahler symphony with VIs will probably require much higher latency than a pop song.

Second, it's best to discuss latency in units of time, not buffer sizes. The reason is that the actual latency that you feel is a function of the buffer size, the sample rate and (if you're using it) the network connection (e.g. with VE Pro over Ethernet). My standard buffer setting is 128 samples for a large orchestral template. However, I run at 44.1 kHz and use one buffer for VE Pro over the network. So my total latency is about 7 ms (3.5 ms for the sound card plus 3.5 ms for VE Pro). For someone running two VE Pro buffers at 128 samples the total latency is more than 10 ms. For someone not running VE Pro, a 128 buffer is only 3.5 ms. For someone running 96 kHz and no VE Pro, a 128 buffer size gives under 1.5 ms latency.
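That arithmetic can be sketched in a few lines (pure buffer math only; real interfaces add driver safety buffers on top, which is why reported figures like 3.5 ms come out a bit higher than the bare 128/44100 ≈ 2.9 ms - the function name and `extra_buffers` convention are my own):

```python
def buffer_latency_ms(buffer_samples, sample_rate_hz, extra_buffers=0):
    """Latency contributed by the audio buffer, in milliseconds.

    extra_buffers counts additional same-size buffers in the chain,
    e.g. one or two VE Pro network buffers.
    """
    one_buffer_ms = 1000.0 * buffer_samples / sample_rate_hz
    return one_buffer_ms * (1 + extra_buffers)

# 128 samples at 44.1 kHz, plus one VE Pro buffer: ~5.8 ms of buffer latency
print(buffer_latency_ms(128, 44100, extra_buffers=1))
# Same buffer size at 96 kHz, no VE Pro: ~1.3 ms
print(buffer_latency_ms(128, 96000))
```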

Third, after you figure out your true latency in units of time, compare your latency to acoustic instruments. The fastest acoustic instruments have latencies of maybe 30 ms and many (e.g. bass instruments) have latencies that approach 500 ms in their lowest registers. Then ask yourself: if people have been making music for millennia with such high latencies then why do people strive for such low latencies with virtual instruments? Odds are most virtual pianos have significantly lower latencies than those present on real pianos.

So, bottom line, don't spend a lot of time worrying about latency. It was a problem 20 years ago and people forgot to stop worrying about it.

However, having said all that, the four things that have the biggest impact on VI latency are CPU clock speed, audio drivers, network drivers and video drivers. Number of cores in general is not much of a help for latency. Also, dual-CPU systems tend to have higher latency because of the time required to coordinate tasks between CPUs. But again, pretty much any i7 or better system these days gives more than adequate latency.

rgames


----------



## clisma (Jan 4, 2019)

Classic Mac Pro, 12-core 3.06 GHz. Running orchestral templates of around 100 tracks at 256, no problem. No VE Pro. The only time things get dicey is with heavy plugins on the master (Acustica Audio, Frei:Raum, limiters, etc.).


----------



## composerguy78 (Jan 4, 2019)

Hey there everyone, thank you so much for your replies. 
rgames - You have alerted me to the issue I have actually been having! It's the audio buffer in the Vienna Ensemble Pro plugin! It was a combination of that and MIR Pro running on the slave that was really slowing things down for me!

Thank you!


----------



## marclawsonmusic (Jan 4, 2019)

PeterBaumann said:


> I've never been able to get mine lower than 1024 without introducing pops and clicks! I've got used to the latency now but no idea how others manage to get anything close to 128 if running large orchestral templates. How many tracks do you have running at once?



Hi Peter, for what it's worth, I am running a similar setup - with a late 2013 iMac i7 - and am using 128 samples at 48 kHz. However, I have a UAD Apollo Twin interface, which might explain the differences. PS - I am using Logic, not Cubase.


----------



## Nick Batzdorf (Jan 4, 2019)

I run Logic at 128. Mac Pro 5,1, 12 x 3.46 GHz, SSDs.

Question: do you have multithreading set to playback tracks (Preferences -> Audio -> Devices)? That makes a difference.

Actually, now I'm only 99% sure that was the setting that enabled multithreading (so you see 24 CPU meters rather than 12 when you double-click CPU in the transport to bring up its little window). I had to change a default setting, and I'm 99% sure that was the one.


----------



## Mishabou (Jan 4, 2019)

rgames said:


> Three important points about latency when using VIs:
> 
> First, min latency is a function of a lot of things. One of them is the type of music you write - large orchestral templates with dense orchestrations and hundreds of VIs will require higher latency (i.e. buffer settings) than a track with only audio and some synths. So type of music is the first consideration - that narrows it down a bunch. Playing a Mahler symphony with VIs will probably require much higher latency than a pop song.
> 
> ...




My buffer is set at 64 and when I use lots of VIs, I might have to bump it to 128. Anything higher than 256 is unacceptable for tracking/overdubs. I guess if you don't need to play in your parts and/or rarely do overdubs then this won't be an issue.

Science aside, every musician perceives and deals with latency differently. Most musicians I work with can definitely feel it if latency reaches above 6-7 ms. Not sure how you arrive at the 30 ms latency for the fastest acoustic instrument, but I'm certain that when I'm tracking my drums, the time between the hit and the sound reaching my ears is way less than 30 ms.


----------



## Gerhard Westphalen (Jan 4, 2019)

When working with mockups I'm normally at 512. I could probably lower it.

When it comes to mixing, you can only get so much by increasing the buffer (it gives the computer more leeway in how quickly it needs to deliver results, but at the end of the day it still needs to process the same amount of data), so I'm also normally at 512.

When I'm mastering at 96k I'll go directly to 4,096 and even then, that's often not enough. That has to do with DAWs not being able to multithread the plugins on a single track, so I end up basically pushing only a single core. I could overclock but then my computer wouldn't be silent. If anyone knows of a DAW that can multithread "serial" plugins, please let me know!!!

For tracking, even 64 can be too much.


----------



## puremusic (Jan 4, 2019)

GuitarG said:


> Focusrite Scarlett 18i20 on this side.
> About to change my NVIDIA graphics card as Latencymon says it’s causing extra DPC latency
> Hope to be able to work at 256 for the whole project then.



Hmm, how do you learn that from LatencyMon? I've tried out the program but it's mostly Greek to me. I really don't understand what a 'good' "interrupt to process latency" or "interrupt to DPC latency" is, or what would fix it. I do get a 'green' report at least.

Right now I'm running integrated graphics and the only reason I really consider getting a video card is to help with latency just in case.


----------



## Giscard Rasquin (Jan 5, 2019)

puremusic said:


> Hmm, how do you learn that from Latencymon? I've tried out the program but it's mostly Greek to me. I really don't understand what 'good' "interrupt to process latency" or "interrupt to DPC latency" is and what would fix it. I do get a 'green' report at least.
> 
> Right now I'm running integrated graphics and the only reason I really consider getting a video card is to help with latency just in case.



Because it states that the highest reported DPC latency is from the NVIDIA driver. Also the highest ISR execution time has to do with graphics, so I'm done with NVIDIA. I tried the high-performance option but it doesn't make any difference in my case.


----------



## puremusic (Jan 5, 2019)

I see, well, AMD is supposed to come out with a new series soon, I've been waiting on that too, as much as I dislike being a pioneer. The older ones are pretty power intensive compared to the competition.

Let's see, according to LatencyMon:

Highest measured interrupt to process latency (µs): 140.80
Average measured interrupt to process latency (µs): 4.503161
Highest measured interrupt to DPC latency (µs): 108.80
Average measured interrupt to DPC latency (µs): 1.547953

Highest ISR routine execution time (µs): 52.235529
Driver with highest ISR routine execution time: Wdf01000.sys - Kernel Mode Driver Framework Runtime, Microsoft Corporation

Highest reported total ISR routine time (%): 0.018597
Driver with highest ISR total time: Wdf01000.sys - Kernel Mode Driver Framework Runtime, Microsoft Corporation

Total time spent in ISRs (%) 0.018597

ISR count (execution time <250 µs): 67034
ISR count (execution time 250-500 µs): 0

How's that compare to your results?


----------



## Havoc911 (Jan 5, 2019)

Here's mine. Also note that I'm using an Nvidia GTX 1070 currently set to favor performance.


Highest measured interrupt to process latency (µs): 58.892107
Average measured interrupt to process latency (µs): 2.819626

Highest measured interrupt to DPC latency (µs): 55.586938
Average measured interrupt to DPC latency (µs): 1.141431

Highest ISR routine execution time (µs): 61.549296
Driver with highest ISR routine execution time: dxgkrnl.sys - DirectX Graphics Kernel, Microsoft Corporation

Highest reported total ISR routine time (%): 0.066634
Driver with highest ISR total time: dxgkrnl.sys - DirectX Graphics Kernel, Microsoft Corporation

Highest DPC routine execution time (µs): 81.845070
Driver with highest DPC routine execution time: Wdf01000.sys - Kernel Mode Driver Framework Runtime, Microsoft Corporation

Highest reported total DPC routine time (%): 0.282494
Driver with highest DPC total execution time: Wdf01000.sys - Kernel Mode Driver Framework Runtime, Microsoft Corporation

Total time spent in DPCs (%) 0.362387


----------



## rgames (Jan 5, 2019)

Mishabou said:


> Most musicians I work with can definitely feel if latency reaches above 6 - 7 ms


Unless your musicians are robots I'm not so sure about that. Think about it in a musical context: take a track where the quarter note is at 120 bpm. That's two beats per second, so each quarter note has a duration of 500 ms. An eighth note has duration 250 ms. A 16th note has duration 125 ms. A 32nd note has duration 62.5 ms. A 64th note has duration 31.3 ms. A 128th note has duration 15.6 ms. A 256th note has duration 7.8 ms.
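Those durations are easy to check (a trivial sketch; the `note_value=4` = quarter-note convention is my own):

```python
def note_duration_ms(bpm, note_value):
    """Duration of a 1/note_value note at the given tempo, assuming the
    quarter note (note_value=4) gets the beat."""
    quarter_ms = 60000.0 / bpm          # one beat in milliseconds
    return quarter_ms * 4 / note_value

for v in (4, 8, 16, 32, 64, 128, 256):
    print(f"1/{v} note at 120 bpm: {note_duration_ms(120, v):.2f} ms")
```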

If people could really sense and control down to 7.8 ms then it would be used in a musical context. Therefore, you would regularly see 256th notes written in compositions performed at 120 bpm. "Hey - you're coming in a 256th note early there" has never been said in the history of music.

There's nobody who's accurate to even a 64th note at 120 bpm and that's only 31.3 ms. 64th notes at 120 BPM are rare and used only for runs and other effects, not hard rhythmic hit points because you really can't sense/control down to that level of timing.

Plus, remember that sound travels at about 1 ft per ms. So if your monitors are 4 ft away, that's 4 ms just to travel from the monitor to your ear. If people could really sense that delay then you'd constantly hear about people who get tighter timing by wearing headphones. But I've never heard of that. The guy sitting at a 9 ft grand piano has to wait 9 ms for the sound to reach his ear from the end of the lowest strings. An orchestra might have musicians separated by 30 ft, so 30 ms latency, and they "feel" like they're playing in tight timing. A guitarist whose amp is 20 ft away while he walks around on stage is playing with 20 ms latency. People seem to have no problem with that kind of latency.

If you really think people can sense timing down to 7 ms then go create a MIDI drum loop at 120 bpm. Now create versions with some notes that are delayed/advanced by 1/256 note. Now post the audio and see who can tell which notes are 1/256 note early/late. Maybe a bunch of people can sense that timing - I honestly don't know. But I'm extremely doubtful.

rgames


----------



## rgames (Jan 5, 2019)

Here's my take on LatencyMon: it's useful for tracking down major latency problems (1000+ µs) but not much use beyond that.

I've had systems that have reported latencies of 10 us (microseconds, not ms=milliseconds) that run the same as ones with 200 us latency. So tweaking things to get a low value in LatencyMon doesn't seem to be related to meaningful DAW performance.

Plus, you need to specify how much time you ran the test. I'd say 30 min is a practical minimum. More is better. If you run it for only a minute or so you're not collecting enough data to really understand what's going on.

rgames


----------



## Havoc911 (Jan 5, 2019)

In addition to the good examples rgames gave, think about delaying duplicate tracks to thicken vocals. Generally you stick to delay values of less than 30 ms, because above 30 ms humans start to hear two distinct signals.


----------



## chillbot (Jan 5, 2019)

rgames said:


> If you really think people can sense timing down to 7 ms


Sense? I dunno. But I can certainly feel it when playing piano. 7 ms might be a bit of a reach but 10 ms, definitely. Maybe it's because I grew up with hardware synths and low latency, whereas my assistant, who grew up with VIs, is much more immune to latency. I'm not happy unless it's about 3-4 ms under my fingers.


----------



## tack (Jan 5, 2019)

When you say "3-4 ms under my fingers" I assume you're referring specifically to time spent in the audio buffer? Because as Richard points out, there are a lot of things in the signal path that accumulate latency, most of which we can't do much about.

So I suspect this comes down to a matter of threshold. Dropping from an ASIO buffer of 256 to 128 has roughly the same effect as switching from speakers to headphones if your speakers are about a meter away. You might notice neither, one, or both, depending on how much other latency there is in the chain.

In my case, playing piano under headphones on an M-Audio Fast Track C400 (the cheap interface I use for my standalone piano), I can perceive the difference between 512 and 256 if I'm playing something fast (say Fantasie Impromptu) but not below that. In my DAW doing stuff with VIs I'm perfectly happy with 512.


----------



## rgames (Jan 5, 2019)

chillbot said:


> Sense? I dunno. But I can certainly feel it when playing piano. 7ms might be a bit of a reach but 10ms, definitely. I don't know if it's because I grew up with hardware synths and low latency whereas my assistant who grew up with VIs is much more immune to latency values. I'm not happy unless it's about 3-4ms under my fingers.


You could have some hidden latencies somewhere. What your DAW reports is not necessarily what you're actually getting. Effects are a good example - lots of effects add latency that's compensated for by the DAW but not included in the reported latency value. The new version of Cubase added displays for these values on each track. Not sure how other DAWs deal with it.

Or maybe you're just superhuman 

rgames


----------



## Ronny D. Ana (Jan 5, 2019)

You all probably have studio equipment, so just measure latency acoustically! Position a mic near the key of the keyboard you are playing, press record, and then press one key. On the recording you will hear the noise your finger made (or use something else to make a noisy hit on the key), and some milliseconds later you should hear the sound your audio source made. The audio source can be a VST, or a hardware synth with speakers will do as well. The hardware synth has latency too!
This should give you the latency you really hear, not the latency from tools that don't measure MIDI latency, speaker latency, keyboard latency, and so on.
rgames is right in theory, but I think there is a difference between actively making a noise with your body (mostly your fingers) and hearing the generated sound, versus merely trying to judge time intervals between noises in the range of 0 to 20 ms or so.
Hopefully everybody understands what I am trying to say.


----------



## tack (Jan 5, 2019)

The usual way to measure round-trip latency is to loop your output directly back to your input on the audio interface and then use RTL Utility (https://www.oblique-audio.com/free/rtlutility) to measure it. I've never bothered, as it seems academic: I either notice the latency or I don't, and the specific number doesn't much matter.
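For the curious, the measurement itself boils down to finding the lag between what you played and what came back. A minimal sketch with NumPy on a synthetic "recording" (real loopback capture is the interface's job; the 441-sample delay here is made up for the demo):

```python
import numpy as np

def round_trip_latency_ms(played, recorded, sample_rate_hz):
    """Estimate the delay of `recorded` relative to `played` via the lag
    that maximizes their cross-correlation."""
    corr = np.correlate(recorded, played, mode="full")
    lag = int(np.argmax(corr)) - (len(played) - 1)
    return 1000.0 * lag / sample_rate_hz

# Synthetic demo: a click "recorded" 441 samples late at 44.1 kHz = 10 ms
sr = 44100
played = np.zeros(4410)
played[0] = 1.0
recorded = np.roll(played, 441)     # shift the click 441 samples later
print(round_trip_latency_ms(played, recorded, sr))   # 10.0
```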


----------



## Giscard Rasquin (Jan 5, 2019)

puremusic said:


> I see, well, AMD is supposed to come out with a new series soon, I've been waiting on that too, as much as I dislike being a pioneer. The older ones are pretty power intensive compared to the competition.
> 
> Let's see according to Latency Mon my:
> 
> ...



Not around my studio this weekend but if I remember correctly highest DPC was about 600 
Will check next week and thanks for the heads-up about the new AMD series


----------



## Gerhard Westphalen (Jan 5, 2019)

rgames said:


> Unless your musicians are robots I'm not so sure about that. Think about it in a musical context: take a track where the quarter note is at 120 bpm. That's the same as two beats per second, so each quarter note has a duration of 500 ms. An eight note has duration 250 ms. A 16th note has duration 125 ms. A 32nd note has duration 62.5 ms. A 64th note has duration 31.3 ms. A 128th note has duration 15.5 ms. A 256th note has duration 7.7 ms.
> 
> If people could really sense and control down to 7.7 ms then it would be used in a musical context. Therefore, you would regularly see 1/256th notes written in compositions performed at 120 bpm. "Hey - you're coming in 1/256th note early there" has never been said in the history of music.
> 
> ...


Some percussionists are incredibly sensitive to latency. If they're playing a snare drum and monitoring it through headphones, they can feel that their hit isn't lining up with what they're hearing. Even something at the level of HDX can be noticeable.

Having said that, I think this is an extreme case and pretty much all other musicians won't be bothered by it, both because they're used to having latency in their instrument (like on a piano), because they're further away from it, and because their instruments aren't as transient.


----------



## Nick Batzdorf (Jan 5, 2019)

rgames said:


> If people could really sense and control down to 7.7 ms then it would be used in a musical context.



You can also argue that 7.7 ms is like having a speaker about 7 ft away, and well under the ~40 ms Haas precedence-effect window, but I agree 100% about there being way too much focus on latency *specs* rather than actual reality.

What I think is going on is feel rather than delay: you don't feel the sound the same time you feel the "hit" at the bottom of the key travel, or when your stick hits a percussion pad. (By the way, where do most MIDI keyboards send the Note-on message - at the bottom of the key travel?)

As Gerhard says, we're not all the same. I can certainly feel a 512 sample buffer when I'm playing percussive sounds (as opposed to instruments with a slower attack), and if I really concentrate I can sort of tell the difference between 256 and 128. But that's on top of MIDI, which adds a few ms itself, along with other factors such as an additional buffer for Vienna Ensemble Pro.

What I have seen is musicians asking about latency specs first, and - being a cranky guy - I find that almost as irritating as people reflexively writing "room treatment" when anyone mentions the word "speakers." The most important thing is SOUND!


----------



## puremusic (Jan 5, 2019)

Hmm. Reading through the LatencyMon help files is helping me understand. I tend to feel latency differences when I change the buffer size and play the piano, so I try to keep it constant at 64, which works for me, perhaps because I haven't started working with large numbers of instruments yet.

I can feel the difference between that 64 sample size and my digital piano's internal sounds too, which are instant.

I'll have to give LM a longer run since I only ran it briefly the last time. 

I still get glitching with some instruments, but my impression is that it's not a buffer-size issue with those - it's something else, even if other people have no problems with them. For example, the Embertone piano (light version) snap-crackle-pops for me no matter what, but many other people are happy. What mysterious thing is the problem? No one knows.

Bought a popular reverb recently and got an endless feedback loop. The dev is looking into it - it was probably having issues with my DAW software. I had another dev fix their reverb when working with Studio One too.

4.0 GHz i7-8086K, 32 GB of 3466 MHz RAM, Windows 10, RME PCI card.

64 samples, 2.27 ms input latency, 2.98 ms output latency.


----------



## Nick Batzdorf (Jan 5, 2019)

puremusic said:


> 64 samples, 2.27 ms input latency, 2.98 output latency



Plus (guessing) at least 10ms for MIDI!


----------



## puremusic (Jan 5, 2019)

Maybe I'll try that mic test later!


----------



## J-M (Jan 5, 2019)

Gerhard's point about percussionists is a very good one. I occasionally track drums (using an electronic kit and samples) with my drummer buddy and he is very sensitive to latency, especially when it's intricate stuff.


----------



## Nick Batzdorf (Jan 5, 2019)

MrLinssi said:


> Gerhard's point about percussionists is a very good one. I occasionally track drums (using an electronic kit and samples) with my drummer buddy and he is very sensitive to latency, especially when it's intricate stuff.



"Indubitably." - Tigger

There are drummers who don't like to track through digital consoles because of the 3ms latency.


----------



## rgames (Jan 5, 2019)

Gerhard Westphalen said:


> If they're playing a snare drum and monitoring it through headphones, they can feel that their hit isn't directly lining up with what they're hearing.


Keep in mind that hearing a difference is not the same as being able to sense a delay. Detecting differences in sonic characteristics is not the same as detecting differences in the start times of notes. You can tell the trumpet apart from the flute, but you can't tell which one started playing first if they're off by, say, 20 ms or less.

Here's another way to think about it: if you can detect timing differences of 7 ms then you can hear the individual cycles in a 143 Hz sine wave. The time between peaks on a 143 Hz sine wave is 7 ms.

That seems unlikely - music wouldn't even sound like music if that were the case. Everything up to 143 Hz would sound like percussion, there would be no sense of a tonal bass region. But people do sense a tonal bass region, all the way down to 30 Hz or so, which is 33 ms. Frequencies higher than that are interpreted as "continuous tones" and frequencies lower than that are interpreted as "repeated sounds".
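The "one cycle of a sine wave" comparison is just the period of the frequency:

```python
def period_ms(freq_hz):
    """Time between successive peaks of a tone at freq_hz, in milliseconds."""
    return 1000.0 / freq_hz

print(period_ms(143))   # ~7 ms: the claimed timing resolution
print(period_ms(30))    # ~33 ms: roughly where repeated pulses fuse into a tone
```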

rgames


----------



## dohm (Jan 5, 2019)

iMac with i7 4.2 GHz and 64 GB RAM. Logic Pro and UAD Apollo interface. No issues with a buffer setting of 128. I'll sometimes set it to 64 when playing piano sketches, but I am also happy with 128. I can also turn on Logic's low-latency mode when playing piano, but don't feel it is needed when working with large templates and composing. Latency doesn't start to get noticeable on piano until about 15-20 ms, in my opinion.


----------



## Mishabou (Jan 5, 2019)

rgames said:


> Keep in mind that hearing a difference is not the same as being able to sense a delay. Detecting differences in sonic characteristics are not the same as detecting differences in start times of notes. You can tell the trumpet apart from the flute but you can't tell which one started playing first if they're off by, say, 20 ms or less.
> 
> Here's another way to think about it: if you can detect timing differences of 7 ms then you can hear the individual cycles in a 143 Hz sine wave. The time between peaks on a 143 Hz sine wave is 7 ms.
> 
> ...



Look, I appreciate all the math but at the end of the day music and FEEL are not always an exact science. Musicians hear and deal with latency differently; for example, 6-12 ms might be fine for some but a real drag for others. As a drummer I can definitely feel the difference when I switch my buffer from 64 to 128. I have worked with musicians who preferred tracking on my HDX system instead of HD Native - we're talking less than 3 ms vs. 7 ms, now that's picky - but nevertheless, and contrary to what science might prove, they do feel the difference.

Anyways, more power to the guys and gals who can lay down killer tracks with buffers at 256, 512 or higher. At the end of the day, it's the music that counts.


----------



## Havoc911 (Jan 5, 2019)

Mishabou said:


> Look I appreciate all the math but at the end of the day music and FEEL is not always an exact science. Musicians hear and deal with latency differently, for example, 6 - 12 ms might be fine for some but a real drag for others. As a drummer I can definitely feel the difference when I switch my buffer from 64 to 128. I have worked with musicians who preferred tracking on my HDX system instead of HD Native, we're talking about less than 3 ms vs 7 ms, now that's picky, but never the less and contrary to what science might prove, they do feel the difference.


You have to admit, though, that someone saying they can tell the difference doesn't mean there's a difference. Even if they sincerely think they are perceiving one. Human perception is notoriously bad and that's exactly why we developed the scientific method; as a control for our crappy perception.


----------



## Nick Batzdorf (Jan 5, 2019)

Havoc911 said:


> You have to admit, though, that someone saying they can tell the difference doesn't mean there's a difference. Even if they sincerely think they are perceiving one. Human perception is notoriously bad and that's exactly why we developed the scientific method; as a control for our crappy perception.



If someone says they hear or feel something, my default is to assume that they do. The scientific method - aka a double-blind test - will tell you if they can hear or feel the difference at the time you're giving the test, not necessarily at all times!

Obviously there's a range of subtlety, i.e. I think everyone could feel the difference between a 32 and 1024 buffer. And Mishabou may well be able to tell the difference between 32 and 64 100% of the time. But even if it's 50%, that doesn't invalidate his or her experience!

Point being, I agree with Mishabou that it's subjective. I've seen it with different musicians.

Also, there's a difference between being able to detect latency under deep concentration, and it bothering you enough to care.

Having said that, Mishabou is talking about being bothered by a couple of milliseconds' latency, meaning he/she absolutely needs to track through an analog monitor path. No interface will be good enough.


----------



## jcrosby (Jan 6, 2019)

Bands experience latency all the time.

If you're in a practice room standing 3 feet from the drummer, there's about 3 ms of latency between you and the drummer. When you listen to a bandmate on the other side of the practice room (say 6 feet away), there's roughly 6 ms of latency... On a concert stage, or in a big-band or orchestral performance? Triple that... Musicians can and do naturally adapt to reasonable latencies, to a point obviously...
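The "about 1 ft per ms" rule of thumb used throughout this thread comes straight from the speed of sound, roughly 1125 ft/s (~343 m/s) at room temperature:

```python
SPEED_OF_SOUND_FT_PER_S = 1125.0   # ~343 m/s at room temperature

def distance_latency_ms(feet):
    """Time for sound to travel the given distance -- roughly 1 ms per foot."""
    return 1000.0 * feet / SPEED_OF_SOUND_FT_PER_S

for feet in (3, 6, 20, 30):
    print(f"{feet} ft: {distance_latency_ms(feet):.2f} ms")
```

The exact values come out slightly under 1 ms per foot (3 ft is about 2.7 ms), so the rule of thumb mildly overstates the delay.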

This is why the Haas effect works. There's a predictable range of time within which humans perceive a sound as emanating from the same source before they perceive the delay as separate from the source.

I personally agree with the statements about perception... IMO seeing a latency value can create perceptual bias. (Like how it's easy to get on a soapbox about how _accurate_ or _perfect_ someone's hearing is, when in reality the odds are stacked against them that they'll fall for the McGurk effect.)

Latency's a natural aspect of live performance. And it's part of what makes a band, ensemble, or orchestra _feel_ the way they do.


----------



## Saxer (Jan 6, 2019)

The main problem with latency is that it adds up. You always have to act well before the reaction. If you hit a drum, a brain impulse goes to the muscles to start a complex pattern of movement that swings a stick toward the drum. After all that, and after the stick travels through the air, it hits a pad, which starts to vibrate and moves air that comes back to the ear. So there's a lot of time before you hear the feedback from your initial thought. You get used to this latency over decades and develop a certain expectation. Going through recording equipment adds another small delay, which can be irritating. Even more so with samples. When I use a wind controller, I have to start notes much earlier than with an acoustic instrument. Most of the latency comes from note detection inside the wind controller, converting it into MIDI signals, and feeding it into a virtual instrument, which has attack phases too. The system latency is just another delay on top, but it might be the difference between noticeable and imperceptible.


----------



## Havoc911 (Jan 6, 2019)

Nick Batzdorf said:


> ... even if it's 50%, that doesn't invalidate his or her experience!
> 
> Point being, I agree with Mishabou that it's subjective. I've seen it with different musicians.
> 
> Also, there's a difference between being able to detect latency under deep concentration, and it bothering you enough to care.



No, it doesn't invalidate anything, how could it? 

My point is simply that someone saying they feel a difference doesn't mean there actually is one; and if there is one, it doesn't necessarily mean they can actually perceive it. This is true even if they sincerely believe they perceive the latency.

The same principle applies to the analog v. digital debate, the sample rate debate and alien abduction accounts. Someone may sincerely believe that they have perceived these things. I don't say they are lying and I believe that they are accurately portraying their experience. I'm simply saying that we aren't justified in accepting personal experience over objective, repeatable data. 

I really wish I had access to people who claim to be able to tell a difference because I would love to test this and see how sensitive humans really are to latency.


----------



## Mishabou (Jan 6, 2019)

Havoc911 said:


> No, it doesn't invalidate anything, how could it?
> 
> My point is simply that someone saying they feel a difference doesn't mean there actually is one; and if there is one, it doesn't necessarily mean they can actually perceive it. This is true even if they sincerely believe they perceive the latency.
> 
> ...






Wow, I never thought this would bring up such a heated debate 

The funny thing is I never cared about latency until I switched to a native system. My studio back then was mostly analog (API console) and 2-inch tape, later replaced by a Radar and a PT TDM rig.

When I downsized and went native, my biggest concern was latency while tracking live musicians. I tested different configurations in order to find the best compromise. My rig consists of a PT HDX connected to a DAD AX32 via Digilink, a PT HD Native connected to a second DAD AX32 via Digilink, and a Cubase rig with a Focusrite RedNet PCIe card. All audio is piped through Dante. So basically, all systems are connected digitally to their interfaces, audio is piped through Dante, and they all share the same DADs' AD/DA. With the above config I can switch between systems in a matter of seconds.

We tracked for 2 weeks, and all sessions consisted of drums, bass, piano, rhythm guitar, and horns: 16 inputs in all, with 4 different headphone mixes. Instruments were connected directly to the A/D and went straight to tape, with no FX inserts within any DAW.

With a network latency setting of less than 250 µs, the native rigs (PT Native and Cubase) have a round-trip latency of less than 3 ms at 44.1 kHz with the DAW buffer size set to 32, 7 ms at a 64 buffer, and 11 ms at a 128 buffer. HDX is less than 3 ms at all buffer settings.
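For reference, the raw buffer-size arithmetic behind figures like these can be sketched as follows (a hypothetical helper of my own; a real round trip passes at least an input and an output buffer and then adds driver and AD/DA overhead, which is why measured numbers run higher than the bare math):

```python
# One audio buffer's worth of delay at a given sample rate. A DAW round
# trip passes at least an input buffer and an output buffer; driver and
# converter overhead come on top and are NOT modeled here.
def buffer_latency_ms(buffer_samples: int, sample_rate_hz: float = 44100.0) -> float:
    return buffer_samples / sample_rate_hz * 1000.0

for buf in (32, 64, 128, 256, 512):
    one_way = buffer_latency_ms(buf)
    print(f"buffer {buf:>4}: {one_way:.2f} ms per buffer, "
          f"~{2 * one_way:.2f} ms in+out before overhead")
```

At a 64 buffer this gives roughly 2.9 ms in-plus-out, so a measured 7 ms round trip implies about 4 ms of converter, network, and driver overhead.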

The results were as expected: at buffer 512 and above, forget it. Buffer 256 is workable if it has to be, buffer 128 is better, and buffer 64 is ideal. All musicians involved can hear the difference 100% of the time when I switch between 64, 128, 256, etc. Two musicians (bass and piano) can consistently hear the difference 100% of the time when I go from HDX (less than 3 ms) to native (7 ms at a 64 buffer).

Regarding Nick's point that musicians bothered by a couple of milliseconds of latency absolutely need to track through an analog monitor path: that's not totally accurate, as all my examples refer only to the latency caused by the DAW's buffer. Of course, if you factor in the whole chain, including AD/DA, we're talking well into 10 ms of latency or more, even at a 64 buffer. Btw, HDX and TDM also take AD/DA latency into account (assuming you use Avid or supported hardware), so to be accurate, when you compare HDX vs. native at a 64 buffer, we're talking a less-than-3 ms vs. 10 ms figure.


----------



## Havoc911 (Jan 6, 2019)

Who's heated? Certainly not me. We're still talking about what musicians say they can perceive. They might actually be able to, but we won't know that until we can test it in a controlled situation, partly because we can't even be sure of the total amount of latency they are experiencing.


----------



## rgames (Jan 6, 2019)

Mishabou said:


> The results were as expected: at buffer 512 and above, forget it. Buffer 256 is workable if it has to be, buffer 128 is better, and buffer 64 is ideal. All musicians involved can hear the difference 100% of the time when I switch between 64, 128, 256, etc. Two musicians (bass and piano) can consistently hear the difference 100% of the time when I go from HDX (less than 3 ms) to native (7 ms at a 64 buffer).


Remember, too (as pointed out earlier in the thread), that the audio buffer is only one part of the total latency. If you're quoting numbers from your DAW or sound card driver then that's only part of the total latency. You really need to set up a mic and measure it to understand what the latency actually is.

As also pointed out, being able to hear a difference (e.g. because of comb filtering) is not the same as being able to detect the latency. Your ears are certainly sensitive to time scales much shorter than 1 ms, and you will hear different sounds as you change the buffer size. But that doesn't mean you can detect when those sounds began (i.e. the latency).
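To illustrate the comb-filtering point: summing a signal with a delayed copy of itself carves spectral notches at odd multiples of 1/(2 × delay), so even a sub-millisecond delay changes the tone audibly without anyone "hearing lateness". A small sketch of that arithmetic (the function name and 20 kHz cutoff are my own choices for illustration):

```python
# Comb filtering: mixing a signal with a delayed copy cancels frequencies
# where the delayed copy arrives 180 degrees out of phase, i.e. at odd
# multiples of 1/(2*delay). This is a tonal change, not perceived lateness.
def comb_notch_frequencies_hz(delay_ms: float, up_to_hz: float = 20000.0):
    """Return the notch frequencies below up_to_hz for a given delay."""
    delay_s = delay_ms / 1000.0
    notches = []
    k = 0
    while True:
        f = (2 * k + 1) / (2 * delay_s)
        if f > up_to_hz:
            break
        notches.append(f)
        k += 1
    return notches

# First few notches for a 1 ms delay: 500 Hz, 1500 Hz, 2500 Hz, ...
print([round(f) for f in comb_notch_frequencies_hz(1.0)[:4]])
```

So a buffer change that shifts monitoring delay by a fraction of a millisecond moves these notches across the audible band, which is easy to hear even when the delay itself is far below any detection threshold.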

rgames


----------



## dflood (Jan 6, 2019)

rgames said:


> Your ears are certainly sensitive to time scales much shorter than 1 ms...



I’m not sure what you mean by that. How would that square with this? Not saying you are wrong, just trying to understand.
_
The shortest perceivable time division – sensory psychologists call it the fusion threshold – is between 2 and 30 milliseconds (ms) depending on sensory modality. Two sounds seem to fuse into one acoustic sensation if they are separated by less than 2 to 5 milliseconds. Two successive touches merge if they occur within about 10 milliseconds of one another, while flashes of light blur together if they are separated by less than about 20 to 30 milliseconds.
_
(Nick Herbert, Elemental Mind, Dutton, 1993, p. 50.)


----------



## Havoc911 (Jan 6, 2019)

I found this peer-reviewed study: https://www.ncbi.nlm.nih.gov/books/NBK92837/

Here's an excerpt from a relevant section (emphasis mine):

_One of the classic studies on sensitivity for intersensory synchrony was done by Hirsh and Sherrick (1961). They presented audio–visual, visual–tactile, and audio–tactile stimuli in a TOJ task and reported JNDs to be approximately 20 ms regardless of the modalities used. Although more recent studies have found substantially bigger JNDs and larger differences between the sensory modalities. For simple cross-modal stimuli such as auditory beeps and visual flashes, *JNDs have been reported in the order of approximately 25 to 50 ms (Keetels and Vroomen 2005; Zampini et al. 2003a, 2005b), but for audio–tactile pairs, Zampini et al. (2005a) obtained JNDs of about 80 ms, and for visual–tactile pairs, JNDs have been found in the order of 35 to 65 ms* (Keetels and Vroomen 2008b; Spence et al. 2001). More importantly, JNDs are not constant, but have been shown to depend on various other factors like the spatial separation between the components of the stimuli, stimulus complexity, whether it is speech or not, and—more controversial—the semantic congruency. Some of these factors will be described below._

JND is an abbreviation of Just Noticeable Difference. You can see that with an audio-tactile pair (like a snare hit), the point at which people notice asynchrony is about 80 ms, which is way above 7-20 ms. That tells us that either the latency is much higher than you think due to extraneous factors, or you really aren't perceiving those latencies.

Edited to correct a typo


----------



## Nick Batzdorf (Jan 6, 2019)

Havoc911 said:


> No, it doesn't invalidate anything, how could it?



My point is only what I said: our senses don't have the same degree of acuity every time we test them. That's especially true of hearing subtle differences. Of course you can fool yourself, but as I say all the time, I believe our ears are more sensitive than we think they are!



Saxer said:


> Most of the latency comes from note detection inside the wind controller and changing it into midi signals and feeding it into a virtual instrument which has attack phases too.



Do you use a WX or an EWI? I actually haven't had any latency issues when I'm playing the EWI 3020m synth, nor with my Yamaha VL1, so I suspect that the latency is due to the other things you mention. Also, there's a key delay setting that delays the response to avoid false finger triggers.

What is an issue with the VL1 is the same thing as with mics: latency going through the computer.

I've been saying for years that there should be a standard command for DAWs to tell audio interfaces to switch a track to direct monitoring mode while it's in Record (or Input). Simul-sync is hardly new technology; switching to monitoring off the record head was a feature on every tape recorder.


----------



## Saxer (Jan 6, 2019)

I use the WX7 mostly. But I also have an EWI and a Roland Aerophone AE10. The WX feels most comfortable to me and it's the only one with controllable vibrato speed and amplitude via lip pressure pitch bend. The latency depends a lot on the instrument. Synths and Wallander instruments are tightest and Samplemodeling feels slower.


----------



## Nick Batzdorf (Jan 6, 2019)

Saxer said:


> I use the WX7 mostly. But I also have an EWI and a Roland Aerophone AE10. The WX feels most comfortable to me and it's the only one with controllable vibrato speed and amplitude via lip pressure pitch bend.



Yeah, I'm originally a recorder player, so I use breath vibrato (and I can't play WX, because I don't play any Boehm instruments).


----------



## NYC Composer (Jan 6, 2019)

I used to be a pretty good groove keyboard player in funk bands. After years of having to play ahead of the beat at various latencies (they used to be quite dreadful, and they're still bad) while working by myself, I've been sitting in at a local jam session and noticed I have to REALLY lay back to find the pocket with the other players, because my instinct is to play EVERYTHING ahead of the beat so it will sound right. Anecdotal, but still.


----------



## Saxer (Jan 6, 2019)

Maybe you should add some latency to your live keyboards. But don't do unplugged gigs then...


----------



## NYC Composer (Jan 6, 2019)

The sit-in gig is on piano. I’m not sure they have an “add latency” button on Steinways.


----------



## dflood (Jan 6, 2019)

Speaking of pianos,

“in the case of a piano, the delay between a key reaching the key bottom and the hammer striking the string can be about 35ms for pp notes and -5ms for ff notes. These figures do not include the key travel time (the time elapsed between initial touch and the key reaching the key bottom) which for pressed touch can be greater than 100ms for pp notes and 25ms for ff notes”
(http://www.eecs.qmul.ac.uk/~andrewm/jack_am2016.pdf)

So I guess it’s the added or unexpected latency that we find disconcerting since we deal with it all the time in conventional instruments. Sorry, I know this has drifted quite a distance from buffer settings.


----------



## ZenFaced (Jan 7, 2019)

Low buffer settings are only required when recording a virtual instrument through your DAW.

For mixing set your buffer size very high - you don't need low buffer for mixing.

If you are recording an instrument via microphone and you have a low-latency interface (e.g. UAD Apollo), you don't need a low buffer: set it high, then mute the record-armed track in your DAW and monitor through your interface.

For recording virtual instruments, use your DAW's low-latency recording mode if it has one, and try not to use any FX plugins in your DAW. If you have to, you can turn off other virtual instrument tracks as well and keep just enough enabled to play along with your recorded material. If you can use a 64- or 128-sample buffer, that's great, but 256 to 512 samples is fine for recording virtual instruments.


----------



## NYC Composer (Jan 7, 2019)

A lot of people mix as they go, so for them, monitoring with effects is somewhat necessary. Buffer size needs are subjective as well. It’s not one size fits all.


----------



## Nick Batzdorf (Jan 7, 2019)

NYC Composer said:


> A lot of people mix as they go, so for them,monitoring with effects is somewhat necessary.



...which is why I say that all DAWs should all know how to tell all interfaces (via a standard code) to switch to direct monitoring when they're in input.


----------



## NYC Composer (Jan 7, 2019)

Nick Batzdorf said:


> ...which is why I say that all DAWs should all know how to tell all interfaces (via a standard code) to switch to direct monitoring when they're in input.


Interesting idea. I know that Logic has their own scheme which seems pretty successful.


----------



## Nick Batzdorf (Jan 7, 2019)

NYC Composer said:


> Interesting idea. I know that Logic has their own scheme which seems pretty successful.



You may be talking about the Apogee software they had, but it didn't really do anything different from having a stand-alone program to control the interface.


----------



## JamieLang (Jan 7, 2019)

Nick Batzdorf said:


> ...which is why I say that all DAWs should all know how to tell all interfaces (via a standard code) to switch to direct monitoring when they're in input.



If only that had already been worked out say 20+ years ago and Apple removed it at OS level.


----------



## Nick Batzdorf (Jan 8, 2019)

JamieLang said:


> If only that had already been worked out say 20+ years ago and Apple removed it at OS level.



Explain please. That's interesting!


----------



## JamieLang (Jan 8, 2019)

Nick Batzdorf said:


> Explain please. That's interesting!



ASIO 2.0 Direct Monitoring. Implemented on 99% of interfaces made in the last 20 years that weren't made by Avid. Hitting input enable in the DAW (or record enable, depending on preferences) reroutes the signal to loop back through the hardware mixer on the interface; the fader and pan controls then control that mixer instead of the software one.

You can hypothesize that Apple removed it because, in the spirit of "make it easy for people who don't know signal flow 101", the idea that there was a hardware mixer they couldn't see was too much... or because, if you assume everyone needs to monitor through round-trip buffered software, it spurs a LOT of buying of new computers and newer (and ironically inferior, in terms of latency) interfaces... it fosters a LOT of spending on computers for what is basically mature tech. Either way, or in whatever combination, they gutted it. RME can likely give better details, but Apple won't ALLOW you to do hardware monitoring directly from a DAW on OSX. DAW makers who also make interfaces get around that with proprietary code: Presonus interfaces and Studio One... Cubase and Yamaha/Steinberg interfaces... and now, I believe, Logic and Apogee, who are basically first party.

So, it still works in Windows 10, 64-bit, fully updated, with a DAW that supports it. I don't use it, because I monitor analog... and even when I don't, I'll just use the TotalMix console so that I can do a software routing to a software reverb, which is often cut off when you do hardware monitoring, since it loops back the input. So it's not like the 20-year-old thing is perfect in every scenario, but it does most of what anyone wants: not having to keep a second app open to control the hardware mixer on the interface, and just automagically "making it happen" from within the DAW.


----------



## Nick Batzdorf (Jan 8, 2019)

JamieLang said:


> ASIO 2.0 Direct Monitoring



Hah. I had no idea it existed.

But it had to, it's so obvious.

I use the Metric Halo software to do that - same idea as Totalmix.


----------



## NYC Composer (Jan 10, 2019)

I often wish that CueMix (MOTU) and now TotalMix (RME) actually integrated INTO Cubase, removing the need to click in and out of the software mixer and the DAW.


----------



## JamieLang (Jan 10, 2019)

NYC Composer said:


> I often wish that CueMix (MOTU) and now TotalMix (RME) actually integrated INTO Cubase, removing the need to click in and out of the software mixer and the DAW.



Wish granted.

Love,
Microsoft


----------



## JohnG (Jan 10, 2019)

great discussion. Thanks everyone


----------



## NYC Composer (Jan 10, 2019)

JamieLang said:


> Wish granted.
> 
> Love,
> Microsoft


I’m on Mac OS. Are you saying that Windows allows the integration of RME TotalMix literally WITHIN Cubase?


----------



## JamieLang (Jan 10, 2019)

NYC Composer said:


> I’m on Mac OS. Are you saying that Windows allows the integration of RME TotalMix literally WITHIN Cubase?



Yes. Always has. You can't do everything that's possible in TotalMix, but it automatically does the loopback monitoring and basic volume/pan inside the DAW. It behaves like a single mixer, automatically tapping the input loopback feed.

Apple removed the capability years ago at the OS level. Microsoft knows better than to mess with professional audio standards. That's a double-edged sword if you want to do stuff like watch YouTube videos while your ASIO DAW has the card's clock locked exclusively...


----------



## NYC Composer (Jan 10, 2019)

That’s a trade off I would happily make if I wasn’t a fairly savvy Apple OS guy since 1989. Me not know no Windows.


----------

