# How to evaluate the choice: Mac Pro 2013 / Win 7?



## mcremolini (Oct 23, 2013)

Hi all!!
I'm in an increasingly difficult spot:
I'm a Cubase user, and I need to update my DAW within 2013.
The question is... at roughly the same cost (more or less):

1) a NEW Mac Pro (entry-level config, adding more RAM)
or
2) a Windows 7 machine assembled by projectlead.it (an Italian professional service for audio):
- Intel Xeon E5-2630 (6 cores / 12 threads with Hyper-Threading), 15 MB cache
- 3x PCIe 3.0
- 2x PCIe 2.0
- 2x eSATA
- 12x USB 2.0
- 2x USB 3.0 (SuperSpeed, ~10x USB 2.0)
- 2x FireWire (Texas Instruments chipset)
- 1 hot-swap bay for 3.5" or 2.5" SATA III drives
- 1x 500 GB SSD
- 1x 2 TB 7200 rpm SATA III 6 Gb/s HD

With the Mac Pro I would need to:
1) Convert my MOTU 828 MkI FW400 to Thunderbolt
2) Use a SATA/Thunderbolt hub (like the LaCie) to handle the 2 HDs (currently external on my MBP) that hold my sample libraries.

Thanks for the advice

Matteo


----------



## jaeroe (Oct 23, 2013)

a couple of questions to ask:

a) do you need such a powerful computer for doing music?
b) do you need to stay in OS X? so far, i don't see a reason you need to (unless you're just comfortable with that)

the new Mac Pro really seems geared more towards video professionals to me, and less so music. you'd be spending a lot of money, only to have to spend more on external storage.

i would be more interested in building a pc or hackintosh that had a good cpu (do you really need 12 cores today?), lots of SSD space for samples, lots of RAM, and then get a good audio interface with a good driver. and i think that would be a lot cheaper.


----------



## midi_controller (Oct 23, 2013)

jaeroe @ Wed Oct 23 said:


> i would be more interested in building a pc or hackintosh that had a good cpu (do you really need 12 cores today?), lots of SSD space for samples, lots of RAM, and then get a good audio interface with a good driver. and i think that would be a lot cheaper.



Totally agree with this. If you are going to go for a PC, I think it's a much better idea to get a 4930k build with more SSD space (500 GB is going to disappear very quickly) and 64 GB of RAM.


----------



## rgames (Oct 23, 2013)

Mac can't compete on value, so if you really want the best value for your dollar then the PC is the obvious choice. There's no debate there.

However, you don't need a Xeon. It's costly overkill. A 4930k is really overkill, too. You can do fine with a 4770k.

Spend the extra money on more SSDs. They do much more to improve system performance than any amount of money spent on a processor.

rgames


----------



## jaeroe (Oct 23, 2013)

if you want to stay on Mac OS, but are looking at building a PC because of pricing, consider just building a hackintosh. it is literally the same thing as building a PC. you can configure it as you like. just check places like tonymacx86 to find successful builds from people using the computer the way you intend to. you can search their database by external hardware, software, and OS. so, if you have an audio interface you want to use with, say, Logic, just type all the info into the search and you'll likely find info on it.

while the new mac pro's don't seem to make a lot of sense for most people in music, you don't have to abandon mac os if you don't want to. plenty of options.


----------



## mcremolini (Oct 24, 2013)

Thank you all for the answers. :D 
The shared opinion seems to be that the new Mac Pro is "oversized" (CPU-wise) for music production, even when working with large cinematic templates.
I have no particular interest in staying on OS X; I work well on both Windows and OS X.
I prefer that my DAW is assembled by professionals who are familiar with the components: paying more, of course, but avoiding the compatibility hunt myself.
In Italy there is projectlead.it, but for now they only build with the Xeon E5-2630/2640.
I'll see if they can customize a build with the 4770K.

On the iMac side, a maxed-out iMac will cost approximately €3000 in Italy.
It's not easy to make a comparison, but is the iMac the more recommended choice in the Mac world for audio?

ciao!!


----------



## snattack (Oct 24, 2013)

rgames @ Wed Oct 23 said:


> Mac can't compete on value, so if you really want the best value for your dollar then the PC is the obvious choice. There's no debate there.
> 
> However, you don't need a Xeon. It's costly overkill. * A 4930k is really overkill, too. You can do fine with a 4770k.*
> 
> ...



What are you basing this argument on? I'm also looking to assemble a Hackintosh, and 4930K has 6 cores compared to 4 cores. Are you suggesting that audio applications don't take advantage of multiple cores enough?

Multiple benchmarks I've read suggest that in single core performance the 4930K is slower than 4770K, but in multicore performance it's about 30% faster (depending on which tests to read).

I'm asking this because there's a significant price leap, the 4930K is more complicated to get working as a Hackintosh, and it seems unnecessary to put that time and money into something that isn't going to pay off.


----------



## FriFlo (Oct 24, 2013)

Without being an expert, I want to second the PC choice. After almost 10 years of using a Mac Pro as my main DAW (with PCs as slaves) I recently made the jump to PC for my DAW. I did that because I switched from Logic to Cubase.
I built a machine for a little over €3000, which is a little overkill CPU-wise for now; if I had gone with a slightly less powerful CPU, I might even have stayed under €3000.

This machine has 64 GB of RAM and 2 TB of SSDs across the system drive and 3 sample drives. I could even upgrade to more and faster SSD space via PCIe if I found out I needed it.
The PC performs superbly and, thanks to the SSDs, is so much more powerful than my now-redundant 2008 Mac Pro. Of course the new Mac Pro would be fast, maybe even faster than this machine! But my €3000 would have bought me only a basic configuration with a 256 GB system drive and 12 GB of RAM... 
Put at least another €3000 on top of that and you might get the expansion chassis for PCIe cards (I use RME MADI FX) and an external Thunderbolt chassis for more SSD sample storage. I think it would come to even more than another €3000, but I'm too lazy to calculate it now.

Let's just say: a meaningful configuration for a sample-based composer would be up to twice as expensive if you use FireWire or USB audio devices, or more than twice as expensive using PCIe audio cards, compared to a similar PC system. Maybe the graphics are better on that Mac Pro, and the CPU a little faster. But that is not really noticeable for the kind of work we do!
I can only say I am more than happy with my Windows-based system! Of course, there are occasions where I miss Mac OS, but almost always that just comes down to what I'm used to. For those few cases where Mac OS X really has some advantages over Windows (e.g. aggregate audio devices or virtual MIDI ports), the price the new system asks simply doesn't cut it!

Furthermore, I found Cubase runs much more smoothly on PC, with almost no crashes, compared to the 2008 Mac Pro. It is no secret that both Vienna Ensemble and Cubase are programmed primarily for Windows-based systems.

So IMHO: the new Mac Pro might be OK for recording studios. It is certainly too expensive to make a great DAW for composers...


----------



## rgames (Oct 24, 2013)

snattack @ Thu Oct 24 said:


> What are you basing this argument on? I'm also looking to assemble a Hackintosh, and 4930K has 6 cores compared to 4 cores. Are you suggesting that audio applications don't take advantage of multiple cores enough?


The benchmarks you're probably referring to don't measure real-time performance - they measure processor performance. The benchmarking sites never measure real-time performance because people outside the DAW world don't really care about it. As such, they're not telling you the whole story for DAW use.

The bottleneck for DAWs is (usually) not the processor and hasn't been for several years (at least on PC; not sure about Mac). The one exception is if you're running huge numbers of synths - then *maybe* it matters, but I've never seen it. The real bottleneck is the ability to fill the audio buffer and deliver it to the converter in time to keep the audio stream going. When that gets interrupted, you hear clicks and pops. The interruptions have more to do with how often the CPU is pulled away to do other tasks than with how powerful it is. Raw CPU power helps a little, but if you're getting clicks and pops you'll get better results by eliminating the interruptions than by buying a more powerful CPU.

For example, my 4-core / 8-thread i7 920 (five years old) rarely gets above 20% CPU usage on a 400-track orchestral template with loads of plug-ins and buffer at 128 samples (latency around 3-4 ms). That latency is already below the threshold a human being can detect, so a more powerful processor won't do anything in terms of responsiveness.
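For reference, the buffer-to-latency arithmetic behind figures like these is simple (a minimal sketch; real interfaces add driver and converter overhead on top of the raw buffer time, which is why reported figures come out a bit higher):

```python
# Raw one-way latency contributed by the audio buffer itself.
# Real-world latency = this figure + driver/converter overhead.

def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int = 44100) -> float:
    """Milliseconds needed to fill a buffer of the given size."""
    return 1000.0 * buffer_samples / sample_rate_hz

print(buffer_latency_ms(128))         # ~2.9 ms at 44.1 kHz
print(buffer_latency_ms(128, 48000))  # ~2.7 ms at 48 kHz
print(buffer_latency_ms(256))         # ~5.8 ms at 44.1 kHz
```

A 128-sample buffer at 44.1 kHz works out to roughly 2.9 ms, consistent with the "3-4 ms" quoted once overhead is included.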

Bottom line: if you want to improve performance, look at the real-time elements of the system (audio and video, possibly network). CPU doesn't make much difference these days unless you're running dozens of synths, and even then I've never seen any data that says it really matters for any practical configuration.

SSD's and audio hardware/drivers are the most important factors for a DAW. Spend your money there first and if you have anything left over, then throw it at a more powerful CPU.

rgames


----------



## mcremolini (Oct 24, 2013)

Can you describe your DAW?
Do you use only SSDs?
The setup I pasted at the beginning of the thread will be used with a MOTU 828 MkII FW.
If you can manage 400 :shock: :shock: orchestral tracks (load and mix) with your i7, where every track is a Kontakt instance (for example), I should be more than fine with a Xeon... right? 

thanks!!!!!!


----------



## snattack (Oct 24, 2013)

rgames @ Thu Oct 24 said:


> snattack @ Thu Oct 24 said:
> 
> 
> > What are you basing this argument on? I'm also looking to assemble a Hackintosh, and 4930K has 6 cores compared to 4 cores. Are you suggesting that audio applications don't take advantage of multiple cores enough?
> ...



Interesting! Do you know any way to check these kinds of things before buying? Any particular components (mobos, memory, network adapters) you'd prefer/avoid?

I'm currently using a MacBook Pro, and I had big problems until I reduced the number of audio channels streamed via VE Pro and removed the Soundtoys plug-ins (those plug-ins plus reverbs should benefit from multiple cores, if I've understood correctly).


----------



## midi_controller (Oct 24, 2013)

rgames @ Thu Oct 24 said:


> For example, my 4-core / 8-thread i7 920 (five years old) rarely gets above 20% CPU usage on a 400-track orchestral template with loads of plug-ins and buffer at 128 samples (latency around 3-4 ms). That latency is already below the threshold a human being can detect, so a more powerful processor won't do anything in terms of responsiveness.



You must be running slaves. I have a 3930k with SSDs and I can easily get all 6 cores crunching even at a buffer of 512, especially if I'm using Play. Reverbs can still be fairly CPU intensive, especially if you are running something like B2. Then add in a handful of synths and you will be glad you spent a bit more for those 2 extra cores. Out of everything in my system, I'd actually say that my CPU is absolutely my bottleneck now, since it's the only thing I ever worry about maxing out.

One thing that is very important is to build a machine not just for what you need now, but for what you will need next year. The last thing you want is to be stuck deciding which to buy: that new sample library, or the upgrades for a system that can actually run it well.


----------



## mcremolini (Oct 25, 2013)

I completely agree. A machine for at least 5 years of untroubled work.
Even I had to extend my MacBook Pro with VE Pro on a Win7 slave...


----------



## Daryl (Oct 25, 2013)

Just to add to some of the information (and misinformation) above:

1) For samples (especially ones with lots of scripting) clock speed is more important than number of cores
2) Number of cores is more important when you use lots of plugins
3) People using slaves with VE Pro always forget to add the extra buffer to their description of what latency their system has. :wink: 
4) When someone tells you that you can't feel a difference between one buffer and another, remember that what they really mean is that they can't.
5) SSDs only matter when using samples that either have a gazillion crossfading layers per note, or multiple microphone positions. It all depends which sample libraries you intend to use.
6) Some sample players (like Kontakt) can do their own multiprocessing, which can help CPU/ASIO performance.
7) VST3 is able to switch off processing when there is no audio present, so it can make a system appear more efficient than it really is.
8) One of the new Xeons (up to 12 cores, 24 threads) is certainly faster than all the CPUs mentioned above. It's all a question of price point.

D


----------



## mcremolini (Oct 25, 2013)

thanks for your details.


----------



## rgames (Oct 25, 2013)

Daryl - show me the details of a project that produces pops/clicks on a 4770k (or whatever) but not on a 4930k (or whatever). That will show under what conditions a processor matters for DAW use.

I'm open to the idea but nobody has ever demonstrated it. It's so easy - just do it. Then you won't have to keep repeating yourself because the data will be there.

Also, show me the data that shows the latency threshold for human perception. It's so easy - just do it. Then you won't have to keep repeating yourself because the data will be there.

So many discussions on these topics and, alas, I've never seen any data other than my own which is reflected in my comments above.

Just do it!

rgames


----------



## Daryl (Oct 25, 2013)

rgames @ Fri Oct 25 said:


> Daryl - show me the details of a project that produces pops/clicks on a 4770k (or whatever) but not on a 4930k (or whatever). That will show under what conditions a processor matters for DAW use.
> 
> I'm open to the idea but nobody has ever demonstrated it. It's so easy - just do it. Then you won't have to keep repeating yourself because the data will be there.
> 
> ...


Richard, you only ever believe what you have experienced, so showing you anything is pointless. The data has all been tested by Scott at ADK, Vin at DawBench and many other sources.

BTW your claim about human perception falls flat because you keep talking about 3-4ms, when clearly it's about the difference an extra 3-4ms makes, which is not the same thing. OK, let me explain.

Imagine your soundcard buffer is set to 128 and this produces a latency of 4ms.
Now introduce VE Pro at 1x buffer. We are now at 8ms
Now introduce the speakers to your ears, which is another 4ms. We are now at 12ms. At what point do you believe that I, as a professional pianist, can't feel any difference?
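Daryl's walkthrough above can be sketched numerically (a quick check using his round figures; the ~1.4 m listening distance is my assumption, chosen because it matches his ~4 ms acoustic figure):

```python
# Sketch of Daryl's latency accumulation, using his round numbers.
# Assumptions: a 128-sample soundcard buffer quoted as ~4 ms (incl. overhead),
# VE Pro at 1x adding one more buffer's worth, and sitting ~1.4 m from the
# monitors (sound travels ~343 m/s in air).

soundcard_ms = 4.0             # 128-sample buffer, as quoted
vep_ms = soundcard_ms * 1      # VE Pro at 1x buffer doubles it: 8 ms so far

speaker_distance_m = 1.4
speed_of_sound_m_per_s = 343.0
acoustic_ms = 1000.0 * speaker_distance_m / speed_of_sound_m_per_s  # ~4.1 ms

total_ms = soundcard_ms + vep_ms + acoustic_ms
print(round(total_ms, 1))  # ~12.1 ms, close to the 12 ms Daryl arrives at
```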

D


----------



## EastWest Lurker (Oct 25, 2013)

I am in the middle here. If you are on a big, wide stage with live players, there is more latency between the guy way over on the left and the guy way over on the right, but we adjust.

So sure, I want the lowest latency possible, but Logic set to a 256 buffer gives me 13.5 ms roundtrip, 7.1 ms output. Double that with a 1x buffer in VE Pro 5 and we are talking 27 ms roundtrip and 14.2 ms output.

A millisecond is a thousandth of a second. Seems to me a good pianist can deal with it.


----------



## jaeroe (Oct 25, 2013)

Jay - no wonder PLAY works flawlessly for you... you're running at a 512 buffer in the end (every additional buffer in VE Pro is a multiple of your DAW's buffer setting).
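That stacking can be sketched like this (a simple model of how VE Pro's extra buffers multiply the host buffer; the function name is mine, not a VE Pro API):

```python
# Effective buffer once VE Pro adds extra buffers on top of the host's.
# Each VE Pro "extra buffer" is one more host-buffer's worth of samples.

def effective_buffer(host_buffer: int, vep_extra_buffers: int) -> int:
    return host_buffer * (1 + vep_extra_buffers)

print(effective_buffer(256, 1))  # 512: a 256 host buffer + 1x VE Pro
print(effective_buffer(128, 1))  # 256: halving the host buffer halves the total
```

This is why knocking the host buffer down from 256 to 128 helps so much when VE Pro is in the chain: the multiplier applies to the smaller number.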

256 is fine for me, but 512 really does slow down the process when programming rhythmic stuff. but i'm working at a combined 256 just fine on my setups. i'm not using a huge amount of PLAY these days - plenty else, though.

if you're on a big wide stage, a player's instrument still responds the moment he/she plays it - they're just hearing the other players later. this is a very different thing than VI latency. at a 512 buffer your instrument is lagging well behind. that's fine for slow string lines (the samples are late anyway), but for rhythmically precise stuff it can definitely slow down the process - especially if you don't like to quantize.

also, when recording live, for rhythmically precise stuff, everyone is either on a click, or the conductor is anticipating. last resort is the music editor moves the stuff over. but again, the instruments don't speak late in relation to when they're played (.... except the brass.... they're always late :mrgreen: ....).

if you want to know what you can detect with delay, pull up a digital delay, start playing and then mess with the delay setting in increments of 10ms - see how quickly you start to notice it. it's pretty quick.


----------



## EastWest Lurker (Oct 25, 2013)

Yes the latency issue is different but my point is that at first it is disconcerting, but good players learn to adjust.

And we are still talking 27 ms roundtrip and 14.2 ms output. Not tenths of a second; not hundredths of a second; thousandths of a second. I am a good enough pianist to have adjusted to that amount. And in really critical rhythmic situations I knock it down to 128 in Logic, then knock it back up afterwards; since the samples are loaded in VE Pro, it happens fast.

The good news is that it seems to me that now, with LP X under Mavericks, I can knock it down to 128 in Logic Pro for most of my projects.


----------



## snattack (Oct 25, 2013)

Does Logic compensate for this? Cubase certainly doesn't. Instinctively I, also a pianist, adjust to the latency by playing ahead of the notes, but the problem is that the same recorded material then ends up ahead of the beat during playback.

Cubase now has a MIDI delay compensation button on each track, but that doesn't move the notes (if I've understood correctly, or am I using it wrong?); when activated it just plays the track back later. So when editing, you have to put the notes before the actual beats for it to work, and that doesn't work for me.

This gives me two choices:
1. Move everything I record forward in the editor (which is a bottleneck in the workflow).
2. Record everything with Bob Marley-timing.

That's why I need a more powerful CPU, or whatever is needed, to get latency down.


----------



## jaeroe (Oct 25, 2013)

EastWest Lurker @ Fri Oct 25 said:


> Yes the latency issue is different but my point is that at first it is disconcerting, but good players learn to adjust.
> 
> And we are still talking 27 ms roundtrip and 14.2 ms output. Not tenths of a second; not hundredths of a second; thousandths of a second I am a good enough pianist to have adjusted to that amount. And in really critical rhythmic situations I knock it down to 128 in Logic, then knock it back up, and since the samples are loaded in VE Pro, it happens fast.
> 
> The good news is that it seems to me that with now LP X under Mavericks for most of my projects I can knock it down to 128 in Logic Pro.



next time you're doing a session with live players, make them monitor through a DAW at a 512 buffer and see what they say to you. see how much slower the session goes. they don't like it and we shouldn't have to either.


----------



## EastWest Lurker (Oct 25, 2013)

jaeroe @ Fri Oct 25 said:


> EastWest Lurker @ Fri Oct 25 said:
> 
> 
> > Yes the latency issue is different but my point is that at first it is disconcerting, but good players learn to adjust.
> ...



Well I always record live players to audio sub mixes and knock the buffer down to 64.

When it comes to digital audio (and most things musical) "shouldn't have to" is one of the more useless phrases.

it is what it is. We do what we need to do to get the job done.

Just my opinion, of course.


----------



## jaeroe (Oct 25, 2013)

But, you don't have to put up with it today, that's my point. There are plenty of reasonable ways around it. As it pertains to this thread, the question is 'is the new Mac Pro worth the price tag'. I think there are better options for most composers.

Yes, most of us know about printing/bouncing audio down, etc. to track live against. The point I was making is that we don't ask our live players to play against that kind of latency (512), so why put up with it for tracking MIDI? There are a lot of options for getting the latency lower, and running at a 512 buffer slows a lot of people down. My last film was 60+ minutes of music, start to finish, in 5 weeks. I really don't want to be dicking around at a 512 buffer setting. Most of us here could probably do it, but it just slows the process.

My own experience is that you get better bang for your buck by having 2-3 respectably spec'd machines, than one of latest greatest today. The new Mac Pro does not seem like a very good use of money for most composers today, especially because a lot of that technology that is included doesn't really cater to our crowd very well.

So, I think it's better to build (or have built) a few PCs (or hackintoshes - same thing) and make sure you have or get a good audio interface with a great, low-latency driver. So, instead of the OP's extreme PC, I'd do 2-3 good builds.


----------



## midi_controller (Oct 25, 2013)

jaeroe @ Fri Oct 25 said:


> So, I think it's better to build/have built a few PCs (or hackintosh -same thing) and make sure your have or get a good audio interface with a great, low latency driver. So, instead of the OP's extreme PC he mentions, I'd do 2-3 good builds.



Something that I'm very interested in is seeing how my computer performs once I move to all Kontakt-based libraries. The only reason I have been running a buffer of 512 is Play. If I can get it down to 256 or possibly 128 using only Kontakt-based stuff (and I think I can), I will probably recommend getting a single computer instead. It's less hassle, less noise, and you don't have to worry about VE Pro adding additional latency. Just waiting on a couple of libraries to come out before I make the jump.


----------



## rgames (Oct 25, 2013)

The good news on the latency debate is you guys don't have to take my word for it - you can prove it to yourselves. I've attached a screenshot of the transition between two slurred (aka legato) notes on a clarinet. You can clearly see one note resonating on the left and the transition to a new note on the right.

You can also clearly see that the time for the transition to complete is about 70 ms. Furthermore, that latency will vary for different notes and can be well over 100 ms.

Now if clarinetists have been playing clarinets for a couple hundred years with latencies of 70+ ms, why are latencies of 1/10th that amount suddenly a problem?

They're not. It's all hype.

QED

Go record a clarinet or any other instrument and prove it to yourself. Better yet, record a double bass or tuba and note that those instruments have latencies of many hundreds of milliseconds. Again, we have hundreds of years of people playing those instruments with no worries about latency.

QED again!


----------



## rgames (Oct 25, 2013)

Daryl @ Fri Oct 25 said:


> Richard, you only ever believe what you have experienced, so showing you anything is pointless.


Maybe. I'm just glad that we're agreed you still haven't shown me anything 

Seriously, though - I'd love to see the data. My comments are based on my experience, yes, but until someone shows me something else, what else can I base my judgment on?

rgames


----------



## Zelorkq (Oct 26, 2013)

I'd just like to chime in here quickly:

I'm not going to discuss latency, I'm jumping back a few posts...

I just want to say that CPU power CAN be very important, depending on your setup. If for instance you are using VSL MIR with a moderate amount of tracks, then a powerful CPU is a must! I'm using a Xeon E3-1230 V2 (which is almost comparable to an i7-3770, but a lot cheaper) and I'm easily maxing CPU performance.

However, I've never seen a more CPU-hungry piece of software than VSL MIR.


----------



## paaltio (Oct 26, 2013)

rgames @ 2013-10-26 said:


> Go record a clarinet or any other instrument and prove it to yourself. Better yet, record a double bass or tuba and note that those instruments have latencies of many hundreds of milliseconds. Again, we have hundreds of years of people playing those instruments with no worries about latency.



Well that's a completely misleading usage of the word latency in this context, considering it has a very specific usage in music technology, which this thread seems to be about.

I'm a cellist, and if my cello took 10ms from when I hit the string to produce any sound, I would promptly use it as firewood.


----------



## Daryl (Oct 26, 2013)

Richard, again I think you are missing the point. It doesn't matter what a clarinetist does. Talk to me about the latency of a snare drum. Is that 70ms? Of course not. Look, I'm not saying that latency is a problem for all instruments and articulations, but for some it is.

D


----------



## snattack (Oct 26, 2013)

Daryl @ Sat Oct 26 said:


> Richard, again I think you are missing the point,. It doesn't matter what a Clarinetist does. Talk to me about the latency of a snare drum. Is that 70ms? Of course not. Look I'm not saying that latency is a problem for all instruments and articulations, but for some, it is.
> 
> D



I agree. It's all about what feels intuitive on the respective instrument, and for the individual musician. We can argue all we want about what's measurable or not, but it all comes down to the feeling of the performer. I had huge problems playing unweighted keys in the beginning; as a pianist I'm used to the small latency that occurs between the moment the finger hits the key and the hammer hitting the strings, which produces the sound when the key reaches the bottom. But on the other hand, I hate the latency produced by string libraries' legato transitions, because they don't always feel natural.

Using math as a measurement is just ignorance in this context.


----------



## rgames (Oct 26, 2013)

paaltio @ Sat Oct 26 said:


> I'm a cellist, and if my cello took 10ms from when I hit the string to produce any sound, I would promptly use it as firewood.


Your cello definitely takes longer than 10 ms to respond. Do the same study: go record yourself and look at the transition between notes - you'll see one note die out, a period where there's some "junk" as the new resonance is established, then a build to the resonance at the new pitch. It generally takes even longer on cello than clarinet because it's a larger instrument and, therefore, the timescales are longer. It's *really* easy to do. Record a bunch of such transitions - I bet you'll see a lot of them that are 100 ms or more, especially in the lower register.

This is what I call a "Flat Earth" problem: people are confused about what they can perceive. For countless millennia, people thought the Earth was flat because, well, it sure looks flat, doesn't it? Are you telling me my eyes are lying to me? Yes, in fact, they are. Likewise, peoples' perception of latency is lying to them as well.

So how did latency become such a hyped problem? Waaaayyyy back when digital first appeared, it was a problem because the converters were slow. That fact gave rise to latencies that were, in fact, easily perceived. However, we've moved beyond that to a point where it's not an issue any more but the perception stuck.

I'm still open to any data that show otherwise. Alas, there are none to be found, even though it is so easy to collect... 

Long live the Flat Earth Phenomenon!

rgames


----------



## EastWest Lurker (Oct 26, 2013)

The odds were that eventually it had to happen and it has. For once, I agree with Richard.


----------



## paaltio (Oct 26, 2013)

rgames @ 2013-10-26 said:


> paaltio @ Sat Oct 26 said:
> 
> 
> > I'm a cellist, and if my cello took 10ms from when I hit the string to produce any sound, I would promptly use it as firewood.
> ...



In synthesizer terms, you're conflating the attack envelope with sample #0. These have a pretty fundamental difference for the player.


----------



## snattack (Oct 26, 2013)

rgames @ Sat Oct 26 said:


> paaltio @ Sat Oct 26 said:
> 
> 
> > I'm a cellist, and if my cello took 10ms from when I hit the string to produce any sound, I would promptly use it as firewood.
> ...



This is simply not true in all contexts, and if this is what you call the "flat earth phenomenon", then I'd call yours the "can't see the forest for the trees" phenomenon.

1. Your clarinet test involves ONE legato transition in ONE context with ONE musician. Do you really think the instrument adds 70ms of latency to every note in a scale run? Or in fast passages in general? Or in a trill? Open a trill sample and check the timing between the notes there.

2. It's true that for some instruments it takes some time and force to get the string, reed or mouth vibrating, but that also differs depending on context, how expressive the passage is, the actual build of the instrument, its size, etc.

3. What is a fact when recording with latency in a sequencer is this: EVERY note in EVERY context gains latency, which makes some contexts unplayable - and some not - at least for me. The biggest respect for any performer who can nail all kinds of fast material at 10ms + 100% buffer = 20ms latency, but that is not me for sure, and I'd find it a problem in any blind test.

4. Also add the actual duration of the legato transition in the samples, which - if you're right - adds even more latency, because that's what happens with "real" musicians and that's what's sampled in most libraries nowadays. I found this a problem in HS, for instance, where the legato transitions make the notes play so late that I had to move everything back almost an 8th note at 120bpm after recording.

5. And finally: why even bother using numbers and math as a measurement here? I'm all for scientific philosophy in most cases, except when it comes to music. There was a time when I looked at specs for synths or libraries to see where I could get the most out of them on paper, but that has changed; now I'll choose a less extensive synth or library that feels better. We can read all we want, but in the end all that matters is what ACTUALLY feels right for us as performers, so why even try to generalize what "should" feel right for everyone?


----------



## jaeroe (Oct 26, 2013)

players have spent their whole lives responding to their instruments. latency introduces an added delay that they are not used to - plain and simple. the question is when does it become noticeable - but, it does become noticeable, and that is the reason latency is an issue (especially when it messes with a performer).

again - next time you have a live player over, make him or her monitor through a 512 buffer and see how much they like it. if you want to be anal about it, put an outboard digital delay on them. either way, they will rightfully complain. it is absolutely noticeable and a real drag to compensate for in rhythmic playing.

richard - what you're talking about is quite different. there are all sorts of quirks with different instruments. but latency throws them a curve that they're not used to. the more rhythmic precision needed, the more of a problem it is.


----------



## germancomponist (Oct 26, 2013)

What about direct monitoring? That's how I always record live players...


----------



## germancomponist (Oct 26, 2013)

There is another old trick we used: copy your backing track, offset it by the negative value of the latency, and use that track for monitoring...


----------



## DocMidi657 (Oct 26, 2013)

Hi,

I had a very pro drummer in my studio - he's worked with Hall and Oates and Patty Smythe, among other notables - recording from a set of V-Drums triggering samples. He doesn't really know anything about music/software technology at all. 

Immediately, and I mean immediately, when we started he complained about the "delay" coming through his headphones from his drum sound when playing. The sample buffer was set at 256. I changed it to 128 and he did not say anything about it for the rest of the session.

I can easily tell and feel the difference playing a sampled piano at 256 versus 128. I wish I couldn't, so I could get more out of my system. I respectfully disagree that it's hype, though I do believe a 256 buffer is not a deal breaker, feel-wise, for non-quick-attack sounds like strings.

I contend that if you are more composer than player it's probably less noticeable to you. I do see Richard's data, which baffles me as to why I feel this latency, but I really do.

Just my 2 cents. 

Dave


----------



## germancomponist (Oct 26, 2013)

Yeah, with MIDI instruments, a buffer of more than 128 can be a big problem. When recording live instruments, use my "moved backing track trick", which works great even with the biggest latencies... because then there is absolutely no latency.


----------



## jaeroe (Oct 26, 2013)

germancomponist @ Sat Oct 26 said:


> Whats about direct monitoring? This is how I always record live players.... .



you're missing the point...

i'm not advocating actually recording that way. the point is, it illustrates how latency is an issue - the realtime effects of latency. it is often worse for midi/VI stuff than live recording (meaning there is more delay), and if it is an issue for live tracking, then you can be pretty sure it affects things for midi performance.

most guitar players know the effect of changing the setting on a digital delay. they can tell very quickly when you've adjusted the delay time in MS.


----------



## germancomponist (Oct 26, 2013)

jaeroe, read my other posts. ...


----------



## jaeroe (Oct 26, 2013)

i did - you still seem to miss the point. i'm not talking about recording audio for audio's sake. it simply illustrates latency - what you'll be dealing with when hearing stuff back as you track midi. i'm not talking about recording techniques.

yes - 128 buffer setting is great. even 256 is usually fine for me. but 512 is pretty ridiculous.


----------

