How precise, in terms of milliseconds and dB, are professional musicians vs. the score?

x-dfo
I was just exploring some cool max4live humanizing options for randomizing the note delay and the note velocity - does anyone have any numbers for what would approach actual human players?
 
It depends on the players a bit. The best players in London and Los Angeles are a lot more accurate than midi with samples. A lot more -- it's astonishing.

Midi with synths (not samples -- virtual or hardware synthesizers) is more accurate of course, because one doesn't suffer the vagaries of the samples' editing, which puts them all over the place. And that of course is leaving aside the mushier articulations.

For a long time I've watched composers try to 'humanise' their electronic music by introducing random errors and deliberate imperfections. I don't think it works very well.

For example, it's amazing how precisely top percussionists play -- if you write a figure four times in a row, unless you change the articulations or dynamics, they will play it almost exactly the same way each time.
 
I'm not sure how you could grade a musician's accuracy in terms of dB, but for rhythm, synths can of course be more accurate because they are always the same and can be edited to sound perfectly in time if necessary.

You can't ever use a humanization function to really make it sound like a human, because humans don't make truly "random" mistakes. There is a tendency towards certain grooves, the way notes are placed relative to one another on the scale of a bar or a couple of bars. That is hard to replicate, especially because it has a lot to do with what the specific "feel" or style of the music is, and how the players choose to interpret that.
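To make the "groove, not randomness" point concrete, here is a minimal sketch in plain Python (all function names and groove values are hypothetical, just for illustration): naive humanization draws an independent offset per note, while a groove-based approach ties the offset to the note's position in the bar, with only a little jitter on top.

```python
import random

def white_noise_offsets(n_notes, spread_ms=10.0, seed=1):
    """Naive humanization: every note gets an independent random offset."""
    rng = random.Random(seed)
    return [rng.uniform(-spread_ms, spread_ms) for _ in range(n_notes)]

def groove_offsets(beat_positions, groove_ms, jitter_ms=2.0, seed=1):
    """Groove-based placement: the offset depends on where the note falls
    in the bar (a fixed per-beat 'feel'), plus only a small jitter."""
    rng = random.Random(seed)
    return [groove_ms[pos % len(groove_ms)] + rng.uniform(-jitter_ms, jitter_ms)
            for pos in beat_positions]

# A made-up 4/4 feel: beat 1 on the click, 2 a touch late, 3 pushed, 4 late.
groove = [0.0, 6.0, -4.0, 5.0]
positions = [0, 1, 2, 3] * 4          # four bars of quarter notes
offsets = groove_offsets(positions, groove)
```

With the groove version, the same beat of every bar lands in roughly the same place, which is much closer to how the repeated-figure consistency described above actually behaves.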
 
Great answers, very helpful, thanks!
 
You may be working on the incorrect assumption that dynamics are absolute. Players make deliberate variations in volume to create articulations and phrasing. It's by no means random. Also, things like "soft" or "piano" are relative to the instrument and what else is happening in the music.

Unintentional variations in timing tend not to be random either. Musicians tend to drift gradually ahead of or behind the tempo, and then also correct gradually.
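That drift-and-correct behaviour can be approximated with a mean-reverting random walk rather than independent per-note randomness. A hypothetical sketch in plain Python (the function and its parameters are my own, not from any tool):

```python
import random

def drifting_offsets(n_notes, pull=0.3, step_ms=3.0, seed=7):
    """Each note's timing offset follows the previous one (a random walk),
    while 'pull' gradually corrects the accumulated drift back to the click."""
    rng = random.Random(seed)
    offsets, current = [], 0.0
    for _ in range(n_notes):
        current += rng.uniform(-step_ms, step_ms)  # drift a little
        current -= pull * current                  # gradually correct
        offsets.append(current)
    return offsets
```

The key property is that successive offsets are correlated: each note lands near its neighbour, so the part "leans" ahead or behind for a while and eases back, instead of jumping around the grid note by note.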
 
Dynamic variations in dB? Not meaningfully quantifiable, for the reasons listed above.

In terms of timing, if it's in any way helpful: when I'm editing a real orchestra I set my nudge value to 10 ms, and I wouldn't randomise anything more than ±10 ms from the timing 'centre'. But that's also contingent on what you're randomising (i.e. legato strings are much more forgiving than percussion) and the samples you're working with (which can have very different speaking envelopes). You'll also find that timing errors in top orchestras aren't that random - even top session players playing to a click will tend to push or lag as a section, depending on what the music is doing.
 
Thanks, that's a good pragmatic answer. I totally get the 'emphatic' drift from my years of playing violin and guitar - that's part of what I was asking: whether people have workflows for this kind of behaviour.
 
Play it in rather than drawing or step-time entry. It's much more efficient for the results you want.
 