# Post-prod MIDI editing patterns



## JoePlummer (Apr 30, 2017)

Novice question: I've been struggling to find a way to edit continuous CC data after a performance using an external controller (a Korg nano or a Tecontrol BBC2) without having to resort to a separate, second MIDI track. I'd like to be able to just loop a few measures and edit note-by-note expression (or breath, or attack) to humanize and go for hyper-realism, preferably within the same track (or lane, etc.). And I've read enough threads to realize that I may just have to give up and edit (i.e., draw) the data using a mouse. Not what I wanted, but the majority seems to do it that way. That approach seems to take a long time (maybe I'll develop a practice effect).

So, the question is: do regular patterns exist for such data? Do note-by-note envelope patterns exist for different instrument articulations? Perhaps the attack/decay of a detache viola looks similar to a double-tongued oboe (I doubt it, but...), making the regularity of the pattern predictable, or re-usable, or capable of being templatized. Can we automate a first pass? (Maybe this is what those legato scripts are already doing?)
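The "automate a first pass" idea can at least be sketched: a small script stamps a templatized per-note envelope onto each note, and the human pass then refines it. Everything below is invented for illustration; the template numbers are not measured from real articulations, and no DAW API is involved:

```python
# Hypothetical "first pass" CC-11 envelope generator: given a note's start/end
# times and a named articulation template, emit (time, value) points that a
# script could write into a DAW's CC lane. Shapes and numbers are made up.

# Each template: (attack_fraction, peak, sustain, release_fraction, tail)
TEMPLATES = {
    "detache":        (0.15, 110, 96, 0.25, 70),
    "double_tongued": (0.05, 120, 100, 0.10, 80),
}

def first_pass_envelope(start, end, articulation, points_per_phase=4):
    """Return a list of (time, cc_value) pairs for one note."""
    atk_frac, peak, sustain, rel_frac, tail = TEMPLATES[articulation]
    dur = end - start
    atk_end = start + dur * atk_frac
    rel_start = end - dur * rel_frac
    env = []
    # Attack: ramp from the tail level up to the peak.
    for i in range(points_per_phase):
        t = i / (points_per_phase - 1)
        env.append((start + t * (atk_end - start),
                    round(tail + t * (peak - tail))))
    # Decay to the sustain level, then release back toward the tail.
    env.append((rel_start, sustain))
    env.append((end, tail))
    return env
```

A real version would read note starts/ends from the DAW and write the points into the CC lane; the point of the sketch is only that a template is a handful of numbers, so accumulating a per-articulation library of them would be cheap.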

We know that patterns which are too regular need to be humanized/randomized. But the time needed to edit continuous data in a large work makes the effort prohibitive. Maybe there's a middle ground. Maybe I will offend folks who see this as the individuality of the art. This is not my intent.


----------



## pmcrockett (Apr 30, 2017)

Can you elaborate on how the note-by-note editing process you describe in the first paragraph would work? It sounds like you're talking about a hybrid approach that combines both step-editing and realtime input. I've never considered this possibility, and the idea intrigues me.

I've spent quite a bit of time thinking about CC editing workflow, and the best thing I've come up with so far is a system that allows you to skew existing CC nodes along a new trajectory while keeping their existing shape intact. The script I'm using calculates a slope from a note's starting CC node to its ending node and treats each internal node as an offset from that slope, and this allows you to change and subdivide the slope but retain the offsets. The result is that you can substantially change the broad shape of the CC data but retain the nuances, and you can do it very quickly. It's intended for touching up CC data captured live, not for drawing it from scratch.
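For readers trying to picture the slope-plus-offsets decomposition described above, here is a minimal sketch in Python of the idea. The `(time, value)` node format and the function names are assumptions, and no DAW scripting API is involved:

```python
# Decompose one note's CC nodes into a straight line from the first node to
# the last, plus per-node offsets from that line. Re-targeting the endpoint
# values then rebuilds the curve with all the internal nuances intact.

def decompose(nodes):
    """Split CC nodes into (endpoints, offsets-from-the-endpoint-line)."""
    (t0, v0), (t1, v1) = nodes[0], nodes[-1]
    slope = (v1 - v0) / (t1 - t0)
    line = lambda t: v0 + slope * (t - t0)
    offsets = [(t, v - line(t)) for t, v in nodes]
    return (t0, v0, t1, v1), offsets

def rebuild(new_start, new_end, offsets):
    """Apply the stored offsets to a new start/end value trajectory."""
    t0, _ = offsets[0]
    t1, _ = offsets[-1]
    slope = (new_end - new_start) / (t1 - t0)
    return [(t, max(0, min(127, round(new_start + slope * (t - t0) + off))))
            for t, off in offsets]
```

Changing only the two endpoint values re-slopes the whole note while every internal wiggle survives, which is presumably what keeps this kind of touch-up fast.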

In theory, abstracting slopes on a note-by-note basis would allow you to apply those slopes to other sequences of notes, or to apply the other notes' offset data to the original slopes. I'm not sure how useful this would be for rapid, broad-strokes humanization because I haven't tried it yet, but the possibility has definitely occurred to me. I have the beginnings in mind of a system that would let you designate a single track as a template and then abstract performance elements from it and apply them to other tracks, effectively generating new performances based on the characteristics of the existing one, but I don't have any practical implementations of the idea to show yet.
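The note-as-template idea could extend the same decomposition by normalizing time, so that one note's offsets can be replayed over a note of a different length and level. Again, a hypothetical sketch rather than an existing implementation:

```python
# Store one note's CC offsets against normalized (0..1) time, then replay
# them over a different note's time range and endpoint values.

def extract_template(nodes):
    """Normalize a note's CC offsets to fractional (0..1) time."""
    (t0, v0), (t1, v1) = nodes[0], nodes[-1]
    span = t1 - t0
    line = lambda t: v0 + (v1 - v0) * (t - t0) / span
    return [((t - t0) / span, v - line(t)) for t, v in nodes]

def apply_template(template, start_t, end_t, start_v, end_v):
    """Replay a template over a new note's duration and endpoint values."""
    return [(start_t + frac * (end_t - start_t),
             max(0, min(127, round(start_v + frac * (end_v - start_v) + off))))
            for frac, off in template]
```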

I'm in the late stages of building a Lemur/Reaper interface that incorporates my CC editing process; I plan to make a thread about it when it's solid enough for other people to use it.


----------



## JoePlummer (May 1, 2017)

pmcrockett said:


> Can you elaborate on how the note-by-note editing process you describe in the first paragraph would work? It sounds like you're talking about a hybrid approach that combines both step-editing and realtime input.



Yes, that's a good description. Not sure this elaborates appropriately, but.... 

This process/workflow is just my own "syllabus" for gaining mastery of the libraries, and of the MIDI orchestration crafting process in general. I'm not much of a performer, although I've had experience with at least one instrument in each of the major families. I import MIDI performances (typically just highly quantized note data with tempo information), which I then attempt to make as realistic as possible. (Or I run scanning/OCR software on PDFs and convert the results to MIDI performances myself.) Then:

- Fix tempo, time signature, key signature, etc. globally
- Confirm that the MIDI notes match the composer's score and intent
- Audition sample libraries (those at my disposal, anyway) and choose the best fit sonically
- One instrument at a time, draw in keyswitch data for articulations. If in Logic, I try to use articulation IDs and Peter Schwartz's artzid macros. If in DP9, I put keyswitch data in a separate track. I am NOT playing the keyswitch data; I am drawing it in with a mouse, copying the data in manually from annotated score printouts. I like the idea of using keyswitch velocity as a compound variable. I also like the transmidifier software conceptually, but haven't used it yet.
- The results thus far are typically dry, so I'll add reverb on a per-track basis. (Note that I haven't done any mixing yet.)

- Then I get to the stage of trying to achieve musical phrasing, which is what prompted me to post above. I had hoped to be able, using my interpretation of the composer's intent about phrasing, to just loop a section of an instrument and improve the performance to my satisfaction by using an external controller in touch mode. I believe the repetitions would allow me to get as granular about note phrasing as I might feel appropriate, but also to sketch up to the level of the phrase. My hope had been that my "nearly analog" phrasing would make for a more realistic performance.
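The keyswitch-drawing step above is mechanical enough to script as a first pass. A hypothetical sketch in Python; the pitch map, the lead time, and the use of velocity as a sub-variant selector are all invented here, since every sample library defines its own keyswitch layout:

```python
# Turn score annotations of (time, articulation, velocity) into keyswitch
# note events, emitting one only when the articulation actually changes.

KEYSWITCH_PITCH = {"arco": 24, "pizzicato": 25, "tremolo": 26}  # e.g. C1, C#1, D1

def keyswitch_events(annotations, lead=0.01):
    """Return (time, pitch, velocity) keyswitch events for each change."""
    events = []
    last = None
    for time, art, velocity in annotations:
        if art != last:
            # Place the keyswitch slightly before the note it affects.
            events.append((max(0.0, time - lead), KEYSWITCH_PITCH[art], velocity))
            last = art
    return events
```

A script like this wouldn't remove the annotation work, but it would replace the lane-by-lane mouse drawing with a single pass over a marked-up note list.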

Software limitations seem to force me into having to recreate the ENTIRE performance, even including note data, unless I do this via manual mouse "painting" of the appropriate data--one lane at a time. Or, I can add more lanes to an already crowded orchestral template, separating the data and its metadata trackwise. This takes forever and seems grossly inefficient.

I have a background in linguistics, so I tend to think about musical phrasing in terms like "etic," "emic," syntax, and prosody. I am fascinated by performances such as www.youtube.com/watch?v=NQpSDph_c4k and www.youtube.com/watch?v=kLkPvmr93K8 . But I don't want to perform each instrument this way for each piece of music; the learning curve would be too high.

Is there a way to describe/predict the interactive variety of data needed to sufficiently humanize a performance without mouse painting?


----------



## pmcrockett (May 1, 2017)

Your DAW should have a recording mode that allows you to layer new takes on top of old ones instead of having the new takes erase the old ones, which will at least allow you to retain note data on new passes. Not necessarily ideal, but possibly a step up from where it sounds like you are now. It'll probably be in the options/preferences somewhere and may be referred to as _comping_ or _sound-on-sound_. For example, in Sonar it's under Preferences > Project > Record > Recording Mode, and in Reaper you right-click a track's record-arm button and choose _Record: <type of record mode>_.

Reaper looks like it specifically has a touch-replace option for MIDI recording -- I've never used the feature, though, and I don't know whether it's common in other DAWs.

Another approach, if the DAW allows it, would be to use automation curves to control MIDI CCs rather than using actual MIDI CC data, which would let you use the standard automation-writing touch mode. Not sure which, if any, DAWs can do this. In a pinch, you should be able to map these automation curves to the actual sampler controls and bypass MIDI CCs entirely -- I can confirm that this is possible to do for Kontakt in both Sonar and Reaper, though I suppose whether you're able to access the actual controls you need depends on the way the instrument is designed. A lot of them don't, for example, have a user-interface-accessible control for expression.


----------



## Divico (May 12, 2017)

As far as I can remember, Reaper's MIDI record modes can't be used the way you want, since they overwrite MIDI notes as well. But there is a workaround in Reaper: writing the CC information from your controller into an envelope, and there you can of course use touch mode etc. to tweak a region until you like it. If you're interested, I can send you more info on this.


----------



## gregh (May 13, 2017)

pmcrockett said:


> Another approach, if the DAW allows it, would be to use automation curves to control MIDI CCs rather than using actual MIDI CC data, which would let you use the standard automation-writing touch mode. Not sure which, if any, DAWs can do this.



FL Studio does this very easily using its automation clips.


----------

