
Is CC#11 (expression) just volume?

Rehashing an old thread as I come to my own realizations... I've been using MIDI within a DAW since it was first released, and to this day I had never used CC11 because I assumed it was merely volume... until today, that is.

So there is most definitely a difference between using CC7 and CC11...at least within the Synchron Player. CC11 is so damn smooth and musical, while CC7 seems very harsh and less forgiving...so there has to be some sort of resolution difference in the way it responds.

Now the difference is NOT in the actual numbers, but in how each is used. Meaning there is some difference in how the master volume of a VST is coded compared to the expression control... I know I hear a difference... I've done some A/B testing with a couple of VSTs and got basically the same result: the master volume of an instrument is far harsher and the expression is much smoother.

I am hoping a developer would jump in here.
 

Yes, 'normally' CC7 sets the coarse volume (that's an MSB in MIDI speak) and CC11 allows finer control of the volume 'around' the value of CC7.
Although the official specs say that CC39 is the fine control (LSB) for CC7, in practice most(?) implementations use CC11 (as the default).

Here's the MIDI paper, CCs at the last pages.

Official MIDI specs and stuff: https://www.midi.org/
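For anyone curious what MSB/LSB actually means in numbers: the coarse and fine controllers combine into one 14-bit value. A minimal Python sketch of the generic MIDI 1.0 scheme (the `combine_14bit` helper is just illustrative, not from any particular player):

```python
def combine_14bit(msb: int, lsb: int) -> int:
    """Combine a coarse (MSB, e.g. CC7) and fine (LSB, e.g. CC39)
    controller value into a single 14-bit value, per MIDI 1.0."""
    if not (0 <= msb <= 127 and 0 <= lsb <= 127):
        raise ValueError("controller values must be 0-127")
    return (msb << 7) | lsb  # range 0..16383

# CC7 alone gives 128 coarse steps; the LSB adds 128 fine steps in between.
print(combine_14bit(100, 0))   # 12800 (coarse only)
print(combine_14bit(100, 64))  # 12864 (halfway to the next coarse step)
```

So a coarse-only controller jumps in steps of 128, which is one guess at why riding CC7 can sound "steppier" than a smoother expression implementation.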
 
Awesome, thank you. So I’m not just hearing things. Lol
 
CC#11 is a "subset" of CC#7 (volume). If you have CC#7 set at 110, CC#11 will scale that level from 0% of it (CC#11 at 0) up to 100% of it (CC#11 at 127)... if that makes sense.
With both starting at 127, I'll move the volume (CC7) up and down during a string passage... then set it back to 127 and switch to CC11... same movements, and there is a clear sonic difference. CC11 is much smoother and nicer compared to moving the volume... at least within VSL and BBCSO, but most noticeable within the Synchron Player.
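That "subset" relationship is just multiplication: expression scales whatever level volume has set. A rough linear sketch in Python (real synths apply a logarithmic taper, so this only shows the shape of the interaction; `effective_level` is a made-up helper):

```python
def effective_level(cc7: int, cc11: int) -> float:
    """Linear approximation of the CC7/CC11 interaction:
    expression (CC11) scales the channel volume (CC7).
    Real implementations use a dB taper, so this is the shape
    of the relationship, not the exact loudness."""
    return (cc7 / 127.0) * (cc11 / 127.0)

# With CC7 parked at 110, CC11 sweeps from silence up to 100% of that level:
print(effective_level(110, 0))    # 0.0
print(effective_level(110, 127))  # ~0.866 (= 110/127)
```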
 
Well, to reiterate from last time, there is no reason I can think of in normal operation to move CC7 on an instrument track. You can set and forget a CC7 tag for each instrument at the start of your template that balances the orchestra properly. Thereafter CC11 on individual MIDI regions can fade down specific instrument volumes when you need them to, like for example the tail end of a note.

If you want to raise your global music volume during a cue the best way to do that is to have a master music bus, or a VCA fader that controls all music stem buses.
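The set-and-forget CC7 plus per-region CC11 workflow above can be sketched as plain event data. Everything here is hypothetical (the `region_events` helper, tick values, and step count are invented), just to show the shape of what a region would carry:

```python
def region_events(cc7_level: int, fade_start: int, fade_end: int, steps: int = 8):
    """Build a list of (tick, controller, value) events: one CC7 'tag'
    at tick 0 to balance the instrument, then a CC11 fade-out across
    the tail of the region. Ticks and step count are arbitrary."""
    events = [(0, 7, cc7_level)]  # set-and-forget balance level
    for i in range(steps + 1):
        tick = fade_start + (fade_end - fade_start) * i // steps
        value = 127 - (127 * i // steps)  # CC11 ramps 127 -> 0
        events.append((tick, 11, value))
    return events

evts = region_events(cc7_level=96, fade_start=960, fade_end=1920)
print(evts[0])   # (0, 7, 96)
print(evts[-1])  # (1920, 11, 0)
```

Because the CC11 events live inside the region, they move with the notes when the region is copied or conformed, which is the point made further down the thread.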
 
Indeed... I think the point I was making was more that there is a clear sonic difference between using CC11 and CC7 interchangeably. Personally, I've always just used the dynamics of an instrument to control the volume, then automated it within my DAW if I needed further help... but I think I'm going to try using CC11 along with dynamics... it might make mixing at the end a bit easier.
 
Track automation has created bad experiences for me because it makes conforming to picture harder... at least in LogicX. If you have to do it, you have to do it, but a MIDI message inside a MIDI region will travel and splice more easily when you're conforming, because it moves right along with the notes.
 
Very helpful thread! For a beginner like me, I think I’m overusing CC11 and CC7, messing up my balance and making things unnecessarily complicated. Then again, I’m using the cinesamples composers toolkit, and I understand there are some instruments where I should be using CC11. These days I wonder if BBCSO Core might be easier for me to work with.
 
Not sure I follow the conforming point... once the mix is done and stems are made, the automation is locked in... what's left to conform?
 
BBC Core is definitely easier...but more limited as a whole.
 
Some developers 'normalize' samples and some do not. That is my guess as to why you see some folks riding two faders.

In an attempt to create consistency within the patches, some developers end up "ironing out" the volume levels of their samples. And by volume levels I am referring to amplitude or loudness, not timbral dynamics.

and

It's usually best to normalise samples when they are going to be dynamically crossfaded; this helps greatly with avoiding chorusing. A lot of developers, however, will add a volume curve linked to CC1 (or whatever CC is being used to crossfade the layers) so that the natural volume difference is retained. I usually add a second volume modulator linked to the key range of the instrument, as some instruments (especially woodwinds) have a large difference in dynamic and volume across their registers, and this volume range can't be realistically recreated by an overall volume curve.

If I am understanding this right, normalizing samples helps avoid chorusing but requires riding CC11 to re-create the lost volume differences. But on the other hand, riding CC11 to "fake" volume may sound less realistic for some instruments, like woodwinds.

If I got this right, what is the lesser of two evils? I would imagine the volume problem is less jarring, especially if articulation/patch volume is manually balanced... no?

EDIT: I believe EWQLHO samples are not normalized and volume differences are preserved as the timbre changes. Curious if EWQLHO owners have problems with chorusing?? Many thanks!
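The "volume curve linked to CC1" idea quoted above can be sketched as a breakpoint lookup from the crossfade CC to a gain in dB. The `CURVE` breakpoints and the `cc1_gain_db` helper below are invented for illustration, not taken from any real library:

```python
# Hypothetical volume curve a developer might attach to CC1 after
# normalising the dynamic layers: (CC1 value, gain in dB) breakpoints.
CURVE = [(0, -18.0), (64, -6.0), (127, 0.0)]

def cc1_gain_db(cc1: int) -> float:
    """Linearly interpolate the gain (dB) for a CC1 value between breakpoints."""
    for (x0, y0), (x1, y1) in zip(CURVE, CURVE[1:]):
        if x0 <= cc1 <= x1:
            return y0 + (y1 - y0) * (cc1 - x0) / (x1 - x0)
    raise ValueError("CC1 out of range 0-127")

print(cc1_gain_db(0))    # -18.0 (quiet layer restored to natural level)
print(cc1_gain_db(64))   # -6.0
print(cc1_gain_db(127))  # 0.0
```

The point is that the normalised layers crossfade cleanly, while this curve puts the natural loudness difference back in, so the end user never has to compensate manually.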
 
If I am understanding this right, normalizing samples helps avoid chorusing but requires riding CC11 to re-create the lost volume differences.
Not quite. The "real" volume curve is reapplied in the sampler by the developer. As far as the end user is concerned it's business as usual and there is no need for them to do anything special to compensate for the normalisation as it's already been added into the modulator's internal curve.
 
I haven’t been able to find the full MIDI 1.0 spec again - I think it may now require registration to access.
But from what I recall, CC11 is defined as Expression and has both a volume and an EQ-type effect. Again from memory, I think the spec limited expression's maximum effect on volume to +6 dB, and it did something to EQ as well - basically it was aimed at imitating the Hammond organ's expression pedal. If anyone has access to the full MIDI spec I’d appreciate it if they could check and confirm/correct me.
 
I was wrong. This is an excerpt from a MIDI Org White paper.
Volume, Expression & Master Volume Response

Volume (CC#7) and Expression (CC#11) should be implemented as follows.

For situations in which only CC#7 is used (CC#11 is assumed to be 127):

L(dB) = 40 log10(V / 127), where V = CC#7 value

This follows the standard "A" and "K" potentiometer tapers. For situations in which both controllers are used:

L(dB) = 40 log10(V / (127 × 127)), where V = volume × expression

The following table denotes the interaction of volume and expression in determining amplitude …

I think this may have been what I was thinking about (or else the spec said something else pre-1996, which would’ve been when I looked at it).
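To make the excerpt concrete, here's the combined formula as runnable Python (the log is base 10, as in the white paper; the `level_db` name is just for this sketch):

```python
import math

def level_db(volume: int, expression: int = 127) -> float:
    """Attenuation in dB per the MIDI white-paper formula:
    L(dB) = 40 * log10((volume * expression) / (127 * 127))."""
    v = volume * expression
    if v == 0:
        return float("-inf")  # either controller at 0 means silence
    return 40 * math.log10(v / (127 * 127))

print(round(level_db(127, 127), 1))  # 0.0 (full volume, full expression)
print(round(level_db(127, 64), 1))   # -11.9
print(round(level_db(64, 127), 1))   # -11.9: the two controllers multiply
```

Note the symmetry: as far as this formula goes, halving CC7 or halving CC11 gives the same attenuation, which supports the earlier point that any audible difference between them comes from how a given player implements each controller, not from the numbers themselves.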
 