Hi,
As this is a bit of a complex topic and I'm sure other people work in different ways and have different opinions, take everything I'm about to say with a grain of salt.
For me, the most important thing about a DAW template is dependable consistency. In other words, at the most basic level, I need to be able to trust that...
A: what I am hearing is what is actually there;
B: when I move a control, it responds how I think it should respond.
This means that each step of the process must be scrutinized, set, and tweaked
before I put on my creative hat. The last thing I want to do when composing is route MIDI/audio or define key switches or set volume levels or any of that. I don't want to have to remember that library A is drier and louder than library B so I need to add reverb to A and volume to B to make them match better. I don't want to have to remember that library X controls dynamics via CC1 but library Y controls dynamics via CC11. I just want to write, play it in, and be done with it.
SETTING UP THE HARDWARE
So, I start by setting up my hardware—monitors/headphones, audio interface, room treatment, etc. Without this first step, I cannot trust anything else I hear. It's like making sure I'm wearing glasses with the correct prescription, and not generic dark sunglasses, before I try to paint a colorful picture.
So the first thing to do is make sure the physical space is exactly as it should be. There are many guides online for this; PreSonus has a handy one:
www.presonus.com
And here is another guide, a bit more in-depth:
The only thing I'd change about what they say in the PreSonus guide is that the equilateral triangle should form a point about 14-16 inches behind your head (which is what that second guide also says), not in the center of your head. Think about it this way: when the point is behind your head, your ears will line up perfectly with the "sides" of the triangle, and therefore have a better sense of what is Left and what is Right; this is not possible if the point is somewhere in your hypothalamus!
Once you've set up your physical space, you will want acoustic treatment (and potentially room EQ). Why? Because music is something that happens in both the frequency domain and the time domain. Acoustic treatment primarily addresses the time domain (reflections and decay), while room EQ (like Sonarworks) primarily addresses the frequency domain. This can make a HUGE difference. See:
The following are audio recordings illustrating the difference in a room treated with GIK Acoustics products vs. the same room untreated.
www.gikacoustics.com
If you had to choose one thing of the two, choose acoustic treatment. GIK Acoustics (linked just above) actually has a whole educational series of articles on the subject:
For those who are new to room acoustics, our acoustic primer will help get you started. Article on acoustic panels, bass traps, diffusors and room setup.
www.gikacoustics.com
The important thing to note here is that it is a very scientific thing, and not something that can be achieved accidentally or with non-specialized materials. It's not very sexy, but it's perhaps the most important money you can spend—certainly more so than on new libraries!
Once your time domain is sorted out, you might want to consider room EQ a la Sonarworks:
Create with full confidence in sound with speaker & headphone calibration software SoundID Reference. Already trusted by over 140'000 studios globally.
www.sonarworks.com
This helps smooth things out a bit more, but is definitely more of an optional step than the acoustic treatment.
Next thing is to calibrate your system. Again, PreSonus has a handy guide:
www.presonus.com
And this is another one I highly recommend to people:
News, Dec 2015: The entire tutorial is now available as a downloadable PDF, attached to the bottom of this post. +++ The name is just a play on Bob Kat
www.gearslutz.com
Hearing damage starts around 90 dB, so 85 dB SPL is usually the maximum recommended monitoring level. This is a good level for a cinema-sized room, but if you're working in a smaller space like a spare bedroom (or using near-fields and/or headphones), the closeness of the walls (or ear cups) will make the apparent level louder, so to compensate, the volume of your system should be lowered by a few dB (probably into the high 70s).
As mentioned in that latter link, THIS is your base, your home. You can venture louder or softer when you want to see how things sound there, but always return to this volume level. This is how you can learn to trust your system and your ears to know that you are hearing what you are hearing. Listening to recordings you know very well can help you learn your space/system at this calibrated level.
SETTING UP THE SOFTWARE
Here is where things get a little more opinionated, so again, salt!
It's best to set things up granularly from smallest to largest: articulation, instrument, instrument family, DAW template. In practice this is a bit more complex; every developer is different and might have things coded differently (even per instrument—it's astounding how inconsistent these things can be!), so it's important to know what is going on in each and every instrument. However, in general, in MIDI-land, value 90 is Unity. In Kontakt, CC7 (MIDI Volume) controls the main volume slider. For a given instrument, I like to set my basic 'long' articulation to this value. Because most libraries normalize their samples (so that everything has the same max amplitude), this can create scenarios in which, say, a forte pizzicato is as loud as a fortissimo molto vibrato sustain. If the sustain is set to CC7 = 90, then I might set the pizzicato to CC7 = 56, so that it sits at a dynamic level appropriate to how it would behave in real life. (This is a whole topic in and of itself...)
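To put rough numbers on those CC7 values, here is a minimal sketch using the logarithmic volume response curve recommended in the MIDI specification (gain = 40 × log10(cc / 127)). The function name is mine, and individual samplers and libraries may use a different taper, so treat the output as a ballpark, not gospel:

```python
import math

def cc7_to_db(cc: int) -> float:
    """Approximate gain in dB for a CC7 value, using the response
    curve recommended in the MIDI spec: 40 * log10(cc / 127).
    Individual samplers may implement a different taper."""
    if cc <= 0:
        return float("-inf")  # CC7 = 0 is silence
    return 40 * math.log10(cc / 127)

# CC7 = 127 is full scale (0 dB); CC7 = 90 lands near -6 dB,
# and the pizzicato example at CC7 = 56 sits several dB below that.
```

Under this curve, dropping the pizzicato from 90 to 56 pulls it roughly 8 dB under the sustain—the kind of offset you would expect between a pizzicato and a full-bore sustain at the same written dynamic.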
Once you've set the articulations for each instrument in the relative volumes you want, then balance the instruments' relative levels, first within families and then across the ensemble. For example: a flute in its low register is not going to be as loud as a trumpet in its high register, even if both are marked "forte." Here, I usually set these values with a gain plug-in so that my faders can remain at Unity. Here is also where (going back to our example) I add reverb to A and volume to B to make them match better.
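For the balancing step, a crude objective starting point is to match average levels and then adjust by ear. This sketch (function names are mine, not from any particular plug-in) computes the static gain trim, in dB, that would bring one recording's RMS level up or down to match a reference:

```python
import math

def rms(samples):
    """Root-mean-square level of a list of float samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def match_gain_db(reference, target):
    """Gain in dB to apply to `target` so its RMS matches `reference`.
    A crude starting point only: perceived loudness also depends on
    spectrum and register, so trust your ears over the number."""
    return 20 * math.log10(rms(reference) / rms(target))
```

RMS matching is deliberately dumb—a low flute and a high trumpet at equal RMS will still not sound equally loud—but it gets the gain plug-in into the right neighborhood before the mock-up tests below.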
Finally, I do mock-ups (with a score!) of ~30s segments of pieces I know well, in a variety of different styles and orchestrations, and see if the balancing worked. There is likely stuff to tweak (and honestly a template never crystallizes; it continues to evolve as needed).
Here is also where I normalize the way I control the instruments. Going back to our example: even though library X controls dynamics via CC1 and library Y via CC11, I set up a MIDI transformer that translates CC1 into CC11 for library Y in the back end. Each library "sees" what it wants to, but I only have to learn/remember/master one control paradigm.
---
Hope this helps some! It's a bit broader than the question you asked, but I really don't know a way to answer without touching on all of this, since it's all interrelated.