# What's Missing from Orchestral Sample Libraries Today?



## Zedcars (Jul 15, 2021)

I heard Paul Thomson in one of his walkthrough videos (may have been ARO?) mention that when two acoustic instruments play together within a space the resulting sound is different compared to those same two instruments playing exactly the same music in the same way and then layered in a DAW. I think it has to do with the way the sound waves of each instrument resonate within and through the body of the other instrument which modifies the sound waves emanating from the other sound sources, and also modifies the room reflections. In other words, the sound waves interact with each other before they reach your ear. Of course, with most sample libraries the recordings are made on solo instruments, or instrument groups in isolation and therefore lack that natural room interaction. I'm not saying anything here you all don't already know. However, what I am wondering is if there is any technology in existence now, or perhaps being worked on, which would enable that natural sound interaction to be simulated within a computer. Or is it far too complex a problem?

If that problem could be solved I think the realism of these libraries would be greatly enhanced.

Is there anything else that you think is missing from sample libraries, or do you think we've already reached the pinnacle of what can be achieved (at least until acoustic modelling technology can mature enough to surpass sampling technology in terms of realism)?


----------



## mybadmemory (Jul 15, 2021)

If we look at sampling as the music equivalent of photoscanning textures in the 3D realm, which I think is a fair comparison, we can draw some conclusions from the 3D field, which has for quite some time struggled with the same issues. Photoscanning and sampling have their limits when it comes to flexibility and editing at scale, whereas the algorithmic or modeling approach doesn’t yet reach the same level of realism. What we start to see in the field of 3D is 1. the combination of the two, and 2. the combination of both of them with deep learning and AI. My bet is that the music business will end up on the same route. Samples are a great data set, either for modeling algorithms or for deep learning and AI tools, to base their output on.


----------



## Smikes77 (Jul 15, 2021)

This is why some people record the ensemble patches and then layer with the sections afterwards.


----------



## Noeticus (Jul 15, 2021)

Pitch changes over time. That's what's missing for me.

I want more glissandi, and I want it now!


----------



## cygnusdei (Jul 15, 2021)

If the overriding goal is to reproduce a 'live mix' that represents concert hall acoustics, then the OP's question is valid. But in the context of film music, where the orchestra is recorded in a studio and further engineered into a 'studio mix', it's a moot question, isn't it?


----------



## mybadmemory (Jul 15, 2021)

To be a little more specific, I think we'll see orchestral instruments (based on a combination of samples, modelling, and AI) that respond more realistically to our input. We record or input a phrase, and the instrument won't simply play samples one after the other, but rather create a realistic full phrase based on analysis of thousands of recordings of that instrument having played thousands of different phrases. We'll then have simple sliders or similar to tweak/effect the performance after the fact. Basically instruments acting more like real players, where we give them simple instructions on what to play, but the performance comes from them rather than us. But with the ability to direct it further afterwards.


----------



## MarcelM (Jul 15, 2021)

hmmm... what's missing? did you ever hear a real orchestra? samples do not come close, and I guess they never will. well, maybe some day, but that day might be far, far away.


----------



## cygnusdei (Jul 15, 2021)

mybadmemory said:


> To be a little more specific, I think we'll see orchestral instruments (based on a combination of samples, modelling, and AI) that respond more realistically to our input. We record or input a phrase, and the instrument won't simply play samples one after the other, but rather create a realistic full phrase based on analysis of thousands of recordings of that instrument having played thousands of different phrases. We'll then have simple sliders or similar to tweak/effect the performance after the fact. Basically instruments acting more like real players, where we give them simple instructions on what to play, but the performance comes from them rather than us. But with the ability to direct it further afterwards.


A rudimentary version of a 'humanizer' has of course been around for a while, e.g. in Sibelius you can choose one of several expressive modes: molto espressivo, espressivo, poco espressivo, senza espressivo, or meccanico. The idea is that, given a phrase with a starting velocity, it will automatically vary the values according to the melodic contour. The same applies to rhythmic variations: molto rubato, poco rubato, etc.

Of course the result could sound natural or completely awkward depending on the music.
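The contour idea is simple enough to sketch in code. A minimal Python illustration; the scaling factor and clamping here are invented for illustration and are not Sibelius's actual rules:

```python
def humanize_velocities(pitches, base_velocity, strength=0.5):
    """Nudge each note's velocity up or down with the melodic contour.

    pitches: MIDI note numbers of the phrase
    base_velocity: starting MIDI velocity (1-127)
    strength: 0 behaves like 'meccanico' (no variation)
    """
    velocities = [base_velocity]
    for prev, cur in zip(pitches, pitches[1:]):
        # Rising lines get slightly louder, falling lines slightly softer.
        v = velocities[-1] + (cur - prev) * strength
        velocities.append(int(max(1, min(127, round(v)))))
    return velocities

print(humanize_velocities([60, 62, 64, 67, 65, 60], 80))
```

A 'molto espressivo' mode would simply use a larger strength value, 'meccanico' a strength of zero.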


----------



## holywilly (Jul 15, 2021)

There’s still one strings library missing in orchestral sampling.


----------



## Casiquire (Jul 15, 2021)

I don't think it's that deep to be honest. Almost everything we hear was recorded separately in some way and we don't feel a lack of realism. Sure maybe that affects the sound in some small way, but not significantly enough to worry about


----------



## X-Bassist (Jul 15, 2021)

I think way before getting into the effects of multiple instruments in a room they should find a better way to switch articulations. Keyswitches and CCs are like asking a piano player to have a third arm or leg; pianos like that would never sell. Yet composers are still expected to deal with multiple steps to record one track, and that’s not counting dynamics, layering, mixing....

If they aren’t going to work on “intelligent” computer keyswitching... then at least put all the shorts on one patch (there SHOULD be many types) and the longs (yes, including polylegato) on another. Otherwise the sampling world should just stop everything else right now. Including recording. There is enough. Thanks.


----------



## VSriHarsha (Jul 15, 2021)

Zedcars said:


> I heard Paul Thomson in one of his walkthrough videos (may have been ARO?) mention that when two acoustic instruments play together within a space the resulting sound is different compared to those same two instruments playing exactly the same music in the same way and then layered in a DAW. I think it has to do with the way the sound waves of each instrument resonate within and through the body of the other instrument which modifies the sound waves emanating from the other sound sources, and also modifies the room reflections. In other words, the sound waves interact with each other before they reach your ear. Of course, with most sample libraries the recordings are made on solo instruments, or instrument groups in isolation and therefore lack that natural room interaction. I'm not saying anything here you all don't already know. However, what I am wondering is if there is any technology in existence now, or perhaps being worked on, which would enable that natural sound interaction to be simulated within a computer. Or is it far too complex a problem?


I think it’s being worked on. I’m not quite sure, it’s just a gut feeling. Well, I think it’s pretty much common sense for sound pioneers to raise the bar on such things.


Zedcars said:


> If that problem could be solved I think the realism of these libraries would be greatly enhanced.
> 
> Is there anything else that you think is missing from sample libraries, or do you think we've already reached the pinnacle of what can be achieved (at least until acoustic modelling technology can mature enough to surpass sampling technology in terms of realism)?


I don’t think the world of sampling has peaked. Well, not yet, as far as I’m concerned.

Btw @Zedcars, I was searching for the Superman March mockup you’d done but it didn’t show up. Is it possible for you to post it on the same thread?


----------



## cygnusdei (Jul 15, 2021)

X-Bassist said:


> I think way before getting into the effects of multiple instruments in a room they should find a better way to switch articulations. Keyswitches and CCs are like asking a piano player to have a third arm or leg; pianos like that would never sell. Yet composers are still expected to deal with multiple steps to record one track, and that’s not counting dynamics, layering, mixing....
> 
> If they aren’t going to work on “intelligent” computer keyswitching... then at least put all the shorts on one patch (there SHOULD be many types) and the longs (yes, including polylegato) on another. Otherwise the sampling world should just stop everything else right now. Including recording. There is enough. Thanks.


I've always been under the impression that keyswitches are for live play. Well, speaking as someone who doesn't even have a MIDI controller, it's a separate channel for each articulation for me. The added bonus is you can stack multiple articulations on the fly.


----------



## Zedcars (Jul 15, 2021)

VSriHarsha said:


> I think it’s being worked on. I’m not quite sure, it’s just a gut feeling. Well, I think it’s pretty much common sense for sound pioneers to raise the bar on such things.
> 
> I don’t think the world of sampling has peaked. Well, not yet, as far as I’m concerned.
> 
> Btw @Zedcars, I was searching for the Superman March mockup you’d done but it didn’t show up. Is it possible for you to post it on the same thread?


Hi, I had a sudden loss of confidence and didn’t think it was good enough to leave up. I’m going to work on it some more before I post again (not going to promise). Sorry.


----------



## muk (Jul 15, 2021)

Your point, or the point you are paraphrasing Paul Thomson as making, sounds like a reasonable concern. I doubt it would make an audible difference in practice though. As has been mentioned, many of the more recent film soundtracks have been recorded that way, and it's not noticeable in the end product. Much less will it be noticeable in a mockup, which inevitably has to deal with far graver compromises than this one.

In a mockup we use recordings of notes that were not played as a phrase to perform the phrase that we have written. We have to create the illusion of a musical performance, using all sorts of tricks to do so. For that, we have one to maybe five variations of timbre at hand, and maybe one to three different types of attack, when most instruments are capable of countless nuances thereof. The same with articulations: we use predefined sets of articulations for our make-do interpretations, when most instruments can play countless variations thereof (the length of a staccato would be one example). We use samples that have not been recorded chromatically; instead we use a recording of one note to cover two, sometimes three or more semitones. We blend audio files that have been recorded in different venues with different players and instruments. We use samples that have been recorded in dry environments and add artificial reverb.

These are but a few of the compromises we have to deal and contend with as best as possible when creating mockups. The fact that the sound waves we use didn't interact with each other before being recorded - I really don't think it would make an audible difference here when it doesn't in the recordings of film soundtracks.
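That chromatic-coverage compromise, at least, is easy to quantify. When a sampler stretches one recording across neighboring keys it resamples by an equal-temperament ratio, which is a one-liner:

```python
def resample_ratio(semitones):
    # Equal temperament: each semitone multiplies frequency by 2^(1/12),
    # so playing a sample n semitones away resamples by this factor.
    return 2 ** (semitones / 12)

print(resample_ratio(1))    # one semitone up: ~1.0595
print(resample_ratio(-2))   # two semitones down: ~0.8909
```

Stretching by two or three semitones also shifts the instrument's formants by the same factor, which is part of why non-chromatic sampling can be audible.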


----------



## bill5 (Jul 15, 2021)

MarcelM said:


> hmmm... what's missing? did you ever hear a real orchestra? samples do not come close, and I guess they never will. well, maybe some day, but that day might be far, far away.


Yeah, they do come close. Often REALLY close, to the point that we have "experts" debating whether a piece they hear is real or samples. I don't think anything is really missing, but there are things that could still be improved somewhat, like better legatos etc.


----------



## FireGS (Jul 15, 2021)

Zedcars said:


> I heard Paul Thomson in one of his walkthrough videos (may have been ARO?) mention that when two acoustic instruments play together within a space the resulting sound is different compared to those same two instruments playing exactly the same music in the same way and then layered in a DAW. I think it has to do with the way the sound waves of each instrument resonate within and through the body of the other instrument which modifies the sound waves emanating from the other sound sources, and also modifies the room reflections. In other words, the sound waves interact with each other before they reach your ear. Of course, with most sample libraries the recordings are made on solo instruments, or instrument groups in isolation and therefore lack that natural room interaction. I'm not saying anything here you all don't already know. However, what I am wondering is if there is any technology in existence now, or perhaps being worked on, which would enable that natural sound interaction to be simulated within a computer. Or is it far too complex a problem?
> 
> If that problem could be solved I think the realism of these libraries would be greatly enhanced.
> 
> Is there anything else that you think is missing from sample libraries, or do you think we've already reached the pinnacle of what can be achieved (at least until acoustic modelling technology can mature enough to surpass sampling technology in terms of realism)?


Inspirata Reverb from Inspired Acoustics!

> So is this comparable to Vienna MIR? Probably more useful with dry sources I suppose..

I had a similar thought.


----------



## cygnusdei (Jul 15, 2021)

muk said:


> maybe one to three different types of attacks


It makes me appreciate Chris Hein more, as the solo strings have 6 spiccatos and 6 shorts of varying durations, short and long dynamic expressions, along with the standard sustain, sustain vibrato and accent vibrato - not to mention CC-controlled vibrato (LFO).

That said, its tone (especially the phase-aligned patches) is nothing to write home about.


----------



## muk (Jul 15, 2021)

bill5 said:


> Yeah, they do come close. Often REALLY close, to the point that we have "experts" debating on whether or not a piece they hear is real or samples. I don't think anything is really missing, but things that could still be improved somewhat, like better legatos etc


In my opinion this is a simplification. Samples can come close in a very narrow range of styles and use cases: namely loud orchestral tuttis, busy orchestrations where individual elements can be hidden, and some hybrid/epic scoring. They can cover some bits of romantic/film-music-style scoring semi-decently.
But samples fall incredibly short of the mark for other things like string quartets, soloistic passages, or any writing with individual voice leading. Anything that is not epic or romantic, basically. Try mocking up a Haydn symphony, or Beethoven, or Shostakovich - or any 'classical' music that is not of the romantic idiom for that matter - and there will be no question that samples are a grotesque distortion of what real orchestras can achieve in these situations. There is a long way to go still for samples in these cases.


----------



## nolotrippen (Jul 15, 2021)

holywilly said:


> There’s still one strings library missing in orchestral sampling.


Have you considered N?


----------



## bill5 (Jul 15, 2021)

muk said:


> In my opinion this is a simplification. Samples can come close in a very narrow range of styles and usecases. Namely loud orchestral tuttis, busy orchestrations where individual elements can be hidden, and some hybrid/epic scoring. They can cover some bits of romantic/film music style scoring semi-decently.
> But samples fall incredibly short of the mark for other things like string quartets, solistic passages, any writing with individual voice leading. Anything that is not epic or romantic basically. Try mocking up a Haydn Symphony, or Beethoven, or Shostakovich - or any 'classical' music that is not of the romantic idiom for that matter - and there will be not a question that samples are a grotesque distortion of what real orchestras can achieve in these situations. There is a long way to go still for samples in these cases.


Solistic? As in solos? We'll agree to disagree there, and in general really. I know I've heard some very realistic sounding pieces, and I don't mean "epic" soundtrack stuff. Hardly a "grotesque distortion." If I find links I will post.


----------



## Casiquire (Jul 15, 2021)

cygnusdei said:


> I've always been under the impression that keyswitches are for live play. Well, speaking as someone who doesn't even have a MIDI controller, it's separate channel for each articulation for me. The added bonus is you can stack multiple articulations on the fly.


I use keyswitches when penciling in notes too. It's not a terrible system, especially as more and more devs allow you to assign your own, so they're getting easier to standardize within a template.


----------



## Stringtree (Jul 15, 2021)

The SSS performance legato patches are sketchy, in a good way. Like the JB Violin. I can just close my eyes, scowl, and pretend I'm a player. The programming does most of the rest, close enough to not worry about messing with keyswitches or different tracks with different articulations.

The Straight Ahead Samples BOT and Tenor Colossus fix a lot, magically, in post. 

Both of those backstage programming technologies would free me from monkeying with settings while trying to come up with musical ideas. "Just Play" is a notion that would benefit every area of my music.


----------



## AudioLoco (Jul 15, 2021)

The resonance between instruments is there as a factor, but it's hardly *the* defining missing link in the holy quest for realism, in my opinion.

It is still the *number* of articulations, round robins, and velocity layers, and, very importantly, how the instruments react to the most feared police in history: the legato police.

There must be new ways to drastically enhance the realism of VIs, but I don't know the answer apart from the above. So.... More of everything! Silly amounts of everything!

Until a real breakthrough tech comes along, at least....


----------



## cygnusdei (Jul 15, 2021)

If there's one thing that can be improved... it's the demos. Hearing is believing, so people want to know what the VI can do. I for one would want to hear standard repertoire pieces in their entirety (not original demo pieces), using exclusively the library in question, so I can compare directly with real performances. No mix and match! I want to hear what you got, not what somebody else got.


----------



## Mason (Jul 15, 2021)

Might one problem be that the best sound engineers are working for record labels and big artists rather than for sampling?


----------



## Soundbed (Jul 15, 2021)

There aren’t nearly enough variations of playing recorded. Specifically I want more advanced ways to move from various attacks to various amounts of vibrato / progressive vibrato with various transitions to other types of longs and various releases.

The reflections in the room and the interaction of vibrating instrument bodies / room acoustics / physics are a smaller concern for me.


----------



## ism (Jul 15, 2021)

By some measures, nearly everything is missing. Case in point:


----------



## Noeticus (Jul 15, 2021)

I want more of more.


----------



## Zedcars (Jul 15, 2021)

Noeticus said:


> I want more of more.


----------



## Zedcars (Jul 15, 2021)

Soundbed said:


> There aren’t nearly enough variations of playing recorded. Specifically I want more advanced ways to move from various attacks to various amounts of vibrato / progressive vibrato with various transitions to other types of longs and various releases.
> 
> The reflections in the room and the interaction of vibrating instrument bodies / room acoustics / physics are a smaller concern for me.


Both would be good. That seems like it might be beyond the point of diminishing returns for the vendor, though. I suppose it could give them an edge over their competitors, but if it increases studio time and you have to pay the musicians and sound engineers more, would it make the final product too expensive to be commercially viable? Not to mention the mammoth task of editing, which is already huge.


----------



## Zedcars (Jul 15, 2021)

mybadmemory said:


> If we look at sampling as the music equivalent of photoscanning textures in the 3D realm, which I think is a fair comparison, we can draw some conclusions from the 3D field, which has for quite some time struggled with the same issues. Photoscanning and sampling have their limits when it comes to flexibility and editing at scale, whereas the algorithmic or modeling approach doesn’t yet reach the same level of realism. What we start to see in the field of 3D is 1. the combination of the two, and 2. the combination of both of them with deep learning and AI. My bet is that the music business will end up on the same route. Samples are a great data set, either for modeling algorithms or for deep learning and AI tools, to base their output on.


This 100%. I have been following AI developments and it's actually pretty remarkable how good they are becoming.


----------



## janila (Jul 15, 2021)

Zedcars said:


> I heard Paul Thomson in one of his walkthrough videos (may have been ARO?) mention that when two acoustic instruments play together within a space the resulting sound is different compared to those same two instruments playing exactly the same music in the same way and then layered in a DAW. I think it has to do with the way the sound waves of each instrument resonate within and through the body of the other instrument which modifies the sound waves emanating from the other sound sources, and also modifies the room reflections.


IMHO that doesn’t make much sense. That’s the result of human playing, not instrument body resonance. A human can never play one note the same way twice, let alone a phrase or a piece. And that’s the beauty of it. If you record two people playing simultaneously they will react to one another. And if you record one player playing the same note in a different context, the notes will be different. As they should be, after years of training in playing musically. If a professional violinist plays a concerto there will be thousands of differently bowed notes, and even the largest libraries include just a few. And even if someone invented a way to categorize and record enough variations, using such a library would be unwieldy. AI might get us there eventually, but in the meantime we have to compose for the samples or use real players. There have been great advancements in sampling in the past two decades, but as far as realism in connecting articulations is concerned, Garritan Orchestral Strings from 2001 is just as good as any library from 2021.


----------



## Zedcars (Jul 15, 2021)

janila said:


> IMHO that doesn’t make much sense. That’s the result of human playing, not instrument body resonance. A human can never play one note the same way twice, let alone a phrase or a piece. And that’s the beauty of it. If you record two people playing simultaneously they will react to one another. And if you record one player playing the same note in a different context, the notes will be different. As they should be, after years of training in playing musically. If a professional violinist plays a concerto there will be thousands of differently bowed notes, and even the largest libraries include just a few. And even if someone invented a way to categorize and record enough variations, using such a library would be unwieldy. AI might get us there eventually, but in the meantime we have to compose for the samples or use real players. There have been great advancements in sampling in the past two decades, but as far as realism in connecting articulations is concerned, Garritan Orchestral Strings from 2001 is just as good as any library from 2021.


I see. But it wasn’t the differences in performance between the isolated vs ensemble recordings I was highlighting, more the interaction of sound waves from the other instruments in the room that is absent from the former.

Edit: I kind of ignored your excellent points. Yes, the interaction between the players as they play is very important. As you say, it seems only AI could actually make this possible in a computer. However, if the interactions were studied somehow, maybe the nuances could be simulated in a very crude way (e.g. the lead violinist uses vibrato in a certain phrase and the group follows, but not in precisely the same way, and slightly delayed perhaps?).


----------



## PeterN (Jul 15, 2021)

Better tempo sync. Too many libraries go crazy when you have pace changes - say, if you slow a curve down from 90 to 80 - they can't make it, or they spike the CPU to turbo. I'm talking in particular about phrase libraries and such. I get it, it's probably a hell of a lot of work to get this finesse working, but fixing this would be great.

I'd throw out 100 USD anytime if, let's say, The Orchestra did an update where you could not only have any chord there on the piano roll, but also *any tempo*, even with crazy changes... and it went with this *seamlessly*.


----------



## janila (Jul 15, 2021)

PeterN said:


> Better tempo sync. Too many libraries go crazy when you have pace changes - say, if you slow a curve down from 90 to 80 - they can't make it, or they spike the CPU to turbo. I'm talking in particular about phrase libraries and such. I get it, it's probably a hell of a lot of work to get this finesse working, but fixing this would be great.
> 
> I'd throw out 100 USD anytime if, let's say, The Orchestra did an update where you could not only have any chord there on the piano roll, but also *any tempo*, even with crazy changes... and it went with this *seamlessly*.


Don’t use tempo ramps, which cause a huge burden on the CPU. Use as many small tempo jumps as you need; it sounds just as convincing but feels like a holiday for your CPU in comparison. Change the tempo before or after the beats, not directly on them.
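If it helps anyone, here's a rough Python sketch of the idea: approximating a linear ramp with one small jump per beat, placed just after the beat. The beat grid and the 0.1-beat offset are arbitrary illustration values, not anything a particular DAW prescribes:

```python
def ramp_to_jumps(start_bpm, end_bpm, beats, offset=0.1):
    """Approximate a linear tempo ramp with one small jump per beat.

    Returns (position_in_beats, bpm) pairs; each change after the first
    is nudged slightly past the beat so it never lands directly on one.
    """
    jumps = []
    for i in range(beats + 1):
        bpm = start_bpm + (end_bpm - start_bpm) * i / beats
        pos = i + offset if i > 0 else 0.0
        jumps.append((pos, round(bpm, 2)))
    return jumps

# Slow from 90 to 80 over four beats:
for pos, bpm in ramp_to_jumps(90, 80, 4):
    print(pos, bpm)
```

Hand-placing the jumps (as janila suggests) instead of generating them evenly is what makes the result musical; this just shows how cheap the stepped approximation is.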


----------



## FireGS (Jul 15, 2021)

muk said:


> there will be no question that samples are a grotesque distortion of what real orchestras can achieve in these situations.


lmfao.


----------



## cygnusdei (Jul 15, 2021)

janila said:


> Don’t use tempo ramps which cause a huge burden on the CPU


There must be something fundamentally different between a DAW and Sibelius, for example, as tempo changes don't affect CPU usage in the latter. I use Sibelius for notation and for hosting VSTs (sampler and effects) directly.


----------



## Zedcars (Jul 15, 2021)

janila said:


> Don’t use tempo ramps, which cause a huge burden on the CPU. Use as many small tempo jumps as you need; it sounds just as convincing but feels like a holiday for your CPU in comparison. Change the tempo before or after the beats, not directly on them.


Interesting. I’ve never heard of this before. I’ll see if I notice this in Cubase. I know you can convert ramps to steps, but I prefer ramps, as I find them easier to work with if adjustments are needed.


----------



## Zedcars (Jul 15, 2021)

Noeticus said:


> Pitch changes over time. That's what's missing for me.
> 
> I want more glissandi, and I want it now!


Can the former not be simulated with subtle pitchbend changes? I know VSL have a humanisation tool in their players that adds tuning variations, as well as slightly different envelope shapes, to each note, which is amazing.

And yes, I second your request for more glissandi.
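For reference, the arithmetic behind such pitchbend tweaks is simple. A sketch, assuming the common default bend range of +/-2 semitones (samplers differ, so check yours):

```python
PB_CENTER = 8192          # 14-bit pitch-bend midpoint = no bend
BEND_RANGE_CENTS = 200    # assumes a +/-2 semitone bend range

def cents_to_pitchbend(cents):
    """Map a detune in cents to a 14-bit MIDI pitch-bend value (0-16383)."""
    value = PB_CENTER + round(cents / BEND_RANGE_CENTS * PB_CENTER)
    return max(0, min(16383, value))

print(cents_to_pitchbend(0))    # no detune
print(cents_to_pitchbend(10))   # a subtle 10-cent sharpening
```

Feeding each note a small random value through a function like this (a few cents either way) would be a crude version of what such humanisation tools do.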


----------



## ryans (Jul 15, 2021)

Mason said:


> Might one problem be that the best sound engineers are working for record labels and big artists and not sampling?


Well Dennis Sands recorded Cinebrass.


----------



## CT (Jul 15, 2021)

There are also libraries out there involving Shawn Murphy, Jake Jackson, Simon Rhodes, Alan Meyerson....


----------



## Toecutter (Jul 15, 2021)

Zedcars said:


> Interesting. I’ve never heard of this before. I’ll see if I notice this in Cubase. I know you can convert ramps to steps, but I prefer ramps, as I find them easier to work with if adjustments are needed.


You can still work with tempo ramps (I do!! so convenient) and turn off external sync in Kontakt (Master tab) for most of your stuff to avoid glitches. I keep all the patches that require external sync, like loops, Time Machine, or tempo-based effects, in a separate Kontakt instance so I can deal with them separately (usually bounce to disk, disable, and move on).


----------



## mybadmemory (Jul 15, 2021)

What’s missing is the ability to have zero latency while recording and latency compensation for full transitions while playing back, with automatic switching between the two.

We shouldn’t have to drag notes, set up track delays, or switch tight modes on/off. It’s just numbers. Computers are great at numbers.
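The playback half really is just a subtraction per note. A toy sketch, with an invented per-articulation latency table (real values vary per library and patch):

```python
# Invented transition latencies in ms; real values vary per library/patch.
LATENCY_MS = {"legato": 150, "staccato": 10, "sustain": 60}

def compensate(notes):
    """Shift each (start_ms, articulation) note earlier by its latency,
    so slow transitions speak on the beat during playback."""
    return [(max(0, start - LATENCY_MS.get(art, 0)), art)
            for start, art in notes]

print(compensate([(1000, "legato"), (2000, "staccato")]))
```

Recording would simply bypass the shift; switching between the two modes is the bookkeeping the host could automate.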


----------



## givemenoughrope (Jul 15, 2021)

Zedcars said:


> I heard Paul Thomson in one of his walkthrough videos (may have been ARO?) mention that when two acoustic instruments play together within a space the resulting sound is different compared to those same two instruments playing exactly the same music in the same way and then layered in a DAW. I think it has to do with the way the sound waves of each instrument resonate within and through the body of the other instrument which modifies the sound waves emanating from the other sound sources, and also modifies the room reflections. In other words, the sound waves interact with each other before they reach your ear. Of course, with most sample libraries the recordings are made on solo instruments, or instrument groups in isolation and therefore lack that natural room interaction. I'm not saying anything here you all don't already know. However, what I am wondering is if there is any technology in existence now, or perhaps being worked on, which would enable that natural sound interaction to be simulated within a computer. Or is it far too complex a problem?
> 
> If that problem could be solved I think the realism of these libraries would be greatly enhanced.
> 
> Is there anything else that you think is missing from sample libraries, or do you think we've already reached the pinnacle of what can be achieved (at least until acoustic modelling technology can mature enough to surpass sampling technology in terms of realism)?


read this as 'Paul Thomas Anderson' at first and was kinda mindblown


----------



## janila (Jul 15, 2021)

Toecutter said:


> You can still work with tempo ramps (I do!! so convenient) and turn off the external sync in Kontakt (master tab) for most of your stuff to avoid glitches. I keep all my patches that require the external sync like loops, time machine or tempo based effects in a separate Kontakt instance, so I can deal with them separately (usually bounce to disk, disable and move on).


This is true, but it doesn’t help with tempo-synced libraries, which are usually the most CPU-intensive anyway. Using jumps instead of ramps takes care of both without the hassle of bouncing. Also, after a while, placing jumps manually gives more musical results; a straight ramp isn’t what a real performer would do.


----------



## Saxer (Jul 15, 2021)

Zedcars said:


> I heard Paul Thomson in one of his walkthrough videos (may have been ARO?) mention that when two acoustic instruments play together within a space the resulting sound is different compared to those same two instruments playing exactly the same music in the same way and then layered in a DAW.


He said quite a few times that instruments sound differently because they resonate with each other in the room - but I don't believe him. I never noticed my instrument reacting differently when other music is in the room and never heard of other musicians realizing that. Especially with an empty room and a few players performing single notes for sampling. Even when a truck passes the house and shakes the glasses in the cupboard a performed instrument doesn't change it's sound because it's shaking. At least not in an audible way.

The main difference is that musicians react to each other. Listening while playing and adjusting level, timbre, intonation, vibrato, articulation, breath... everything. And that's what's really missing in sample performances. It's even missing in one-by-one acoustic multitrack recordings. I often record woodwind or brass sections (using a wind controller for brass) and thought at the end: hm... I should have played the first track differently. With a real section you just play another take, and probably even a few takes before recording anything. That's missing in the production world, and especially with samples.

In real orchestra sessions, usually a few takes are recorded. Depending on the orchestra, a kind of walkthrough to learn what happens in the music. Then a few performances. Only very few high-end orchestras can skip these steps. And then this kind of blossoming happens, when the musicians know what's coming and concentrate on the interaction and emotion. And suddenly there's real, touching music! Same notes as the take before... but better!

To replicate that with samples you have to perform like part of a team while acting in isolation. And a lot of these performances happen when the track isn't even fully composed or orchestrated. It's a miracle that samples sound halfway realistic to us at all!


----------



## JDK88 (Jul 15, 2021)

No sampled instrument could ever stand against a real instrument which has unlimited round robins, unlimited dynamic layers, and unlimited articulations. It's a dead-end technology.


----------



## CT (Jul 15, 2021)

Saxer said:


> He said that quite a few times but I don't believe him. I never noticed my instrument reacting differently when other music is in the room, and I've never heard of other musicians noticing that.


It's an effect that might vary from instrument to instrument, and even between particular contexts. It's a fairly well documented phenomenon in the horn section, for example, where very loud high passages are difficult for players to sustain because of the intensity of the sound itself affecting their instruments/embouchure. 

Of course, the way players change their performance _consciously_ as a result of playing with others instead of in isolation is much more noticeable, but the way in which the sound itself interacts in the space, and maybe with players and their instruments, might well add something subtle which the ear and brain miss, maybe without realizing it, in samples.


----------



## mybadmemory (Jul 15, 2021)

I guess it doesn’t really matter whether it’s the players or the sound waves reacting, or both together. We still know that a recording of many instruments sounds very different from a recording of the individual instruments mixed together after the fact. Hence: recorded sections, ensembles, and tuttis.


----------



## ryans (Jul 15, 2021)

Saxer said:


> I never noticed my instrument reacting differently when other music is in the room and never heard of other musicians realizing that.


Well I sure notice it when I'm playing a bass solo and the drummer hasn't turned off their snares heh :D


----------



## PaulieDC (Jul 15, 2021)

MarcelM said:


> hmmm... whats missing? did you ever hear a real orchestra? samples do not come close and i guess they will never. well, maybe some day but that one might be far far away.


I've mentioned that in other threads... they don't sound REAL, but a good sample library should sound like a good _recording _of an orchestra. Plus, for sample libraries to sound "real", everyone would need access to a playback system so advanced that it reproduces the source signal exactly, I would think.


----------



## PaulieDC (Jul 15, 2021)

Zedcars said:


> Is there anything else that you think is missing from sample libraries...?


A return policy.


----------



## PaulieDC (Jul 16, 2021)

To me it sounded like Paul Thomson was talking about sympathetic resonance... maybe I'm misunderstanding what he was saying. Wouldn't be the first time, lol.


----------



## SteveC (Jul 16, 2021)

Please show me the "recordings" of a sampled orchestra that sound like one. I haven't found any!

I think physical modeling in combination with an AI could be an answer in the future.

I also miss something that exists in a live performance but cannot yet be captured in a recording. So we need microphones and speakers for these more spiritual things!


----------



## SteveC (Jul 16, 2021)

bill5 said:


> Yeah, they do come close. Often REALLY close, to the point that we have "experts" debating on whether or not a piece they hear is real or samples. I don't think anything is really missing, but things that could still be improved somewhat, like better legatos etc


Maybe that's true with drums or the like - but anyone who can't hear the difference between a fake and a real orchestra is no expert.


----------



## Piotrek K. (Jul 16, 2021)

SteveC said:


> I think physical modeling in combination with an AI could be an answer in the future.


I'm with you on that. No matter how many round robins, velocity layers, etc. you record, they will still be just tons of GB of snippets of time that you may or may not be able to make work in a given context. Beautiful, expressive, dead, cold, spooky, you name it. But still just, more or less tweakable, recordings. Modelling and building actual virtual instruments instead of keyswitch-based Frankenstein monsters is the future.


----------



## mybadmemory (Jul 16, 2021)

PaulieDC said:


> To me it sounded like Paul Thomson was talking about sympathetic resonance... maybe I'm misunderstanding what he was saying. Wouldn't be the first time, lol.


I agree. We know this with pianos, and digital pianos and software pianos have emulated it for years. Strings in a wooden enclosure shouldn’t be that different from musicians in a room when it comes to acoustic principles, right?


----------



## DANIELE (Jul 16, 2021)

X-Bassist said:


> I think way before getting into the effects of multiple instruments in a room they should find a better way to switch articulations. Keyswitches and CC’s are like asking a piano player to have a third arm or leg, the pianos would never get sold, yet composers are still expected to deal with multple steps to record one track, and that’s not counting dynamics, layering, mixing....
> 
> If they aren’t going to work on “intelligent” computer keyswitching.... at least all the shorts on one (there SHOULD be many types) and longs (yes, including polylegato) on another, then the sampling world should just stop everything else right now. Including recording. There is enough. Thanks.


Well, there are libraries that do this already.


----------



## muk (Jul 16, 2021)

bill5 said:


> ? Solistic? As in solos? We'll agree to disagree there, and in generally really. I know I've heard some very realistic sounding pieces and I don't mean "epic" soundtrack stuff. Hardly a "grotesque distortion." If I find links I will post.


Yes, let's agree to disagree on this one. A mockup of the Mendelssohn violin concerto that experts will wonder whether it's real or samples? Not going to happen anytime soon. Or a Haydn string quartet? Or any string quartet, really? With current samples it's just not possible. Not even close. A few more of the limitations of samples have come up in the discussion. The dynamic limitations and the crude way we crossfade between layers are one example. With vibrato it's even worse. So any kind of music where variations in vibrato are necessary and audible simply can't be mocked up convincingly with samples.


----------



## Mason (Jul 16, 2021)

Mike T said:


> There are also libraries out there involving Shawn Murphy, Jake Jackson, Simon Rhodes, Alan Meyerson....


True, but are they doing everything, or just some polishing at the end to put a big name on it?


----------



## mikeh-375 (Jul 16, 2021)

muk said:


> Yes, let's agree to disagree on this one. *A mockup of the Mendelssohn violin concerto that experts will wonder whether it's real or samples? Not going to happen anytime soon. Or a Haydn string quartet? Or any string quartet really? .............*


.....absolutely, samples are not even close to replicating solo strings and their nuances. I'm in the process of recording a soloist for my violin concerto and the difference between his playing and the sample mock-up is night and day.


----------



## muk (Jul 16, 2021)

mikeh-375 said:


> .....absolutely, samples are not even close to replicating solo strings and their nuances. I'm in the process of recording a soloist for my violin concerto and the difference between his playing and the sample mock-up is night and day.



That has been my experience as well. Well, not with a violin concerto. But with a string quartet piece, and other pieces with solo strings passages.

I am very interested in hearing your concerto, Mike! I hope you do share once the production is finished.


----------



## mikeh-375 (Jul 16, 2021)

muk said:


> That has been my experience as well. Well, not with a violin concerto. But with a string quartet piece, and other pieces with solo strings passages.
> 
> I am very interested in hearing your concerto, Mike! I hope you do share once the production is finished.


Thanks Muk. Well I was toying with the idea of posting it when done as I have a well known player performing it, but it's just not very VI-C. If I don't post, I'll send you a private link to audio and score sometime.


----------



## muk (Jul 16, 2021)

mikeh-375 said:


> Thanks Muk. Well I was toying with the idea of posting it when done as I have a well known player performing it, but it's just not very VI-C. If I don't post, I'll send you a private link to audio and score sometime.


Thanks Mike, I'd appreciate it. I'm genuinely curious to hear your concerto.


----------



## AudioLoco (Jul 16, 2021)

SteveC said:


> Please show me the"recordings" of an sampled orchestra that sounds like one. I didn't find any!
> 
> I think physical modeling in combination with an AI could be an answer in the future.
> 
> I'm also missing something that is in a live performance what cannot be recorded by now. So we need mikrophones and speakers for these more spiritual things!


.... if we start deeming even recordings of the real thing not good enough, we should all throw away our equipment, close this forum and try our luck as estate agents....

Recordings are the benchmark for any comparison with sampled instruments. Comparing VIs with an actual real instrument in a room is a non-starter....


----------



## SteveC (Jul 16, 2021)

AudioLoco said:


> .... if we start deeming even recordings of the real thing not good enough, we should all throw away our equipment, close this forum and try our luck as estate agents....
> 
> Recordings are the benchmark for any comparison with sampled instruments. Comparing VIs with an actual real instrument in a room is a non-starter....


No, that's not what I was talking about. I meant that I haven't heard a VI orchestra that sounds real to me. Sorry for my bad English.


----------



## ism (Jul 16, 2021)

Saxer said:


> He said quite a few times that instruments sound different because they resonate with each other in the room - but I don't believe him. I never noticed my instrument reacting differently when other music is in the room, and I've never heard of other musicians noticing that. Especially with an empty room and a few players performing single notes for sampling.


This might be true if you have the bleed mics located right next to the other instrument. But even so, I'd be surprised if this is anything approaching a first-order effect. In principle, whether the other musicians are wearing tweed vs. polyester will have an effect on the reflective/absorptive acoustic properties of a room - but I doubt that human acoustic perception is sharp enough to discern this except in artificially extreme cases.

There's also the question of non-linearity. The wave equation that (in principle at least) governs the combination of sounds is (mostly) linear, meaning the result of adding sounds is determined only by the sounds you're adding - which is the only reason sampling works in the first place. I don't know whether the (very small) non-linearities that do exist, if you go deep enough into the underlying physics, would ever be perceptible to human ears. But I do know that there are non-linearities in the signal chain that are significant. For instance, the analogue electronics of some mixing desks have non-linear properties desirable enough that at least one DAW boasts of emulating them (meaning the whole is no longer simply the sum of the parts). So I'd guess that engineers adapting recording techniques specifically to samples pay at least some attention to this. Though again, I'm not at all sure this is going to be a first-order effect.
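To make the superposition point concrete: under a purely linear room model (here just a convolution with an impulse response), recording two sources together is mathematically identical to layering their separate recordings, while even a mild non-linear stage (a tanh saturation standing in for analogue-desk behaviour) breaks that equivalence. A toy numerical sketch - all signals and the "room" are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy "instrument" signals and a toy decaying room impulse response.
a = rng.standard_normal(1000)
b = rng.standard_normal(1000)
room_ir = rng.standard_normal(64) * np.exp(-np.arange(64) / 10.0)

def record_linear(x):
    """Linear room model: just a convolution with the impulse response."""
    return np.convolve(x, room_ir)

def record_saturated(x):
    """Same room, followed by a mild tanh saturation (non-linear stage)."""
    return np.tanh(record_linear(x))

# Linear case: recording the sum equals summing the recordings (superposition).
print(np.allclose(record_linear(a + b),
                  record_linear(a) + record_linear(b)))      # True

# Non-linear case: superposition no longer holds.
print(np.allclose(record_saturated(a + b),
                  record_saturated(a) + record_saturated(b)))  # False
```

This is only an illustration of the linearity argument, not a model of any real room or desk.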


But what is, unequivocally, a first-order effect that you can actually hear is noise reduction - you can see it very clearly in this video by Olafur Arnalds, where he uses both the OACE samples and new recordings of the same orchestra in the same space for his album.



I forget where exactly (maybe someone can provide the timestamp; I think ~20:00), but he demonstrates very clearly that when you record notes separately and then mix them, the sound is different, and that this arises because recording notes individually rather than simultaneously typically requires sample libraries to apply noise reduction in the high end. In adding up all the individual samples, absent some high-frequency noise reduction, you would get an unpleasant high-frequency build-up. And you can see in the video that this noise reduction works pretty well... but it does cost you something in the high end, as you can hear quite clearly when he compares the mixed individual samples (again: same musicians, same space, same engineering, same orchestration) to the full recordings.
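The build-up itself is easy to see numerically: the hiss in K independently recorded takes is uncorrelated, so layering them raises the noise floor by roughly sqrt(K) in amplitude, whereas a single ensemble recording carries only one noise floor. A toy sketch, with an arbitrary noise level and 16 hypothetical players:

```python
import numpy as np

rng = np.random.default_rng(1)
n_tracks = 16          # e.g. 16 separately recorded players
n_samples = 100_000
noise_level = 0.01     # per-take recording noise (preamp hiss etc.)

# Each solo take carries its own independent noise floor.
solo_takes = [noise_level * rng.standard_normal(n_samples)
              for _ in range(n_tracks)]
layered_noise = np.sum(solo_takes, axis=0)

# One ensemble recording carries a single noise floor.
ensemble_noise = noise_level * rng.standard_normal(n_samples)

def rms(x):
    return np.sqrt(np.mean(x ** 2))

# Uncorrelated noise sums in power, so amplitude grows ~ sqrt(n_tracks).
print(round(rms(layered_noise) / rms(ensemble_noise), 1))  # ≈ 4.0, i.e. sqrt(16)
```

A ~4x (about 12 dB) higher noise floor is exactly the kind of high-end build-up that noise reduction then has to remove, at some cost to the signal.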

I also suspect that the "watching each other's elbows" effect, as mentioned above, can sometimes, though not always, be a first-order effect. Good musicians playing together fall into a kind of collective attunement with each other. This is most pronounced in solo strings (I'd argue it's the sine qua non of string quartets). But for any chamber-music-esque musicality - which, recall, is at heart a musicality written expressly for the kind of expressiveness that can be achieved by musicians playing in collective attunement, without a conductor overlord - I would imagine this is a dimension of musicality that will, at the very least, take a fair bit of performance of the individual samples to replicate.

This is also one of the reasons that, in my diagram above, I feel the solo strings at best capture tiny delta-epsilon slivers of the space of not only the individual musicians, but especially of a real string quartet, where the whole is much more complex than merely the sum of its parts at the level of expression.

Let's call this an "expressive non-linearity" (i.e. where the whole being more than the sum of its parts arises from attuned human performance) to distinguish it from a "physical non-linearity" (where the wobbliness of the air, as per the wave equation, differs from the sum of its parts) and a "signal-chain non-linearity" (where the recording technology introduces this difference between the whole and the sum of its parts).


The expressive non-linearity of the "watching each other's elbows" effect won't always be a first-order effect with samples. But the smaller the section sizes, and the more chamberesque the compositional style, the more I feel this "expressive non-linearity" becomes a central concern - and a profound limitation - of composing with samples.


----------



## AudioLoco (Jul 16, 2021)

SteveC said:


> No, this is not what I was talking about. I wanted to say that I did not heard an VI orchestra that sounds real to me. Sorry for my bad English.


Ok, it wasn't clear


----------



## Rich4747 (Jul 16, 2021)

In my opinion, AI will technically surpass humanity in every way but love. A man can study and practice music for a hundred years; the computer just keeps going. Thankfully, "love" is the most important and valued aspect. What would I want added to sample libraries? A large Passion knob, a huge Excitement slider, and a gigantic Love-for-Humanity editable envelope. The first two I really want to see in the sample player; the last one is just added for effect.


----------



## SteveC (Jul 16, 2021)

Rich4747 said:


> In my opinion, AI will technically surpass humanity in every way but love. A man can study and practice music for a hundred years; the computer just keeps going. Thankfully, "love" is the most important and valued aspect. What would I want added to sample libraries? A large Passion knob, a huge Excitement slider, and a gigantic Love-for-Humanity editable envelope. The first two I really want to see in the sample player; the last one is just added for effect.


But maybe we need love for a good sound!


----------



## Rich4747 (Jul 16, 2021)

SteveC said:


> But maybe we need love for a good sound!


my point exactly


----------



## SteveC (Jul 16, 2021)

Rich4747 said:


> my point exactly


Yeah ... I think I'm a bit stupid today...


----------



## tmhuud (Jul 16, 2021)

The libraries are there but the samples are missing.


----------



## cygnusdei (Jul 16, 2021)

muk said:


> Or any string quartet really?


I still remember hearing VSL's Beethoven String Quartet no. 9 demo; it was a jaw-on-the-floor moment. The demo is still there - I just listened to it again and it still sounds awesome to my ears years later. I've mentioned this before: there is a subset of musical expression that can be done very well on VI, maybe even better than with real instruments. But it's a subset nonetheless (it helps that this piece is mostly shorts!).





SOLO STRINGS BUNDLE - Vienna Symphonic Library

Vienna’s Solo Strings Bundle represents the most comprehensive and sophisticated solo string samples ever created, including more than 100 GB of pristine recordings of the violin, viola, cello and double bass.

www.vsl.co.at


----------



## bill5 (Jul 16, 2021)

Thanks. Not exactly "grotesque" or light years away from the real thing, to put it mildly. In a blind test, good luck telling this from the real thing.


----------



## Jorgakis (Jul 16, 2021)

One simple thing that occupies me all the time:
maybe repeated (sustained) notes? There are repetition options, e.g. in CSS/HWOrch, but I don't think they sound quite convincing. I've never managed to program a convincing melody that has 3 or more longer notes in a row (something like Korsakov's Russian Easter) with strings/brass/WW.


----------



## youngpokie (Jul 16, 2021)

muk said:


> In a mockup we use recordings of notes that were not played to play the phrase that we have written.


This is a critical point - the goal for a "real" artistic performance is to find the most compelling way to shape a phrase. The articulations are never defined very precisely in a technical sense, and they are picked from some point on a sliding scale: a legato, for example, is somewhere within a large range from legatissimo to non-legato.

Meanwhile, the goal of sample libraries is explicitly the opposite - to produce an "ideal" legato. This means a theoretical definition and an "averaged out" articulation, a true case of "one size fits all".

And maybe it's part of the reason why sampled music sounds "sampled" - it's basically many "averaged out" articulations strung together into a phrase. Trying to make rainbows using grey and lifeless dots...

Meanwhile, the range-based approach to articulations is already achievable with modelled instruments. But they offer only two choices so far: spend many hours learning how to "play" them properly; or spend many hours drawing 30 CCs, note by note. Until some phrase-based "look ahead" AI is created, the averaging out (i.e. dumbing down) is likely once again the outcome since nobody has the time or the patience.


----------



## youngpokie (Jul 16, 2021)

cygnusdei said:


> I still remember hearing VSL's Beethoven String Quartet no. 9 demo, it was a jaw dropping on the floor moment. The demo is still there, I just listened to it again and it still sounds awesome to my ears years after. I've mentioned this before: there is a subset of musical expression that can be done very well on VI, maybe even better than with real instruments. But it's a subset nonetheless (it helps that this piece is mostly shorts!).
> 
> 
> 
> ...



Compare the amount of different short articulations in just the first few minutes of a real performance (and what it means for the music) vs that demo, as good as it is:


----------



## marco berco (Jul 16, 2021)

MarcelM said:


> hmmm... whats missing? did you ever hear a real orchestra? samples do not come close and i guess they will never. well, maybe some day but that one might be far far away.


I really agree, but I have also found that good samples sound better than an average or mediocre orchestra. But when playing with a fantastic one like London or Hollywood, or even just a good one, the sound is fantastic. The difference is in the wallet…!


----------



## Stringtree (Jul 16, 2021)

The sound of a middle school or high school band or orchestra in the band room. Gleefully imperfect, with the option of fixing things here or there. This would have utility for music-to-picture or just as a Tide Pod of emotional possibilities.


----------



## cygnusdei (Jul 16, 2021)

youngpokie said:


> Compare the amount of different short articulations in just the first few minutes of a real performance (and what it means for the music) vs that demo, as *good *as it is:


Yes, at the end of the day one must decide for oneself what _good _means. I have my own bias, namely that a virtual performance is a musical performance in its own right that just happens to use virtual instruments. And thus the job of a virtual musician is the same as that of a real musician, i.e. to realize the composer's design - but how that is done depends on the understanding, taste, and craft of the musician.


----------



## Casiquire (Jul 16, 2021)

Mason said:


> Might one problem be that the best sound engineers are working for record labels and big artists and not sampling?


A lot of sample libraries use the same players, rooms, and engineers as actual film scores


----------



## Vladimir Bulaev (Jul 16, 2021)

SteveC said:


> Please show me the"recordings" of an sampled orchestra that sounds like one. I didn't find any!




It's not perfect, but when I listen to it, I enjoy the music, and I don't think about samples or not samples. First of all, it is a masterly orchestration and performance.


----------



## muk (Jul 16, 2021)

youngpokie said:


> This is a critical point - the goal for a "real" artistic performance is to find the most compelling way to shape a phrase. The articulations are never defined very precisely in a technical sense, and they are picked from some point on a sliding scale: a legato, for example, is somewhere within a large range from legatissimo to non-legato.
> 
> Meanwhile, the goal of sample libraries is explicitly the opposite - to produce an "ideal" legato. This means a theoretical definition and an "averaged out" articulation, a true case of "one size fits all".
> 
> ...



Very eloquently put what I was trying to say. I totally agree with this. 

I also concur about the string quartet demo. Well made as it is, it does not hold up compared to the recording. And that's for musical material that is relatively well suited to samples (mostly shorts only). Try mocking up the other movements of this quartet. It will not work.


----------



## PaulieDC (Jul 16, 2021)

Vladimir Bulaev said:


> It's not perfect, but when I listen to it, I enjoy the music, and I don't think about samples or not samples. First of all, it is a masterly orchestration and performance.



Thanks for posting this... I finally understand the saying "not seeing the forest for the trees". I'm equally guilty of over-examining every detail. In photography, the pros know that most people can't tell the difference between a good photograph and a great photograph. If you play some of the demos from the VSL or OT sites for the average listening base, are they going to know it isn't real? Ha, even that's a false premise: unless you go to an orchestral hall, you're listening to a _recording _of classical music or film scoring. Well, what are samples? The very same, except we now have way more control over those "recordings".

Even writing this I'm realizing I don't spend enough time studying, listening to, and practicing articulations to better know how to perform the MIDI input. I think a LOT about the arrangement and assigning parts to instruments, but assume the built-in articulations will carry it. That's like shooting in Program mode on a pro DSLR and expecting perfection. Wow... I still have a _lot_ to learn, and the scary part is not knowing what I don't know. I also recently realized that I often look at the NAME of the sample library provider when I need to be looking at what ROOM it was recorded in (I know, VI 101... I'm getting there!).

These discussions, while sometimes heated, are PHENOMENAL for making you think and working out your specific world. I personally love seeing both sides hash it out; it brings up things we don't think about on the comfy path we've created.

So keep arguing folks, there are tons of positive things to get out of it.


----------



## CT (Jul 16, 2021)

Vladimir Bulaev said:


> It's not perfect, but when I listen to it, I enjoy the music, and I don't think about samples or not samples. First of all, it is a masterly orchestration and performance.



I remember this. It's really good.


----------



## PaulieDC (Jul 16, 2021)

mybadmemory said:


> I agree. We know this with pianos, and digital pianos and software pianos have emulated it for years. Strings in a wooden enclosure shouldn’t be that different from musicians in a room when it comes to acoustic principles, right?


Shoreline Music was the first to start selling acoustic guitars online, two decades ago. They used to advertise them with a simple 5-minute video on YT, playing the appointed guitar in their huge acoustic room with DOZENS of guitars hanging on the walls. When a certain guitar had amazing output (a cocobolo/spruce Taylor comes to mind), the guy would hit a chord and stop it so we could hear, just with the camera mic, all of the other guitars now singing in the room. There's no way a cello pounding out audio waves isn't going to make other wooden instruments respond, at least in _some_ way. However, it would obviously have to be instruments at rest, such as violas. Like harmonics, I believe we can hear that resonance, within reason. But what if the violists, while resting at the appointed time, lay their arms over the instruments to NOT let them vibrate? So many variables!

Welcome to Overthinking 101... I need help.


----------



## Soundbed (Jul 16, 2021)

Vladimir Bulaev said:


> It's not perfect, but when I listen to it, I enjoy the music, and I don't think about samples or not samples. First of all, it is a masterly orchestration and performance.



Wow what a monster orchestrator!

Here's another one — I get lost in the music and forget about the samples.


----------



## Wunderhorn (Jul 16, 2021)

Listen to Anders Hillborg's new Cello Concerto. Then you know what's missing in Spitfire's Solo Cello Performance patch...


----------



## Living Fossil (Jul 16, 2021)

I don't want to dive too deep into the discussion, however, here are some thoughts:

- as long as real musicians exist, i don't care too much about the fact that samples don't replace them.
That's a good thing.

- as long as the impact/emotional content of the music works, i don't ask myself if something is real or not when listening to music. 

- I have quite often had the situation where some smart guys, commenting on my music, pointed out that it's obvious that some instruments are samples.
Now, i usually work both with musicians and with samples in most projects. And the amazing thing was that those guys, in almost every single case, accused the real musicians of being samples. It was really interesting, especially regarding the psychological subtext. They always got really emotional, almost hostile, when dismissing real musicians' performances as "samples". I think it's due to an inner conflict: being afraid of not being able to hear that something isn't real.

- while i really love working with real orchestras and listening to a good orchestra is always a fantastic experience, that's also just a fragment of the whole picture.
I'm absolutely allergic to bad intonation, and some poorly intonated passages can completely destroy the whole thing. It's like eating a great meal at a table in a public toilet that wasn't cleaned for some days. 

- i guess i've mentioned this aspect in other threads: for me, listening to real musicians is never the same as listening to a recording, and i don't think recordings should try to replace the live experience.
It's not even a similar experience.
Therefore i don't think recordings should try to sound like concerts.
If you listen to music in boomy or muddy rooms, your brain can easily adapt the sound and change your perception according to the visual feedback it gets from the room. (within some limits. I once heard a piece that used a drumset in a church. It was painful)
When you hear a live recording of the same, it will be unpleasant.
When you hear a violin concerto live, your brain will boost the violinist by around +3dB when you see him ("cocktail party effect"). Etc.
Therefore, i clearly prefer well produced recordings to "natural" ones.
Or to put it in a different way: when working with samples, i don't care about "all instruments being in the same room". It's about getting a great sound per se, not about replicating live concerts.

- The last point: i don't think that at any point in human history there have been as many fantastic orchestral musicians as there are alive right now. So i think a focus in media music should be on making it easier to work with those musicians, instead of thinking about replacing them.


----------



## jbuhler (Jul 16, 2021)

With respect to the string quartet samples, one issue is that there is not really a significant market for mocking up Beethoven string quartets. And one wonders what a library designed specifically to mock up say the op. 18 quartets would look like. How many different shorts, round robins, longs, legatos, special one shots (runs, trills, etc.), swells, vibrato control, and so forth would such a library require? One can easily imagine this library: a recording of each part, then divided up, classified, generalized, turned into an instrument. How would this hypothetical string quartet library differ from our current libraries? 

At the same time I can imagine this hypothetical sample library optimized for making mock ups of op. 18 being essentially ineffective outside those particular sweet spots and so not really getting me any closer to and in some ways being farther away from what I want a sampled quartet to be able to do, which personally would lie more in the Shostakovich and Bartok domain, with other things that fall under the auspices of the eclectic Kronos quartet repertory. 

If you look at @ism’s little chart of sweet spots, what you see is that it should be perfectly possible to write effective music for sampled string quartets so long as you write for those sweet spots. But the space is limited and there are far more things a string quartet can do compared to what the sampled quartet can do. Though remember that an under-rehearsed live quartet is also comparably limited compared to say a professional quartet. And remember as well that the professional quartet has tens upon tens of thousands of hours of experience with their instruments and a vast amount of time playing together, whereas the sampled quartet has precisely our own experience and we are almost always playing only with ourselves, resembling the Oscar Levant dream performance in An American in Paris.


----------



## thevisi0nary (Jul 16, 2021)

I think, as far as workflow goes, the next big breakthrough will be some integration/communication between the sampler and the DAW.


----------



## KEM (Jul 16, 2021)

Junkie XL Strings


----------



## muziksculp (Jul 16, 2021)

Who needs samples when we have SYNTHS


----------



## cygnusdei (Jul 16, 2021)

jbuhler said:


> an under-rehearsed live quartet is also comparably limited compared to say a professional quartet


Thus it's not really real vs. samples but rather: what constitutes a good performance? Do bad traits become good just because they are evident in real performances? Is it correct to assume that real musicians have certain limitations based on what you have heard, or is it just that you haven't heard it performed properly?

Example: "Real orchestras can't do tight shorts!" Really? Get a load of this. There is not a hint of sloppiness in the rhythm, to my ears anyway. Well, it just so happens that this is an exemplary recording, performed by probably one of the best ever (conductor and orchestra). But if there is one thing, there is some diffuseness in the string intonation in places. So if anything, to lend some 'realism' to your mock-up, perhaps some looseness in intonation would be more effective than rhythmic variations. But that said, is it really necessary to emulate 'bad' traits at all?


----------



## NoamL (Jul 16, 2021)

If you have an opportunity to compare samples directly to the orchestrated live recording (especially a striped one) you will notice some MAJOR things that are missing from today's samples.

There are lots of things about today's samples, on the other hand, that are very exciting.

We are approaching a 1:1 match between the parameters of scoring sessions and sample sessions: the same musicians, same stages, same mics, same engineers. This is a world away from the EWQLSO days. When Spitfire finishes releasing their Abbey Road lineup, you will genuinely have the Abbey Road sound, from the players and seating to the mics, the room sound, and the recording chain, right in your DAW.

We also have a very wide array of ensemble sizes recorded. There are really high-quality samples out there for 4-, 15-, 30-, 45-, and 60-piece string ensembles, for instance. There are oodles of choices for 6 horns, 2 horns, 4 horns...

So what is missing?


*1. Tuning.* Musicians tune chords based on an ever-changing awareness of their musical line's role in the music. This vid is a great introduction to it -



This doesn't happen with samples at all. Samples have a sterile fixed pitch that makes brass and wind chords feel somewhat lifeless when you a/b them directly against the real thing. There are a few tricks you can apply to approach the live, dynamically-tuning sound, but they're extremely limited. We need sample*rs*, not samples, that can retune samples on the fly with a global awareness of the musical context. How much do you want to bet that Remote Control's super secret sampler already does this? I bet they've at least experimented with it.
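As a rough illustration of what such a context-aware sampler might compute, here is a minimal sketch that retunes a chord from equal temperament toward 5-limit just intonation relative to a root. The ratio table and function names are my own for illustration, not any shipping sampler's API:

```python
import math

# 5-limit just-intonation ratios for each interval above the root, in semitones.
JUST_RATIOS = {
    0: 1/1, 1: 16/15, 2: 9/8, 3: 6/5, 4: 5/4, 5: 4/3,
    6: 45/32, 7: 3/2, 8: 8/5, 9: 5/3, 10: 9/5, 11: 15/8,
}

def cents(ratio):
    """Size of a frequency ratio in cents (1200 cents per octave)."""
    return 1200 * math.log2(ratio)

def retune_chord(midi_notes, root):
    """Per-note offsets (in cents) from equal temperament for a chord."""
    offsets = {}
    for note in midi_notes:
        interval = (note - root) % 12
        just = cents(JUST_RATIOS[interval])   # pure interval above the root
        tempered = interval * 100             # same interval in 12-TET
        offsets[note] = round(just - tempered, 1)
    return offsets

# C major triad on C4 (MIDI 60): the third comes down ~14 cents and the
# fifth goes up ~2 cents, much as brass players adjust by ear.
print(retune_chord([60, 64, 67], root=60))  # {60: 0.0, 64: -13.7, 67: 2.0}
```

A real sampler would also need to pick the root (or a full harmonic analysis) from the incoming MIDI, which is the genuinely hard part; the arithmetic above is the easy bit.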

*2. Dynamics.* Even when libraries have an appropriate _spectrum_ of dynamics (which is really quite rare for brass libraries), there are still missing layers _between_. Crossfading between *mf* and *ff* only gives an approximation of what horns sound like at *f*, etc. Most composers can feel that 2 or 3 layers are inadequate on most orchestral instruments. Then a library comes along with 5 layers and you think it's really good. Then you compare directly with the recording you got back from the scoring stage and you realize there are at least 7 to 9 layers of dynamics in most instruments.

*3. Note Attacks.* Note attacks contribute hugely to the character of music, but we are stuck in a paradigm where sustained note samples only vary their attack on the basis of the dynamic. Cinematic Studio and Orchestral Tools offer you a bit more than that, but it's still not enough. It's the variety of note attacks, _not_ the legato transitions, that makes "agile and sprightly" music _that isn't short note ostinatos_ quite difficult with samples. I think this point is the MAIN culprit for why mocking up any 18th- to early-20th-century classical music is still really quite difficult, as @muk observed.

"Just play it" is a fantasy. It will never happen. We need more controls, not fewer. Now, if you prefer biting a breath controller and tilting your head in front of a camera to penciling in some CC curves, then fine... but the keyboard itself will always need assistance.


----------



## NoamL (Jul 16, 2021)

Also, we need to get rid of keyswitches.

I'm not saying we should innovate past *switches*; the problem is *keys*.

There is no reason for switches to have (phantom) pitch information or for them to even appear directly on piano rolls. We should have dedicated switcher hardware peripherals which send pitchless switching information (this could be UACC, as Spitfire has already demonstrated; 127 units of resolution is a reasonable amount when most instruments come with less than a dozen articulations recorded!).
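At the protocol level, a pitchless switch of this kind is tiny. A minimal sketch of the single-CC approach, using an illustrative articulation table (not Spitfire's actual UACC assignments):

```python
# Illustrative articulation numbers; NOT Spitfire's real UACC map.
ARTICULATIONS = {"longs": 1, "legato": 20, "staccato": 42, "pizzicato": 56}

def uacc_switch(articulation, channel=0):
    """Raw 3-byte MIDI Control Change message selecting an articulation."""
    status = 0xB0 | channel  # Control Change status, channels 0-15
    return bytes([status, 32, ARTICULATIONS[articulation]])  # CC#32 carries the value

msg = uacc_switch("staccato")
print(msg.hex())  # "b0202a": CC#32, value 42 - no note number anywhere
```

Because the message is a controller rather than a note, it never appears on the piano roll, survives transposition, and can't collide with an instrument's playable range.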

Keyswitches create huge problems all along the music production assembly line, mostly from being unnecessarily pitched. An obvious example is how keyswitches play hell with MIDI cleanup and orchestration because they appear as notes. Keyswitches would also be an immediate problem if DAWs started having global tuning awareness, as I sketched out in the last post. Then there's the problem of composers accidentally pressing switches when they play an instrument that has the switches a little too close to the pitch range of the instrument... etc...

The funny thing is that the big guys have already built a layer-2 replacement on top of keyswitches: the big composers NEVER press the actual switch, they press a button on their tablet. It shows how keyswitches are an obsolete solution that hasn't been innovated past, because music production is just too niche an industry to move fast.


----------



## purple (Jul 16, 2021)

Good room modeling that allows you to place a dry instrument in a real sounding space.


----------



## cygnusdei (Jul 16, 2021)

NoamL said:


> we are stuck in a paradigm where sustained note samples only vary their attack on the basis of the dynamic


It may be impossible for solo strings, but for string sections I thought it's pretty common to layer short with sustain. I do it myself occasionally and by adjusting the gain of the short, the result could be pretty convincing.


----------



## tmhuud (Jul 17, 2021)

Ok, instead of me being a jack-ass (I’ve been known for that) I’ll post something that hopefully will be of some help.
WHAT is missing? It’s YOU.
YOU are missing. Only YOU know the sound you're after. Start very simple, like I did many moons ago. If you want a certain choir sound, hire a few folks, mic them the way you want, see what you're getting. Create your own sound. Maybe go simpler: hire a cellist, get 7 mics on him/her, listen to what happens when you move those mics around. If you have good ears and a project that needs THAT sound, you'll dial it in. No sample library off the shelf will give you that. And believe me, you'll feel a sense of accomplishment AND satisfy your ears.


----------



## AudioLoco (Jul 17, 2021)

PaulieDC said:


> Shoreline Music was the first to start selling acoustic guitars online, two decades ago. They used to advertise them with a simple 5-minute video on YT, playing the appointed guitar in their huge acoustic room with DOZENS of guitars hanging on the walls. When a certain guitar had amazing output (a Cocobolo/spruce Taylor comes to mind), the guy would hit a chord and stop it so we could hear, just with the camera mic, all of the other guitars now singing in the room. There's no way a cello pounding out audio waves isn't going to make other wood instruments respond, at least in _some_ way. However, it would have to obviously be instruments at rest, such as violas. Like harmonics I believe we can hear that resonance within reason. But what if the violists, while resting at the appointed time, lay their arm over the instruments to NOT let it vibrate? So many variables!
> 
> Welcome to Overthinking 101... I need help.


Yes, but to overtake you in overthinking...102!  
Those guitars were probably loose, without any pressure on their frets, so the open strings were free to resonate. 
When people play, they usually exert some kind of pressure on the open strings when not in use. Or, when pausing and not playing any notes, they hold the instrument dearly by the neck and lightly press the strings precisely so they won't resonate or accidentally make unwanted sounds and thus get executed by the conductor. 
Surely the bodies resonate with other instruments, but it is not as loud as an instrument resonating as a whole when its strings are undamped, and therefore shouldn't be THAT influential on the sound as a whole.... (although I would need to test that more scientifically to be able to state it 100%; maybe Paul Thomson has done exactly that...)
Drums are different, as it is impossible to damp every single piece's skin (the physical equivalent of strings) to prevent it from resonating, unless you have an octopus drummer, that is.
That is why gates are so widely used, especially on toms, when mixing rock/pop drum kits.


----------



## Henrik B. Jensen (Jul 17, 2021)

6 pages and no one mentioned this? 🙂


----------



## Zedcars (Jul 17, 2021)

Henrik B. Jensen said:


> 6 pages and no one mentioned this? 🙂


The tutti nature of that clip lends itself well to fooling the ear (although not 100% for me, but very, very close). However, some sparser sections may not fare so well.


----------



## Pixelpoet1985 (Jul 17, 2021)

NoamL said:


> This doesn't happen with samples at all. Samples have a sterile fixed pitch that makes brass and wind chords feel somewhat lifeless when you a/b them directly against the real thing. There are a few tricks you can apply to approach the live, dynamically-tuning sound, but they're extremely limited. We need sample*rs*, not samples, that can retune samples on the fly with a global awareness of the musical context. How much do you want to bet that Remote Control's super secret sampler already does this? I bet they've at least experimented with it.


The only developer I know of is VSL. They have achieved a lot, not only with their sophisticated recordings, but also with the features in their sample player. The humanization feature does exactly this: it's not a static pitch change but an evolving one. There are endless possibilities you can create with this. But it's only possible when the instruments are perfectly in tune, which VSL's instruments are. And not to forget the Dimension series (brass, strings), which is, in my opinion, the closest to the real thing we currently have.


----------



## Ivan M. (Jul 17, 2021)

I don't know what's missing, but oh God I hate sample libraries, and keyswitches and almost everything about production, everything is so rigid, and awkward. I simply can't enjoy creating. I just want a simple system that doesn't require being in the technical mindset all the time. Modeled instruments and machine learning based products are the right step forward, and I'm going that way.


----------



## Jdiggity1 (Jul 17, 2021)

Pixelpoet1985 said:


> The only developer I know of is VSL. They have achieved a lot, not only with their sophisticated recordings, but also with the features in their sample player. The humanization feature does exactly this: it's not a static pitch change but an evolving one. There are endless possibilities you can create with this. But it's only possible when the instruments are perfectly in tune, which VSL's instruments are. And not to forget the Dimension series (brass, strings), which is, in my opinion, the closest to the real thing we currently have.


PLAY, being VST3, will also work with dynamic tuning. It's a big reason why I've been using Hollywood Brass and Cubase for a while now.
For those that haven't explored it yet, look into Cubase's Hermode tuning.
...
Oh alright, I'll tell you! Choose a Hermode mode in Project Setup, then go into the MIDI Modifiers of a trombone or horn sustain track and enable "HMT: Follow". Play some chords. Try it with HMT off, then on, to compare.
(This will not work with Kontakt libraries, FYI.)


----------



## el-bo (Jul 17, 2021)

Living Fossil said:


> - as long as the impact/emotional content of the music works, i don't ask myself if something is real or not when listening to music.


This!

As humans, I think we’re generally more inclined to broad-stroke awareness and emotional reaction. Getting lost in the weeds (or the proverbial trees in lieu of the proverbial forest) is just a distraction.

Anyone who grew up when photos were really poor facsimiles of the real-world events they captured, or when black-and-white movies were regularly screened on small black-and-white televisions, is likely well aware that the realness (or lack thereof) of the medium has nothing to do with the amount of emotion that might result. Even the most distant and fragmented memories I have can still overwhelm me emotionally. And most of my early connection to and love of music was fostered with the help of poor recordings, played back on something similar to this:


----------



## el-bo (Jul 17, 2021)

Ivan M. said:


> I don't know what's missing, but oh God I hate sample libraries, and keyswitches and almost everything about production, everything is so rigid, and awkward. I simply can't enjoy creating. I just want a simple system that doesn't require being in the technical mindset all the time. Modeled instruments and machine learning based products are the right step forward, and I'm going that way.


As someone who is just at the very beginning of a potential journey into learning orchestration, this is definitely starting to feel like a dealbreaker.

It’s already difficult enough for me to record music (I still play and often ‘write’), without having to negotiate the building of templates (track-per-art or key-switches?), the key-switches, huge amounts of MIDI-massaging, track/sample delays etc. 
Just the thought of it all seems like a joy-killer. And if there’s no joy in it, then why even bother?

Unfortunately, the ‘Venture’, AM & SWAM side of the equation is way out of the budget of this newbie.


----------



## AudioLoco (Jul 17, 2021)

Ivan M. said:


> I don't know what's missing, but oh God I hate sample libraries, and keyswitches and almost everything about production, everything is so rigid, and awkward. I simply can't enjoy creating. I just want a simple system that doesn't require being in the technical mindset all the time. Modeled instruments and machine learning based products are the right step forward, and I'm going that way.


I get it. 
But, come on... it's part of the game. The results that can be achieved are incredible; we could only dream of being able to get there until a few years ago, and the tech mindset and "production" are the price of the ticket.
Just like practice and study are the price of being able to play an instrument at a high level.
I personally hate practicing my instrument!  
But I love the challenge of production....
No biggie....
As for modeled instruments, the road ahead is looooong.... AI applied to automatically choosing the right articulation could be really nice for playability (although I don't see how it can ever read a musician's mind *before* a note has been played).


----------



## Ivan M. (Jul 17, 2021)

el-bo said:


> Unfortunately, the ‘Venture’, AM & SWAM side of the equation is way out of the budget of this newbie.


Well, these still require a lot of midi CC, and layering. BUT, you can do runs and trills, and play any melody you want. Contrast that with having to constantly keyswitch and fight with some sluggish samples, out of time articulations, and then finally give up when you realize it cannot possibly be massaged into something useful, and go search for another library... complete BS! 
Modelled and ML for the win!


----------



## Ivan M. (Jul 17, 2021)

AudioLoco said:


> we could only dream of being able to get there until a few years ago


We don't live "a few years ago" anymore. And I'm going to take any tool that makes my life easier, because the production of these past years has taken all the joy out of my creative work.


----------



## AudioLoco (Jul 17, 2021)

Ivan M. said:


> We don't live "a few years ago" anymore. And I'm going to take any tool that makes my life easier, because the production of these past years has taken all the joy out of my creative work.


Hey each to their own! 
While I'm looking forward to great leaps and ready to embrace what could make my workflow...well... flow more, I'm having a great great, fantastic time and lots of joy with what tech is offering right now!


----------



## Casiquire (Jul 17, 2021)

Pixelpoet1985 said:


> The only developer I know of is VSL. They have achieved a lot, not only with their sophisticated recordings, but also with the features in their sample player. The humanization feature does exactly this: it's not a static pitch change but an evolving one. There are endless possibilities you can create with this. But it's only possible when the instruments are perfectly in tune, which VSL's instruments are. And not to forget the Dimension series (brass, strings), which is, in my opinion, the closest to the real thing we currently have.


Humanization is still different from pitch fluctuation that arises out of an understanding of the musical context, which VSL still hasn't quite done. I think the best example of this is a choir singing a cappella. Over the course of a piece, a choir might drift a full quarter or half step from where it started, and that's not the mark of a BAD choir; it's the mark of a GOOD one. It's because true equal temperament doesn't always give us the most pleasing and musical tuning between notes. Equal temperament is a compromise we make for the sake of convenience and consistency, not for the best or most musical sound. Musicians will naturally, very gently tune their pitch to the other instruments they hear, and samples aren't capable of that today.

Actually, I think that's a much more impactful quality of live performers that's missing from samples than anything involving sound waves interacting with one another or room acoustics, which came up earlier.

Edited to add that, in today's market, I think NotePerformer is best poised to take this into account, since it can see the full chords being played, and I believe those pitch adjustments are predictable enough to script.


----------



## youngpokie (Jul 17, 2021)

Casiquire said:


> Humanization is still different from pitch fluctuation that arises out of an understanding of the musical context, which VSL still hasn't quite done. I think the best example of this is a choir singing a cappella.


Indeed! Another one is solo strings.

A violin playing in a string quartet would tune to just intonation, meaning that D# and Eb would actually be two distinct pitches, as they are supposed to be. A violin in a piano trio, however, would have to obey the piano's tuning and play in equal temperament, so D# and Eb would be the same pitch. Of course, the samples are spread over the piano keyboard, so...
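For what it's worth, the size of that D#/Eb split is easy to put a number on. A quick sketch under one common 5-limit just-intonation reading (Pythagorean tuning would flip the two the other way):

```python
import math

def cents(ratio):
    """Frequency ratio expressed in cents (1200 per octave)."""
    return 1200 * math.log2(ratio)

eb = cents(6/5)     # just minor third above C:   ~315.6 cents
ds = cents(75/64)   # just augmented second:      ~274.6 cents
# Equal temperament collapses both onto 300 cents.
print(round(eb - ds, 1))  # 41.1 cents apart (the lesser diesis)
```

Forty-odd cents is nearly half a semitone, so the piano-mapped sample genuinely cannot represent both pitches.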

And that's not even to mention pitch fluctuation due to the absence of frets, or because of this or that fingering need for a given phrase.


----------



## JDK88 (Jul 17, 2021)

Imagine composing an entire orchestrated piece with only your voice. Or maybe the instrument of your choice? AI machine learning has a promising future.


----------



## youngpokie (Jul 17, 2021)

Ivan M. said:


> Well, these still require a lot of midi CC, and layering. BUT, you can do runs and trills, and play any melody you want. Contrast that with having to constantly keyswitch and fight with some sluggish samples, out of time articulations, and then finally give up when you realize it cannot possibly be massaged into something useful, and go search for another library... complete BS!
> Modelled and ML for the win!


I'm still trying to wrap my head around this. I'm watching the bow in this video (for less than one minute!!) and I see how widely the bow is sliding towards and away from the fingerboard. Clearly in order to better articulate the phrase. And I say to myself: yes, this can now be easily done in modelled strings with a single CC.

Then I'm watching how the vibrato is different depending on the finger generating it. Yes, this can now be done too, with 2 CCs. I can increase bow pressure, control amount of bow per note, rebow when I want to, etc, etc.

None of that is even remotely possible with traditional samples. And these subtle changes do add up to bring a lot of life to the music.

But on the other hand isn't this new flexibility and nuance making programming even more like a place in hell than it is already?


----------



## Ivan M. (Jul 17, 2021)

youngpokie said:


> I'm still trying to wrap my head around this. I'm watching the bow in this video (for less than one minute!!) and I see how widely the bow is sliding towards and away from the fingerboard. Clearly in order to better articulate the phrase. And I say to myself: yes, this can now be easily done in modelled strings with a single CC.
> 
> Then I'm watching how the vibrato is different depending on the finger generating it. Yes, this can now be done too, with 2 CCs. I can increase bow pressure, control amount of bow per note, rebow when I want to, etc, etc.
> 
> ...



Great point! I use a plugin to copy MIDI CC, so when I push the dynamics CC up, vibrato depth and speed are also increased. I think I can even scale the copied CC in Reaper; I can't remember, though. I've seen this in Friedlander Violin (not modeled, a classic sample lib, and a good one). This works excellently, and with only one CC lane!
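The trick described here (one dynamics lane driving derived vibrato lanes) can be sketched in a few lines. The CC numbers, scales, and floors below are illustrative, not any plugin's or library's actual mapping:

```python
def link_cc(events, scale, floor=0):
    """Derive a secondary CC lane from (time, value) dynamics events,
    clamped to the 0-127 MIDI controller range."""
    return [(t, min(127, floor + int(v * scale))) for t, v in events]

dynamics = [(0.0, 20), (1.0, 64), (2.0, 110)]        # a rising CC1 curve
vib_depth = link_cc(dynamics, scale=0.8)             # deepens with dynamics
vib_speed = link_cc(dynamics, scale=0.5, floor=30)   # rises from a base rate

print(vib_depth)  # [(0.0, 16), (1.0, 51), (2.0, 88)]
print(vib_speed)  # [(0.0, 40), (1.0, 62), (2.0, 85)]
```

The appeal is exactly what the post says: you ride one lane by hand and the correlated parameters follow automatically, each with its own scaling.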


----------



## el-bo (Jul 17, 2021)

Ivan M. said:


> Well, these still require a lot of midi CC, and layering. BUT, you can do runs and trills, and play any melody you want. Contrast that with having to constantly keyswitch and fight with some sluggish samples, out of time articulations, and then finally give up when you realize it cannot possibly be massaged into something useful, and go search for another library... complete BS!
> Modelled and ML for the win!


Actually, riding the mod-wheel/faders is enjoyable and is part of the overall performance of the instrument. I've even got so used to using it that I now jerry-rig most of my synth sounds to respond to expression/volume via mod-wheel (a touch of delay and reverb after the patch can offset the complete death of the sound as it falls closer to nothing).

It's everything else that you highlighted that is the stumbling-point, for me. And yes, I just want to be able to play fluid melody lines like synth patches. 

It may very well turn out that I never really develop a love for creating on the orchestral side of things, but my efforts to learn how to arrange, orchestrate and do split part-writing would help my 'normal' musical exploits no end.


----------



## VSriHarsha (Jul 17, 2021)

Casiquire said:


> Humanization is still different from pitch fluctuation that arises out of an understanding of the musical context, which VSL still hasn't quite done. I think the best example of this is a choir singing a cappella. Over the course of a piece, a choir might drift a full quarter or half step from where it started, and that's not the mark of a BAD choir; it's the mark of a GOOD one. It's because true equal temperament doesn't always give us the most pleasing and musical tuning between notes. Equal temperament is a compromise we make for the sake of convenience and consistency, not for the best or most musical sound. Musicians will naturally, very gently tune their pitch to the other instruments they hear, and samples aren't capable of that today.
> 
> Actually, I think that's a much more impactful quality of live performers that's missing from samples than anything involving sound waves interacting with one another or room acoustics, which came up earlier.


That’s right! But I think you can mess with the pitch, in the context of humanization, using your DAW. The same goes for tempo. Hiring real players is always an option. But sampling CAN still do things. And there’s more to come from the developing technology.


----------



## Casiquire (Jul 17, 2021)

youngpokie said:


> I'm still trying to wrap my head around this. I'm watching the bow in this video (for less than one minute!!) and I see how widely the bow is sliding towards and away from the fingerboard. Clearly in order to better articulate the phrase. And I say to myself: yes, this can now be easily done in modelled strings with a single CC.
> 
> Then I'm watching how the vibrato is different depending on the finger generating it. Yes, this can now be done too, with 2 CCs. I can increase bow pressure, control amount of bow per note, rebow when I want to, etc, etc.
> 
> ...



I love the possibilities of modeling, but not necessarily the practicality of it. The players are making those decisions instinctively, not always logically, reacting to the players around them, in a manner that requires control of too many parameters to be feasible with CCs, which are far from instinctive. I think modeling is incredible and has the upper hand for solo lines, clarinets, brass instruments that tend not to use much vibrato, etc., but for me I prefer the compromise of traditional sample recordings with naturally recorded vibrato, even if it misses some of those nuances. But different strokes!


VSriHarsha said:


> That’s right! But I think you can mess with the pitch, in the context of humanization, using your DAW. Same goes to tempo. Hiring is there, always. But sampling CAN do things, still. And there’s more to the developing technology.


Yes! We can definitely do it ourselves, it just takes so much more time and effort and it's the perfect thing to have a computer to do for you


----------



## youngpokie (Jul 17, 2021)

Casiquire said:


> I love the possibilities of modeling, but not necessarily the practicality of it. The players are making those decisions instinctively, not always logically, reacting to the players around them, in a manner that requires control of too many parameters to be feasible with CCs, which are far from instinctive.


Yes. I'm still undecided but I do wonder if all the CCs generated by the right hand in that video can be allocated into groups to make it more manageable. Maybe they should be. For example, maybe there's a "dynamics" group that combines some bow pressure with CC11, which seems quite intuitive for a violin player to do. Or a group linking up/down bow with bow position CC, which happens regularly in that video on sustains.

I got LeapMotion a week ago and started experimenting with setting this up in Dorico. I type in the notes and then record an overdub (potentially two) of these CCs. The initial setup is very easy and I'm toying with a couple of ideas for presets derived from such groupings. The one thing still unclear is how to approach shorts...

And very quickly it starts to feel like I just bought a _*real*_ instrument with the expressive potential of Joshua Bell, and I don't have a clue how to play it. That's ultimately the dilemma as I see it for myself: do I learn to play this thing, or do I suck it up and go back to the convenience of pre-recorded samples?


----------



## muziksculp (Jul 17, 2021)

Being a fan of Audio Modeling SWAM libraries, which are physically modeled solo instruments, I'm very curious, and excited to see what their upcoming SWAM String Ensemble will offer. 

I think a SWAM-based string ensemble is the missing part of the SWAM family of products, and it might offer a new level of realism that we have not experienced with string ensemble libraries so far, given the ample real-time controls SWAM offers. Achieving a realistic timbre with physically modelled ensemble strings is a challenge they need to try to tackle.


----------



## AudioLoco (Jul 17, 2021)

Henrik B. Jensen said:


> Imagine if a billionaire bought VSL and said: Develop a stunning orchestral library, spare no expense! (Jurassic Park quote  ) Would be interesting to see what‘s the limit of what’s possible currently. Also without making it insanely complex for the composer to use.


Mr. Branson, Mr. Bezos, forget about those boring space thingies and create THE VST orchestra for us, thank youuuuuuu!


----------



## CT (Jul 17, 2021)

I really think the future as far as "playability" is in better programming of traditional samples, not modeled stuff. The latter is a siren song and I've yet to really see any evidence that you don't eventually get torn apart on the proverbial rocks. 

On the other hand, stuff like Spitfire performance patches (far from perfect) and what Jasper is doing seem much more potentially fruitful.


----------



## PeterN (Jul 17, 2021)

Here's another missing one. (Not about the technical details, though, but as a library.)

A library that has recorded every possible chord (without going into absurdities) for woods, brass, and strings, and also as full orchestra; as tutti long sweeps and staccato.

Those who have done this have only recorded a handful of chords, and the most "experimental" is the Spitfire Bernard H library, with around 10 recorded chords. OT did around 8 chords or so. Come on. Cinesamples has 3 chords: one is a 7th, the other two major and minor.

Yea, we can make these chords ourselves too, but this library is still missing.

*The Royal Albert Hall Full Chord Complete Library (with extended version on bass note experiments)*

Please


----------



## muziksculp (Jul 17, 2021)

Sonokinetic's multi-sampled Strings Library, with Divisi sections, and a lot of other special features.

Hopefully it will be released one of these days, but so far it is just a mirage that keeps moving away from us every time they announce a release date.


----------



## Wunderhorn (Jul 17, 2021)

Here is what I miss in a lot of libraries:

_In the GUI as accessible information:_
*Version number* and date of release with link to a web page containing a changelog.
Basic information including whether it is recorded in situ, how many dynamic layers and round robins.
A list of implemented default CCs.


----------



## robgb (Jul 18, 2021)

Quit trying to sound "real." Sound "good" instead. Real doesn't matter.


----------



## CT (Jul 18, 2021)

Quit thinking "real" and "good" are so often mutually exclusive.


----------



## Zedcars (Jul 18, 2021)

robgb said:


> Quit trying to sound "real." Sound "good" instead. Real doesn't matter.


I understand where you’re coming from, but sounding as close to “real” as possible matters to me. (Both matter to me.)


----------



## robgb (Jul 19, 2021)

Zedcars said:


> I understand where you’re coming from, but sounding as close to “real” as possible matters to me. (Both matter to me.)


Just know that you will never achieve it with sample libraries.


----------



## Zedcars (Jul 19, 2021)

robgb said:


> Just know that you will never achieve it with sample libraries.


Yep.


----------



## mybadmemory (Jul 19, 2021)

The nature of business is that companies want and need to grow. To do that they need to reach more people, and for that pros and advanced hobbyists are not enough. Beginning hobbyists and casual users are where the big numbers and the real potential for growth lie.

We already see the biggest developers like Spitfire doing this in a number of ways. Focusing on new products. Focusing on lower and mid priced products. Offering a lot of free and very cheap stuff. Offering staggered upgrade paths etc.

The next logical barrier to entry to tackle after price is making the products simpler and easier to use without being an expert or investing 10,000 hours: using things like AI to assist us with a lot of the things we currently have to do and learn manually.

There will still be smaller developers going for pro users who will work on offering even more precise control too, of course, but I really can't see that being a viable way for the bigger developers that need to grow. Empowering the much larger crowds by removing barriers to entry is where I think we'll see most effort.


----------



## J-M (Jul 19, 2021)

My personal gripe with so many libraries is consistency, especially regarding playing techniques: if the first violins are playing col legno, there's a pretty good chance that I want the second violins and violas playing that as well. I'd be happy to pay more for that...


----------



## mybadmemory (Jul 19, 2021)

So if we, just for the experiment, stop thinking about the faster horse (adding features and polish to what we already have) and start thinking about the car (what will ultimately make what we have now obsolete) instead, what would that be?


----------



## doctoremmet (Jul 19, 2021)

mybadmemory said:


> The nature of business is that companies want and need to grow. To do that they need to reach more people, and for that pros and advanced hobbyists are not enough. Beginning hobbyists and casual users are where the big numbers and the real potential for growth lie.
> 
> We already see the biggest developers, like Spitfire, doing this in a number of ways: focusing on new products, focusing on lower- and mid-priced products, offering a lot of free and very cheap stuff, offering staggered upgrade paths, etc.
> 
> ...


TL;DR
Ujam


----------



## cygnusdei (Jul 19, 2021)

mybadmemory said:


> So if we, just for the experiment, stop thinking about the faster horse (adding features and polish to what we already have) and start thinking about the car (what will ultimately make what we have now obsolete) instead, what would that be?


What are you talking about? The future is already here. You might as well dump your string libraries!

 



https://vi-control.net/community/threads/marketing-cringes-press-one-key-and-sound-like-a-blockbuster.101343/


----------



## Casiquire (Jul 19, 2021)

robgb said:


> Just know that you will never achieve it with sample libraries.


Depends who you ask. If I ask a forum of sample library fanatics, maybe. If I ask my target audience, chances are I could've achieved it ten years ago with VSL. Heck, over the years this very forum has been fooled by blind comparisons, and we've all seen, heard, or experienced plenty of stories about clients unknowingly telling us they preferred the sound of the sample mockup over the live players. I think it's just important for us to know our audience.


----------



## Niah2 (Jul 19, 2021)

robgb said:


> Just know that you will never achieve it with sample libraries.


...and that's the joy of it. The struggle...pushing the limits...to see how far we can go...to push our creativity, musical skills and understanding....

To put it simply, I hope we never get there, that would be a real tragedy...that would mean the end...


----------



## CT (Jul 19, 2021)

iT's nOt rEaL aNyWaY


----------



## Mike Fox (Jul 19, 2021)

We really need something that allows you to compose blockbuster soundtracks within seconds.

Oh, wait…


----------



## Nick Batzdorf (Jul 19, 2021)

mybadmemory said:


> To be a little more specific, I think we'll see orchestral instruments (based on a combination of samples, modelling, and AI) that respond more realistically to our input. We record or input a phrase, and the instrument won't simply play samples one after the other, but rather create a realistic full phrase based on analysis of thousands of recordings of that instrument having played thousands of different phrases. We'll then have simple sliders or similar to tweak/effect the performance after the fact. Basically instruments acting more like real players, where we give them the simple instructions on what to play, but the performance comes from them rather than us. But with the ability to direct it further afterwards.


This is a description of Synful.


----------



## Casiquire (Jul 19, 2021)

Nick Batzdorf said:


> This is a description of Synful.


Interesting, I'm not familiar with that one. How well does it work in practice?

Edit: I just saw the age of the library, and I think it's an area that still needs a lot of exploring.


----------



## BlackDorito (Jul 19, 2021)

AudioLoco said:


> Mr. Branson, Mr. Bezos, forget about those boring space thingies and create THE VST orchestra for us, thank youuuuuuu!


Don't forget about Mr. Musk.

We can compare the sound of VI libraries against a recorded orchestra as perhaps the ideal, but as pointed out, smaller textures/ensembles will be a challenge. You can't compare against a live listening experience because it is just a different experience: you're sitting in your seat, moving your head, taking in all the sounds of the concert hall; there's a murmur of conversation; the conductor walks out; lights down; silence... just a completely different, integrated listening experience.


----------



## robgb (Jul 19, 2021)

Casiquire said:


> Depends who you ask. If I ask a forum of sample library fanatics, maybe. If I ask my target audience, chances are I could've achieved it ten years ago with VSL. Heck, over the years this very forum has been fooled by blind comparisons, and we've all seen, heard, or experienced plenty of stories about clients unknowingly telling us they preferred the sound of the sample mockup over the live players. I think it's just important for us to know our audience.


Absolutely. But I'm talking to a forum of sample library fanatics. Lay people don't even know what an oboe is.


----------



## Nick Batzdorf (Jul 19, 2021)

Casiquire said:


> Interesting, I'm not familiar with that one. How well does it work in practice?
> 
> Edit: I just saw the age of the library, and I think it's an area that still needs a lot of exploring.


It's been a long time since I used it, but what I remember is that it was very good for some things like fast-moving slurred strings layered with samples (as opposed to additive synthesis). Its weaker point was the sound of some of the instruments. But this was easily 15 years ago, so what I'm writing may mean nothing today.


----------



## Nick Batzdorf (Jul 19, 2021)

^ Clarification: layered with samples as opposed to using its additive synthesis sounds alone.

Sorry, the phone rang when I posted it.


----------



## Niah2 (Jul 20, 2021)

Yup, Synful had potential. I liked the sound of some instruments, and others not so much, like you said. But when it sounded good, it sounded really good, at least at the time, which was ages ago.


----------



## Jish (Jul 20, 2021)

Nick Batzdorf said:


> This is a description of Synful.


One of the things I recall being mentioned about that particular release (which feels only roughly 400 years ago now) on an old forum was that most of the users who purchased it at the time weren't able to get it sounding anywhere near as good as some of the demos, a number of which actually hold up fairly well today (Leandro did a few classical mockups as well using the software that are still on the demo page).

Ditto on it having a more 'alive' presence on some of the instruments than anything out there at the time. I never purchased, out of the typical old fears/hesitations, but yes, it did _appear_ to show promise in its own way.


----------



## Nick Batzdorf (Jul 20, 2021)

Eric Lindermann gave very good demos of Synful. He'd have pedals and everything going, and he's a good keyboard player.


----------



## Jish (Jul 20, 2021)

Nick Batzdorf said:


> Eric Lindermann gave very good demos of Synful. He'd have pedals and everything going, and he's a good keyboard player.


It's a very interesting thing, because I remember at the time several of his on-site mockups (such as the _Tristan_ excerpt) really sounded almost groundbreaking; I didn't understand the math/process of how it all worked then, but it really seemed like a meaningful step in the right direction. A healthy number of demos from people at the time made it appear to have a promising future to be developed upon. I definitely agree with the mindset that unfinished business resides in the general concept and that there is a lot of further potential there.


----------



## Nick Batzdorf (Jul 20, 2021)

Things seem to be moving in two main directions:

1. More of the process being done quasi-automatically, for example EastWest's Hollywood Orchestrator. Synful is in that category, since it attempts to fill in the expression you have in mind.

2. More expressive instruments that you play rather than program. Samplemodeling is probably the best example at this stage.

#1 is farther along, and you'd have to assume there's more of a market for it. I personally am more interested in #2.

Obviously, most libraries and instruments have some of each bent, including Hollywood Orchestrator.


----------



## bill5 (Jul 20, 2021)

robgb said:


> Lay people don't even know what an oboe is.


I assume that's a joke/exaggeration...I think if you polled people on the street, most people would know.


----------



## Saxer (Jul 20, 2021)

bill5 said:


> I assume that's a joke/exaggeration...I think if you polled people on the street, most people would know.


Nope. Even a lot of musicians don't know.


----------



## bill5 (Jul 20, 2021)

That's scary. I find that hard to believe.


----------



## Trash Panda (Jul 20, 2021)

bill5 said:


> That's scary. I find that hard to believe.


Don’t ever show random people/coworkers a map of the Caribbean without names of the countries and ask them which island is Costa Rica. I mean, unless you’re good with losing faith in humanity.


----------



## PaulieDC (Jul 20, 2021)

Saxer said:


> Nope. Even a lot musicians don't know.


I mentioned a contrabassoon in front of a classically trained pianist, a college grad, and she had no idea what I was talking about. I was a bit surprised, but we should be careful not to live only in our bubble.

Nah... I like the bubble.


----------



## PaulieDC (Jul 20, 2021)

SteveC said:


> Please show me the "recordings" of a sampled orchestra that sounds like one. I didn't find any!


Like I said, even if a recording were magically exactly the same as instruments in the room, there's no playback system known to man that can reproduce reality at that level. You're still bound by playback ability in 2021. So given that, I'd say all of our recorded samples in 2021 are stellar, considering we listen to them with our current playback technology. I need to focus on how to better play the samples in, to gain realism. Never was anything so fun and frustrating to learn, lol.


----------



## bill5 (Jul 20, 2021)

Trash Panda said:


> Don’t ever show random people/coworkers a map of the Caribbean without names of the countries and ask them which island is Costa Rica. I mean, unless you’re good with losing faith in humanity.


Certainly no need to school me on the general stupidity of the general public. But c'mon... an oboe?


----------



## Zedcars (Jul 20, 2021)

bill5 said:


> Certainly no need to school me on the general stupidity of the general public. But c'mon... an oboe?


----------



## Zedcars (Jul 20, 2021)

I present to you the 106 year old self-playing violin. Imagine this coupled with AI to impart a bit of emotion. The performance starts at 5:23:


----------



## robgb (Jul 21, 2021)

bill5 said:


> I assume that's a joke/exaggeration...I think if you polled people on the street, most people would know.


I suspect you're wrong about that.


----------



## cygnusdei (Jul 21, 2021)

Funny, I don't hear artists wishing for self-painting brushes.


----------



## Zedcars (Jul 21, 2021)

cygnusdei said:


> Funny, I don't hear artists wishing for self-painting brushes.


No violinist would wish for a self-playing violin either. In fact, it seems reasonable to suspect that many musicians object to VIs being used at all, for fear that they are slowly (or maybe not so slowly) nudging them out of work.

If any market exists for automated acoustic instruments, it will be the curiosity crowd, who will look upon them the way circus freaks were viewed many decades ago. If they improve to less clunky, more expressive and sophisticated playing, then some composers will show an interest. This is just the primitive mechanical version of the computer-assisted compositional tools that are so prevalent these days. Love them or loathe them, automation/AI/robots are supplanting jobs across the world. The creative arts may be the last field to be fully affected by this change, but I don't think they will escape it.


----------



## Vik (Jul 21, 2021)

Main, old wish: I wish VI makers would plan fewer products, but products that are designed to keep being developed (and bought) over a period of several years (fixing issues, adding features, articulations, add-on products, etc.). IMO it would be brilliant if all the major VI makers planned something with all the following features:

- More dynamic layers. Too many libraries require or encourage the use of CC11 in addition to dynamic crossfading, with a very MIDI-sounding result. Three dynamic layers is certainly too little; 4 may work in some cases, but 5 and above (sometimes way above) is a lot better.
- At least 4 or 5 attack types for strings.
- Three levels of vibrato is never enough. 4-5 should be considered a minimum.
- Endings! Different levels of release velocity should trigger endings of various lengths.
- Modularity: If I have a go-to library I really like but in some cases need e.g. larger/smaller sections, I don't want to buy another library; I want add-on options for the library I have.
- 'Artificial' crossfade options (!), meaning that if I have a library with great con sordinos, flautandos, and sul tastos, I want presets where I can, for special situations (and there are relatively many of them), use a controller to immediately find the sweet-spot combination of two or three of these. This can be done in various ways, but not in all libraries, and the UI is too non-obvious and cumbersome.
- UIs that are scalable. Some of the newer sample players seem to require a lot of screen real estate, which in a lot of cases I don't want to look at, especially when half of it is empty space. I'm getting my Zen moments from other sources anyway.
- Excellent rebowing.
- Features! It's frustrating that some of the most inspiring libraries lack many of the features that good libraries IMO should have, for instance some of the stuff we've seen in Audiobro Modern Scoring Strings and OT Berlin Strings.
- Finally: All libraries should IMO be buyable in a modular way, meaning one could start by buying one section at a time. I'm looking forward to the day Paul Thomson says he is very excited to announce that all their libraries have been modularized.
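The dynamic-crossfading point above can be made concrete. A minimal sketch, assuming nothing about any particular sample player: map a MIDI CC value (e.g. the mod wheel) across N recorded dynamic layers, blending the two adjacent layers with an equal-power curve so the overall loudness stays steady through the transition. The function name and the choice of curve are my own illustration, not any developer's actual implementation.

```python
import math

def layer_gains(cc_value, num_layers):
    """Map a MIDI CC value (0-127) to per-layer gains for crossfading
    between adjacent sampled dynamic layers.

    Uses an equal-power (sin/cos) curve so perceived loudness stays
    roughly constant as one layer fades out and the next fades in.
    """
    if not 0 <= cc_value <= 127:
        raise ValueError("CC values are 0-127")
    if num_layers < 2:
        return [1.0] * num_layers

    # Position of the CC value across the layer span, e.g. 1.5 means
    # halfway between layer 1 and layer 2.
    pos = cc_value / 127 * (num_layers - 1)
    lower = min(int(pos), num_layers - 2)
    frac = pos - lower

    gains = [0.0] * num_layers
    gains[lower] = math.cos(frac * math.pi / 2)      # layer fading out
    gains[lower + 1] = math.sin(frac * math.pi / 2)  # layer fading in
    return gains
```

With only three layers, the whole 0-127 range has to stretch across just two crossfade regions, which is part of why few layers sound "very MIDI": each region spans a huge dynamic distance that the crossfade alone has to fake.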


----------



## cygnusdei (Jul 21, 2021)

Zedcars said:


> No violinist would wish for a self-playing violin either. In fact, it seems reasonable to suspect that many musicians object to VIs being used at all, for fear that they are slowly (or maybe not so slowly) nudging them out of work.
> 
> If any market exists for automated acoustic instruments, it will be the curiosity crowd, who will look upon them the way circus freaks were viewed many decades ago. If they improve to less clunky, more expressive and sophisticated playing, then some composers will show an interest. This is just the primitive mechanical version of the computer-assisted compositional tools that are so prevalent these days. Love them or loathe them, automation/AI/robots are supplanting jobs across the world. The creative arts may be the last field to be fully affected by this change, but I don't think they will escape it.


Haha, curious thing, but I swear my post wasn't a response to yours (I had not even seen your post). It's more of a comment on art vs. commodity. Sure, with automation you could churn out handbags by the hundreds and slap an Hermes label on them. Same thing with a Patek Philippe, or a Matisse... well, you get the idea. Automation brings productivity, but art implies something one of a kind into which the artist has poured blood, sweat, and tears. And is not virtual music art?


----------



## Saxer (Jul 21, 2021)




----------



## cygnusdei (Jul 21, 2021)

Saxer said:


>


We have ways of making you talk


----------



## Zedcars (Jul 21, 2021)

Saxer said:


>


*“Please put down your oboe. You have 20 seconds to comply!”*


----------



## jbuhler (Jul 21, 2021)

Zedcars said:


> I present to you the 106 year old self-playing violin. Imagine this coupled with AI to impart a bit of emotion. The performance starts at 5:23:



Lots of self-playing instruments were made during this period (the 1910s) for use in early cinemas. They could be operated either in self-playing mode, where piano rolls played the role MIDI does today (MIDI is descended from these piano rolls), or by players who engaged various instruments like stops on an organ. The more advanced theater organs of the 1920s also incorporated these kinds of instruments (especially percussion) along with the usual sets of organ pipes.


----------



## ism (Jul 21, 2021)

robgb said:


> Absolutely. But I'm talking to a forum of sample library fanatics. Lay people don't even know what an oboe is.


But they would know what an oboe feels like, from all those moments when an oboe does something uniquely oboe-esque that no flute or violin or trombone could ever do.

In this sense it's completely irrelevant that someone doesn't know what an oboe *is*, because they know what it *does*.


----------



## robgb (Jul 21, 2021)

ism said:


> In this sense it's completely irrelevant that someone doesn't know what an oboe *is*, because they know what it *does*.


I think you overestimate the abilities of the lay public and what they might "feel."


----------



## ism (Jul 21, 2021)

robgb said:


> I think you overestimate the abilities of the lay public and what they might "feel."


I think there's research suggesting that deeply engaging with music is quite an innate capacity of human perception, rather than one of education. So I'm not sure.


----------



## jbuhler (Jul 21, 2021)

ism said:


> But they would know what an oboe feels like, from all those moments when an oboe does something uniquely oboe-esque that no flute or violin or trombone could ever do.
> 
> In this sense it's completely irrelevant that someone doesn't know what an oboe *is*, because they know what it *does*.


I would also say that people generally recognize the sound of an oboe, at least in part because they recognize this feel, though they may lack an articulate concept of “oboe” to describe it. There is of course the characteristic sound of the oboe, but there are also points at which it overlaps somewhat with other sounds (especially the English horn, but also other instruments, woodwinds in particular), so separating out “oboe” is not always a straightforward exercise. There are also doublings where knowledge of orchestration can help one recognize the presence of an oboe even when its characteristic timbre is obscured. Recognizing the oboe in these contexts can be analytically useful if you are seeking to replicate the sound, and doublings have their own characteristic “feels,” but analyzing a doubling into named constituent parts is not necessary to understand what the doubling is doing musically.

It is interesting to see, in teaching music appreciation, how quickly non-music students pick up on orchestration, as though they suddenly have words for already-formed concepts. Something similar happens in teaching media music. Students have a strong intuitive grasp of the concepts and merely lack a terminology… of course, with respect to orchestration, it’s not clear that the terminology of instrumentation is always the right terminology for expressing what one wants to say, and this is especially the case for non-composers, who as @ism suggests are generally much more concerned with how the music feels and what it is doing than with what it is or how it was made.


----------



## robgb (Jul 21, 2021)

ism said:


> I think that there's research that suggests that deeply engaging with music is quite an innate capacity of human perception, rather than more that one of education. So I'm not sure.


I have no doubt that people react to music on a primal level, but I doubt they instinctively know what they're listening to or ever even give it much thought. My point, of course, was that while composers/sample library users can often tell the difference between real and Memorex, the lay public almost always cannot.


----------



## Noeticus (Jul 21, 2021)

Sorry to state the perhaps obvious, but...

One of the reasons VI library producers don't provide EVERYTHING we want is that if they did, they wouldn't be able to sell us their next library, as we would already have EVERYTHING we need.


----------



## ism (Jul 21, 2021)

robgb said:


> I have no doubt that people react to music on a primal level, but I doubt they instinctively know what they're listening to or ever even give it much thought. My point, of course, was that while composers/sample library users can often tell the difference between real and Memorex, the lay public almost always cannot.


And the point I'm making (sorry if I seem to repeat this a lot) is that even if people don't have the formal syntax or language to identify an oboe, or the cognitive/syntactical analysis to say "this is a real vs. a sampled oboe," there's still an experience of sound at the emotional/semantic level: people will experience a bad mockup as emotionally fake, even if, at the syntactical level, they very often can't say "this isn't a *real* oboe."

Of course, on another level, I'm agreeing with your maxim that it's about sounding "good" more than about sounding "real", so long as we understand "good" in the emotional/semantic realm and "real" in the cognitive/syntactic realm.


----------



## robgb (Jul 21, 2021)

ism said:


> And the point I'm making (sorry if I seem to repeat this a lot) is that even if people don't have the formal syntax or language to identify an oboe, or the cognitive/syntactical analysis to say "this is a real vs. a sampled oboe," there's still an experience of sound at the emotional/semantic level: people will experience a bad mockup as emotionally fake, even if, at the syntactical level, they very often can't say "this isn't a *real* oboe."
> 
> Of course, on another level, I'm agreeing with your maxim that it's about sounding "good" more than about sounding "real", so long as we understand "good" in the emotional/semantic realm and "real" in the cognitive/syntactic realm.


We could take it to another level -- that of movie music -- and say that most people absolutely do not know the difference between real and sampled, since the music in movies is often experienced on a more subliminal level, not to mention that it's also often a combination of real and sampled. 

People watching the CW super hero shows have no idea that the music they're hearing was all done in the box, and don't really care.


----------



## ism (Jul 21, 2021)

robgb said:


> We could take it to another level -- that of movie music -- and say that most people absolutely do not know the difference between real and sampled, since the music in movies is often experienced on a more subliminal level, not to mention that it's also often a combination of real and sampled.
> 
> People watching the CW super hero shows have no idea that the music they're hearing was all done in the box, and don't really care.


I'd broadly agree with this, except that I'd include what goes on at a subliminal level (or unconscious, or pre-linguistic; there are lots of possible words for it) as a mode of knowing, and perhaps the most important of the multivalent modes of knowing that we bring to music.

I listened to a StaffPad demo of Gabriel's Oboe recently. And it was really good; few casual listeners would have known, cognitively, that it was a sampled oboe, because it was perfectly "realistic" in at least a technical sense.

But compare it to the performance of a real, professional musician: that's operating on a completely different level of human emotion and expressiveness. I'm not sure if this demo represents the best performance that can be drawn out of StaffPad, but let's say that it does. A real performance, and even, to a significant extent, the kind of performance I can draw out of my best Kontakt oboe samples (SF & OT soloists), simply conveys that particular solo oboe part from that particular story on a different level.

Who cares if someone knows the technical specifications of what an oboe is or not? I didn't know what a flute was when I heard the Princess theme on my Star Wars storybook record ("you'll know it's time to turn the page when R2D2 beeps like this...") when I was 7. But I felt it in a way that resonates to this day.

I'd also add that superhero movies are themselves technical artifacts. People go (presumably) at least in part to see the CGI, and not even necessarily to be "fooled" by the CGI. So slightly artificial scores are, at least arguably, part of the technical spectacle, in the same way that the synths in a Batman score are part of that story's ethos of industrial decay.


----------



## jbuhler (Jul 21, 2021)

Noeticus said:


> Sorry to state the perhaps obvious, but...
> 
> One of the reasons VI Library producers don't provide EVERYTHING we want is because if they did they wouldn't be able to sell us their next library as we would already have EVERYTHING we need.


Some of us recognize that even though N provides everything we can imagine, we still continue to buy.


----------



## robgb (Jul 21, 2021)

ism said:


> I'd also add that superhero movies are themselves technical artifacts. People go (presumably) at least in part to see the CGI, and not even necessarily to be "fooled" by the CGI. So slightly artificial scores are, at least arguably, part of the technical spectacle, in the same way that the synths in a Batman score are part of that story's ethos of industrial decay.


I'm not sure even some people with musical knowledge can hear the "artificiality" of a score. I heard a podcast interview with Blake Neely, and the interviewer (another composer) asked him how much of his CW work was "in the box" and how much was live. It was an honest question he didn't know the answer to. So sometimes even "experts" can be fooled when it's merely underscore. While I appreciate the sentiment that people in general can sense the emotional reality of music recorded with live players and, as such, respond to it differently, I'm not sure it's strictly true. Some people, sure. Most people? Doubtful.

It's a nice thing to believe people have that ability, but I think it's more of a wish than a reality. And, honestly, it really comes down to the music itself. You could play a Beatles tune on a kazoo and feel some emotion from it. Okay, I'm exaggerating, but you get the point.


----------



## Casiquire (Jul 21, 2021)

I'm amazed to see so many comments that samples just can't come close. Is it not common knowledge how much of what we hear in film scores is actually samples? Have we forgotten some of those legendary mockups that come up from time to time? I'm looking at 2 Steps from Hell, Blakus; I could go on and on. Samples can not only come very close, they've been at that level for at least a decade. I'm with robgb here. And on the oboe debate, yeah, most average people really don't know what one is.


----------



## robgb (Jul 21, 2021)

Casiquire said:


> I'm amazed to see so many comments that samples just can't come close. Is it not common knowledge how much of what we hear in film scores is actually samples? Have we forgotten some of those legendary mockups that come up from time to time? I'm looking at 2 Steps from Hell, Blakus; I could go on and on. Samples can not only come very close, they've been at that level for at least a decade. I'm with robgb here. And on the oboe debate, yeah, most average people really don't know what one is.


Well, part of my point is that it shouldn't matter whether it comes close or not. It is what it is. And as long as it sounds good, who cares whether it's a perfect representation of an instrument? That said, I admit I can really only tell the difference if the mockup is really bad, or if I hear the same tune side by side, real vs. fake.


----------



## Wunderhorn (Jul 21, 2021)

I might add that I would like it if developers always included a standard set of expression maps/articulation sets for the most popular DAWs. I was very thankful to @AudioBro for including that in MSS.


----------



## Noeticus (Jul 21, 2021)

Next up.... heckling people about hecklephones.


----------



## Mike Fox (Jul 21, 2021)

What’s missing? Buyer protection.


----------



## Noeticus (Jul 21, 2021)

Also, why, oh why, would a real, actual, empirical recording of a musical instrument sound like an actual, real, empirical recording of a musical instrument?

Years ago I played a recording of a VSL piece I did to a "person", and they said that they were blown away by how real it sounded, and that they had no idea that computers could sound so real.

I then explained that it was made from real recordings, and not a synthesizer or the like.

No wonder it sounds so good!!!


----------



## Zedcars (Jul 21, 2021)

Unless you are heavily interested in film soundtracks, and/or music technology, I’d wager that the general public do not know this whole world of mock-ups and samples exists. I doubt they’d even care if you tried to explain it.

One thing I have noticed, which may or may not have been noticed by the general public, is how much the quality of music on TV shows has improved over the last few decades. I don't mean the quality of the compositions, but the timbre and fidelity of the sounds used. I'm sure a lay person would have noticed this, even subconsciously. They may not be fully aware of it in the way we are, or have any idea why it has happened. But it must be apparent, because so often a cinematic chase-scene-style cue (that sounds realistic) is added to the most mundane cookery show, or DIY show, or talent show, or reality TV show. The orchestral (or other) sounds used are simply better quality than those used maybe 15 years ago. Go back beyond VSL and Siedlaczek and you'd rarely hear any original orchestral music on TV (excluding music concerts, etc.) unless it was a show with a budget big enough to afford a small orchestra.


----------



## youngpokie (Jul 22, 2021)

Casiquire said:


> And on the oboe debate, yeah, most average people really don't know what one is.


This is true, but is it a genuine argument? I sometimes wonder....

Why would an "average" person know anything at all about something that requires a skill (s)he doesn't have? Why would anyone even expect them to? This makes no sense.

To know, let alone understand, something means making an effort, taking an extra step, to learn it. But for most "average" people it's perfectly sufficient to enjoy music on an emotional level, not an intellectual one. They don't need to know about an oboe to enjoy it. It's only people who are "passionate" and curious about music who take that extra step.

With this oboe example @robgb takes a subject that requires specialized knowledge and uses it as a strawman to defend being a dilettante.

Imagine what every aspect of life (not to mention - all art!) would be like if most humans adopted this "why even bother" attitude...


----------



## Casiquire (Jul 22, 2021)

youngpokie said:


> This is true, but is it a genuine argument? I sometimes wonder....
> 
> Why would an "average" person know anything at all about something that requires a skill (s)he doesn't have? Why would anyone even expect them to? This makes no sense.
> 
> ...


I'm not sure it's being used as an example of why we don't even have to try; for me, I use it to put my concerns into perspective. Only two or three libraries out there let us choose finger positions for strings, for example. If I'm struggling to get a realistic sound because I keep hearing a library use a string I don't think an actual player would use, I can remember that the average person doesn't even know what an oboe is and that I'm worried about details that don't actually meaningfully impact the music. Does that make sense? It's good to know your audience.


----------



## mybadmemory (Jul 22, 2021)

I can love or hate music that is performed by a real orchestra. I can love or hate music that is performed by a modern sample library. And I can love or hate music that is performed by orchestral samples on a Super Nintendo.

Whether I actually enjoy it or not has zero to do with what is performing it - real, fake, or so fake it's not even trying.

As a listener it's never about the technology and always about the music. Always. Trying to get samples to sound real, I'd say, is more of a strange obsession on the creator's side, which can of course be fun in its own right.

I just don’t think that it somehow makes the music better or worse. Or that the listeners actually take notice or care.


----------



## youngpokie (Jul 22, 2021)

Casiquire said:


> I'm not sure it's being used as an example of why we don't even have to try; for me, I use it to put my concerns into perspective. Only two or three libraries out there let us choose finger positions for strings, for example. If I'm struggling to get a realistic sound because I keep hearing a library use a string I don't think an actual player would use, I can remember that the average person doesn't even know what an oboe is and that I'm worried about details that don't actually meaningfully impact the music. Does that make sense? It's good to know your audience.


Yes, but you're actually making a different point. You're arguing that if the technology isn't currently available to let you pick a specific violin string, you'll struggle to compensate. I agree.

The argument I'm responding to is "_Just know that you will never achieve it with sample libraries_". If you take that one to its logical conclusion it would indeed lead to defense of being a dilettante. Just like "Cheating in Music" thread that takes the Beatles' experimentation in sound design and turns it into the defense of "cheating" and covering up of bad playing skills (as if they couldn't have used a session musician). They are two sides of the same coin.

An average person doesn't know what an oboe is (and doesn't care) - yes. All (s)he wants is to get an emotional experience. _This person is not listening for realism, compositional technique, instrumentation, etc_. They just want the pleasure that comes from listening to music and that's how they judge it.

But the composer has a completely different frame of reference - (a) mastery of knowledge that came before (to gain recognition of your peers) and (b) the use of that knowledge to create emotional experience for the average listener that's powerful enough for them to want to know your name.

"You'll never achieve it" is a motto of dilettantes everywhere. It shifts the focus from a personal attitude (I just don't want to learn) to general defeatism and negativity (it can't be done).


----------



## Casiquire (Jul 22, 2021)

youngpokie said:


> Yes, but you're actually making a different point. You're arguing that if the technology isn't currently available to let you pick a specific violin string, you'll struggle to compensate. I agree.
> 
> The argument I'm responding to is "_Just know that you will never achieve it with sample libraries_". If you take that one to its logical conclusion it would indeed lead to defense of being a dilettante. Just like "Cheating in Music" thread that takes the Beatles' experimentation in sound design and turns it into the defense of "cheating" and covering up of bad playing skills (as if they couldn't have used a session musician). They are two sides of the same coin.
> 
> ...


Ah i think we totally agree then. I don't even believe in the "you'll never achieve it" argument.


----------



## stephen22 (Jul 26, 2021)

Interesting to hear you guys chat about your various problems.
One thing hasn't come through: some instruments are easier to emulate than others.

Very easy: Harp, percussion, piano, harpsichord, plucked instruments in general.
Easy: woodwind, especially bassoon; string sections; most brass (but see below)
Problematic: Brass crescendo (this can be achieved to a certain extent with Kontakt's AET)
Difficult: solo strings ensemble, human voice
Don't even try: exposed virtuoso solo strings

With all instruments, short articulations are much easier, which is why the Beethoven quartet sounds so good.

Many of the problems mentioned - attack, dynamics, microtuning - can be almost completely overcome by using a wind controller for everything that isn't percussive - even strings, especially strings. Blowing can control not only volume, but also EQ or cutoff, making louder sounds brighter. Lip sensors give you control over microtuning via pitchbend. All this with immediate feedback - you actually play the parts as you want them to sound, giving them life. By all means tweak them afterwards, but don't overdo it or they will lose their spontaneity. My lovely WX5, which is a joy to see and hold, is getting a bit long in the tooth and is out of production, but there are others on the market. It took me a couple of years to learn to play it, but my goodness how it transformed my music.
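As a rough sketch of the one-gesture-many-parameters idea (in Python; the dB range, the quadratic curve, and the cutoff values are invented placeholders - every patch maps these differently):

```python
# Hypothetical mapping: a single breath value (MIDI CC2, 0-127) drives both
# loudness and brightness at once, the way a wind controller performance does.
# The gain range and filter curve below are invented for illustration only.

def breath_to_controls(breath: int) -> dict:
    """Map a CC2 breath value to a gain (dB) and a low-pass cutoff (Hz)."""
    b = max(0, min(127, breath)) / 127.0           # normalise to 0.0..1.0
    gain_db = -40.0 + 40.0 * b                     # near-silent at no breath, 0 dB at full
    cutoff_hz = 500.0 + (8000.0 - 500.0) * b ** 2  # harder blowing also gets brighter
    return {"gain_db": round(gain_db, 1), "cutoff_hz": round(cutoff_hz)}
```

The point is simply that one physical gesture moves several parameters together, which is what makes the result feel played rather than drawn in afterwards.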

I don't understand these problems with tempo sync: I just play the parts, layering them on each other (I press a button to quieten the tracks already recorded so I can hear what I'm playing). If a previously recorded part doesn't fit in well, I blow it again.

Quality of samples is paramount. Modern libraries may be better in some respects than the good old EWQLSO, which someone felt was outdated, but if you want to hear what it can achieve, have a look at this, which I recorded a few years ago


----------



## PeterN (Aug 9, 2021)

How about this one. Did anyone already say it?

*Readily balanced orchestra*, so the flute is in "correct" balance to, say, the cello, and maybe even readily panned. Same with the horn etc. This is all based on, say, some position in the audience - perfected by the developer. In other words, you don't need to make a template of the orchestra library; it's already "templated". Maybe even ready reverbed.

You can adjust it to preferences later, of course, but this would not be the DIY template orchestra library, but readily adjusted, out of the box.


----------



## mybadmemory (Aug 9, 2021)

PeterN said:


> How about this one. Did anyone already say it?
> 
> *Readily balanced orchestra*, so the flute is in "correct" balance to, say, the cello, and maybe even readily panned. Same with the horn etc. This is all based on, say, some position in the audience - perfected by the developer. In other words, you don't need to make a template of the orchestra library; it's already "templated". Maybe even ready reverbed.
> 
> You can adjust it to preferences later, of course, but this would not be the DIY template orchestra library, but readily adjusted, out of the box.


Aren’t most modern orchestral packages already like that? Or at least intending to be.


----------



## doctoremmet (Aug 9, 2021)

mybadmemory said:


> Aren’t most modern orchestral packages already like that? Or at least intending to be.


Partly - yes. A lot of orchestral libraries are recorded in situ, so placement isn’t much of a worry. Some vendors also actually deliver templates.


----------



## PeterN (Aug 9, 2021)

mybadmemory said:


> Aren’t most modern orchestral packages already like that? Or at least intending to be.



I opened a thread some months ago, asked about it, and got replies that the flute is not readily adjusted to the brass etc. *If there is one, which one is it?*


----------



## PeterN (Aug 9, 2021)

doctoremmet said:


> Partly - yes. A lot of orchestral libraries are recorded in situ, so placement isn’t much of a worry. Some vendors also actually deliver templates.


Talking about the whole picture, including velocity.

Who has the template and all ready? Names please.


----------



## doctoremmet (Aug 9, 2021)

Xsample have expression maps for Cubase


----------



## PeterN (Aug 9, 2021)

doctoremmet said:


> Xsample have expression maps for Cubase



Thanks for the reply. Never heard of it, and I've been hanging around here for 4 years (2 of them banned - global politics).

Back in a minute.



Edit. Back. Thanks, so one developer delivers this with a template (for Cubase). Just skimmed through his website; the link to it looked so shady at first, I was not sure I should open it (joking). Anyway, that's great if he delivers the whole package? Didn't see the template though, there was only a vid on the download page. For Cubase users.

So we have *one candidate*. *Xsample* can (presumably) deliver this full package for Cubase users.

Not saying this stuff is desperately needed and essential - it's just that still no developer has really delivered this? For all users, readily baked in?


----------



## doctoremmet (Aug 9, 2021)

I’m not really sure what you mean. I’d argue most orchestral libraries are recorded in situ. So placements shouldn’t necessarily be a huge issue. 

Also, MOST on here actually use a mix-and-match approach, using various vendors’ samples. Then placement and template-making becomes a thing. Luckily third parties offer ready made templates, spatialization tools such as MIR Pro, etc.

Btw, Xsample are a one man outfit who release top tier samples. You wanted an example of a vendor offering expression maps, and I gave one  - I highly respect this vendor AND I am trying to positively contribute to this thread - so it’s a bit disappointing to read you found the link “shady”. Anyway.. I guess you are not convinced by my example so I’ll leave it at that.


----------



## mybadmemory (Aug 9, 2021)

PeterN said:


> I opened a thread some months ago, asked about it, and got replies that the flute is not readily adjusted to the brass etc. *If there is one, which one is it?*


Not sure what you mean exactly. Most orchestral libraries today are recorded in situ, meaning with the placement and balance as it actually is in the room. The mics aren't panned or normalised in relation to each other after the fact, and if/when they are, it's only ever to provide a ready-made mix, which seems to be what you're asking for and what many libraries also do.

The only time balancing is really a thing is with libraries that are recorded centered or bone-dry, which isn't that many these days. Or when using many different libraries from different developers together. At that point you obviously have to start placing instruments and matching rooms through your own mixing.


----------



## PeterN (Aug 9, 2021)

doctoremmet said:


> I’m not really sure what you mean. I’d argue most orchestral libraries are recorded in situ. So placements shouldn’t necessarily be a huge issue.
> 
> Also, MOST on here actually use a mix-and-match approach, using various vendors’ samples. Then placement and template-making becomes a thing. Luckily third parties offer ready made templates, spatialization tools such as MIR Pro, etc.
> 
> Btw, Xsample are a one man outfit who release top tier samples. You wanted an example of a vendor offering expression maps, and I gave one  - I highly respect this vendor AND I am trying to positively contribute to this thread - so it’s a bit disappointing to read you found the link “shady”. Anyway.. I guess you are not convinced by my example so I’ll leave it at that.



Like I said, one orchestral library that has all velocities, positions, even reverb there, ready. So no third-party templates need to be downloaded. ALL. So you don't need to adjust the flute to the trumpet - not position, not reverb or velocity. You can, of course, but it would be "perfectly readied" - symphonic hall, straight out of the box, style.

Hey, your example was appreciated. You got a like for it. But it did not really fit the criteria.


----------



## doctoremmet (Aug 9, 2021)

Ok. Why wouldn’t you say Spitfire Audio’s BBCSO Core or Pro literally ticks ALL of those boxes?


----------



## PeterN (Aug 9, 2021)

mybadmemory said:


> Not sure what you mean exactly. Most orchestral libraries today are recorded in situ, meaning with the placement and balance as it actually is in the room. The mics aren't panned or normalised in relation to each other after the fact, and if/when they are, it's only ever to provide a ready-made mix, which seems to be what you're asking for and what many libraries also do.
> 
> The only time balancing is really a thing is with libraries that are recorded centered or bone-dry, which isn't that many these days. Or when using many different libraries from different developers together. At that point you obviously have to start placing instruments and matching rooms through your own mixing.



So if you take a legato flute from some of these libraries you refer to, is it in "perfect" relation to the trumpet? Both when it comes to velocity, placement and, say, reverb (I get it, reverb complicates it, but as a generalisation). I might be wrong, I'm ready to accept that. I got a reply a few months ago that we need to adjust all velocities ourselves.


----------



## PeterN (Aug 9, 2021)

doctoremmet said:


> Ok. Why wouldn’t you say Spitfire Audio’s BBCSO Core or Pro literally ticks ALL of those boxes?



Does it? Is the flute's relation to the trumpet's velocity fixed? If Spitfire did that, it's great. Are you sure?


----------



## doctoremmet (Aug 9, 2021)

Ah okay, I get it. That’s what you want? Fixed relative velocity settings? Why on earth would one want THAT? Velocity is unlikely to control the same parameter on every instrument. Sometimes it controls legato or portamento speed - sometimes it controls expression or volume. For shorts, velocity typically controls something different than for longs, etc. Also, the whole idea is to control the relative volumes / gains / expression of each instrument yourself. You know… as a “real” player would. 

But I guess I may now completely have lost what it is you’re striving for…


----------



## doctoremmet (Aug 9, 2021)

PeterN said:


> Does it? Is the flute's relation to the trumpet's velocity fixed? If Spitfire did that, it's great. Are you sure?


Placement ✅
Reverb ✅
Velocity relatively fixed ❓


----------



## PeterN (Aug 9, 2021)

doctoremmet said:


> Ah okay, I get it. That’s what you want? Fixed relative velocity settings? Why on earth would one want THAT?


Yes!

Why I would like to have that is because it might be a good idea to set the instruments in relation to each other from the beginning. It takes a lot of time to perfect this - it's like a restaurant that doesn't give you all the ingredients for a dish, but you need to fry the onion in there yourself. Maybe even bring your own meat to put in the soup.

So: ready velocity relation fixed, reverb relation fixed, position relation fixed. ALL.

I don't understand why the MIDI CCs need to complicate this. You could have that Spitfire BBC ready from the beginning. From what I understand, they are almost there, only the velocity relations are lacking. You could have a button to choose this.


----------



## doctoremmet (Aug 9, 2021)

Instruments can play in a range from let’s say ppp to ffff. Do you mean that the various instruments should respond similarly to whatever MIDI controller (velocity, modwheel, fader, breath controller) controls that loudness / those dynamics? So they respond kind of similarly? Is that the idea you are trying to catch under the umbrella of “velocity”?

Because other than that, it will totally depend on the arrangement the composer makes (translated to recorded MIDI information in a DAW) whether my flute ensemble will play a mf while the trumpet plays a soaring fff solo line. You’d never want those relationships to be “fixed”. So what you propose should be “fixed relative to each other” is how a controller, fader, modwheel, or keyboard velocity actually triggers volume and expression transitions?

So if I copy MIDI data from a violin line that I like, and have it play with a trumpet sample from the same library, would it still sound half decent? That would be near impossible to pull off in practice, simply because the instruments are likely not to behave that way (“similarly”) in real life to begin with. They don't have identical ranges, there's totally different physics involved, bowed versus blown instruments, various dynamic responses, etc.


----------



## PeterN (Aug 9, 2021)

doctoremmet said:


> Instruments can play in a range from let’s say ppp to ffff. Do you mean that the various instruments should respond similarly to whatever MIDI controller (velocity, modwheel, fader, breath controller) controls that loudness / those dynamics? So they respond kind of similarly? Is that the idea you are trying to catch under the umbrella of “velocity”?
> 
> Because other than that, it will totally depend on the arrangement the composer makes (translated to recorded MIDI information in a DAW) whether my flute ensemble will play a mf while the trumpet plays a soaring fff solo line. You’d never want those relationships to be “fixed”. So what you propose should be “fixed relative to each other” is how a controller, fader, modwheel, or keyboard velocity actually triggers volume and expression transitions?
> 
> So if I copy MIDI data from a violin line that I like, and have it play with a trumpet sample from the same library, would it still sound half decent? That would be near impossible to pull off in practice, simply because the instruments are likely not to behave that way (“similarly”) in real life to begin with. They don't have identical ranges, there's totally different physics involved, bowed versus blown instruments, various dynamic responses, etc.



Will create a separate thread for this one day. I will leave it here, bcs this is starting to take too much focus in this thread. Thanks for engagement, doc.


----------



## mybadmemory (Aug 9, 2021)

I don't think I understand what you mean by velocity relation fixed. Velocity is just one way of controlling dynamics, usually for short notes. Another way is through modulation and/or expression for long notes. Both velocity and modulation/expression are part of the *performance of the piece* rather than the mixing of it, and therefore cannot really be fixed. They need to be controlled by whoever is performing or programming the piece, in the same way they control what notes to play and when. But in terms of mixing (respective volumes, panning, room coloration/reverb, etc.), most of today's complete orchestral libraries or series already do exactly this. Mic positions are recorded in situ, and many offer one or even more ready-made mixes of the mic positions as well.
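To make that split concrete, here's a purely illustrative Python sketch (the dynamic-to-value curve is invented; every library maps this differently): short articulations get their dynamic baked into note-on velocity once, while long notes follow a continuous controller for their whole duration.

```python
# Illustrative only: how a programmed dynamic marking typically reaches a
# sampler. Shorts read note-on velocity once; longs follow a continuous
# controller (CC1 mod wheel here) while the note sounds.

DYNAMICS = {"pp": 0.15, "p": 0.30, "mp": 0.45, "mf": 0.60, "f": 0.75, "ff": 0.90}

def dynamic_to_midi(marking: str, articulation: str) -> tuple:
    value = round(DYNAMICS[marking] * 127)
    if articulation == "short":
        return ("velocity", value)  # fixed per note at note-on
    return ("cc1", value)           # streamed continuously during the note
```

Which is exactly why this belongs to the performance: the values are chosen note by note and moment by moment by whoever programs the part, not fixed once per library.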


----------



## doctoremmet (Aug 9, 2021)

mybadmemory said:


> I don't think I understand what you mean by velocity relation fixed. Velocity is just one way of controlling dynamics, usually for short notes. Another way is through modulation and/or expression for long notes. Both velocity and modulation/expression are part of the *performance of the piece* rather than the mixing of it, and therefore cannot really be fixed. They need to be controlled by whoever is performing or programming the piece, in the same way they control what notes to play and when. But in terms of mixing (respective volumes, panning, room coloration/reverb, etc.), most of today's complete orchestral libraries or series already do exactly this. Mic positions are recorded in situ, and many offer one or even more ready-made mixes of the mic positions as well.


Yes thanks for wording my confusion


----------



## doctoremmet (Aug 9, 2021)

PeterN said:


> Will create a separate thread for this one day. I will leave it here, bcs this is starting to take too much focus in this thread. Thanks for engagement, doc.


Cool! My pleasure.


----------



## PeterN (Aug 9, 2021)

mybadmemory said:


> Both velocity and modulation/expression are part of the *performance of the piece* rather than the mixing of it, and therefore cannot really be fixed.


Let me give a reply here too. Apologies, if this sucks up the whole thread now, it was not the intention.

Well, is it? If you sit in row 4, seat 27, in a live setting, when the violin section plays, say, ff legato, you hear it at, say, XXY dB. Nice. Then you hear some flutes doing some mezzo-forte. Then the trombone shows his massive phallus, and the flutes cannot be heard anymore.

I mean, maybe my brain does not get the picture, and I could be wrong, but why could you not do this relation in velocities, expression and modulation too? Why not?

I will chew on this.

More than once on this forum, when you throw in something a bit outside the "consensus" here, it is initially rejected - all kinds of experts show up - until Daniel J or Philip or whoever comes in and says it makes sense, or even makes a product the suggested way.



That's why I'm suspicious of both you guys and myself (creative brain, but horrible at cognitive tasks)


Thanks for replies. I invite you guys one day to a separate thread on this.


----------



## doctoremmet (Aug 9, 2021)

I’ll wait for the big guns to tell you you’re right and then hopefully they’ll be able to explain the actual concept to me - lol. I don’t reject anything, but I just see you talking about velocity - which is merely one of many MIDI controls - and can’t help but wonder if you do not actually mean something completely different… so I can’t really have rejected anything yet since I don’t even understand what you mean 

Nuff said.


----------



## mybadmemory (Aug 9, 2021)

PeterN said:


> Let me give a reply here too. Apologies, if this sucks up the whole thread now, it was not the intention.
> 
> Well, is it? If you sit in row 4, seat 27, in a live setting, when the violin section plays, say, ff legato, you hear it at, say, XXY dB. Nice. Then you hear some flutes doing some mezzo-forte. Then the trombone shows his massive phallus, and the flutes cannot be heard anymore.
> 
> ...


I echo what Doc said: I'm not really protesting against anything, I simply don't understand what you mean. 

I rarely touch any volume, pan, or room depth position in any of the libraries I own and use, unless I use several different ones together. At that point I obviously have to try my best to match them to one another, but when I'm staying within one product or product line, I simply perform or program the different parts, with their respective velocities and modulation/expression being part of the creative performance, add a small amount of reverb, and I'm done. No additional balancing needed. The mockups linked in my signature, for example, are all made like that, using BBCSO Core's Mix 1.


----------



## wilifordmusic (Aug 9, 2021)

PeterN said:


> Let me give a reply here too. Apologies, if this sucks up the whole thread now, it was not the intention.
> 
> Well, is it? If you sit in row 4, seat 27, in a live setting, when the violin section plays, say, ff legato, you hear it at, say, XXY dB. Nice. Then you hear some flutes doing some mezzo-forte. Then the trombone shows his massive phallus, and the flutes cannot be heard anymore.
> 
> ...


Here's what I think you are asking about.

Yes, a trombone could drown out the woodwinds if he felt like it.

Developers could put all of the instruments in true perspective, volume and position.

But, the cries of anguish and horror from everyone that bought those libraries would be deafening.

By creating true positioning and volume relationships you make it extremely difficult to change things to suit your needs.
No flute solo over the top of the orchestra. In the real world we stick said flautist in a booth and then crank the gain in the mix. This option will be more difficult with a "realistic" flute volume. If you do crank the gain, the masses will be complaining about the noise in the samples.

Most developers try to give you options to create realistic volume relationships and still be able to create a non-realistic version as well.

And this is just my simplified take on what happens when decisions are made on such things by the sample developers.


----------



## muk (Aug 9, 2021)

doctoremmet said:


> I’ll wait for the big guns to tell you you’re right and then hopefully they’ll be able to explain the actual concept to me - lol. I don’t reject anything, but I just see you talking about velocity - which is merely one of many MIDI controls - and can’t help but wonder if you do not actually mean something completely different… so I can’t really have rejected anything yet since I don’t even understand what you mean


If I understand correctly, what he means is an orchestral library that is naturally balanced out of the box. So a violin 1 pizzicato has the right volume compared to a violin 1 staccato (i.e., the former is quieter than the latter). This is true at all volume levels. A forte pizzicato is quieter than a forte staccato. The same is true at piano. And so on. So all the instruments' articulations are volume-balanced against each other.

That's one prerequisite. The second one is that the instruments are also balanced naturally against each other. Meaning, at piano a trumpet is about as loud as a horn. But at forte, the trumpet is about as loud as two horns playing together. And in the highest register, a flute is piercingly loud at fortissimo, while in the lowest register it is much quieter.
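A naive sketch of what such baked-in balancing could look like under the hood (Python, with invented numbers - a real table would need entries per instrument, register, articulation, and dynamic):

```python
# Invented offsets, just to show the shape of the idea: a per-instrument,
# per-dynamic gain table that a "natural balance" switch could apply.

BALANCE_DB = {
    ("trumpet", "p"): 0.0,  ("trumpet", "f"): 6.0,       # trumpet gains more at forte
    ("horn", "p"): 0.0,     ("horn", "f"): 3.0,          # than a single horn does
    ("flute_low", "ff"): -6.0, ("flute_high", "ff"): 4.0,  # register matters too
}

def natural_gain(instrument: str, dynamic: str, enabled: bool = True) -> float:
    """dB offset to apply; 0.0 when the switch is off or no entry exists."""
    if not enabled:
        return 0.0
    return BALANCE_DB.get((instrument, dynamic), 0.0)
```

The `enabled` flag is the "button" idea: leave it off and you get today's behaviour, turn it on and the library imposes its own measured relationships.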

I don't know of any orchestral library that has that natural balance. Some developers do claim that they have but are actually far off (Orchestral Tools, for instance). VSL has a table that you can use as a starting point for their instruments: "Natural Volume Overview" in the VSL software manuals at www.vsl.info.

I don't know how accurate it is.

Spitfire's BBCSO is comparatively close in that regard, in my opinion. Yet you still have to balance instruments and articulations against each other. Just not as much as with other libraries.


----------



## PeterN (Aug 9, 2021)

wilifordmusic said:


> Here's what I think you are asking about.
> 
> Yes, a trombone could drown out the woodwinds if he felt like it.
> 
> ...



You got it. And you make good points too. Now we are talking on the *same frequency*. That's why I suggested a button. So there could be both options, one readily balanced, one not, so the cries will not be heard. As it is, we are basically forced to make the template ourselves. Well, ...you can't always get what...

Ooops.... I said Im leaving. Thanks for comment!


----------



## Casiquire (Aug 9, 2021)

PeterN said:


> You got it. And you make good points too. Now we are talking on the *same frequency*. That's why I suggested a button. So there could be both options, one readily balanced, one not, so the cries will not be heard. As it is, we are basically forced to make the template ourselves. Well, ...you can't always get what...
> 
> Ooops.... I said Im leaving. Thanks for comment!


Sounds like VSL is the one for you then, since they do offer a natural volume button (at least in MIR. I'm not sure if there's one on the interface.)

Orchestral Tools and BBCSO might be your next closest, though I don't own the latter, so don't quote me on that. But with OT I very rarely need to change the balance, with the exception of some very quiet percussion like celesta.

Another thing that's missing from the conversation is the fact that it's not quite so simple. Players are always balancing against one another. I can't tell you how many times in choir we'd have to deliver a "piano" over a massive pipe organ, so we'd project a soft tone loudly. Most instruments need to do a little bit of that from time to time, like brass for instance, delivering a brassy bold tone at a low volume. Dynamics are actually more about tone than volume much of the time.

Then there's the fact that Orchestral Tools got a ton of criticism for their Woodwinds Revive, which were balanced to the rest of the orchestra. People felt they were way too quiet, and if you wanted to use one in a non-orchestral track you'd need to turn the volume up so high, sometimes beyond the volume slider in the DAW. And yet the orchestra is so highly dynamic that even with the winds so quiet, the bass drum will still clip at high volumes, so in practice I need to turn the orchestra, including those super quiet winds, *down* even further if I want to record a loud part and maintain balance.

With all that out of the way though, I'm going to do a complete reversal and say that I also think more devs should do this. I think it's still logical and, just like with legatos that have a ton of delay for the sake of capturing expression, one or two devs have to try it and see what doesn't work before someone can come along and truly get it right. Orchestral Tools is close. I'd like to see more devs thinking along those lines.


----------



## wilifordmusic (Aug 9, 2021)

I think your button is possible.

Two sample sets that toggle from one to the other.

Two slight problems, double the disc space and probably double the cost.

More wails of anguish from the gallery.


----------



## Casiquire (Aug 9, 2021)

wilifordmusic said:


> I think your button is possible.
> 
> Two sample sets that toggle from one to the other.
> 
> ...


Unless the button is just scripted, which I think is entirely possible.


----------



## wilifordmusic (Aug 9, 2021)

Casiquire said:


> Unless the button is just scripted, which I think is entirely possible.


I'm sure this could also be a good solution.
My answer was based on using two sample sets that achieve the desired output volume without taking the room acoustic along for the ride.

These are just one of the many situations that make me happy I'm not a developer.


----------



## PeterN (Aug 10, 2021)

Now we've got it flowing the usual way - great.

First rejected and not understood, then understood and accepted, now we can leave a time mark here.

*10th of August 2021.*

Within 500 days, someone will make this type of orchestral library, and there will be a lot of praise for the original idea.

_(tongue in cheek)_


----------



## Tanuj Tiku (Aug 10, 2021)

Zedcars said:


> I heard Paul Thomson in one of his walkthrough videos (may have been ARO?) mention that when two acoustic instruments play together within a space the resulting sound is different compared to those same two instruments playing exactly the same music in the same way and then layered in a DAW. I think it has to do with the way the sound waves of each instrument resonate within and through the body of the other instrument which modifies the sound waves emanating from the other sound sources, and also modifies the room reflections. In other words, the sound waves interact with each other before they reach your ear. Of course, with most sample libraries the recordings are made on solo instruments, or instrument groups in isolation and therefore lack that natural room interaction. I'm not saying anything here you all don't already know. However, what I am wondering is if there is any technology in existence now, or perhaps being worked on, which would enable that natural sound interaction to be simulated within a computer. Or is it far too complex a problem?
> 
> If that problem could be solved I think the realism of these libraries would be greatly enhanced.
> 
> Is there anything else that you think is missing from sample libraries, or do you think we've already reached the pinnacle or what can be achieved (at least until acoustic modelling technology can mature enough to surpass sampling technology in terms of realism.)?


Personally, I don't think this is a major issue at all. In fact, it may not even appear on my priority list with regards to samples (assuming Paul meant how players react to each other when playing together). 

I think the biggest problem is not being able to create a connected musical phrase that is consistent over time. The second big problem is that of smooth and realistic dynamics. 

The third massive problem is the room build-up that inevitably happens when simulating the whole orchestra. It is a very big compromise in sound overall, and that alone could kill all other arguments, like two players playing together. You can manage it somewhat, but nothing can be done beyond a point. 

Without a way to really add musicality and performance elements, the sound and multiple mic positions are far less important. However, in the case of 'sound', it is assumed that the basic 'goodness' of sound associated with performance should be there. 

The recorded sound aesthetic is far less important in the grand scheme of things. 

It might be useful to think of it like having many of these things in a balance - not too wet, not too dry, not too aggressive etc. Again, not in terms of performance but the recorded sound. 

The enormous number of mic positions doesn't really make much sense with samples because there is no sensible way to manage all that and run it on even above-average systems. They are useful sometimes, but I would trade most of them in a heartbeat for some musicality. 

Having said that, I don't know whether this will ever be possible to do, and the state of mock-ups is at a decent enough level at the moment. It would be nice to have some way of speeding up the writing process in some cases. 

Recording real musicians is a far greater joy than using samples!


----------



## youngpokie (Aug 10, 2021)

Casiquire said:


> Sounds like VSL is the one for you then, since they do offer a natural volume button (at least in MIR. I'm not sure if there's one on the interface.)
> 
> OrchestralTools and BBCSO might be your next closest


OT Berlin Series offers a slider that scales the relative volume per instrument. This is what allows a highly realistic volume relationship between orchestral instruments if it is needed. It's better than a button because it covers more than one type of scenario.

For example, this slider allows you to closely reproduce the natural volume curve of a flute throughout its _ppp_ to _fff_ range - barely audible at the former and piercing through any tutti at the latter. Then, a two-flute ensemble can be matched (near perfectly!) to a single trumpet, also pre-scaled across its own range.

That's more or less how it's set up out of the box. The advantage of the OT approach, I think, is that it allows multiple types of realistic orchestra from a single product series: from a small one of the Beethoven era to the gigantic Mahler-type setup. Two flutes x one trumpet, balanced and seated à la Beethoven, and four flutes x three trumpets seated in a totally different way (and size!) à la Mahler. The cost of this flexibility is that the user must build a custom template.
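The per-instrument scaling described above can be sketched as a simple gain map, where each instrument's dynamics controller is rescaled into that instrument's natural loudness range. This is only an illustration of the idea, not OT's actual implementation; the dB ranges below are made-up placeholders:

```python
# Sketch: scale a 0-127 dynamics CC into an instrument's natural
# loudness range, so a flute at fff can still pierce a tutti while
# its ppp stays barely audible. All dB ranges are illustrative guesses.

def dynamics_to_gain_db(cc_value: int, min_db: float, max_db: float) -> float:
    """Map a 0-127 dynamics controller linearly to a gain in dB."""
    if not 0 <= cc_value <= 127:
        raise ValueError("CC value must be 0-127")
    return min_db + (cc_value / 127.0) * (max_db - min_db)

# Hypothetical natural ranges (placeholders, not library values):
FLUTE_RANGE = (-40.0, -6.0)    # very quiet floor, loud but capped ceiling
TRUMPET_RANGE = (-30.0, 0.0)   # starts louder, tops out louder still

flute_ppp = dynamics_to_gain_db(5, *FLUTE_RANGE)      # near -40 dB
trumpet_fff = dynamics_to_gain_db(127, *TRUMPET_RANGE)  # 0 dB
```

A slider in the UI would simply adjust `min_db`/`max_db` per instrument, which is what makes the same patches usable in both a Beethoven-sized and a Mahler-sized setup.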

I suppose OT could make "a button" that scripts their entire product range into these orchestra sizes and seating styles. But what happens if I replace their strings with CSS? Or, if I only have their woodwinds?


----------



## Casiquire (Aug 10, 2021)

youngpokie said:


> OT Berlin Series offers a slider that scales the relative volume per instrument. This is what allows a highly realistic volume relationship between orchestral instruments if it is needed. It's better than a button because it covers more than one type of scenario.
> 
> For example, this slider allows you to closely reproduce the natural volume curve of a flute throughout its _ppp_ to _fff_ range - barely audible at the former and piercing through any tutti at the latter. Then, a two-flute ensemble can be matched (near perfectly!) to a single trumpet, also pre-scaled across its own range.
> 
> ...


Right, but VSL also has a dynamic range slider which I've found useful for exactly those types of scenarios. I'm glad that some devs are including that.


----------



## muk (Aug 10, 2021)

In my experience Orchestral Tools' libraries are far from balanced out of the box. They may not normalize the samples, which is important to preserve natural volume. But then they should instruct the players to play at very similar volumes during recording. As it is, there are strange and unnatural volume balances out of the box. Take the Berlin Strings First Chairs library: out of the box, the pizzicati of Vl. 2 are much louder than those of Vl. 1. That's precisely what you want to avoid, since it takes much rebalancing from the user to correct these issues. 






Related threads on vi-control.net:

- *Volume differences in Orchestral Tools Berlin Brass* - "For those of you who use Orchestral Tools Berlin Brass, have you noticed big differences in the volume between e.g. Horn 1 and 2? I have, and today I have spent some time wondering if I need an update or what. I simply haven't experienced that in my other libs. Any help appreciated!"
- *How to fix Berlin Brass* - "Here is a short little piece for horns, using the sustain articulation in Berlin Brass. Please try and concentrate on all four horns. Not a very good performance, right? The bottom horn is way too loud, and the others aren't really balanced either. It doesn't feel like four musicians..."
- *Berlin Strings First Chairs vs CSSS?* - "From the email that I just received, it appears that Berlin First Chairs is getting what is described as 'better legato performance', some sample fixes and a first chair double bass, in version 2.0. Furthermore, it is on sale for a couple of weeks for €199. I'm wondering how this compares to..."


----------



## Jish (Aug 10, 2021)

PeterN said:


> Thats why I suggested a _button_. So there could be both options, one readily balanced, one not, *so the cries will not be heard.*






Tanuj Tiku said:


> The enormous number of mic positions don't really make much sense with samples because there is no sensible way to manage all that and run it on even above average systems. They are useful sometimes but I would trade most of them in a heart beat for some musicality.


Absolutely, but in hindsight the x20 mic approach was simply too cost-efficient (and therefore appealing) for many developers to pass up - by recording the samples with various mic placements in the same space, you could capture, in a good many libraries, pretty dramatic tonal differences that no amount of EQ or slapped-on Altiverb was going to produce. At the time, I felt the added difference of the mics in Spitfire's _Sable_ and the subsequent SCS to be very striking, so it _did_ feel like a small sea change back then. So many continued in that direction and kind of never looked back. 

At this point, at least with regards to the orchestral side of the equation, that's where the $64k question really is, imo - it's obvious that a subset of developers like SampleModeling and Aaron Venture take this particular question very seriously, but it's less obvious with many others. Or they are simply continuing to profit enough from whatever thing they have going on. I thought '41 Fingers' had an interesting possible concept, but no idea if it would be workable. Honestly, having played around with 'Synful' around the time it came out, and re-visiting it again shortly after reading some replies on this thread, I would have wagered we would be farther along with all of this in 2021 - but no worries, as I hear both Sonokinetic and Cinesamples are waiting somewhere in the wings, and that something truly _wonderful_ is coming in May... I mean, I just cannot see it not being April... or was it February? "I don't know, maybe it was Utah..."


----------



## Casiquire (Aug 10, 2021)

If you're recording in an ambient space, offering all the mic positions is crucial imo



muk said:


> In my experience Orchestral Tools' libraries are far from balanced out of the box. They may not normalize the samples, which is important to preserve natural volume. But then they should instruct the players to play at very similar volumes during recording. As it is, there are strange and unnatural volume balances out of the box. Take the Berlin Strings First Chairs library: out of the box, the pizzicati of Vl. 2 are much louder than those of Vl. 1. That's precisely what you want to avoid, since it takes much rebalancing from the user to correct these issues.
> 
> 
> 
> ...


I agree, but I think they're closer than most. For example, yes, you might have to rebalance the pizzicato, but with competitors you might have to balance the pizzicato, legato, col legno, longs, and sordino all individually, and then against other members of the orchestra. With OT it's more like spot checking for me than balancing an entire orchestra. I've also noticed that most of the criticism is directed at the horns, which is a pretty easy fix.


----------



## youngpokie (Aug 10, 2021)

Casiquire said:


> I agree, but I think they're closer than most.


Indeed, I was not suggesting OT libraries are perfect - they are not. But at least they (and VSL, as it turns out) offer a tool to achieve more realistic and detailed volume matching if it is needed. It also makes it a little easier to volume-match them to other libraries. 

Maybe some developer will invent a precise and universal system of representing dynamic values for markings that are relative by nature, like _mezzoforte_, _sforzando_, etc. - which is really the root cause of the problem with recording instruments individually when they are meant to play as an ensemble.

Perhaps they could combine something like the % approach used in notation software with a way to capture IRs of individual instruments in various seatings, orchestral sizes, and rooms.
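The % approach mentioned above works roughly like this: notation programs keep a table mapping each relative marking to a percentage of full dynamic, which then translates to a MIDI velocity. A minimal sketch of the idea - the percentages here are placeholders, not the defaults of any particular notation package:

```python
# Sketch: relative dynamic markings expressed as a percentage of full
# dynamic, in the spirit of notation-software playback tables.
# The percentages below are illustrative placeholders.
DYNAMIC_PERCENT = {
    "ppp": 16, "pp": 26, "p": 38, "mp": 50,
    "mf": 62, "f": 74, "ff": 87, "fff": 98,
}

def marking_to_velocity(marking: str) -> int:
    """Convert a relative dynamic marking to a MIDI velocity (1-127)."""
    pct = DYNAMIC_PERCENT[marking]
    return max(1, round(127 * pct / 100))

print(marking_to_velocity("mf"))  # → 79
```

A universal sample-library standard would additionally have to anchor these percentages to per-instrument loudness curves, which is where the IR-per-seating idea would come in.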


----------

