Whoops, pressed send too soon on my earlier post - deleted that one.
As a violin player myself (though I'm scared to admit how long it's been...) I have some thoughts here.
The key problem is that continuous parameters are a fundamental part of the sound. I think that's the same reason voice is so hard. At the opposite extreme, this is why percussive instruments are easier to sample: most of the sound people associate with them comes from a one-shot hit, which maps beautifully onto MIDI's note-on plus velocity paradigm.
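To make that concrete, here's a minimal sketch of why a drum hit fits MIDI so well - the whole musical gesture is one note-on/note-off pair. (The note number and velocity below are just illustrative; 38 is the General MIDI snare.)

```python
# A one-shot percussion hit in raw MIDI: a single Note On with a velocity,
# followed later by a Note Off. Status byte 0x90 = Note On on channel 1,
# 0x80 = Note Off on channel 1.

def note_on(note, velocity, channel=0):
    """Build a 3-byte MIDI Note On message."""
    return bytes([0x90 | channel, note, velocity])

def note_off(note, channel=0):
    """Build a 3-byte MIDI Note Off message (release velocity 0)."""
    return bytes([0x80 | channel, note, 0])

# One snare hit: the entire gesture is captured by two 3-byte events.
hit = note_on(38, 112) + note_off(38)
```

Six bytes, and the sampler's job is basically done - pick the right sample layer for velocity 112 and play it.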
For strings, continuous parameters are things like bow speed, bow pressure, vibrato rate and depth, exact intonation, etc. It's not a "set it and forget it" thing to capture when sampling... The way those parameters *change* is the part people recognize as a violin sound. So a sample out of context might sound great, but in a musical line, the timing of when the vibrato kicks in and how the player moves the bow matter a lot - what sounds continuous and natural is probably not what separate samples sound like when sequenced together.

This problem persists even if a library goes deep into sampled legato, progressive vibrato, and phase-free velocity layers, even if it models up and down bowings, and even if it goes as far into physical modeling as AudioModeling does. Humans are just accustomed to hearing all those parameters modulated in exactly the right way in a real performance - both physically realistic and musically appropriate. Without some next-gen scripting or AI, no deeply sampled instrument can yet fill in those parameter decisions to fit the music and the physical constraints.
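For contrast, here's a sketch of what even one of those expressive parameters looks like in MIDI terms. I'm assuming the common (but library-specific, so hypothetical here) convention of mapping vibrato depth to CC1, the mod wheel - the point is just the event count.

```python
# Where a drum hit was one Note On, a single held violin note needs a
# continuous stream of controller data. Status byte 0xB0 = Control
# Change on channel 1; controller 1 = mod wheel.

def cc(controller, value, channel=0):
    """Build a 3-byte MIDI Control Change message."""
    return bytes([0xB0 | channel, controller, value])

def vibrato_ramp(steps=16, max_depth=96):
    """Ramp CC1 from 0 up to max_depth: vibrato easing in over a held note."""
    return [cc(1, max_depth * i // (steps - 1)) for i in range(steps)]

events = vibrato_ramp()
# 16 CC messages just to ease vibrato into one note - and bow pressure,
# bow speed, and dynamics would each need their own parallel stream,
# shaped to fit the phrase. That shaping is exactly the part no sample
# library can decide for you.
```

And this linear ramp is still a crude caricature - a real player's curve depends on the phrase, which is the whole problem.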