Maybe it's some sort of union arrangement on the part of live string players to make sure libraries can't replace them entirely.
No, it's for the same reason any kind of 'legato' articulation can be hit or miss depending on the library: it requires recording transitions, then editing and scripting the patches in such a way that the transitions sound at least semi-passable. A library focused solely on this would need a decent handful of transitions and a lot of scripting. Both take time, which costs *a lot* of money.
But there are some more technical reasons (at least it seems so, from my crude understanding of how some current libraries work)... The transition portion won't always come from the same sample as the sustained portion, which means phase issues, obvious/fake-sounding transitions, etc. The more transitions there are, the less often the samples 'belong' to one another.
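To see why mismatched samples cause phase issues, here's a toy sketch (all values are made up for illustration): two "recordings" of the same pitch that happen to start half a cycle apart. Crossfading between them partially cancels the signal mid-fade, which is the kind of artifact you'd hear as a dip or comb-filtered smear at the splice point.

```python
import math

SR = 48_000   # sample rate in Hz (assumed)
FREQ = 440.0  # test tone standing in for a string pitch
N = 1_000     # crossfade length in samples

# Sustain sample, and a transition sample that is half a cycle out of phase.
sustain = [math.sin(2 * math.pi * FREQ * n / SR) for n in range(N)]
transition = [math.sin(2 * math.pi * FREQ * n / SR + math.pi) for n in range(N)]

# Linear crossfade from the sustain into the transition.
mixed = [(1 - n / N) * s + (n / N) * t
         for n, (s, t) in enumerate(zip(sustain, transition))]

peak_in = max(abs(x) for x in sustain)
# Peak level in a small window around the middle of the crossfade,
# where the two out-of-phase signals are mixed at roughly equal gain.
peak_mid = max(abs(x) for x in mixed[N // 2 - 50 : N // 2 + 50])
print(peak_in, peak_mid)  # the mid-crossfade level collapses toward zero
```

With samples that 'belong' together (phase-aligned at the splice), the mid-fade dip largely disappears, which is why same-source transitions matter.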
Basically (I'm assuming), the only way to have every transition originate from the same sustain sample would be to keyswitch at the beginning of the sustained portion of every note, plus a separate mechanism to trigger the transitions (like velocity).
Even then you have some other obvious issues... The transition will never line up perfectly with the sustained portion of the note before it, because that would require specific note lengths; at that point you might as well just use phrases with time-stretching (hardly a desirable solution). The only way to keep the sustained portion and the transition in phase is a lookahead system that can crossfade the sustain 'body' into the sustain 'transition start', and then into the actual portamento transition (which, I assume, is how/why LASS has its lookahead feature, and why it has such a long delay).
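The scheduling problem behind such a lookahead system can be sketched in a few lines. This is not how LASS actually works, just a minimal model with invented names: given when the next note starts, work backwards to find when the transition sample must begin, and when the crossfade out of the sustain body must begin before that.

```python
def schedule_transition(next_note_on_ms, transition_len_ms, xfade_ms):
    """Work backwards from the next note-on (all names illustrative).

    Returns (xfade_start, transition_start) in ms: the transition sample
    must end exactly at the next note-on, and the crossfade out of the
    sustain body must finish before the transition begins.
    """
    transition_start = next_note_on_ms - transition_len_ms
    xfade_start = transition_start - xfade_ms
    return xfade_start, transition_start

# Next note lands at 1000 ms, with a 250 ms transition and a 60 ms crossfade:
print(schedule_transition(1000, 250, 60))  # (690, 750)
```

The catch is that `next_note_on_ms` must be known 310 ms in advance here, which is exactly why a real-time engine needs lookahead (i.e., delay) at least as long as the transition plus the pre-fade.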
This leads to an obvious question... If you had really long glisses of, say, 2-3 seconds, wouldn't you need more than 2-3 seconds of lookahead? (Because the "pre-transition" crossfade needs its own buffer on top of the transition length.)
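The arithmetic bears this out. Using made-up but plausible numbers, the minimum lookahead is always the gliss length plus the pre-transition crossfade, so it necessarily exceeds the gliss itself:

```python
GLISS_MS = 2500      # a long 2.5 s portamento (example value)
PRE_XFADE_MS = 100   # crossfade out of the sustain body (assumed)

# The engine must know the target note before the pre-fade begins,
# so the minimum lookahead covers both the fade and the full gliss.
required_lookahead_ms = GLISS_MS + PRE_XFADE_MS
print(required_lookahead_ms)  # 2600 ms -- longer than the gliss itself
```

A 2.6-second input-to-output delay is far beyond anything you could play against in real time.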
How on earth would you dynamically crossfade something this complicated? I'm sure it's possible, but can you imagine how complicated it would be to develop?
I'm assuming the reason this doesn't currently exist is that the issues above are incredibly difficult to solve, and the only known solutions would result in crazy long delays that make it impossible to record a MIDI performance in real time... And while working composers might be OK with dedicating some template tracks to this kind of thing (where you'd record the MIDI on a different instrument track, then move it over), it wouldn't be a hit with people who already tend to complain about legato in its bare form. The average person who doesn't understand the challenges under the hood would be likely to tear it apart...
TL;DR: there are a number of reasons I can see why no one's bothered tackling this yet. Someone probably will eventually, and if they do, it will probably be quite expensive...