
The VI-C blinded violins shootout - stage 1 completed.

There are disadvantages to all methods, including this one, but it was explicitly stated in the rules that contributors were encouraged to try to show the library in its best form.
My point is mainly that it wasn't mentioned that one could add layers that need to be purchased separately in order to make it sound as good as possible. No disagreement or critique - I just think things need to be made as clear as possible, as early as possible. If I were, for instance, to make a Berlin Strings submission and layer it with their First Chairs or Nocturne solo violin, that should IMO be mentioned explicitly.
Re voting, I'm not worried - I don't see this as a contest anyway, and it all depends on how the results are summed and presented.
 
As others have stated before me, I agree that this test has become more of a comparison of the users' abilities than of sound quality. The test is still interesting - it does reveal a thing or two about user-friendliness, as well as about our expectations of developers and the degree to which we composers need to take the time to learn the tools we invest in. As I am in the middle of my master's degree final project, I do not have the time to give a detailed review of the various libs, but since I am a proud owner of LASS, I am going to wager a guess that track number 1 is a rendition of that lib (there are some tuning instabilities that sound familiar to my ear). If I am correct, there's a lot in that track that is not quite representative of what is possible to get out of LASS with more tweaking (it is not my intention to bash anyone's efforts here, just to voice an opinion based on my experience with LASS). On the other hand, I could be completely wrong about the identity of lib 1. I am still immensely grateful for all the time and labour everyone has put into this, and I look forward to the unblinding :)
 
I think most of the libraries used here can sound good in the right context.

What is interesting to me is how a library's strengths (or weaknesses) can shape my writing. As the tools get more and more realistic-sounding in certain workflows, I find that I restrain myself from writing things a library cannot reproduce well. In the past, everything sounded like MIDI and this mental handicap wasn't there; I was more liberated in my compositions, because they were just meant to be placeholders, just in case I ever had the chance of a live performance. Nowadays, the MIDI mockup is the final product, so I must make it sound as good as I can!

Saxer's example phrases are a good stress test for my mockup skills. To be honest, I couldn't get the last two phrases to work, so I was very curious to see how other users tackled the same problem with other libraries or, even better, with the same library. I can learn from this.
 
I agree that any "comparison shootout" like this will always have pros and cons in its approach; it's unavoidable. Perhaps one way to solve the problem of libraries with multiple submissions inherently receiving more votes would be to group them when presenting the contenders. For instance:

Blind Library 1 (and list all its entries)
Blind Library 2 (and list all its entries)
etc.
etc.

There could be a note next to each submission indicating what layering and/or processing had been used.

Then people would grade each library just once, based on the totality of all its submissions, on an A-through-F school grading system. I'm sure this approach has its own pros and cons (i.e. it's not quite as completely "blind" as this was), but it might be helpful for future blind tests.

Or alternatively, each library's final grade could simply be averaged across all its submissions.
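
Just to make that averaging idea concrete, here's a minimal sketch of how letter grades could be pooled per library. The grade-to-point mapping (A=4 down to F=0) and the submission data are purely made up for illustration, not anything from this thread:

```python
# Minimal sketch of averaging all submissions per library.
# The grade-to-point mapping and the data below are hypothetical.
from statistics import mean

GRADE_POINTS = {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0}
POINT_GRADES = {v: k for k, v in GRADE_POINTS.items()}

# one (library, grade) pair per submission/reviewer combination
submissions = [
    ("Library X", "A"), ("Library X", "B"), ("Library X", "A"),
    ("Library Y", "C"), ("Library Y", "B"),
]

per_library = {}
for library, grade in submissions:
    per_library.setdefault(library, []).append(GRADE_POINTS[grade])

for library, points in per_library.items():
    avg = mean(points)
    print(f"{library}: {avg:.2f} points, roughly a {POINT_GRADES[round(avg)]}")
```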

By the way, this has been a wonderful test (thank you Garry!), and hopefully other such blind tests will be done too.
 
Libraries with more entries only get more votes if you're combining all the votes. Not sure why you'd do that! No, it should always be on a per-recording level. Simple fix.
 
*** UNBLINDED FROM HERE ***

SPOILER ALERT!!

Ok, so since there are now LOTS of PM requests for the unblinding, I'm going to go ahead and release it today, as originally planned, rather than wait until Saturday. If anyone wants more time to review, please just don't look at the attached file or read posts from here onwards, in which I assume people will be discussing the unblinded results.

Hope you found this fun, useful and enjoyable. If there are substantive comments about how it could be improved next time around, hopefully this exercise will, if nothing else, at least have served to motivate a second round, with any lessons from this one incorporated.

I personally found it incredibly helpful, and appreciated the efforts of the community to pull it together and test whether some of the claims made of these libraries stood up to blind testing. Interested to hear what others made of it, and to see who wants to take up the reins for version 2.0: bigger and better than ever, a ground-breaking paradigm shift in blind testing that will revolutionise your workflow and give you that ultra-realism you've been waiting for!! ;)

Thanks all.

:)
 

Attachments

  • Shootout Unblinding Information.pdf
Here are the new top fifteen scores, adding together my, @ModalRealist's, @Vik's, @M0rdechai's, and @teclark7's grades.

15 points: #29, #30
14 points: #31, #32, #71
13 points: #57
12 points: #33, #34, #72, #74
11 points: #26, #58, #70, #73, #75

and the bottom fifteen, if you're interested... in no particular order:

#1, #2, #5, #7, #9, #11, #12, #19, #35, #60, #61, #65, #66, #68, #76
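
For anyone who wants to reproduce this kind of tally, here's a minimal sketch of summing several graders' marks per entry. The point values (A=3, B=2, C=1) and the sample grades are my own assumptions for illustration, not the actual scheme or data behind the lists above:

```python
# Hypothetical tally: sum each reviewer's points per entry and rank them.
# The A/B/C point values and the grades below are illustrative only.
from collections import Counter

POINTS = {"A": 3, "B": 2, "C": 1}

# entry number -> one letter grade from each of five reviewers (made up)
grades = {
    29: ["A", "A", "A", "A", "A"],  # 15 points
    57: ["A", "A", "A", "B", "B"],  # 13 points
    1:  ["C", "C", "B", "C", "C"],  # 6 points
}

totals = Counter({entry: sum(POINTS[g] for g in gs) for entry, gs in grades.items()})
for entry, total in totals.most_common():
    print(f"#{entry}: {total} points")
```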

Amazing - cross-referencing against the unblinding chart, CSS completely dominates. Some other observations: CH Ensemble Strings also makes a strong showing, which is surprising given its reputation for being complex to work with, but it shows that the customisation it provides can achieve great results!

On the other end of the scale, the Spitfire libraries generally fare quite badly, rarely making it into the higher grades.
 
DISCUSSING UNBLINDED RESULTS

First, thanks to Garry and all thirteen composers for their hard work.

Since the library I like the most received the highest possible marks from all the people who listened through all 78 examples, it would be easy to point at this competition as proof that "It's the best." ;)

However, I do agree with previous posts that the competition had two weaknesses. First, the libraries are completely at the mercy of the user to demo them well, and second, multiple entries didn't necessarily lead to unfair grading, but they did give a library more chances to catch people's attention as they skimmed through all 78. I think in future competitions of this type, the organizer should choose just one example per library, exercising their judgement to pick the best option out of however many are submitted.

However, all I can really do is note those two issues in passing and move on.

I think there were five results of the competition that really surprised me or seem worth remarking on:

1. Cinematic Studio Strings received consistently top grades from every reviewer. It's possible that publishing my own grades early on drew attention to those tracks, but I think everyone was just impressed by those recordings. What makes the library stand out is not just the quality of the individual samples but the effort & insight in the programming that connects them. Because of that, CSS can pass most of Saxer's obstacle course with no sweat, especially obstacles that challenged many other libraries like the repeated-note test, the runs test (marcato legato), and the long/short mixture test. There were few obstacles where other libraries consistently beat CSS. Perhaps on example D, the repeated transition test, there were a few libraries that sounded more natural. Compared to CSS, Cinematic Studio Solo Strings received a significantly less positive response and wasn't a standout among the solo-violin entries at all. However, CSSS+CSS still achieved top grades when blended together.

2. All graders exhibited a strong tendency to give low grades to the entries that ended up being not true-legato demos at all, but "sus patch" demos. Composer #1 for instance gave us several recordings of libraries that were never designed to be able to play Saxer's passages. This shows that true legato sampling does create an audible difference... not exactly a lightning bolt revelation but still interesting to prove empirically.

3. Most of the time, the standard deviation of the 5 grade lists was low. That means the reviewers generally agreed with each other. The tracks with the most significant disagreement were the Virharmonic Bohemian Violin and several of the tracks (from different composers, even!) demonstrating Spitfire Chamber Strings. For those two libraries, A's and C's were handed out in equal measure!

4. Chris Hein's strings received higher - and more consistent - grades than you would expect from the little discussion & hype on VI-Control. Across the six tracks he submitted, the five reviewers gave Chris's tracks just one C, seventeen B's and twelve A's. Overall, his library received the highest scores for non-solo strings if you take out CSS, with Hollywood Strings coming close behind.

5. The dog that didn't bark in this competition is how many of the high-priced, true-legato libraries from the 2010-2016 era received very middling or even low marks. It would be very unfair to single out any individual library, since the grades were so totally dependent on how capable the user was and how much time they took to create these demos. However, there were a lot of "flagship" string libraries that received solid B's or even low B's across the board.
 
I had the same favorites as the majority. Interesting results!

My personal summary: in the end, it's all strings. No library really 'sounds' bad. Some are probably harder to control, or have too much attack or vibrato, or don't fit the situation... but most comments are about musicality, less about the tone itself, and even less about room or stereo placement. The similarities are bigger than the differences. I think it's much more feasible to combine different string libraries than I thought before. And it's a bit like DAWs: the best one is the one you can handle.

Thanks to all who took part, and thanks for using my MIDI files. And special thanks to Garry!
 
Thanks to all who took part, and thanks for using my MIDI files.
I'm still hoping 'Saxer's Seven' will catch on! I'm envisaging a time when, on the release of each new library, there'll be a collective call from the community for the developer to release it with 'Saxer's Seven' as one of the demos! Then, finally, we'll have a better way of deciding whether these new libs deserve our hard-earned cash!
 
While I'm not surprised that CSS performed well (there's a reason people love it here), I'm inclined to think it mostly just means CSS is great at legato playing (well, duh!), which is what a lot of these lines are mostly testing, and which I think is by far the predominant factor people are listening for. The different types of legato in CSS (and how little tweaking you need to do for a single legato line) are a key reason the library is so brilliant and usable. I don't think it means CSS is necessarily better than the other libs overall, though, even though I'll admit I tend to use this library a fair bit in my own writing for this type of playing.

But even then, like Saxer states, I tend to double it with other kids.

Edit: Or "libs", if auto-correct can behave itself. Doubling with kids may be awkward... goats tend to eat everything. :)
 
@Gingerbread entries 23-26 appear to be (broken link removed)

Thanks NoamL. Personally, I was really impressed by the Hein Solo Violin (#26); without this 'competition' it wouldn't have been on my radar. I thought it compared very well with (or better than) some of the more recent hyped solo violins. Of course, that could also partly reflect the user, which I suspect is especially true of solo violins.
 
One of the things that caused me, in my personal notes, to lower the marks for 16 and 17 was the odd-sounding artifact at about 1:02. I can reproduce it from the MIDI since I have the library. It sounds like a cough to me. Is that what it is?
 
Across the six tracks he submitted, the five reviewers gave Chris's tracks just one C, seventeen B's and twelve A's.
So... how do you count this? I mentioned, for instance, that those five entries probably came from the same lib and that I liked that lib. Does that count as five A's? Pardon my ignorance... I just need to know how our comments are interpreted.
 
Just figured out my ear-favorite is #25 Violin, realistic.
And I have to agree with you on at least one thing:
Some composers did a "good" job and some did a "bad" job in terms of making it sound musical, but I do appreciate their work. This happened with entries such as SF-CS and Hollywood Strings. When I graded these, it was based on whether they sounded similar or close to the recordings on YouTube that I listen to daily. I consistently marked #23-26 as good ones.
Attachments: Section One.jpg, Section Two.jpg, Section Three.jpg
Here's my grading, in case someone wants to collect the data for a more comprehensive statistical test.
Thanks again to Garry and for everyone's contributions! I notice that I need to spend more time learning how to make the tools work better, because even good libraries can sound wrong when they're not working right! And my ear surely likes reverb - but not too much ;).
 
By the way, if anyone wants a more "proper" way to measure all of this, a linear model would definitely come in handy. We want to know whether the grade is associated with the composer or with the library, and that can be teased apart when you have multiple entries from different people (usually >5) on the same library. I didn't really check carefully whether this was the case.
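
For what it's worth, here's a minimal sketch of that kind of linear model, assuming the grades have been collected into one row per (library, composer, reviewer) and mapped to numbers. The data, the A=4...F=0 mapping, and the use of pandas/statsmodels are all my own assumptions, not anything from this thread:

```python
# Minimal sketch: regress the numeric grade on library and composer as
# categorical factors, to see how much of the score each one explains.
# All data below is made up; a real analysis needs many more rows.
import pandas as pd
import statsmodels.formula.api as smf

POINTS = {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0}

df = pd.DataFrame(
    [
        ("CSS", "composer_1", "A"),
        ("CSS", "composer_2", "A"),
        ("CSS", "composer_3", "A"),
        ("Lib_X", "composer_1", "B"),
        ("Lib_X", "composer_3", "C"),
        ("Lib_Y", "composer_1", "C"),
        ("Lib_Y", "composer_2", "B"),
        ("Lib_Y", "composer_3", "B"),
    ],
    columns=["library", "composer", "grade"],
)
df["score"] = df["grade"].map(POINTS)

# C(...) treats library and composer as categorical factors (dummy coding)
model = smf.ols("score ~ C(library) + C(composer)", data=df).fit()
print(model.summary())
```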
 