# The vi-c blinded violins shootout - stage 1 completed.



## Garry (Jun 15, 2018)

Hi All,

With thanks to everyone who contributed to this effort, the blinded results can now be downloaded.

As explained in the other thread, we decided to forgo the 'competition' part, so whilst you will be able to review these, blinded as promised, there is no voting planned. If someone chooses to organise voting, you are of course welcome to do so. The output that will likely be most useful to everyone is that we now have a resource allowing anyone to compare almost all of the major violin libraries across 7 distinct melodic lines. I hope you find this useful in evaluating the libraries for your own individual purposes.

*MEMBERS' CONTRIBUTIONS*
Some of the libraries are represented more than once. You may feel that a particular entry didn't fully represent the library's capabilities. Please consider 3 things regarding this:
- someone took the effort to contribute this, so please be respectful: indicate constructively how you think it could be improved
- please consider sharing a new version with the forum on this thread. This will make the resource even more valuable, as over time, we will get the best out of every library, and it could grow to be the most definitive benchmark; future libraries could be added, enabling comparison of new offerings with existing libraries.
- the volume has NOT been normalised across the tracks: for our purposes, this isn't necessary, since you all have volume faders; please adjust to taste!

*AUDIO FILE FORMAT*
The zip file contains a Logic file, with 78 audio tracks. For those who don't use Logic, there is also a folder containing 78 blinded wav files.

*FILES AND NEXT STEPS*
I will shortly release an unblinded version, once people have had an opportunity to download and review the blinded files. The blinded versions are available [here](https://wetransfer.com/downloads/95ff2946725876d3ab9fccd9083aeb7520180615155535/f42361) (this is now the wetransfer link - see below).

Enjoy!


----------



## Mucusman (Jun 15, 2018)

Thank you so much, Garry, for doing this, and to all those who took the time to submit an entry (or entries). Loading the WAVs into Studio One was quick and easy.

I'm about a third of the way through listening to all of the entries. My first impression is that the majority are excellent. [Edit: the further down the list I made it, the more lackluster submissions I encountered.] The amount of work put in by so many developers really shines. It's fun to notice the subtleties and differences in tone and the lyrical nature (or lack thereof) of each library.

There are some really lovely renditions of this MIDI file. Some seem to just have that special sauce!

I think there is indeed incredible value in this as a tool for folks wanting to invest in a new string library (orchestra, ensemble, or solo strings). I wish this had existed several years ago when I began researching and buying packages. Kudos! This is, truly, musicians helping musicians.

Looking forward to finishing my listening session and then, eventually, seeing which library is which.


----------



## Casiquire (Jun 15, 2018)

Downloading now!


----------



## Garry (Jun 15, 2018)

Houston, we have a problem!

Dropbox just suspended downloading due to excessive activity!!

Anywhere else (free) that I can host it?


----------



## Alex Fraser (Jun 15, 2018)

Wetransfer (as a url)
?


----------



## d.healey (Jun 15, 2018)

What file size?


----------



## Mucusman (Jun 15, 2018)

d.healey said:


> What file size?



2.38GB

Note, it could be split into two separate files (each about 1.1GB): A Logic package, and a non-Logic (WAV) package.


----------



## Garry (Jun 15, 2018)

Ok, I’ll try wetransfer, and will update once it’s available.

I’m assuming wetransfer doesn’t restrict bandwidth for free accounts?


----------



## Garry (Jun 15, 2018)

uploading now...


----------



## Garry (Jun 15, 2018)

Ok, [here](https://we.tl/UIFBWaXSdC) is the wetransfer link. Please let me know if this works.

This no longer contains the Logic file - only the wav files, since these can be easily imported into any DAW; this also cuts down the file size.


----------



## One Dove (Jun 15, 2018)

Works perfectly on my end (Norway). Thanks!


----------



## One Dove (Jun 15, 2018)

WOW. After a VERY quick listen-through, it's abundantly clear to me that, apart from obvious coloring by EQ and reverb, the differences aren't that extreme. Most of these (apart from the smaller ensembles and solos) share a lot of the same qualities, although there are some obvious differences in how legato and retriggering of samples are handled. This is SUPER interesting - thank you very much again! Now back to the monitors.


----------



## Alex Fraser (Jun 15, 2018)

Very interesting, thanks Garry. Whilst it's not a definitive example of the capabilities of each library (number 60... whoah!), it's a great exercise in seeing how much midi mangling each library needs out of the box.
A


----------



## erikradbo (Jun 15, 2018)

Thanks a lot Garry and all contributors, this is great. Huge differences, would be interesting to know how much time is put into each of the tracks to see how much that could explain it rather than the library itself.

I know that you didn't want to clog this with your judgement Garry, but do you have easily available a list of what tracks are from the same library, and even better - which one from each library you prefer. Some are very obvious, but I'm betting I'm getting tricked a lot, and although that is a lesson in itself, 72 tracks are a lot to go through in detail...if you could send this over PM, without revealing what libraries they actually are, that would be a huge help.


----------



## Garry (Jun 15, 2018)

I'll put you out of your misery on Monday! Blind reviews until then!


----------



## fiestared (Jun 15, 2018)

erikradbo said:


> Thanks a lot Garry and all contributors, this is great. Huge differences, would be interesting to know how much time is put into each of the tracks to see how much that could explain it rather than the library itself.
> 
> I know that you didn't want to clog this with your judgement Garry, but do you have easily available a list of what tracks are from the same library, and even better - which one from each library you prefer. Some are very obvious, but I'm betting I'm getting tricked a lot, and although that is a lesson in itself, 72 tracks are a lot to go through in detail...if you could send this over PM, without revealing what libraries they actually are, that would be a huge help.


Even more, 78 tracks!


----------



## puremusic (Jun 15, 2018)

I have to say thanks for all the hard work to everyone who participated in this.

I am enjoying listening to the tracks tonight; I have my little notepad open to take down brief impressions. At first I put them in my DAW, but now I'm just listening to the WAV files in my media player. I believe I'll go through all 78.


----------



## markleake (Jun 15, 2018)

Like puremusic, I'm just listening to the files in VLC media player in the background. That seems the easiest approach.

To make things easier, I've leveled the volume as best I can for each file (it's far from perfect, but better than the raw WAV files' levels) and converted them to high-quality MP3. This makes for a much smaller download, plus an easier listening/comparison experience.

*>> Volume-Leveled MP3 Versions Here <<*

Hopefully the Dropbox doesn't exceed its download limit too quickly. This file is around 224 MB in size.
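markleake doesn't say how he leveled the files. As a rough illustration only, here is a minimal Python sketch of simple peak-based leveling (stdlib only; `peak_gain_db` and `wav_peak` are invented names, not his actual method, and proper loudness leveling would use a perceptual measure such as LUFS rather than sample peak):

```python
import math
import struct
import wave

def peak_gain_db(peak, target=0.95):
    """Gain in dB that raises the loudest sample to `target` of full scale."""
    return 20.0 * math.log10(target / peak)

def wav_peak(path):
    """Loudest sample of a 16-bit PCM WAV, as a fraction of full scale."""
    with wave.open(path, "rb") as wf:
        if wf.getsampwidth() != 2:
            raise ValueError("sketch assumes 16-bit PCM")
        raw = wf.readframes(wf.getnframes())
    samples = struct.unpack("<%dh" % (len(raw) // 2), raw)
    return max(abs(s) for s in samples) / 32768.0

# Example: a file peaking at half of full scale needs about +5.58 dB
# to reach a 0.95 ceiling (20 * log10(0.95 / 0.5)).
```

Applying the returned gain before MP3 encoding brings every file to the same ceiling, but it won't match perceived loudness between tracks, which may be part of why the result is "far from perfect".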


----------



## Casiquire (Jun 15, 2018)

Amazing. Clearly there's enormous interest in this! I'd almost suggest letting it stew a bit longer than Monday as far as which library is which. In any event I think that even eliminating the contest we wind up with a fantastic cooperative effort here with some illuminating results


----------



## NoamL (Jun 16, 2018)

Gonna group them into three cohorts:

*Group A *- actually impressive & feels like music
*Group B* - passable for a VI
*Group C* - mediocre or bad programming/sampling. Although it could always be that the library is poorly demo'd by the user.

I skimmed each of the examples, usually listening to more than 10-15 seconds only if I needed more information to make up my mind. I generally didn't listen much past the first example if the legatos didn't sound very good...

*Group A*
#20 - low A / high B, quite solid execution, tone needs some EQ
#26 - Solo vln. Tone needs hella work, but the sampling is clearly good
#27 - solid throughout all the sub examples
#29 - low A. Sounds like CSS? 
#30 - same as 29. These sound like 1st/2nd vln sections.
#43 - definitely one of the top tier solo vln libraries (at least until those runs at the end, ouch!).
#47 - I'm gonna go out on a limb and say the accent-on-every-note issue here is from the user. This seems very solidly sampled.
#57 - low A / High B. can't decide.
#67 - this is the Bohemian right? This actually feels like music! I don't like the double bows though and there are a few passages that it really flubs. Low A.
#70 - technically proficient and agile. As violins should be! The musicality is very "neutral," maybe too much so.
#71 - a bit better than 70
#74 - low A / high B



*Group B*
#3 - half-decent
#6
#14 - could be better but I see promise in this one
#16
#17 - low B
#19 - low B
#22 - decent
#23 - quality varies a lot between the sub examples
#25 - low B
#31 - good. High B.
#32 - Seems paired with 31.
#33
#34
#38 - low B... nice recording but transition issues
#39 - it's "musical" but not actually fooling me into thinking it's real
#41 - low B... a bit grating in tone?
#45 - low B
#46 - same, very low B
#58 - decent
#63 - kinda thin and wheedly sounding, but solid sampling
#72
#73
#75


*Group C*
#1 - sample cutoff
#2
#4
#5
#7
#8 - some legato transitions are very realistic, others feel totally cut off
#9 - transitions are quite bad
#10 - transitions
#11 - unmusical
#12 - mushy
#13 - wow, these samples are *really *quite bad. It sounds like a section of unconfident (and inaccurate) amateur players and the recording engineer has brought out the worst, scratchiest frequencies of the violin! Imagine paying $300 for this... this is the only library in the bunch that really stood out to me as unusable. I doubt this library is from one of the major developers, although I'll laugh if it's 8Dio or NOVO or something like that.
#15 - unconnected
#18 -
#21 - inconsistent stereo?
#24 - not great tone (not as bad as 13!)
#28 - pretty meh
#35
#36
#37
#40 - stereo image issues?
#42
#44 - same as 40?
#48
#49
#50
#51-54 difficult to say much about these
#55+56 - okay. High C? unmusical.
#59
#60 - completely unrealistic
#61
#62 notes don't connect at all
#64 major issues at the end? B up to that point
#65+66 - reverb issues
#68
#69
#76
#77
#78



Listened through 45 examples in 50 minutes, will edit as I listen to more.

EDIT: in the end I graded them all in 80 minutes. It's impossible really to grade all 78 of them fairly, and no doubt I've made some blunders or oversights here, but I hope I was able to pick out at least _most_ of the highest quality libraries & demos.


----------



## Garry (Jun 16, 2018)

NoamL said:


> Gonna group them into three cohorts:
> 
> *Group A *- actually impressive & feels like music
> *Group B* - passable for a VI
> ...



Absolutely fascinating!!! I think you will find the unblinding *REALLY interesting*!

So, what do others think? Would be great to get other people's impressions, either in detail like @NoamL's, or general overall impressions, before we unblind the entries. After the unblinding, it will be all too easy to say 'ah yes, I just _knew_ that one was from Developer X' - much more interesting and useful to state this ahead of time.


----------



## puremusic (Jun 16, 2018)

I would like a bit of a delay of a couple days before the unblinding personally, it's no big deal though.  A separate thread for discussing the unblinded libraries would be nice, so folks who don't want to know yet can remain happily in the dark a little longer.


----------



## Vik (Jun 16, 2018)

Removed.


----------



## Garry (Jun 16, 2018)

puremusic said:


> I would like a bit of a delay of a couple days before the unblinding personally, it's no big deal though.  A separate thread for discussing the unblinded libraries would be nice, so folks who don't want to know yet can remain happily in the dark a little longer.


No problem - let's say we leave the unblinding for 1 week from release, so that would be next Saturday? I'll then send as an attachment, so you can view whenever you're ready.


----------



## ModalRealist (Jun 16, 2018)

Garry said:


> Absolutely fascinating!!! I think you will find the unblinding *REALLY interesting*!
> 
> So, what do others think? Would be great to get other people's impressions, either in detail like @NoamL's, or general overall impressions, before we unblind the entries. After the unblinding, it will be all too easy to say 'ah yes, I just _knew_ that one was from Developer X' - much more interesting and useful to state this ahead of time.



I've only had a brief, skipping listen-through so far. Will sit down with a notebook later today (might copy @NoamL's three categories approach). I am actually quite surprised at most of them: in no way am I criticising people's programming, or even sampling in general, but what a wake-up call in terms of what "people in general" can get out of these things. There's nothing weird or unusual in @Saxer's passages: on the contrary, it's a great selection of bread-and-butter string writing. It's astonishing just how few of the recordings successfully pull off a convincing rendition of all the passages.

Perhaps rival developers submitted terrible renditions with each others' libraries?  _*Joking, of course!*_


----------



## puremusic (Jun 16, 2018)

Thanks @Garry.

These files are helping me train my listening skills too. I'm getting a lot of benefits out of listening to them carefully through my HD600's. I figure I'll test out new libraries in the future using @Saxer's MIDI and some variations on it too. Thank you too @Saxer.


----------



## brek (Jun 16, 2018)

Vik said:


> Just had a brief listen (skipped some parts), but already have some favourites in there, like everything between 29 and 34.
> 
> 20/21/22 also seem very similar, but with different mics.



I'd include 19 in that group as well. I'm fairly certain what library that is. 21 and 22 definitely sound like the same performance, the first is dry and the other is "mixed". Possibly the same situation with 19 and 20. 


An initial thought while skimming through: I'm hearing the biggest differentiation in the last two lines. Some of these libraries that sound OK at the start really fall apart on the fast stuff (17, for example). Side note, this is why I really like Resonic Player for listening.


----------



## Garry (Jun 16, 2018)

ModalRealist said:


> I've only had a brief, skipping listen-through so far. Will sit down with a notebook later today (might copy @NoamL's three categories approach). I am actually quite surprised at most of them: in no way am I criticising peoples programming, or even sampling in general, but what a wake-up call in terms of what "people in general" can get out of these things. There's nothing weird or unusual in @Saxer's passages: to the contrary, it's a great selection of bread-and-butter string writing. It's astonishing just how few of the recordings successfully pull off a convincing rendition of all the passages.
> 
> Perhaps rival developers submitted terrible renditions with each others' libraries?  _*Joking, of course!*_



I have to say, I agree that some don't sound convincing at all (naming no names) - no doubt some contributors will have taken more time than others, which is fine. Once we release the library names (we won't be releasing the contributor names), it will be really interesting if people can then improve upon the representation for a library, where they feel that's needed based on the current entries. At that point it will no longer be blinded, but the blinded test will have served its purpose, and we can focus on compiling the very best version of each library. That lets us address a different question: in the hands of a skilled user, what are the differences between the libraries? It's a separate, but I think really useful, question we can address with this.


----------



## ModalRealist (Jun 16, 2018)

Right. I finished listening. Here's my breakdown. I've ordered into four categories: A is pretty great; B is passable; C is disappointing; D is amusingly disappointing (tongue in my cheek - these are basically Category C, but for more memorable reasons!).

_(Relatively ninja edit: copied my penultimate, rather than final, list. Now corrected. Only a few small changes.)_

*Group A*
I enjoyed listening to these. They feel enough like music to listen along _with, _as music. Once the unblinding happens, I will definitely be checking out the libraries behind these renditions.

22 Musical. And the samples just about handled all the passages, inc. D etc. Some of the slow transitions were too portamento-y for me. Might be programming.
29 Musical performance. I didn't dig G (too much attack and not enough blur on those runs). I couldn't decide whether I liked the tone or not. I suspect I could get it to something I did dig with EQ and some 'verb.
30 Musical also. As 29, didn't like G. Preferred the tone, even though I think it's the same library as 29. C was more convincing in this example.
31 The p to mf in B was almost non-existent to my ears. Nearly an A.
32 The transitions seemed to be struggling against keeping time in A a little. Nearly an A.
44 Musical, but the sound itself didn't really do it for me.

71 I enjoyed listening to this. Though I didn't like the performance of passage C.
72 I really liked the performance. Not so much the tone.
*Group B*
These were just about passable, but definitely nothing to write home about. Some came close to Group A. Others were quite close to slipping into Group C.

3 Quite liked passage C here especially. D sounds odd. Not a fan of the tone. Otherwise might have been an A.

16 Vib-tastic, and not in a way I found pleasant. A bit stilted. But no passage was howlingly bad.
20 A bit of musicality (e.g., C isn't how I'd want to hear it personally, but it is still musical). Some weak transitions. Weak runs. Nearly dropped to a C grade.
21 I quite liked this, but nearly put it in C on account of some "flickering" across the stereo field.
26 Musical, but the tone was unconvincing to me.
33 Didn't like passage G or passage F, otherwise nice.
34 Same comment as 33. Think it might be the same library.

43 Musical, and somewhat convincing in terms of sound. But the runs at the end… my ears!
57 I liked this, though it's not completely convincing to me. Borderline A.
69 Just about passable.
73 I think this is 72 (grade A) with reverb or a room mic? The change in tone wasn't to my liking(!).
74 Passage D let it down for me
75 74 with reverb/room mic?
*Group C*
These were disappointing. If I bought a library and it sounded like this, I'd be disappointed (or at least frustrated at how much more I'd have to do to make it sound better).

2 Stilted
4 Stilted throughout, even in A.
6 In places, the legato is nice, but overall it's still quite stilted. Repetitions in D were painful. Runs in G were synthy.
8 Repeated notes in C and F really stand out. Not musically convincing to me.
10 It was going well until D.
13 The tone is horrible. The transitions are not great either.
15 A was okay. C and D though…
17 Passage D particularly bad.
18 Didn't like the transitions in slower passages; passages D and G were awful. Disliked the tone as well.
23 Sounds off (e.g., passage D).
24 Transitions, even in A, felt odd to me. Never mind in G.
25 Did not like the tone, or some of the transitions.
27 meh
28 The faster passages are more convincing. Way too much sliding about; it gets annoying very quickly.
35 Passage C…
36 Not convincing to me. Some slower passages okay (D was surprisingly good given the others).
37 Not digging the sound here. For example: passage C.
38 Marginally better than 35-37.
39 As 38.
40 Musical in places, but the tone… _Violins: "Fallout" edition_.
41 Musical as 40, but I don't like the tone here either.
42 Just not convinced by this.
45 Synthy, especially, e.g., G.
46 Unconvincing. Struggled with timings.
47 meh
48 Weird attack envelope to my ears (e.g., passage A).
49 even from passage A, I didn't gel with this

50 unconvincing, didn't like the tone
51 synthy, transitions seemed off
52 synthy to me
53 synthy to me
54 synthy to me, seemed same as 51-53
55 I quite liked this, but nothing I liked was quite pulled off to completion, so to speak.
56 same as 55?
58 Was this 57 from a different mic position? In any case, didn't like the tone here, cf. 57.
59 meh
61 meh, really didn't like the tone
63 Was a B, then I heard passage G.
64 As with 63, passage G was when it all fell apart.
67 The vibrato is… heartfelt…
68 meh
70 Nearly a B. Didn't like the tone, or passage G.
77 "uneven", passage G was a bit awful
78 meh
*Group D*
This is really a sub-category of Group C: these just had something extra special to them. (My tongue is in my cheek here!)

1 Oh dear: example D and F in particular.
5 This is what happens if you select all in Sibelius and then press the Marcato symbol.
7 Select all + Marcato again…
9 The legato is smudge-tastic. Passages D, F and G are particularly bad too.
11 :(
12 Really did not like the transitions
14 Volume jumps and other artefacts. Not a fan of the tone.
19 Noise much? Sounds like my old CD player stuffed in a shoebox.
60 Did they put WD40 on the fingerboards?
62 Made me flashback to Kontakt Silver for Sibelius Student, circa 2003.
65 Someone dropped the mic into a reverb stuck on a cathedral setting.
66 As 65.
76 Entirely subjective, but I really did not like the tone.

Thanks to @Garry for making this happen, and everyone else who took part in one way or another. Can't wait for the reveal, to be honest! 

(Oh, and thanks @NoamL for the inspiration to use a group/tier marking system. And thanks to @markleake for the volume-levelled version [I've listened to both the WAVs and the levelled MP3s in finalising my list]).


----------



## NoamL (Jun 16, 2018)

ModalRealist said:


> There's nothing weird or unusual in @Saxer's passages: to the contrary, it's a great selection of bread-and-butter string writing. It's astonishing just how few of the recordings successfully pull off a convincing rendition of all the passages.



Maybe I'm stating the obvious here but each of @Saxer 's excerpts is testing for a different problem area in sampling:

A) Basic legato example. Looking for realism of legato transition samples & appropriate variety in their length.

B) Bow change legato. Testing for realistic intermixture of "bow start" and legato samples.

C) Can the library realistically mix legato and short articulations.

D) Repeating the same legato transitions. How much can the library avoid the artificial sounding "seesaw" effect?

E) Testing for note retriggering in the middle of a legato phrase.

F) Fast arpeggios

G) Fast runs

It's carefully designed to be an obstacle course of things that aren't challenging at all for real players but that do add to the technical specs of virtual instruments. For example to pass D), the library must either have round robin legato transitions or the legato transitions must have been carefully planned and selected from recordings so that there is no "telltale sound" artifact that reminds the listener they're hearing the same transition over and over.
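The round-robin requirement in the last paragraph can be pictured with a toy sketch (purely illustrative; `TransitionPool` is an invented name, not any real library's engine): keep several recorded takes of each transition and never serve the same take twice in a row.

```python
import random

class TransitionPool:
    """Toy round-robin picker for recorded legato-transition takes.

    Avoiding an immediate repeat hides the 'telltale sound' that
    gives away a single reused transition in a passage like D.
    """
    def __init__(self, takes):
        if not takes:
            raise ValueError("need at least one recorded take")
        self.takes = list(takes)
        self.last = None

    def next_take(self):
        # With only one take there is nothing to rotate, so repeats happen.
        candidates = [t for t in self.takes if t != self.last] or self.takes
        self.last = random.choice(candidates)
        return self.last
```

With two or more takes, no two consecutive notes in a seesaw figure reuse the same transition sample, which is the effect NoamL describes.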


----------



## ModalRealist (Jun 16, 2018)

NoamL said:


> Maybe I'm stating the obvious here but each of @Saxer 's excerpts is testing for a different problem area in sampling:



Absolutely. I wasn't trying to imply that sampling all these different techniques is easy for developers, or anything of the sort. Only that all these passages are ones that one might realistically, _regularly_ reach for when writing music (at least, in my experience, such as it is). I felt this very much recently when I was lucky enough to write for live orchestra _from scratch_. The sense of liberation was immense!


----------



## M0rdechai (Jun 16, 2018)

Nice to see people are doing the same thing. Below is a copy-paste from my sheet.

Disclaimer: I'm a total newb at this, just listening to what I would like to hear; hardly any experience in using a library.

Not much love for 51-54 from earlier posters; I liked the sound.
So far, 29-34 seem to score well.


4 A
7 A Nice sound, good fast Strong Attack (con?) very niche use (not for long/slow)
9 A nice present sound sometimes synthy (longs)
18 A not so good at fast
25 A
28 A
29 A
30 A
31 A
32 A
33 A very nice
34 A
38 A strong vibrato not strong fast
39 A nice character, not accurate rhythmically
42 A nice sound, very dry very dry
50 A strong fast notes bad at slow
51 A good sound much legato used
52 A good sound
53 A good sound
54 A good sound
55 A
56 A
57 A well played
58 A
62 A very expressive
64 A good sound not good at fast
67 A expressive false at times
77 A good sound bit heavy on vibrato
78 A good sound
3 B bit in your face
5 B Ok sound clunky played
8 B Nice sound Badly played
10 B
12 B out of sync
13 B rough rough
14 B clunky
16 B
20 B
23 B weird fast notes
24 B
26 B
27 B
36 B
37 B
40 B ok sound bit unrefined
41 B ok sound bit unrefined
43 B mechanical
45 B nice sound clunky played
46 B nice sound clunky played, distance
47 B lots of emotion strong attack, gypsy
48 B not played subtly (attack?)
49 B some parts more wet
59 B in your face, bit clunky
63 B ok sound bit clunky (wet/dry changes)
65 B ok sound, waaay too wet
66 B ok sound lots of mid
69 B ok sound
70 B ok sound
71 B ok sound
72 B ok sound
73 B ok sound
74 B ok sound
75 B ok sound
1 C not smooth
2 C huge difference in samples
6 C you hear single players
11 C
15 C clunky
17 C clunky
19 C lots of mid, choirlike sound
21 C clunky
22 C clunky, very wet
35 C
44 C synthy, clunky
60 C badly programmed
61 C in your face, clunky
68 C ok sound, clunky (you hear the legato)
76 C punchy attack (good fast) harsh and rough


----------



## robgb (Jun 16, 2018)

I was generally underwhelmed by all of them (including the one I submitted). I think a lot of it comes down to a complete lack of expression/dynamics in most of the samples. I know I didn't add any at all when I did my sample, and am sure that the library can be VERY expressive when played right. But going on just general tone and playability, I wasn't impressed by any of them and was downright surprised by how many sounded just horrible to my ears.

I'd say that what this has taught me is that no library is perfect and execution/mixing is 99% of the work.


----------



## Solara_Audio (Jun 16, 2018)

brek said:


> Side note, this is why I really like Resonic Player for listening.


Thank you, this is basically the music player I have been waiting for without even knowing I was waiting for it. Good stuff!


----------



## Saxer (Jun 16, 2018)

I haven't had the opportunity to listen to the examples yet (I'm out for some live gigs). Really looking forward to it! Thanks to all who recorded examples, and thanks to Garry for organizing all of this! I didn't expect this much output. Impressive community!


----------



## NoamL (Jun 16, 2018)

Looking at the 3 sets of grades so far, the maximum possible marks (unanimous approval) were given to 29 & 30, with runners-up 31, 32, 57, 71, and then in third place 20, 26, 33, 34, 43, 67, 72, 74.

Minimum or very low marks were given to almost 30 of the 78 entries.

Keep the grades coming! I have a spreadsheet with everyone's answers.


----------



## brek (Jun 16, 2018)

This is daunting... but super useful!

After browsing through a bit it became apparent that there was a fair amount of disparity within each library in how they handled the various obstacles. Some are agile but lack nuance, others may drip with beauty on the slow stuff but stumble over themselves when "virtuosity" is called for. A surprising number of them are just not very good (or suffered at the hands of a poor midi performance). 

So rather than rate a library collectively, I started giving grades based on the individual phrases. So far, I've mostly focused on Phrase F, because nothing jumped out as having nailed it. I'm also skipping over solo violins, because that really feels like apples and oranges to me. Even then I couldn't get through all of it. Like I said, "daunting".

There's also definitely an issue of ear fatigue when grading these.

https://docs.google.com/spreadsheet...v2ikpRtUoAIQvEIl9FU/pubhtml?gid=0&single=true

I think it will be very interesting to note when the libraries are revealed how much the midi performance or mix impacts the overall sound. As I mentioned earlier, I'm reasonably sure 19-22 are all the same library but the end result is drastically different.


----------



## markleake (Jun 16, 2018)

For those keen to know which libraries are the same, you can load the files into your DAW and work it out. The waveform rendering of the audio files looks the same, and they react the same way on the dB volume meter in the mixer window.

Oddly enough, having all 78 tracks playing at once in my DAW doesn't sound all that bad. And it gives a good overall view of the differences in interpretation between the different files.

Of course you may not want to do this to remain neutral in your assessment. (It's too late for me unfortunately, I have a fair idea which ones are which now, after having leveled the volumes out.)
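The same duplicate-spotting idea can also be done outside a DAW. A hypothetical stdlib-only Python sketch (`envelope_fingerprint` is my own invention, not markleake's method): reduce each WAV to a coarse RMS envelope, so files carrying the same render produce matching tuples.

```python
import struct
import wave

def envelope_fingerprint(path, windows=16):
    """Coarse RMS envelope of a 16-bit PCM WAV, rounded so that
    identical (or near-identical) renders produce equal tuples."""
    with wave.open(path, "rb") as wf:
        raw = wf.readframes(wf.getnframes())
    samples = struct.unpack("<%dh" % (len(raw) // 2), raw)
    step = max(1, len(samples) // windows)
    fingerprint = []
    for i in range(windows):
        chunk = samples[i * step:(i + 1) * step]
        if not chunk:
            break
        rms = (sum(s * s for s in chunk) / len(chunk)) ** 0.5
        fingerprint.append(round(rms / 32768.0, 3))
    return tuple(fingerprint)
```

Grouping the 78 WAVs by fingerprint would flag tracks that are literally the same audio; different mic mixes of the same performance would still fingerprint differently, as brek suspects for 21 and 22.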


----------



## Casiquire (Jun 16, 2018)

Thanks so much to NoamL for introducing the rating system and then organizing the results! This might not really be a contest anymore but we're getting so much useful info. I will try to listen in the next night or two and we'll see if I agree with the others.


----------



## Garry (Jun 17, 2018)

A few people have asked me privately for a sneak peek at the unblinding, having already completed their own reviews, on the condition of not revealing it to everyone else. If you would like to receive this early, please just send me a PM (I will send shortly to those who have already requested).

For everyone else who requested to have more time to review, the entries will be unblinded on Saturday.


----------



## Vik (Jun 17, 2018)

I'm sure many will discover this thread after the library names have been published. So, if latecomers to this want to try a blind test, maybe it's a good idea to upload the names as a downloadable PDF instead of just listing the names in a post here?


----------



## Garry (Jun 17, 2018)

Vik said:


> I'm sure many will discover this thread after the library names have been published. So, if latecomers to this want to try a blind test, maybe it's a good idea to upload the names as a downloadable PDF instead of just listing the names in a post here?


Yes, I plan to do this Vik.


----------



## Garry (Jun 17, 2018)

Ok, I've just sent out the unblinding information to those who requested it (if there are others, just send a PM). Please *be sure not to reveal the info*, but I'm interested to know your thoughts in a general sense, now that you have the information. Were you surprised by the libraries you rated high/low - is it what you would have expected? How did the more prominent libraries compare to lesser-known or less expensive ones?

Have fun!


----------



## Alex Fraser (Jun 17, 2018)

Hey Garry. Thanks for the PM. To clear things up for me (if you don't mind) - how much editing was done to these files? Did the composer(s) use the original midi file unedited, or was it replayed/re-programmed for the individual libraries? A


----------



## Alex Fraser (Jun 17, 2018)

robgb said:


> I was generally underwhelmed by all of them (including the one I submitted). I think a lot of it comes down to a complete lack of expression/dynamics in most of the samples. I know I didn't add any at all when I did my sample, and am sure that the library can be VERY expressive when played right. But going on just general tone and playability, I wasn't impressed by any of them and was downright surprised by how many sounded just horrible to my ears.
> 
> I'd say that what this has taught me is that no library is perfect and execution/mixing is 99% of the work.


I'm thinking along the same lines, having seen the results list and had a second listen.


----------



## Vik (Jun 17, 2018)

Re. your question, Alex: IMO one limitation which is important to be aware of is that several of the contributions aren't making these libraries sound as good as they can. Not blaming anyone, of course, but there are quite a few places where small fixes/edits are absolutely needed.


----------



## teclark7 (Jun 17, 2018)

Here's my go at classifying them as NoamL did, but I have a special E grade reserved for the particularly bad.

In my notes below:

slow transitions = phrases A-E 
repetitions = phrase D
arps = phrase F
runs = phrase G

No.  Grade  Comment
==========================================
1    D   Arps bad, runs synthy
2    D   Stilted, volume all over the place
3    B   Arps and runs sound synthy
4    C   Stilted, runs bad
5    E   Too choppy, not much redeeming
6    D   Too-staccato transitions, runs not good
7    D   Really stilted, arps OK, runs blurred
8    D   Repetitions not good, arps struggled, runs blurred
9    D   Blur everywhere, runs unrealistic
10   C   Bit staccato transitions, arps blurred, runs a mess
11   C   Too staccato, fasts synthy
12   E   Blurry transitions, arps and fasts all a mess
13   C   Scratchy tone, squeaky high notes, arps messy
14   C   Jumpy sound, too verby
15   C   Good on slow, repetitions bad, runs blurred
16   B   Good but too much reverb
17   C   Same as 10? Repetitions not good, arps and runs not right
18   C   Same as 17, bow noise, arps and runs not right
19   D   Too noisy, arps and runs too blurred
20   C   Bow noise, arps and runs too blurred and synthy
21   C   Solo, OK across the board, arps have some weird artifact
22   B   Too much reverb, blurred arps
23   B   Solo, dynamics a bit awry sometimes, runs too spiccato
24   C   Solo, arps a bit nasally sounding, runs sound synthy
25   B   Solo, arps and runs not so good in some transitions
26   A   Solo, bow noise, arps and runs musical
27   C   Tone a bit thin, arps and runs a bit synthy
28   B   Solo, a bit too much portamento, arps a bit blurred, runs nice and clear
29   A   Nice overall but arps and runs too blurred
30   A   Same as 29 but less reverb, slightly better than 29
31   A   Same as 29 but less dynamics, not as good as 30
32   A   Same as 29 but better reverb; struggled with runs
33   B   Nice slow transitions, struggled with arps and runs
34   B   Same as 33, more reverb
35   C   Solo, tone not so good / phasey, runs and arps sound weird
36   C   Solo, same as 35 but slightly better
37   C   Solo, same as 35, tone not so good
38   C   Solo, same as 35, but improved tone
39   B   Solo, same as 35, thin tone, more reverb better
40   C   Tone muffled, arps synthy, runs fail
41   C   Same as 40, more reverb, runs still no good
42   C   Arps and runs synthy
43   C   Good slow transitions, bad reverb, runs don't work
44   B   Tone not so nice, arps blurred, runs blurred
45   C   Arps and runs don't work - synthy
46   C   Transitions slightly off, blurred arps and runs
47   C   Solo, too stilted on slow transitions, arps better, runs OK
48   C   Too much noise, a bit stilted, runs too blurred
49   C   Too much reverb, timing off on runs
50   C   Stilted slow transitions, arps and runs OK
51   C   Same as 50, not as stilted, slow transitions blurred, runs blurred
52   C   Same as 50, slow transitions slightly better
53   C   Same as 50, more reverb
54   C   Same as 50, too much reverb
55   B   Good tone, good slow transitions, arps good, runs struggled
56   B   Same as 55, better for the lesser reverb
57   B   Same as 55, better with slight reverb
58   B   Same as 55, not as good as 57
59   C   A bit stilted, arps not good, runs synthy and cut off at the end
60   E   Portamento all over the place, arps and runs a joke
61   C   Bow noise, arps too blurred, runs too blurred
62   C   Solo, transitions not good, bad tone, bad repetitions
63   C   Slow transitions good, nice tone, runs not working
64   C   Same as 63, longer reverb, not as good as 63
65   D   Bow noise, way too much reverb
66   D   Same as 65, better reverb but not by much
67   B   Same as 23? Solo, rich tone, runs too spiccato
68   C   Slow transitions sloppy, arps synthy, runs synthy
69   B   Some transitions a bit awkward, nice tone, arps and runs good
70   B   Pretty good tone, arps just OK, runs not quite right
71   A   Good slow transitions, nice dynamics, good arps and runs
72   B   Thin tone, nice dynamics, arps and runs musical but noisy
73   B   Same as 72 but better with some reverb
74   B   Same as 72, better tone, still noisy
75   B   Same as 72 again but better reverb, still noisy
76   D   Tone not so good, repetitions not so good, runs synthy
77   C   Good on slow transitions, arps synthy, runs have weird artefacts
78   B   Good tone, good dynamics, a bit blurry on arps and runs


----------



## Garry (Jun 17, 2018)

Alex Fraser said:


> Hey Garry. Thanks for the PM. To clear things up for me (if you don't mind) - how much editing was done to these files? Did the composer(s) use the original midi file unedited, or was it replayed/re-programmed for the individual libraries? A


All entrants were free to make any/all adjustments they saw fit within the library, or to the midi file, to make it sound as good as they could. If they used anything outside the library, they were free to do so, but just had to state clearly what was used, and include 2 submissions: with and without external effects.

I agree, there are certainly some surprises in there, and there are clearly improvements that can be made to some of the submissions. However, I think that is an important, but slightly different question. If, after having completed the blind test, we now ask what the very best is that each library can produce, then we can address that question with updated entries. However, it's not necessarily what _this_ blind test was there to address.

The ideal test would have been one single, expert user producing the phrases with all of the libraries. But since that's a high burden on 1 individual, and since no one offered themselves for that job, the next best is what we went with: self-selected contributions. There are clearly disadvantages to this: on listening, some may feel a particular library wasn't well represented by that user's contribution - that's a clear limitation of our pragmatic approach. However, not only can we now address this with improved (albeit unblinded) submissions, but it gives interesting information in itself: what can the *average, unselected* user do with this library?

One of the problems with listening to demos from developers is that we can't know just how much went into producing them, or whether the average user will be able to get the same sound out of that library as the expert commissioned by the company. I have no idea of the typical skill level or range of skill levels of the contributors who submitted entries (and with respect to our contributors, I have no doubt we have some excellent ones), but warts and all, I feel this gives more of an idea of the average user's capabilities with these libraries than the expert demos would.

No single test can answer all questions of course, but I think we certainly got some useful information from this; if afterwards we have other questions, we can design different tests to answer those.


----------



## Garry (Jun 17, 2018)

The other option is to have the developers contribute the entries themselves. I think if we could push the ball even slightly in that direction, it would be a huge advantage. Imagine if, for all new libraries, we had Saxer's 7 phrases produced by the developer. In the same way as we benchmark computers' specs, this would really allow us to evaluate developers' claims of 'revolutionary' or 'paradigm shift', because we could listen to exactly what the new library offers that distinguishes it from what's already available. Perhaps the new library differs on many other things too that would also influence your purchase (ease of workflow, which engine, RAM usage and many other factors), but the tonality is one thing (and perhaps the most important?) that we currently find very difficult to compare pre-sale, just by listening to companies' own demos. 

We did have 1 developer contribute (since we decided to keep the contributions anonymous, I'm not going to name them), but I would love to see this involve more developers in future, ensuring that the very best their library has to offer is demonstrated, on a single agreed benchmark.


----------



## M0rdechai (Jun 17, 2018)

Personally, I was relieved seeing the results, as the libraries I was drawn to by hearing demos / comparing via YouTube etc. were the libraries I rated higher in the blinds as well.

Very nice project, this, which will surely help me in making a choice of which libraries to buy. I hope in the future developers will post the 'Garry-test' in their list of demos as well 
(and I would vote for doing this for more instruments, like full ensemble strings/brass/woodwinds)


----------



## Garry (Jun 17, 2018)

M0rdechai said:


> Personally, I was relieved seeing the results, as the libraries I was drawn to by hearing demos / comparing via YouTube etc. were the libraries I rated higher in the blinds as well.
> 
> Very nice project, this, which will surely help me in making a choice of which libraries to buy. I hope in the future developers will post the 'Garry-test' in their list of demos as well
> (and I would vote for doing this for more instruments, like full ensemble strings/brass/woodwinds)


Thanks for this M0rdechai, but I think it should be referred to as *'Saxer's Seven'*! It's a very nice collection that really puts a library to the test across a broad range. Next time a library is released, let's collectively demand, 'but how does it perform on Saxer's Seven?'


----------



## Vik (Jun 17, 2018)

Removed.


----------



## NoamL (Jun 17, 2018)

Here's the new top fifteen scores adding together my, @ModalRealist , @Vik , @M0rdechai , and @teclark7 's grades.

15 points: #29, #30
14 points: #31, #32, #71
13 points: #57
12 points: #33, #34, #72, #74
11 points: #26, #58, #70, #73, #75

and the bottom fifteen, if you're interested... in no particular order:

#1, #2, #5, #7, #9, #11, #12, #19, #35, #60, #61, #65, #66, #68, #76
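For anyone who wants to reproduce this kind of tally from their own grade lists, here is a minimal sketch. The exact point mapping NoamL used isn't stated in the thread; the mapping below (A=3, B=2, C=1, D/E=0) is purely an assumption that happens to make a unanimous-A entry from five graders worth 15 points, matching the top of the list above.

```python
# Hypothetical tally of letter grades from several graders into point totals.
# The A=3/B=2/C=1/D=0/E=0 mapping is an assumption, not NoamL's published method.
POINTS = {"A": 3, "B": 2, "C": 1, "D": 0, "E": 0}

def tally(grade_lists):
    """grade_lists: dict of grader name -> dict of entry number -> letter grade."""
    totals = {}
    for grades in grade_lists.values():
        for entry, letter in grades.items():
            totals[entry] = totals.get(entry, 0) + POINTS[letter]
    # Sort entries by total points, highest first
    return sorted(totals.items(), key=lambda kv: -kv[1])

# Made-up grades from three graders for two entries:
example = {
    "grader1": {29: "A", 60: "E"},
    "grader2": {29: "A", 60: "D"},
    "grader3": {29: "B", 60: "C"},
}
print(tally(example))  # [(29, 8), (60, 1)]
```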


----------



## NoamL (Jun 17, 2018)

Like some other users, I have noticed that the top scores appear in *runs*: 29-34, 70-75, and 57-58 are all in the top 15. The only other entry that made it into the top is #26, which seems (to my ears anyway) to be a one-library entry, i.e. it's not paired with anything else.

PS - I've received the results from Garry but haven't looked at them before making this latest score total, to assiduously avoid being biased about anything  I'm going to check them out now.. for the next week I won't make any posts here except to report the latest totals without any commentary. Submit your own grades and maybe Garry will give you the results early too!


----------



## Garry (Jun 17, 2018)

Fascinating - a couple of observations, without revealing the blinding:

- one library is doing well, but interestingly NOT based on the submission that library's own developer made!!
- one library is in both the top and the bottom 15


----------



## Garry (Jun 17, 2018)

NoamL said:


> Submit your own grades and maybe Garry will give you the results early too!



Noam, you raise a good point here. Up to now, I've released the unblinding to anyone who asked; I think it would be fun if, from now on, early release is limited to those who publish their gradings to help Noam's rankings. 

Also Noam, I really like the way of releasing the bottom scores: lowest 15 in no particular order is a great way of avoiding the negative connotations that I was hoping to avoid by abandoning the voting; your way works really well for this.


----------



## erikradbo (Jun 17, 2018)

Avoiding reading the other posts, here are my thoughts after focusing on the first three passages, after listening for approx 1 hour. My ears are so confused! Again, big thanks to everyone, and I hope that no one gets hurt by the opinions; it's all very subjective.

The ones I'd see fit to use in a production:
23, 27, 29-32 (prob same library), 33-34 (prob same), 52, 55 (except some volume spikes), 57-58 (same?), 77

The disasters:
1 (the pumping), 5 (the attack and vibrato), 7 (the attack), 11 (very flat), 18 (the sound and lacking legato), 40-41 (sound and legato), 43 (sound and legato), 50 (legato not working), 59-61 (what’s going on here?), 65 (the tone, too much room), 68 (nothing works here), 70 (no legato or dynamics whatsoever), 76 (totally flat),

And the rest end up in between I guess.


----------



## Vik (Jun 18, 2018)

Great work from Noam, Garry and many others here - thanks!

I have some suggestions for the next time a similar thing is being done.
I know the results now. I won't share them of course, but it shouldn't be a secret that some of the entries we are listening to are two layered presets (V1 and V2), or a V1 with an added solo violin, and even a combined V1 and V2 with two different solo violins.
I haven't contributed any files, so I should maybe keep my mouth shut.  But since I falsely assumed that we wouldn't see layered contributions here, I wouldn't have thought of using that method if I had sent something. This isn't a critique of Garry or anyone else, of course! But next time, it's IMO better to either

state that it's OK to post layered entries like that - or say that it isn't allowed. My main method to make fake strings sound good, btw, is to use layers. V1s+V2s can be good etc. Adding solo violins or first chairs also, of course. I wrote something about layers here: https://vi-control.net/community/threads/your-favourite-sounds-patches-or-layers.70265/

I'm not saying that the library that was represented several times here, in various combinations/layers, doesn't sound good alone. It does! But in all fairness, for the future, it would be better IMO if the comparison premises "officially" would or would not be allowing layering.

Regarding the "voting" (IMO this can't possibly be seen as a contest, btw, due to the method): if one lib is represented a lot of times, and another one only once, the lib with many different entries will of course get more votes. So it could IMO be better with a main presentation where all the libs are presented once, and then a link to another file/files with multiple versions of the same lib. This would have its pros and cons of course, but one useful one would be this: if I'm not sure if I like lib #91, and there's a link to 9 different versions of #91, I could go and check how these sound - knowing that they all are made using the same lib - before I vote for lib #91. 

Re the initial version of this comparison: as it is now, a lib which is generally seen as a very good lib is represented only once, but with an ending which IMO is difficult to recreate... it sounds quite weird. I tried several times to recreate that weird sound using the same library, but couldn't. No big deal in a way; it's not an actual competition etc. But some people may use all this to compare libraries. So while I agree that the initial version of the entries isn't so important, it's even less important than I thought it would be, due to what I described above.



NoamL said:


> Here's the new top fifteen scores adding together my, @ModalRealist , @Vik , @M0rdechai , and @teclark7 's grades.
> 
> 15 points: #29, #30
> 14 points: #31, #32, #71
> ...



Thanks for sharing this. I voted for two "rows", 29-34 and 70-75; these seemed clearly to all be based on one single library, so now I have given two libraries 5 votes each. An idea for a future comparison: add a poll where the votes can't be seen until one has voted - and don't post anything about which libraries are in the lead until some weeks later. If some libs took the "lead" early, others would pay extra attention to those libs... it would increase the chance that others voted for them. Not that it matters much in this context, because we are comparing apples and oranges here (layered vs non-layered, random not-so-good versions of good libraries alongside proper versions of others etc). Nevertheless, some will use the results, after the list has gone public, to assume that X is much better than Y. That's kind of unfair - no, that IS unfair, knowing about the various limitations and premises this comparison has.

Personally, btw, I didn't get any big surprises from knowing the results. I thought at some point that "maybe this is library X", but I have that library, and knew that it couldn't sound like that (and was wrong, because I didn't even consider that some of the entries had up to four layers). So I still like the same libs I liked before, and got curious about a few others.
Too bad that some libraries come out of this in a very "unfair" way, but in spite of the limitations and unclear premises (again: no critique of anyone), it was useful to hear all this stuff blinded. Thanks again.


----------



## Garry (Jun 18, 2018)

I don't mean my reply to sound defensive: if it does, please put that down to either me not expressing myself well enough, or the nuance of intention not coming across in this format, but either way, just know that it's not defensive, but just a reply to your comments, based on my own thoughts (they're neither right, nor wrong, just mine!).

On the layering question: the intention of the competition was to compare the libraries as they can be used. That is, we were specifically NOT trying to equate the libraries (which leads to other objections, as you'll know, having done it this way in the past), but to allow users the freedom to get the best out of the library (or combine it with other 3rd party plugins - just be transparent about this, and submit with/without). There are disadvantages to all methods, including this, but it was explicitly stated in the rules that contributors were encouraged to try to show the library in its best form. When I'm looking to make a purchase, I want to know what Library X is capable of, not what Library X is capable of if I restrict it to the constraints of Library Y. So if layering makes it sound better, all contributors were permitted to do so. If in the future you were to run a different competition with this restriction, that would also be perfectly fine, but it would answer a different question, and have its own problems.

On the voting: yes, the original intention was that votes would be submitted not on the forum, but sent as a PM (and not to me, who was unblinded, but to ka00, who was to remain blinded). This way, people's views were not exposed to interference from seeing other people's comments. I have no doubt that the voting in this case was corrupted in this sense. However, the reason for this was that, having abandoned the voting, I made no effort to control it; indeed, it's difficult to see how this could have been practically achieved: people, either through not being aware, or through insistence for their own reasons, were always likely to have posted their comments/votes. The value is in each person now having a database to use to compare across libraries, and to reach their own conclusions based on what they feel is important, rather than on what the group voting says is important. People's enthusiasm, participation and thoughtful replies seem to indicate this was successful.

On the number of libraries: I don't agree that just because a library has more entries representing it, it will automatically be voted higher. If the library is inferior, there will be more votes against it; if the library is very dependent on the user and requires a greater degree of skill and experience to get the best from it, then the votes are likely to be variable. You recommended having only 1 entry per library, but this would require a filter, and I wasn't prepared to have my view be that filter (as was proposed earlier in setting this up): this would have been inappropriate, and I have absolutely no doubt people would have (quite rightly) complained at the subjectivity of it. We already had 1 person explicitly say they would not participate because these were "Garry's rules" (despite having incorporated feedback and amended the rules the whole way through), so imagine if I'd said that people's entries would only be seen if I decided they should!! The only other way would have been to have a 'pre-competition' to see which user would represent which library - which brings its own problems (more time, more effort, more organisation, and it pre-exposes people to the samples). There isn't a perfect approach, and all have their flaws; I would only say that if people feel strongly about a specific limitation, they should organise a follow-up to address it: this was always going to be iterative and incremental. Again, I hope this doesn't sound defensive - I can only defend against that by reiterating that it isn't intended to be, and say thanks for your comments.

So, thanks for your comments!


----------



## Steve Martin (Jun 18, 2018)

hi Garry,

when the results are out, is it possible to know the ones that were layered, and what solo string library they were layered with?

thanks Garry!


----------



## Vik (Jun 18, 2018)

Garry said:


> There are disadvantages to all methods, including this, but it was explicitly stated in the rules that contributors were encouraged to try to show the library in its best form.


My point is mainly that it wasn't mentioned that one could add layers that need to be purchased separately in order to make it sound as good as possible. But no disagreement or critique; I just think things need to be as clear as possible, as early as possible. If I, for instance, were to make a Berlin Strings submission, layering it with their First Chairs or the Nocturne solo violin, that should IMO be mentioned explicitly.
Re voting, I'm not worried - I don't see this as a contest anyway, and it all depends on how the results are summed/presented.


----------



## One Dove (Jun 18, 2018)

As others have stated before me, I agree that this test has become more of a comparison between the users' abilities than of sound quality. The test is still interesting - it does reveal a thing or two about user-friendliness, as well as our expectations of developers, and to what degree we composers need to take the time to learn the tools we invest in. As I am in the middle of my master's degree final project, I do not have the time to give any detailed review of the various libs, but since I am a proud owner of LASS, I am going to wager a guess that track number 1 is a rendition of that lib (there are some tuning instabilities that sound familiar to my ear). If I am correct, there's a lot in that track that is not quite representative of what is possible to get out of LASS with more tweaking (it is not my intention to bash anyone's efforts here, just voicing an opinion based on my experience with LASS). On the other hand, I could be completely wrong about the identity of lib 1. I am still immensely grateful for all the time and labour everyone has put into this, and look forward to the unblinding.


----------



## eli0s (Jun 18, 2018)

I think most of the libraries used here can sound good under some context.

What is interesting to me is how a library's strengths (or weaknesses) can shape my writing. As the tools get more and more realistic-sounding with certain workflows, I find that I restrain myself from writing stuff that a library cannot reproduce well. In the past, everything sounded like midi and this mental handicap wasn't there; I was more liberated in my compositions, because they were just meant to be placeholders, just in case I ever had the chance of a live performance. Nowadays, the midi mockup is the final product. So *I must* make it sound as good as I can!

Saxer's example phrases are a good stress test for my mockup skills. To be honest, I couldn't get the last 2 phrases to work. So I was very curious to see how other users tackled the same problem with other libraries or, even better, the same library. I can learn from this.


----------



## Gingerbread (Jun 18, 2018)

I agree that any "comparison shootout" like this will always have pros and cons in its approach; it's unavoidable. Perhaps one way to solve the problem that libraries with multiple submissions are going to inherently receive more votes would be to group them when presenting the contenders. For instance:

Blind Library 1 (and list all its entries)
Blind Library 2 (and list all its entries)
etc.
etc.

There could be a note next to each submission noting what layering and/or processing had been used.

Then people would grade each library just once, based on the totality of all its submissions, on an A-through-F school grading system. I'm sure this approach has its own pros and cons (i.e. it's not quite as completely "blind" as this was), but it might be helpful for future blind tests.

Or alternatively, each library's final grade can simply be averaged from all its submissions.
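That averaging alternative is easy to mechanise. A minimal sketch, assuming a simple A=4 through E/F=0 numeric scale (my choice for illustration; the thread specifies no particular scale, and the data below is invented):

```python
# Average each library's grade across all of its submissions.
# The letter<->number scale and the example data are illustrative assumptions.
SCALE = {"A": 4, "B": 3, "C": 2, "D": 1, "E": 0, "F": 0}
LETTER = {4: "A", 3: "B", 2: "C", 1: "D", 0: "E"}

def library_average(submission_grades):
    """submission_grades: dict of library name -> list of letter grades
    (one per submission). Returns library -> (mean score, nearest letter)."""
    result = {}
    for lib, letters in submission_grades.items():
        mean = sum(SCALE[l] for l in letters) / len(letters)
        result[lib] = (round(mean, 2), LETTER[round(mean)])
    return result

# Made-up example: one library with three submissions, one with a single entry.
print(library_average({"LibX": ["A", "B", "B"], "LibY": ["C"]}))
```

One design question this surfaces: a straight mean lets one weak submission drag down a library with many entries, which is exactly the per-recording vs per-library trade-off discussed above.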

By the way, this has been a wonderful test (thank you Garry!), and hopefully other such blind tests will be done too.


----------



## Casiquire (Jun 18, 2018)

Libraries with more entries only get more votes if you're combining all the votes. Not sure why you'd do that! No, it should always be on a per-recording level. Simple fix.


----------



## Garry (Jun 18, 2018)

**** UNBLINDED FROM HERE****

*SPOILER ALERT!!*

Ok, so since there are now LOTS of PM requests for the unblinding, I'm going to go ahead and release the unblinding today, as originally planned, rather than wait until Saturday. If anyone wants more time to review, please just don't look at the attached file, or read posts from here onwards, in which I assume people will be discussing the unblinded results.

Hope you found this fun, useful and enjoyable. If there are substantive comments about how it can be improved next time around, hopefully this exercise will have, if nothing else, at least served to motivate a 2nd round, with any learning from this incorporated.

I personally found it incredibly helpful, and appreciated the efforts of the community to pull it together, and test whether some of the claims made of these libraries stood up to blind testing. Interested to hear what others made of it, and see who wants to take up the reins for version 2.0: bigger and better than ever, a ground-breaking, paradigm shift in blind testing that will revolutionise your workflow, and give you that ultra-realism you've been waiting for!! 

Thanks all.


----------



## AoiichiNiiSan (Jun 18, 2018)

NoamL said:


> Here's the new top fifteen scores adding together my, @ModalRealist , @Vik , @M0rdechai , and @teclark7 's grades.
> 
> 15 points: #29, #30
> 14 points: #31, #32, #71
> ...



Amazing - referencing against the unblinding chart, CSS completely dominates. Some other observations: CH Ensemble Strings also makes a strong showing - surprising given its reputation for being complex to work with, but it shows the customisation it provides can achieve great results!

On the other end of the scale, the Spitfire libraries generally fare quite badly, mostly unable to make it into the higher gradings.


----------



## Alex Fraser (Jun 18, 2018)

Nice one Garry, thanks for investing the time.


AoiichiNiiSan said:


> Amazing - referencing against the unblinding chart, CSS completely dominates.


Yep, the legato for that library is still the bananas.


----------



## NoamL (Jun 18, 2018)

*DISCUSSING UNBLINDED RESULTS*

First, thanks to Garry and all thirteen composers for their hard work.

Since the library _*I*_ like the most received the highest possible marks from all the people who listened through all 78 examples, it would be easy to point at this competition as proof that "It's the best." 

However, I do agree with previous posts that the competition had two weaknesses. First, the libraries are completely at the mercy of the user to demo them well, and second, multiple entries didn't necessarily lead to unfair grading but it did give that library more chances to catch people's attention when they were skimming through all 78. I think in future competitions of this type, the organizer should choose just one example for each library and should exercise their judgement to pick the best option out of however many are submitted.

However, all I can really do is note those two issues in passing and move on.

I think there were 5 results of the competition that really surprised me or are worth remarking on:

1. *Cinematic Studio Strings* received consistently top grades from every reviewer. It's possible that publishing my own grades early on drew attention to those tracks, but I think everyone was just impressed by those recordings. What makes the library stand out is not just the quality of the individual samples but the effort & insight in the programming that connects them. Because of that, CSS can pass most of Saxer's obstacle course with no sweat, especially obstacles that challenged many other libraries like the repeated-note test, the runs test (marcato legato), and the long/short mixture test. There were few obstacles where other libraries consistently beat CSS. Perhaps on example D, the repeated transition test, there were a few libraries that sounded more natural. Compared to CSS, Cinematic Studio _Solo_ Strings received a _significantly_ less positive response and wasn't a standout among the solo-violin entries at all. However, CSSS+CSS still achieved top grades when blended together.

2. All graders exhibited a strong tendency to give low grades to the entries that ended up being not true-legato demos at all, but "sus patch" demos. Composer #1 for instance gave us several recordings of libraries that were never designed to be able to play Saxer's passages. This shows that true legato sampling does create an audible difference... not exactly a lightning bolt revelation but still interesting to prove empirically.

3. Most of the time, the *standard deviation* of the 5 grade lists was low. That means the reviewers generally agreed with each other. The tracks with the most significant disagreement were the *Virharmonic Bohemian Violin* and several of the tracks (from different composers, even!) demonstrating *Spitfire Chamber Strings*. For those two libraries, A's and C's were handed out in equal measure!

4. *Chris Hein's strings* received higher - and more consistent - grades than you would expect from the little discussion & hype on VI-Control. Across the six tracks he submitted, the five reviewers gave Chris's tracks just one C, seventeen B's and twelve A's. Overall, his library received the highest scores for non-solo strings if you take out CSS, with Hollywood Strings coming close behind.

5. *The dog that didn't bark* in this competition, is how many of the high priced, true-legato libraries from the 2010-2016 era received very middling or even low marks. It would be very unfair to single out any _individual_ library since the grades were so totally dependent on how capable the user was and how much time they took to create these demos. However, there were a lot of "flagship" string libraries that received solid B's or even low B's across the board.
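The agreement claim in point 3 (low standard deviation across the grade lists, with a few divisive tracks) can be checked mechanically once grades are put on a numeric scale. A sketch, with an assumed A=4 through E=0 mapping and invented grades, not the actual shootout data:

```python
import statistics

# Per-track spread of grades across reviewers. The grade scale and the
# sample data are assumptions for illustration only.
SCALE = {"A": 4, "B": 3, "C": 2, "D": 1, "E": 0}

def per_track_spread(track_grades):
    """track_grades: dict of track number -> list of letter grades
    (one per reviewer). Returns track -> population standard deviation."""
    return {
        track: statistics.pstdev(SCALE[g] for g in grades)
        for track, grades in track_grades.items()
    }

# A high-agreement track vs. a divisive one (A's and C's "in equal measure"):
spread = per_track_spread({29: ["A", "A", "A", "A", "B"],
                           67: ["A", "C", "A", "C", "A"]})
print(spread)
```

A low value means the reviewers clustered around the same grade; the divisive Virharmonic/Spitfire-style split shows up as a markedly larger spread.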


----------



## Saxer (Jun 18, 2018)

I had the same favorites as the majority. Interesting results!

My personal summary: in the end, it's all strings. No library really 'sounds' bad. Some are probably harder to control, or have too much attack or vibrato, or don't fit the situation... but most comments are about musicality, less about the tone itself, and even less about room or stereo placement. The similarities are bigger than the differences. I think it's much more possible to combine different string libraries than I thought before. And it's a bit like DAWs: the best one is the one you can handle.

Thanks to all who took part, and thanks for using my MIDI files. And special thanks to Garry!


----------



## Gingerbread (Jun 18, 2018)

Entry 26 is listed only as "Solo Vln 1." Anyone know what library/developer this was?


----------



## NoamL (Jun 18, 2018)

@Gingerbread entries 23-26 appear to be Chris Hein Solo Strings Extended


----------



## Garry (Jun 18, 2018)

Saxer said:


> Thanks all for who took part and thanks for using my midi files.


I'm still hoping '*Saxer's Seven*' will catch on! I'm envisaging a time when on release of each new library, there'll be a collective call from the community for the developer to release it with 'Saxer's Seven' as one of the demos! Then finally, we'll have a better way of deciding whether these new libs deserve our hard earned cash!


----------



## markleake (Jun 18, 2018)

While I'm not surprised that CSS performed well (there's a reason people love it here), I'm inclined to think it mostly just means CSS is great at legato playing (well duh!), which a lot of these lines are mostly testing, and I think by far the predominant factor people are listening for. The different types of legato in CSS (and how little tweaking you need to perform for a single legato line) is a key reason the library is so brilliant and usable. I don't think it means CSS is necessarily better than the other libs though overall, even though I'll admit I tend to use this library a fair bit in my own writing for this type of playing.

But even then, like Saxer states, I tend to double it with other kids.

Edit: Or "libs", if auto-correct can behave itself. Doubling with kids may be awkward... goats tend to eat everything.


----------



## Garry (Jun 18, 2018)

NoamL said:


> @Gingerbread entries 23-26 appear to be Chris Hein Solo Strings Extended


Correct.


----------



## Gingerbread (Jun 18, 2018)

NoamL said:


> @Gingerbread entries 23-26 appear to be Chris Hein Solo Strings Extended



Thanks NoamL. Personally, I was really impressed by the Hein Solo Violin (#26), and without this 'competition,' it wouldn't have been on my radar. But I thought it compared very well (or better) to some of the more recent hyped solo violins. Of course, that could definitely also reflect the user, as I suspect that is especially true of solo violins.


----------



## puremusic (Jun 18, 2018)

One of the things that caused me, in my personal notes, to lower the marks for 16 and 17 was the odd-sounding artifact at about 1:02. I can reproduce it in the MIDI since I have the library. Sounds like a cough to me. Is that what it is?


----------



## Vik (Jun 18, 2018)

NoamL said:


> Across the six tracks he submitted, the five reviewers gave Chris's tracks just one C, seventeen B's and twelve A's.


So.... how do you count this? I mentioned, for instance, that these five entries probably came from the same lib and that I liked that lib. Does that count as 5 As? Pardon my ignorance... I just need to know how our/these comments are interpreted.


----------



## Ran Zhou (Jun 18, 2018)

Just figured out my ear-favorite is #25, "Violin, realistic".
And I have to agree with you on at least one thing:
some composers did a "good" job and some did a "bad" job in terms of making it sound musical, though I do appreciate their work. This happened between entries such as SF-CS and Hollywood Strings. When I graded these, it was based on whether they sounded similar or close to the recordings on YouTube that I listen to daily. I consistently marked #23-26 as good ones.

Here's my grading, in case someone wants to collect the data for a more comprehensive statistical test.
Thanks again to Garry and for everyone's contributions! I notice that I need to spend more time learning how to make the tools work better, because even good libraries can sound wrong when they aren't working right! And my ears surely like reverb, but not too much.


----------



## Ran Zhou (Jun 18, 2018)

By the way, if anyone wants a more "proper" way to measure all of this, a linear model would definitely come in handy. We want to know whether the grade is associated with the composer, but that can only really be separated from the library effect when you have multiple entries from different people (usually >5) on the same library. I didn't carefully check whether this was the case.
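Ran Zhou's point can be sketched without a full regression: when several composers submit the same library, you can crudely control for the composer effect by subtracting each composer's own mean score before averaging per library. Everything below (composer ids, library names, point values) is made-up illustration, not the shootout data:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical records: (composer, library, points-from-grade).
entries = [
    ("c1", "LibX", 3), ("c1", "LibY", 1),
    ("c2", "LibX", 2), ("c2", "LibY", 1),
    ("c3", "LibX", 3), ("c3", "LibY", 2),
]

# Each composer's personal mean score (some graders/performers are
# systematically more generous or more skilled than others).
by_composer = defaultdict(list)
for composer, _, pts in entries:
    by_composer[composer].append(pts)
composer_mean = {c: mean(v) for c, v in by_composer.items()}

# Average the composer-adjusted residuals per library.
residuals = defaultdict(list)
for composer, lib, pts in entries:
    residuals[lib].append(pts - composer_mean[composer])
adjusted = {lib: mean(r) for lib, r in residuals.items()}

print(adjusted)  # positive = library scores above its composers' norms
```

This is only a rough stand-in for fitting grade ~ composer + library properly, but it shows why multiple entries per library from different people are needed: with a single entry, the composer and library effects cannot be told apart at all.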


----------



## NoamL (Jun 18, 2018)

Vik said:


> So.... how do you count this? I mentioned, for instance, that these five entries probably came from the same lib and that I liked that lib. Does that count as 5 As? Pardon my ignorance... I just need to know how our/these comments are interpreted.



For each of the 78 entries, I counted A's as 3 points, B as 2, C as 1, D as 0, E as -1. For your grades, you've removed your post, but IIRC you mentioned the entries you especially liked and disliked, I gave those 3 and 1 points, and 2 for all the entries you didn't mention.

To be clear, each of the 78 entries has its own line in the spreadsheet. You weren't adding "extra" points to CSS by saying you liked all 6 of its tracks.
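For anyone re-tallying the spreadsheet, the scheme above is mechanical to apply. A minimal sketch; the grade list in the example is hypothetical:

```python
# NoamL's stated point scheme: A=3, B=2, C=1, D=0, E=-1.
POINTS = {"A": 3, "B": 2, "C": 1, "D": 0, "E": -1}

def entry_score(grades):
    """Total points one entry receives across all reviewers.

    A reviewer who gave no explicit grade (recorded here as None, e.g.
    an entry not mentioned in a favourites/dislikes post) defaults to
    the middle grade, B -- the convention NoamL describes above.
    """
    return sum(POINTS[g] if g is not None else POINTS["B"] for g in grades)

# Five reviewers grading one hypothetical entry; one reviewer
# didn't mention it, so it defaults to a B (2 points):
print(entry_score(["A", "B", None, "A", "C"]))  # -> 11
```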


----------



## col (Jun 19, 2018)

Many thanks to all contributors - found the exercise useful and interesting !


----------



## Vik (Jun 19, 2018)

NoamL said:


> For each of the 78 entries, I counted A's as 3 points, B as 2, C as 1, D as 0, E as -1. For your grades, you've removed your post, but IIRC you mentioned the entries you especially liked and disliked, I gave those 3 and 1 points, and 2 for all the entries you didn't mention.


Had I known that listing some of these as 'favourites' and others as 'passable' would mean giving them 3 and 1 points, I wouldn't have listed them the way I did. I actually adopted the term "passable" without thinking much about it; my additional list wasn't actually just 'passable' libraries.

I see all this as an early alpha test of whether we can find a good way to compare libraries, and I think that what we have seen so far has been interesting. But it also shows that this method is flawed in so many ways. We haven't found that method yet.

We aren't journalists, and this thread isn't a music magazine. But many (e.g. random readers who only read a few of the posts here) will, knowingly or unknowingly, get the feeling that the entries (and the results) are based on some kind of fair comparison. _They aren't._

Many of the entries shouldn't have been there at all, since the concept was to give each lib a treatment that showed it from its best side.

As mentioned earlier, I disagree with many of the premises here. *I removed my comments because I don't want to contribute to something which has the same effect as bad journalism, poor reviews or clearly unfair comparisons.*

*If this were a singing contest, or a song contest, and some of the singers performed 9 times while others performed only once, or some sang with lots of backing vocals and overdubs while others didn't realise this was allowed etc., the competition would have been clearly flawed.

In our case, the singer who performs 9 times is the same singer who has backing vocals and overdubs.

Plus, the fact that someone could submit contributions representing other participants' libraries (without them knowing or acknowledging it) that showed those libraries from a much worse side than they usually perform is a very, very bad idea as well. It's as unfair as it gets, even if the intentions are as good as an intention can be.*

One single contributor (and they are all anonymous) could actually, knowingly or unknowingly, sabotage a singer/company the way this comparison works right now, at the current stage. Seeing how some of the 7 segments came out in some of the entries, it almost seems as if this has happened. So please remove my "votes" from the final result, OK? Thanks!


----------



## markleake (Jun 19, 2018)

I just went through and listened to a selection of libraries I know reasonably well. It's a mixed bag... some are rendered quite well, some OK, and some poorly, compared to what I know each library is capable of.

To pick one as an example: Bohemian Violin. [Just as an example, there are others that I could use also, so no offense to whoever did this one!]. It has key switches -- why they weren't used, I don't know, but it sounds odd in this rendering, and nothing like it usually sounds when I use it. For a fairly rigid set of lines like this, when the keyswitches aren't used, you get strange artifacts like the re-bows where a player would never re-bow. It's never going to work this way.

You could say the same for any of these libraries... you can't just throw random MIDI at them and expect them to perform. That leaves a lot up to chance for it to come out sounding good. For a number of the entries (some the only examples of that library), it seems few or no updates were made to the provided MIDI.

If that is the case, we are comparing apples to oranges, and saying that the oranges are scoring really well on having a lot of orange colour to them and the apples aren't. So yes, I'm with Vik. To rank them in any way is not fair.

It seems to me that you can't crowd-source a violin comparison. Not to say the exercise is without any merit, it may well be useful for some people. But I'd caution people to be very careful in drawing conclusions from this comparison, as some of the renderings are probably misleading.


----------



## Jeremy Spencer (Jun 19, 2018)

markleake said:


> If that is the case, we are comparing apples to oranges, and saying that the oranges are scoring really well on having a lot of orange colour to them and the apples aren't. So yes, I'm with Vik. To rank them in any way is not fair.
> 
> It seems to me that you can't crowd-source a violin comparison. Not to say the exercise is without any merit, it may well be useful for some people. But I'd caution people to be very careful in drawing conclusions from this comparison, as some of the renderings are probably misleading.



Yep, this whole ranking thing was a bad idea. I realize the intention, but it is flawed in so many ways....and totally unfair for the developers and prospective buyers.


----------



## dsblais (Jun 19, 2018)

Wolfie2112 said:


> Yep, this whole ranking thing was a bad idea. I realize the intention, but it is flawed in so many ways....and totally unfair for the developers and prospective buyers.



A fascinating and illuminating experiment in any case, though not without its caveats. I would say this is more informative in many ways than a single reviewer's opinion, for example, though it still requires a relatively full salt shaker of reserve when considering the results.

It does not seem that any library is, as yet, "perfect," though many can be massaged into beauty or are simply more suited towards certain uses. I'm reminded a bit of 4k demos and similar -- working within the limitations of the technology appears a necessary prerequisite of the digital orchestrator presently.


----------



## NoamL (Jun 19, 2018)

I think we should give people more credit for thinking for themselves.

I use Musical Sampling's Trailer Strings and Adventure Strings all the time and I don't draw any bad conclusion from their low grades here. Obviously they cannot handle most of the legato challenges, because they don't have legato patches!

Some of the entries were very low effort and, again, that comes through in the entries themselves. For example in #8 and #19 you can hear portamento articulations incorrectly being triggered during the repeated-transition test (0:40) because the demo makers did not even bother changing the note velocities from the MIDI provided by Garry.

Only about 1/3rd of the entries, or even a little less, seemed to me to be actually programmed with care & ability. Out of those, I thought the entries that turned out to be CSS were the best at handling the technical challenges.

If you think a library was particularly badly represented here, go for it yourself! I tried yesterday to create my own rendition with CSS and found that it was not much better than the competition entry!


----------



## ModalRealist (Jun 19, 2018)

If "poor showings" of some libraries encourage developers to put out more detailed and transparent showcases and walkthroughs of what their libraries can do and with what MIDI editing, then that seems like a good result to me. Now, I'm not saying developers don't already work hard at this, but I don't generally see even Spitfire (who have very detailed walkthroughs and use-case videos) or Orchestral Tools (or Cinematic Studio Series!) put out videos showing how to execute deliberately stress-test style lines like the "Saxer Seven". You might say: of course not, because they want the libraries to be shown in the best possible light, so they avoid stress-test cases. But in that case (and that's fine!) the community definitely shouldn't feel bad about filling the void.


----------



## NoamL (Jun 19, 2018)

ModalRealist said:


> You might say: of course not, because they want the libraries to be shown in the best possible light, so they avoid stress-test cases.



And also because these stress tests don't represent the usual use case of the libraries in the hands of working composers. "Modern" cinematic writing is very segregated into shorts-based and longs-based writing. It's a bit of a chicken & egg situation since, as example C demonstrated, so many libraries are bad at mixing short & long articulations in the same phrase convincingly. The developers are more interested in showing how their short articulations work together to build ostinatos, and their long articulations join smoothly to create simple and large-note-value legato lines, since for better or worse... that's what is being written across the media universe today... Saxer's Example C is clearly a reference to classical era music (in fact isn't it a quote from some Beethoven or Schubert string quartet?  it's tickling the back of my brain... maybe because I listened to it too many times!!)


----------



## Vik (Jun 19, 2018)

ModalRealist said:


> I don't generally see even Spitfire (who have very detailed walkthroughs and use-case videos) or Orchestral Tools (or Cinematic Studio Series!) put out videos showing how to execute deliberately stress-test style lines like the "Saxer Seven".


There are some rather good demonstrations of how Berlin Strings handles fast runs in this almost five-year-old clip:


Check out the fast notes at the end of the attached clip; they just sound plain wrong. I have tried hard to recreate this with BS myself, but I simply can't produce such a bad-sounding fast run with BS even when I try.

Why was such a rendering even included in this comparison? In such cases, IMO an attempt should be made to get more BS demos from other users or from OT.

[AUDIOPLUS=https://vi-control.net/community/attachments/bs-mp3.14072/][/AUDIOPLUS]


----------



## Saxer (Jun 19, 2018)

Vik said:


> Check out the fast notes at the end of this attached clip, it just sounds plain wrong. I have tried hard to recreate it with BS myself, but simply can't create such a bad fast notes run with BS even if I try.


I'm afraid that was me. I used the adaptive legato and the speed knob (the newer one in the Capsule GUI) went straight to 'Fast Runs" when playing that tempo. I also tried to add blurred spiccatos and blurred staccatos but it didn't really help. I don't use BS legatos very often so I'd be glad to see how you get better results and what I did wrong.


----------



## Vik (Jun 19, 2018)

Thanks for the reply, Saxer! Which version of Capsule do you have? If there's a speed knob in there, I don't think I've ever touched it. I tried a few things, in addition to just pressing play and seeing what happened. One was to use the Fingered Legato preset; another was to lock either of the legato modes manually with the solo buttons next to each of the modes (see pic).
I'm not looking for a scapegoat, of course. This is only one of several examples (not the worst one) where libs I know sound a lot worse in some of the entries than they IMO sound when using no, or only simple, methods to adjust how the libraries behave in situations that may need special treatment.


----------



## Garry (Jun 19, 2018)

Saxer said:


> I'm afraid that was me. I used the adaptive legato and the speed knob (the newer one in the Capsule GUI) went straight to 'Fast Runs" when playing that tempo. I also tried to add blurred spiccatos and blurred staccatos but it didn't really help. I don't use BS legatos very often so I'd be glad to see how you get better results and what I did wrong.


Thank you Saxer! To me your post exemplifies 2 important points people should bear in mind:
*- What does the library sound like in the hands of an average to competent user:* I'm extremely grateful that you identified yourself as the composer of that entry, even in the face of criticism of it. Just that in itself I think is worth an honourable mention - bravo Sir! In doing so, it also allows me to make a point that I've wanted to, but couldn't before: I was specifically contacted, by a developer (I can't say who, he/she hasn't allowed me to yet), who participated in the competition and was glad that the original MIDI file came from Saxer, having met Saxer in person, and commended him on his ability to work with strings both in person and virtually. So, if someone like this, clearly well above the standard of the average user, acknowledges the difficulty he had making the library perform well under all of these passages, I find that extremely helpful to know, as someone like me who is no doubt less competent than Saxer. I want to know this, because the demos the company produces will deceive me into thinking that I too can produce such quality; Saxer acknowledging his shortcomings with the library is a noble and helpful way of enabling me to understand the limitations I will encounter with the library in my own hands. As NoamL points out, it's too easy in this exercise to simply blame the user, rather than acknowledge the limitations of the library:

_"I think we should give people more credit for thinking for themselves.

I use Musical Sampling's Trailer Strings and Adventure Strings all the time and I don't draw any bad conclusion from their low grades here. Obviously they cannot handle most of the legato challenges, because they don't have legato patches" _

*- If you can do better, go ahead and SHOW the community what it should sound like*: a number of the criticisms come from people who declined to participate in submitting their own entries, but seem to have plenty of time to devote to reviewing the efforts of others. Denigrating the premise of this whole activity and its outcomes, or the competence of the users who submitted entries is the easy part. In stating this, it of course betrays my own mild irritation at comments like these, but that’s not what is important or the point I’m trying to make: I mean it most genuinely (and not the spite that it will no doubt come across in text with!): if you can do better, please do so! Honestly (not spitefully), it would be an enormous help to the community: if you can show that a library was poorly represented, all of the forum posts in the world (or removal of them) will not be as helpful as submitting a new version, and showing how the original missed the mark. Indeed, you might find out, as NoamL gallantly conceded that _"I tried yesterday to create my own rendition with CSS and found that it was not much better than the competition entry!", _and having the courage to admit that would be helpful too.


----------



## Garry (Jun 19, 2018)

NoamL said:


> I think we should give people more credit for thinking for themselves.
> 
> I use Musical Sampling's Trailer Strings and Adventure Strings all the time and I don't draw any bad conclusion from their low grades here. Obviously they cannot handle most of the legato challenges, because they don't have legato patches!
> 
> ...


Well said!


----------



## Garry (Jun 19, 2018)

Saxer said:


> I'm afraid that was me. I used the adaptive legato and the speed knob (the newer one in the Capsule GUI) went straight to 'Fast Runs" when playing that tempo. I also tried to add blurred spiccatos and blurred staccatos but it didn't really help. I don't use BS legatos very often so I'd be glad to see how you get better results and what I did wrong.


Not only did you contribute the original MIDI file, but you acknowledged your own entry and its limitations. Thank you Saxer - an incredibly helpful contribution to the community.


----------



## Vik (Jun 19, 2018)

First - and again - my expressed skepticism isn't in any way a critique of anything or anyone involved. Your initiative, investment of time and suggestions, Garry, and what NoamL and Saxer have done, all the contributions, "bad" or "good", have helped this process evolve. I think this can grow into something very useful with some more time.


Garry said:


> 2 important points people should bear in mind:
> *- What does the library sound like in the hands of an average to competent user:* I'm extremely grateful that you identified yourself as the composer of that entry, even in the face of criticism of it. Just that in itself I think it worth an honourable mention - bravo Sir! In doing so, it also allows me to make a point that I've wanted to, but couldn't before: I was specifically contacted, by a developer (I can't say who, he/she hasn't allowed me to yet), who participated in the competition and was glad that the original MIDI file came from Saxer, having met Saxer in person, and commended him on his ability to work with strings both in person and virtually. So, if someone like this, clearly well above the standard of the average user, acknowledges the difficulty he had making the library perform well under all of these passages, I find that extremely helpful, as someone like me who is no doubt less competent than Saxer.


You bring up some very important aspects here, Garry. IMO the Berlin Strings thing may be a special case, and I may comment more upon that later. (My main comment is that even when I play very fast real-time passages with BS, I don't get the artefacts I heard in the BS demo here.)

In some cases, an experienced user like Saxer not succeeding in getting a fast passage right may certainly mean that it's difficult, and of course even more so for less experienced users. Other times, the recipe for getting things right may actually be to do _less_ with the original material, or something different entirely, meaning that it would not be difficult even for an average user.

I posted the BS legato clip again because right from the beginning of that YT clip, that's how BS sounds _without_ editing/tweaking. Things can easily go wrong if one tries a new lib and assumes it behaves like one's main lib. Smart legatos and fast runs are dealt with differently from company to company, of course, and that's particularly important when it comes to "Adaptive legato" (BS), "Performance legato" (SF) and all that.

What Alex Wallbank has done with CSS is certainly impressive in terms of both sound and what one can achieve with relatively few edits and few things to be aware of. It's a fav lib for me along with BS and SCS, and I certainly stand by my words about CSS+CSSS having my favourite entries among the 78 we have heard. But while I want the whole thing to be fairer and to continue to grow as a concept, I still want my "votes" removed from the results, as I see this as some kind of alpha test. But no grumpiness or bad feelings towards anyone, or anything else. Thanks again, kumbaya etc.

PS - one thing to think of for future/similar comparisons: IMO, the overwhelming scale of this needs to be dealt with, and I believe the low number of members here who have actually taken the effort to listen through and comment on the 78×7 segments supports this.

A final idea for future comparisons: maybe a small group of people could listen to the incoming entries, filter out those which don't fit the "make the library sound as good as it can" concept, or at least those which clearly need more work, and address that situation one way or the other. Would that be hard to organise? Probably not?


----------



## Garry (Jun 19, 2018)

Vik - rather than endless discussion (which can be misinterpreted): you have tried these sorts of shootouts yourself in the past (using a format that didn't motivate the forum's engagement), and while you didn't have time to submit an entry to the competition, it seems you have more time now. So why not re-do Saxer's MIDI file with your own string libraries? Your points would be much better illustrated by simply showing the community how these libraries _could_ have been represented. Honestly, that would be really valuable, and you're well placed to do it. I make this as a genuine invitation to you.

If not, then it's difficult to interpret your forum posts as anything other than simply Monday-morning quarterbacking, and as such, your points have less credibility and weight. So, I hope you choose to do so, as this is a thread where actions speak MUCH louder than words.


----------



## dsblais (Jun 19, 2018)

It seems like there are a few different things potentially being measured:

1. How beautiful/realistic/expressive/etc. does a VI sound?
2. How beautiful/realistic/expressive/etc. can a highly skilled and familiar performer make a VI sound? (i.e. demo style)
3. How beautiful/realistic/expressive/etc. can a somewhat less familiar and expert performer make a VI sound? (i.e. the average user)

The problem, perhaps, is that (1) is rather dependent on (2) and (3).

There are a few different ways to solve this. One way is to have each vendor or their chosen player(s) perform a single, standard competition piece (e.g. Saxer Seven -- perhaps changing each time) and then submit them to blind review/rating. This is the "fairest" to the vendors, I think. Another approach is to have a set of contestants who are rated according to their experience with any number of libraries. These players are then tasked with performing the piece. One variant would be to have only complete newbies in a library who yet are generally musically skilled and have some general VI experience create the pieces -- a rather extreme example of (3). Another would be to only let relatively experienced performers play the familiar VIs and then review it as a blind _set _in itself. For example, if PersonX is very familiar with CSS, CSC, BS, and CHS then they would perform it for those four and the rating would be ordering strictly among those four.

There are rather endless things to consider, but one perhaps worth regarding not only for this but for similar things is that Flores and Ginsburg found in 1997 that the judges of the Belgian Queen Elisabeth Music Competition consistently ranked performances differently based simply upon the order in which they heard them. In that particular case, as it is a multiple-day competition, the performers on the last day were ranked significantly higher. (_The Statistician, 45_, 97-104)

And at the end of the day, it's good to not underestimate the value of imperfect experiences.  Very interesting shootout, indeed.


----------



## gyprock (Jun 19, 2018)

Imagine giving the same eggs, butter, sugar, flour and water to ten different chefs and then have 10 judges rank the taste results. But we don’t even have that restriction here because we have effectively different food ingredients from different sources prepared by apprentices through to master chefs and then tasted by the average Joe through to the Michelin five star food critic. Don’t even get me started on a wine, cheese or beer tasting analogy.


----------



## Garry (Jun 19, 2018)

@dsblais: I think you characterise the challenges well, but the problem with your solutions is a pragmatic one:

*Vendors choose their champion*: this was open to vendors, and they could have submitted entries for their own libraries; indeed one developer did exactly that, and all credit to him/her. But for the most part, vendors will shy away from such a test, because it truly puts their claims of differentiation to the test. Therefore, if we try to run such a test, it will be unlikely to happen. (Also, as an interesting aside: for the library for which we received an entry from the developer, it was NOT the developer's entries that were highly rated, but those received from a forum user; had only the developer's entry been considered, it would have fared rather poorly.)
*Expert panel:* have a set of high-ability users, "contestants who are rated according to their experience with any number of libraries". Rated by whom (how will you identify/rate the raters who decide who the experts are?), and which criteria will you use to decide who is/is not an expert? How many experts would you need for the large number of libraries? They may be expert on one library and not others, so what if a library is differentially represented? There will still be variability even amongst experts (for example, see the situation above regarding the developer's submission ranking poorly compared to that of a forum user: this developer is also an accomplished violin player, so adding this to the fact that s/he's a successful VI developer for violins, I'm guessing s/he would have been considered an expert, yet note the variability in reviews of the same library). How will we recruit these experts? Do you anticipate many coming forward? Who will organise such a competition? It is time-consuming enough to organise things like this (believe me, I now know very well!), but now you're talking about a pre-competition before the main competition!! If you can pull it off, I wish you all the luck in the world, and will enthusiastically participate in the voting, but here again, if we try to run such a test, it will be unlikely to happen. In addition to these pragmatic problems, there is the concern of generalisability: how useful would it be to see what an expert can do with the library, compared to the average Joe? Whilst he may be interested in such a result, does the average Joe not care more about what the average Joe can do with the library?
None of these problems are insurmountable (assuming you have enough time, enough experts, enough willing participants, and some objective criteria that won't elicit a hundred objections like this one did), but your solutions to them will necessarily involve compromises, and will not address all questions: no single experiment does.
*Musically skilled newbies*: all of the pragmatic problems above apply here too, we've just shifted the problem to a different portion of the ability curve, so again, I'd have to anticipate that if we try to run such a test, it will be unlikely to happen.

Regarding Flores & Ginsburg: yes, it's a well-known effect called the primacy/recency effect, and it goes beyond music (I referred to this earlier in the thread). You'll find the same effect if you just test yourself on remembering a list of words that exceeds your short-term memory capacity: those that you recall will have a high likelihood of being from the beginning or end of the list. It is for this reason that the original intention, when the plan was to have a blind vote, was that the order would be blindly randomised, so as not to consciously favour any library by giving it a preferred position. This too is a pragmatic compromise, since we had only one randomisation (in a research study, items would be presented in a different pseudo-randomised order to each participant).
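The per-participant pseudo-randomised ordering used in research studies is cheap to script. A sketch, with made-up participant ids and the thread's 78 entries:

```python
import random

def playback_order(participant_id, n_entries=78):
    """A reproducible, participant-specific shuffle of entry numbers.

    Seeding the generator with the participant id means each listener
    gets their own fixed order, so primacy/recency effects average out
    across listeners instead of always favouring the same entries.
    """
    rng = random.Random(participant_id)   # per-participant seed
    order = list(range(1, n_entries + 1))
    rng.shuffle(order)
    return order

a = playback_order("listener-1")
b = playback_order("listener-2")
assert sorted(a) == list(range(1, 79))    # every entry appears exactly once
print(a[:5])
print(b[:5])                              # (almost certainly) a different order
```

With only one shared randomisation, as in this shootout, whatever order was drawn still systematically benefits whichever entries landed first and last; per-participant orders are the standard fix.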

Bottom line: we _can_ propose many elegant and complex solutions to address any shortcomings (which I fully acknowledge in the current design), and if we were conducting funded research, with professional researchers and paid participants, your suggestions are likely along the lines I would go. However, in reality, something like this, based on a community forum, requires pragmatic compromise. If any of the proposed solutions are carried out, then I'm all in favour, and more power to your elbow!

_"And at the end of the day, it's good to not underestimate the value of imperfect experiences."_

Yes, I fully agree!


----------



## dsblais (Jun 19, 2018)

Garry said:


> @dsblais: However, in reality, something like this, based on a community forum, requires pragmatic compromise. If any of the proposed solutions are carried out, then I'm all in favour, and more power to your elbow!



Oh dear, I think I just... uh... sprained my elbow. 

It was a fascinating study and I’d love to see more “imperfect” studies. It’s certainly preferable to waiting for Godot...


----------



## Garry (Jun 19, 2018)

dsblais said:


> Oh dear, I think I just... uh... sprained my elbow.


 
Brilliant!


----------



## Vik (Jun 20, 2018)

Garry said:


> If not, then it's difficult to interpret your forum posts as anything other than simply Monday-morning quarterbacking...


Let's just try to keep this as friendly as possible...

I removed my earlier comments and want my "votes" removed from the results because, as I said, "I don't want to contribute to something which has the same effect as bad journalism, poor reviews or clearly unfair comparisons". So asking me to come up with more contributions is rather optimistic, _because I disagree with the combination of the premises and the voting thing_. Wolfie and others have summed this up much better and more briefly than me: "it is flawed in so many ways....and totally unfair for the developers and prospective buyers".

I'm not even saying that BS is better for fast runs than CSS. I just wondered how BS ended up sounding the way it did in Saxer's example, since I've never heard such sounds coming out of BS. Btw, I'm probably the least ideal BS user to ask to demonstrate how that is best done, being someone who mainly does slow and quiet music with the sole purpose of making people feel miserable.

For the record #1: BS doesn't sound better than CSS out of the box here, but it sounds better than in Saxer's example.

I've spent money on multiple good libraries to _not_ have to sit and tweak details knowing that another lib can do the same phrase out of the box. And my personal "problem" with CSS doing some stuff better than BS is solved: I bought CSS quite early, after having bought BS.

For the record #2: IMO it wouldn't be at all wrong to have a clear agenda showing that A is better than B. That's totally fine. As an example: even if Saxer, who both made the examples used in this thread and created the BS entry, knew beforehand, when he created that clip, that BS sounds bad on that phrase, that's not only 100% OK, it's very helpful info. It's a discussion forum after all!


----------



## Garry (Jun 20, 2018)

I think you just sprained your elbow writing that!


----------



## fretti (Jun 20, 2018)

It was actually really interesting to see how differently the same library can perform.
Yet I didn't want to rate them publicly, as it was in my eyes not really a contest, but more a chance to listen to and compare different libraries without any biased expectations of the specific library. Therefore I think it was an extremely helpful thing, so a big thank you to @Garry for starting it and also a big thank you to @Saxer for providing the midi file

Also, I have/want to take credit for the not-so-well-received entries of the Symphony Series with huge volume differences (51-54, I think it was). I was halfway through when the company where I work part-time alongside my studies accepted a big client, and I therefore wasn't able to give this project the time it deserved. Meaning I did every part of Saxer's midi file in isolation and didn't check the full finished result afterwards, as I wanted to be finished. I also wanted to contribute this specific library because I knew it wasn't as highly regarded on this forum, and was relatively sure from the beginning that it wouldn't get as many entries as other flagship VIs, but sadly I couldn't put more time into it (actually writing this from my office computer right now).
So my apologies if the "bad"/not-so-good results for my entries are (mostly?) based on the volume differences. That one's on me, not the library

(Just my amateur two cents here)


----------



## Saxer (Jun 20, 2018)

Vik said:


> Which version of Capsule do you have? If there's a speed knob in there I don't think I've ever touched it. I tried a few things, in addition to just press play and see what happened.


To answer the question from pages before (late, because I was out, and this thread generates too much text for me to read everything on a phone display):
I used Capsule 2.5, Adaptive Legato set to runs. Don't know what went wrong. Would be great if some BS users could make a better version... OT doesn't deserve bad results!

To all the questions about skill level and fairness in that competition:
I see this more as a user exchange of experiences. For a professional competition there should be a jury and a selection of experts and all developers should be informed etc. I don't even see it as a competition. It's more a comparison of results different users generate. It contains all uncontrollable aspects of different knowledge, invested time, or just operating errors (as my bad BS example shows). But everyone reading here knows that. There are no absolute results.

I listened through the files but didn't make a +/- list. I'm interested in the good examples, and there are more than I expected. I struggled with my test phrases on some libraries where other users got good results. Very interesting! So it's probably a workflow thing, or I'm too used to the way I work... soundwise, nearly all libraries are at a really good pro level (at least the good examples). I was afraid I'd find an example with a sound to die for but far from my workflow. So I'll happily go on using the libraries I can handle.


----------



## bigcat1969 (Jun 20, 2018)

All this reminds me of how far we have to go before we have 'virtual violinists' that play at our Bach and call. Pianos seem to me to be stepping across that uncanny valley, but Violins seem to still be struggling to get across.


----------



## pipedr (Jun 20, 2018)

I have been following this thread with great interest. Thanks to the organizers, and thanks to the community for all the submissions. I've been learning a lot.

A call out to any violinists out there: Can someone play these passages on a real violin and submit them?
I think it would be supremely helpful as a comparison.


----------



## Jeremy Spencer (Jun 20, 2018)

Saxer said:


> soundwise, nearly all libraries are at a really good pro level (at least the good examples). I was afraid I'd find an example with a sound to die for but far from my workflow. So I'll happily go on using the libraries I can handle.



+1000. They are all good, it all comes down to personal preference and ultimately, what you are comfortable and familiar with. For example, Hollywood Strings (which didn't score well) is my favourite hands down. It's a royal PITA to learn, but it's the devil I know, and it has gone the extra mile for me for years.


----------



## Casiquire (Jun 21, 2018)

"...that play at our Bach and call"

I'm leaving.


----------



## robgb (Jun 21, 2018)

Looked at the reveal. Once again proof that it really comes down to execution. Because even the so-called "best" (or at least most expensive) libraries can sound bad. So another lesson here is to beware the sound demos done by top pros. The results they're getting from a library may differ wildly from the results you can get. Making ANY of these libraries sound great takes a combination of composing talent, playing and tweaking, and mixing skills. But I think most of us already know that.


----------



## pipedr (Jun 21, 2018)

A couple of observations on the solo violins, IMHO:

1) The solo violins seem to manage the note transitions much more easily than the ensemble violins, particularly on fast passages. Why? Is it that the ambience in an ensemble sample blurs the note transitions? Is it that the attack is unnatural in an ensemble because the sample triggers everyone to play at the same time, whereas the notes would be staggered when 16 individuals play together?

2) Even the Garritan Stradivarius, to my ear, did quite well. I had to look this one up: I think it's very old, discontinued as an individual product, but it now comes with Garritan Personal Orchestra (which is only $120), and doesn't even have legato sampling. How did it do so well even without legato?

3) It's fun to combine different instruments from the same composer. Composer 4's are nicely lined up, and putting together combinations of the solo violins results in a pretty good chamber/divisi/small section sound. These sound better at the fast passages in terms of note definition and transitions than the Spitfire chamber strings, to my ear.

4) But you can't get the same ensemble sound from the individual solo violins put together. Maybe a lot of this is the ambience of the recordings (e.g. Air Studios for Spitfire), which can't be reproduced with convolution reverbs? Maybe it comes from the reverberations of many instruments actually playing together, and the spatial placement in the room?

5) I don't think a real violinist would play the fast passages with uninterrupted legato. For example, composer 4 gives us the SWAM violin, which plays an uninterrupted legato that, to me, sounds unnatural because it is so consistent. 

6) So how would one ideally articulate the fast last passage? The CSS solo violins from composer 4 switch from a more détaché articulation to smooth legato in the middle of the phrase. I think Chris Hein commented in another post that there is no legato with a bow change, and in a long passage like this I think it would not be possible to play it in one stroke, so there should probably be at least one detached note. But spiccato did not sound right to me, as in 23-Italian Violin, except perhaps on the last flourish (interestingly, composer 3 gave us legato in the other solo violin contributions, which I think sound better; I wonder why? Based on the names, I'm assuming these are all from Chris Hein Solo Strings). To simulate a real player's phrasing, would one use legato articulations for, say, 4 notes, then a spiccato or détaché, and then 4 more notes? What do others do?


----------



## JeeTee (Jun 21, 2018)

pipedr said:


> To simulate a real player's phrasing, would one use legato articulations for, say 4 notes, and then a spiccato or detache, and then 4 more notes?


As a violinist, I think I can answer this. In fact, what's written in the sheet music for this is exactly how a string section (or soloist) would execute it. The slurs that you see in figure G relate directly to bow changes. So, starting on a downbow, the first 8 notes are played in 1 bow, then the next 8 on the upbow and so on. A new slur indicates a change of bow. The whole test is totally playable using the bowing (slurs) provided.
Though, having said that, I'd want to get rid of the first slur in figure A, and start on an upbow...


----------



## Vik (Jun 21, 2018)

Vik said:


> One was to use the Fingered Legato preset, another one was to use the way to lock either of the legato modes manually with the solo buttons next to each of the modes (see pic).


Hi Saxer, did you try that fast phrase in the end with the fingered legato preset and the agile legato mode yet?


----------



## Saxer (Jun 21, 2018)

Vik said:


> Hi Saxer, did you try that fast phrase in the end with the fingered legato preset and the agile legato mode yet?


I'll give it a try tomorrow.


JeeTee said:


> Though, having said that, I'd want to get rid of the first slur in figure A, and start on an upbow...


Good to know! Thanks


----------



## NoamL (Jun 21, 2018)

pipedr said:


> Is it that the attack is unnatural in an ensemble because the sample is triggering everyone to play at the same time



That's my observation, yeah. Sample strings become more realistic when you layer. But only when you program it so the latencies of different libraries cancel out.
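(Editor's note: a minimal sketch of what "cancelling out" latencies could mean in practice, under my own assumptions rather than NoamL's actual workflow. The library names and latency values below are purely hypothetical; the idea is simply that each layer is triggered early by its own attack latency, like a negative track delay in a DAW.)

```python
# Sketch: align note attacks across layered libraries by compensating
# for each library's attack latency. The latencies (ms) are made up.
LATENCY_MS = {
    "lib_a": 60,   # e.g. a legato patch with a slow scripted attack
    "lib_b": 150,  # a "wetter" library that speaks noticeably later
    "lib_c": 0,    # reference: the tightest library plays on time
}

def compensated_start(written_time_ms: float, library: str) -> float:
    """Shift the MIDI note-on earlier so the audible attack lands on
    the written time (equivalent to a negative track delay)."""
    return written_time_ms - LATENCY_MS[library]

# A note written at 1000 ms in each layer:
starts = {lib: compensated_start(1000.0, lib) for lib in LATENCY_MS}
# lib_b must be triggered 150 ms early so all three attacks coincide.
```

With the offsets applied, all three layers speak together instead of smearing the attack, which is what makes the stacked sound read as one section.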



pipedr said:


> 4) But you can't get the same ensemble sound from the individual solo violins put together.



I used to think that, and consider that a fatal flaw of the VSL Dimension / Chris Hein Ensemble Strings approach to sampling. However, in the actual blinded test, I graded Chris's libraries highly and didn't perceive that it was a layering of solo violins.



pipedr said:


> 6) So how would one ideally articulate the fast last passage?



The way Saxer notated it:







Each set of notes under a slur should have legato transitions, and the first note of each slur should have a "new bow" transition with no overlap.






It's also appropriate to put a slight accent on the first note of each slur as violinists will naturally do this to stay in time. 

Doing example G with spiccato articulations for every note, as some entries did, is simply wrong and un-idiomatic.
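(Editor's note: the slur-based programming described above can be sketched in code. This assumes the common sample-library convention that slightly overlapping note-ons trigger a legato transition, while a clean, non-overlapping start reads as a new bow; the overlap and accent values are illustrative, not taken from any particular library.)

```python
# Sketch: render slurred groups of pitches into MIDI-style events.
# Notes inside a slur overlap (triggering legato transitions); the
# first note of each slur starts clean ("new bow") with a small accent.
OVERLAP_MS = 20   # overlap into the next note to trigger legato
ACCENT = 12       # velocity bump on the first note of each slur

def render_slurs(slurs, note_len_ms=125, velocity=90):
    """slurs: list of lists of MIDI pitches, one inner list per slur.
    Returns (pitch, start_ms, end_ms, velocity) tuples."""
    events, t = [], 0
    for slur in slurs:
        for i, pitch in enumerate(slur):
            start = t + i * note_len_ms
            end = start + note_len_ms
            if i < len(slur) - 1:
                end += OVERLAP_MS          # overlap within the slur
            vel = velocity + (ACCENT if i == 0 else 0)
            events.append((pitch, start, end, vel))
        t += len(slur) * note_len_ms       # next slur starts clean
    return events

# Two 4-note slurred groups, as in the fast passage:
events = render_slurs([[60, 62, 64, 65], [67, 69, 71, 72]])
```

The last note of each slur ends exactly where the next slur begins, so there is no overlap across the bow change, while every note within a slur overlaps its successor by 20 ms.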


----------



## eli0s (Jun 21, 2018)

pipedr said:


> Even the Garritan stradivarius, to my ear, did quite well. I had to look this one up--I think this is very old, discontinued as an individual product, but comes with the Garritan personal orchestra now (which is only $120), and doesn't even have legato sampling. How did this do so well even without legato?


The original Garritan Stradivari Violin and the Gofriller Cello (which was a bit more advanced) had legato. In fact, the technology used on those instruments was the same "harmonic alignment" of samples that later became the basis for Samplemodeling's instruments. To be honest, I prefer the sound and timbre of the original Gofriller to the (now Audiomodeling's) SWAM cello. It is sample-based, and it holds some characteristics that the all-physical-modelling SWAM counterpart is lacking; or, more precisely, the SWAM sounds synthesized (to my ears).
This second part (2'54") of a 10+ year old composition has 2 Stradivari Violins and a Gofriller Cello making some... noise

However you judge the composition, I think we can agree that the VIs were way ahead of their time in terms of capabilities.
I wish Samplemodeling hadn't abandoned their solo string series; they were making a viola at some point, but the project shifted towards the SWAM engine, and now there are two different companies.


----------



## Garry (Jun 21, 2018)

NoamL said:


> That's my observation, yeah. Sample strings become more realistic when you layer. But only when you program it so the latencies of different libraries cancel out.
> 
> 
> 
> ...


Learned a lot from you across this whole thread NoamL, thanks


----------



## Vik (Jun 22, 2018)

NoamL said:


> Sample strings become more realistic when you layer. But only when you program it so the latencies of different libraries cancel out.


Certainly agree with the realistic/layer part, but what exactly do you mean by "cancel out" latencies?


----------



## Batrawi (Jun 22, 2018)

So Saxer's original example was really made with BS?! That's actually the first shock to me, since I really liked what I heard, while I never really liked the sound of BS! What kind of tricks have my ears been playing on me?


----------



## eli0s (Jun 22, 2018)

Batrawi said:


> So Saxer's original example was really made with BS?! That's actually the first shock to me, since I really liked what I heard, while I never really liked the sound of BS! What kind of tricks have my ears been playing on me?


I don't think it was made with Berlin Strings. I bet they are multiple instances of the SWAM Violin, cleverly put together in a way that avoids phasing. I wouldn't be too surprised if he used something more exotic though, like Synful, for example!


----------



## Saxer (Jun 22, 2018)

eli0s said:


> I don't think it was made with Berlin Strings. I bet they are multiple instances of the SWAM Violin, cleverly put together in a way that avoids phasing. I wouldn't be too surprised if he used something more exotic though, like Synful, for example!


Nothing exotic... you are right, it was five instances of SWAM Violins, 5 voices of Dimension Strings and SCS Performance Legato (room mics only). No key switches, only CC1 for dynamics and CC21 for vibrato.
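(Editor's note: one generic way to avoid the phasing eli0s mentions when stacking several instances of the same instrument is to give each layer its own small, fixed timing and detune offset, so no two instances ever land sample-identical. This is a hypothetical sketch, not Saxer's actual setup; function name and value ranges are made up.)

```python
import random

# Sketch: de-phase stacked instances of one patch by assigning each
# layer a small, deterministic timing shift and pitch detune.
def layer_offsets(n_layers, max_shift_ms=15.0, max_detune_cents=6.0, seed=42):
    """Return one (shift_ms, detune_cents) pair per layer. Seeded RNG
    keeps the offsets stable across renders."""
    rng = random.Random(seed)
    return [(rng.uniform(-max_shift_ms, max_shift_ms),
             rng.uniform(-max_detune_cents, max_detune_cents))
            for _ in range(n_layers)]

offsets = layer_offsets(5)  # e.g. five instances of the same violin
```

Because the offsets are seeded rather than re-randomised per note, the stack stays consistent from render to render while still never being phase-locked.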



Batrawi said:


> So saxer's original example was really made with BS..?


I made example 77 too (which didn't sound as convincing as BS could).

Here's another try, I hope it's better now... ?

BS-Run.mp3

[AUDIOPLUS=https://vi-control.net/community/attachments/bs-run-mp3.14120/][/AUDIOPLUS]


----------



## Batrawi (Jun 22, 2018)

Saxer said:


> Nothing exotic... you are right, it was five instances of SWAM Violins, 5 voices Dimension Strings and SCS Performance Legato (room mics only). No key switches, only CC1 for dynamic and CC21 for vibrato.


that's some clever setup! I really liked the result


----------



## Pantonal (Jun 22, 2018)

I believe this whole exercise has been extremely useful to the entire VI community, THANK YOU!! Are there ways to improve it? Probably, but that would require resources from the volunteers in this community that are probably not available. I just hope that this effort can become an enduring part of this community. If there's some way to distill the important parts and make that thread a sticky at the top, then this effort can and will continue. By that I mean that those libraries that didn't benefit from expert renderings may get them over time.

I've learned two things from all this: 1) buying Kirk Hunter Strings on sale may not have been the wisest thing to do; 2) buying CSS was apparently brilliant. No one submitted a rendering from Kirk's libraries, and I may try to do that as an experiment just to see how they fare. I may also try to do the same with CSS, to see if I can match the quality of the best renderings in this shootout (and in the process hopefully improve my skills). Being relatively new to the current capabilities of these string libraries, I doubt I'll upload any contribution here, but this experience will help me acquire better skills.

My only remaining question is: where is that original midi file? I guess I'll have to dig back a page or two to find the original thread. Again, thanks to all who participated, and even to the gadflies who only criticized and complained; this effort was improved, or at the very least made more entertaining, by their contributions.

What would be the process for distilling the more relevant posts from all the threads and making them a sticky? At the very least we should have ongoing access to the original midi file, the audio files that resulted (along with any subsequent contributions), and the unblinding of the original shootout. Anything else? Is this even a good idea?


----------



## Garry (Jun 22, 2018)

Pantonal said:


> I believe this whole exercise has been extremely useful to the entire VI community, THANK YOU!! Are there ways to improve it? Probably, but that would require resources from the volunteers in this community that are probably not available. I just hope that this effort can become an enduring part of this community. If there's some way to distill the important parts and make that thread a stickie at the top then this effort can and will continue. By that I mean that those libraries that didn't benefit from expert renderings over time may get them.
> 
> I've learned two things from all this, 1) buying Kirk Hunter Strings on sale may not have been the wisest thing to do; 2) buying CSS was apparently brilliant. No one submitted a rendering from Kirk's libraries and I may try to do that as an experiment just to see how they fare. I may also try to do the same with CSS to see if I can match the quality of the best renderings in this shootout (and in the process hopefully improve my skills). Being relatively new to the current capabilities of these string libraries I doubt I'll upload any contribution here, but this experience will help me acquire better skills.
> 
> ...



Thanks for the feedback - great to know you appreciated it and found benefit. The original MIDI file and notation is here.


----------



## pipedr (Jun 22, 2018)

Thanks so much, JeeTee and NoamL for the lessons in phrasing. So, for the other and slower passages A-F, would they also be phrased as notated, with legato under the slur, and a detached "new bow" transition in between?

I noticed that our composers differed as to when to throw in portamento. When would you choose to use a portamento legato vs. a normal slur? (Is that also notated in the score?)

eli0s, thanks for the info on Garritan. I enjoyed your piece, and what you did with the Garritan Strad and Gofriller Cello. I have Samplemodeling's trumpet, which I think is great, and the most musical and convincing for a note that has any crescendo or decrescendo. Do you still use those Garritan solo instruments, or have you found newer ones more satisfying?

Listening to the shootout, I think SWAM similarly really shines at the decrescendo in passage B. The crossfade approach in the other libraries can't quite touch it in that respect. But there's something about SWAM that doesn't sound quite right to me...or maybe it's just hard to play (Rohan De Livera has posted some incredible work with it). I wonder what other libraries would sound like if they could use this "harmonic alignment" technology in between the three or four velocity layers that they usually give us.


----------



## awaey (Jun 22, 2018)

Thank you to everyone who participated in the entries, and special thanks to Garry for the idea. I have a question: can we do a retake of our entry, in case we want to make it better and show more of what the library can do? Thanks


----------



## Garry (Jun 22, 2018)

nawzadhaji said:


> I have a question: can we do a retake of our entry, in case we want to make it better and show more of what the library can do? Thanks


Yes - interested to hear others' views, but personally this is how I envisage it could go:
*- stage 1*: blind shootout: gives an opportunity to review the libraries without prior biases; it has limitations in that the entries are from different users, so neither user nor library is held constant. However, its main purpose has been to generate interest (check), throw up some interesting questions about how to get the best out of these libraries (check), and provide the most pragmatic way of crowd-sourcing a database of libraries all performing the same melodic lines with different challenges (check). OK, this part is now over, and it served its purpose well.
*- stage 2*: collate an unblinded library of 'Saxer's Seven' for violins, with as many libraries as possible. These should now be the very best that each library can provide, using, as before, whatever is available in the library, OR using other elements as long as they are clearly stated and both versions (with/without 3rd-party plugins) are provided. Anyone can submit an unblinded entry (developers to be invited as well), and we poll the forum to select the 1 best representation of each library; most votes wins for each library (open and unblinded), with kudos as the prize for the winning entry! This database then provides a VI-C benchmark for future comparisons of new libraries, perhaps housed as a sticky if @Mike Greene agrees?
*** REQUEST FOR INPUT ***

*So, what do people think: is stage 2 a useful resource for the forum? Is it worth doing? RATHER THAN ANOTHER POLL, HIT 'LIKE' ON THIS MESSAGE AND WE'LL GET A SENSE OF WHETHER PEOPLE SEE IT AS VALUABLE. IF SO, I'LL SET IT UP SHORTLY...*

Stage 3 could be to repeat stage 1 (with the learning from this round incorporated to address some of the limitations) and stage 2 with other instruments (perhaps cello next?), but I think we should complete stage 2 before we get there.


----------



## eli0s (Jun 22, 2018)

pipedr said:


> Do you still use those Garritan solo instruments, or have you found other newer ones more satisfying?


No, I don't use the Garritan any more; but that being said, although I bought the SWAM strings (except the bass) with high hopes that they would be everything I was hoping for, I don't use them either.
They do offer a ton of control, and their expressiveness is directly related to the effort you put into programming them; however, their timbre doesn't cover my musical needs. They sound synthy, somewhat processed, like electric instruments. The violin does sound good in the high range, but the lower I go, the worse the timbre gets. The most disappointing is the cello.
Don't get me wrong, the SWAM strings are a big step forward as a VI, but they are missing a "humanity" factor that I seek in the sound. I am sure that other composers can achieve great results with these instruments.

Nowadays, I tend to avoid writing for solo strings; I feel the tools are not there yet. When I have to, I use CSSS, but this library is also not the leap I was hoping for. It's somewhat limited and has hard edges you need to avoid so as not to break the illusion of a "real" performance. In other words, it limits my composing choices.
I have the Joshua Bell Violin and the Bohemian Cello, which are great-sounding instruments, but they are not that easy to blend within a small orchestral/chamber-sized arrangement.
Finally, Chris Hein's Solo Strings take a similar approach to Samplemodeling's aligned samples when using the X-fade mode, but IMO the process they used (separating the bow noise from the tone and then recombining them) severely degrades the sound. At least on the Contrabass.


----------



## Vik (Jun 22, 2018)

Out of respect for potential users and those who make these libs, it's IMO important to state as many details about each entry as possible (as long as that info is relevant to how the end result sounds, of course), including whether additional layers are used, and whether additional purchases are needed.
And, as someone who has started a few polls on this site, I'm getting more and more sceptical about the contest aspect of all this. For instance, I started a poll about which string libraries users here saw as their favourites. That poll is here: https://vi-control.net/community/th...rite-non-solo-string-libraries-and-why.60460/
We soon realised that the poll results wouldn't tell us what the best string libraries were, for many reasons. One was that the more expensive libraries had fewer users, and therefore fewer users who could vote for them.
At some point @dhowarthmusic chimed in and started a related poll - about which libraries people _owned_. That poll is here: https://vi-control.net/community/th...ly-own-not-solo-string-libraries.60592/page-3

Then I tried to generate a list of how popular these libraries were among those who actually owned them. The idea was to bypass the influence of some libraries being old, cheap, expensive, not promoted actively, or very new (e.g. so new that they were added after most people had voted), and so on. The results are here:
https://vi-control.net/community/th...g-libraries-and-why.60460/page-7#post-4072545
But none of these lists tell us which library is best, or most popular; and the results can be manipulated by voters.

It's too bad if a company, small or large, loses sales due to unfair premises etc. So while I think it's a good idea to share files and keep updating them, maybe the best idea would be to steer clear of any kind of contest, at least until decent versions of most libraries exist. And even then, there won't be a "best library", because we have different needs and tastes. Some people really love a library others regret having bought, and so on. So why compete at all?




Garry said:


> an unblinded library of 'Saxer's Seven' for violins, with as many libraries as possible. These should now be the very best that the library can provide, as before, using whatever is available in the library OR using other elements as long as they are clearly stated and both versions, with/without 3rd party plugins. Anyone can submit an unblinded entry....


Anyone can submit an unblinded entry, but most people won't. Presenting only one single lib in a fair way takes quite some effort. So with that in mind, the most important thing in this process is to get as good versions as possible of as many libs as possible. It will take weeks, probably months, before we have that.


----------



## Garry (Jun 22, 2018)

The purpose of stage 2 isn't to create a contest, it's to create a database, of the very best each library has to offer. Anyone can then consult this database as a way of comparing Library A that they're considering purchasing, vs Library B which they already have (i.e., do I need Library A: is it sufficiently different) or is Library C better. The intention is NOT to do this by polls: no one has to agree whether Library A or B is better - everyone can have a different opinion on that; we just have to agree whether submission 1 is the best representation of it, or submission 2; then we move on to the next library.

For those that don't see value in it - no problem. No one is forcing anyone to use it - just use it if you think it helps you, and ignore it if it doesn't. @Vik - I think we can put you down in the latter category. No problem - your vote is duly noted. Anyone who feels they fall in the former category, and thinks this would be useful, please 'like' my previous post, and we'll see how much interest there is. If there is none or not sufficient to warrant the effort, that's fine too, it's just an idea of how we can help each other out


----------



## Garry (Jun 22, 2018)

OK, if we must, let's do a poll... I'll start a new thread to ask the question of whether we want to create a database.


----------



## Vik (Jun 22, 2018)

Garry said:


> I think we can put you down in the latter category


I'm all for moving this to stages 2, 3 and beyond, Garry; I even found stage 1 useful, even if, having thought about it, I found the introduction of voting premature. I just don't like the contest aspect of this, as in "most votes wins for each library", "having won best entry" etc., due to all the limitations involved. But as someone who has created a few polls here myself, which may be seen as contests by some visitors, I'm not in a position to say that polls/votes are always a bad thing; it's just an afterthought based on your project and my earlier polls.


----------



## Garry (Jun 22, 2018)

The poll is just to identify, in as fair and transparent manner as possible, which entry should represent each library, and to provide some fun. If you have a better way, that is equally fair and transparent, please feel free to suggest it.


----------



## Garry (Jun 22, 2018)

OK, poll created here. If there's interest, we'll go ahead; if not, no problem, that's ok too!


----------

