# The VI-C blinded violins shootout - FINAL UPDATE



## Garry (May 30, 2018)

Ok, the poll results are in from this thread, and as of 12pm CET, an overwhelming majority (*86%* of the 73 votes cast) voted 'yes': they want to see a blinded shootout between libraries, either just for fun or to help them make decisions about which libraries to purchase. So, your wish is my command...

Thank you to everyone for your helpful comments and suggestions in that thread. As one of the contributors to that thread rightly pointed out, we will not please 100% of people with exactly how we run this, so since this is our first attempt, let's go with it as described below, and there's nothing saying we can't improve the rules for the next shootout, as we learn from the experience. *We now need your contributions*, but before doing so, please note the rules.

The *RULES*:

*SOME KEY POINTS*: This is a blinded test, so to maintain the blinding, please pay close attention to these key points: (i) do not submit files to this thread - submit them as a PM to me; (ii) do not submit votes to this thread - send them to the 'Randomiser' (explained below). To maintain the credibility of the competition, do not use anything outside the library (or if you do, 2 versions must be submitted, with and without the 3rd-party plugins), and do not embellish with additional instruments or harmonic lines.

The *INSTRUMENT*: we'll be starting with violins only. If this round is successful and people find it useful, we can move on to others in subsequent rounds, but only violins for now.
The *LIBRARY*: any library that has violin patches is eligible.
The *PLUG-INS*: 
For 'wet' libraries (i.e. the library includes multiple mic positions, EQ and reverb): you are free to use any combination of mic positions, articulations and effects that come provided with the library, that you feel best shows off the library's capabilities.
For 'dry' libraries (i.e. the library doesn't have this, or is built to offer a really dry sound with no room): third-party plugins can be used, but you must include (i) one dry version and one wet version separately, and (ii) a screenshot showing the settings/presets of all 3rd-party plugins (credit to @fretti here for a great solution to something many had expressed concern about). Remember, we are not trying to equate the contributions; we are trying to capitalise on the considerable expertise we have in this community to show off the libraries at their very best, or show what is additionally required to make them shine.
It would _almost_ seem to go without saying: platform engines, e.g. Kontakt, are of course allowed!! Apparently it _does_ need saying, so, 'yup, they are allowed too' 


The *MELODIC LINE*: with thanks to @Saxer , we have 7 phrases which provide a perfect variety of tone, tempos and articulations. The MIDI file is provided below. Please do not embellish with additional instruments (even from within the same library) or harmonic lines - this would become more of a test of the composer, and that's for another day!! Please do not write the MIDI in yourself - we want to make this the constant on which to base comparisons; however, you can add CCs, velocities, etc, to enrich the performance.
The *SUBMISSION*: please send 1 audio file per library as a PM to me. Please *DO NOT* submit the file to this thread - this is an important part of the blinding. Please include the name of the library used in the filename. The submission must include all of the sections of Saxer's MIDI file in order to qualify. I will collate all the files, and attach pseudolabels (library_A, library_B, etc) to each. I will not vote, as I will be unblinded. You can submit more than 1 library - as many as you choose - remember, 1 audio file per library. Please also consider sending a note to this thread (not disclosing the library name), so that people are aware of the ongoing activity.
The *RANDOMISATION & POSTING*: I ask for 1 other member to volunteer to receive these files from me (please send me a PM), and simply randomise the order, randomly re-labelling the files library_1, library_2, etc, so that no file benefits from an order-position effect. The person randomising the files (let's call them 'the Randomiser') will post all the files to this thread at the same time. S/he will be allowed to vote, since they remain blinded, and so cannot exert undue influence.
The *VOTING*: votes will be sent by PM to the Randomiser only. Again, *please do not post your votes to the thread*, as it could easily influence other people's ratings. This too is a key part of the blinding.
The *RESULTS*: the Randomiser will then post the results of the voting (still blinded). I will then post the unblinding (including the names of the libraries and the contributing author for each file, unless you specifically indicate to me that you would prefer to be anonymous), and all will be revealed!
The *DEADLINE*: 1 week today (Wednesday, June 6th)
I've no doubt there are flaws with this method, and some people will have objections, and perhaps someone else could do a better job. But, in the interest of getting things moving, and not letting the excellent be the enemy of the good, let's go with this, warts and all, and any suggestions for improvement can be included in the next blinded shootout. Finally, thanks to @Vik, whose recent, single-handed heroic efforts, comparing violins and cellos, provided a great platform to learn from that exercise, get feedback from the community, and now extend this out to the broad VI-C expertise.

*So, now we need your contributions*! Please feel free to start sending the audio files to me by PM (remember to label them with the library name).

Let Battle Commence!

[AUDIOPLUS=https://vi-control.net/community/attachments/violinslegatotest-mp3.13651/][/AUDIOPLUS]


----------



## Garry (May 30, 2018)

In case it helps, here is the notation for the MIDI file, provided by @Saxer:


----------



## fretti (May 30, 2018)

Maybe the middle way is the following(?):
- if the library offers multiple different mics and an included reverb control: it has to / can be used to the extent of the individual doing the work, to his/her liking.
- if the library doesn't have this, or is built to offer a really dry sound with no room (like VSL), then there should be one version without reverb and one with the reverb the submitter decides to use, so it is, in his/her eyes, the best result.
But the PM to @Garry should then include:

- what 3rd-party reverb was used (product-wise; plugin or external, etc.)
- what preset or settings were used (maybe a screenshot)

so both versions can be included, but the reverb details have to be listed later, so we at least can see the difference reverb can make.

Could make it fairer towards certain libraries. Might also just lead to the opposite: the included mics sound different than an €800 reverb plugin on top of a good-quality sample library...


----------



## Garry (May 30, 2018)

fretti said:


> Maybe the middle way is the following(?):
> - if the library offers multiple different mics and an included reverb control: it has to / can be used to the extent of the individual doing the work, to his/her liking.
> - if the library doesn't have this, or is built to offer a really dry sound with no room (like VSL), then there should be one version without reverb and one with the reverb the submitter decides to use, so it is, in his/her eyes, the best result.
> But the PM to @Garry should then include:
> ...



Great suggestions, thanks Fretti - I'll update the rules.


----------



## EgM (May 30, 2018)

VSL Instruments (the free player) has built-in algo reverb; they can use that, since it's not outside the purchase of the library.


----------



## Garry (May 30, 2018)

Batrawi said:


> Eventually no one's forced to buy the full version, yet if they like what they hear and they're serious about getting same results, then it's still a good thread for them to help them consider buying the full version...for good reasons.


Just to be clear, Kontakt (whether full version or not) is clearly allowed. It was obvious to me, and I thought to others, that we were aiming to preclude the use of third-party plugins that would be used to enhance the sound outside the constraints of the library, and it seems to me an obfuscation to pretend to be unaware that was the case. For the sake of complete clarity, I’ve added what seems an entirely unnecessary clarification that Kontakt is of course allowed!

I hope you can enjoy what was intended to be of mutual benefit to the whole community, and hopefully fun in the process. Let’s see...


----------



## fiestared (May 30, 2018)

Garry said:


> Yes, both good points. I’m not sure I can shorten the original, because I will of course be accused of missing something (ahem...). But you’re right, we just have to hope people are interested enough to read it through. Also, if it’s shorter, then we’ll probably end up with this being a very long thread of clarification questions.
> 
> As to your 2nd point, yes I’m concerned about that too. I had planned to periodically update the number of submissions, and perhaps highlight libraries that had not yet been represented. As it is, that’s not yet our problem. We don’t yet have any submissions! So, come on VI-C, let’s get this going.
> 
> Thanks for pointing these out Ka00 - perhaps in the next round, the lengthy description will be unnecessary, and we can have another way of submitting. The reason for sending to me is that I then blind them, which means when I send them on to someone else, it becomes double-blind - a point that was made in the other thread as being important. If not sent to me via PM in future iterations, we’d still need an independent way of collating them. I stay independent by not voting. Open to suggestions though.


Why not simply "send you the file and write a post to say it to others"? Ok, two actions, but at least this thread would be active.


----------



## Casiquire (May 30, 2018)

Well a good way around that issue is to have members post something when they've submitted files. As most of us have more than one string library, and we wouldn't know which example came from which member, blindness would be retained.


----------



## Garry (May 30, 2018)

@fiestared and @Casiquire: great suggestions - I’ve now updated to request people do this. Thanks guys.


----------



## Garry (May 30, 2018)

ka00 said:


> But does it in fact need to be double blind? Just blind to the thread readers would be sufficient. What sort of bias would you personally introduce as the admin of this test? You are just compiling and posting the resulting sample tracks. Everyone is making conclusions for themselves at the end, no?
> 
> Would eliminating your own ‘blindness’ simplify things on your end?


Let’s say I preferred library A over library B - if I were trying to bias it deliberately (or indeed subconsciously), I could put library A at the beginning or the end (primacy/recency effect: we tend to bias our preference and improve recall of items that occur at the beginning or end of a list). So, if I blind the files, and then someone else randomises, I can’t influence the order, and they can’t prejudice one library over another in ordering, because they’re blinded to library name.

In medical science, we do this all the time: the pharmacy prepares the drug or placebo (so is unblinded to the treatment), but it is the statistician that pre-determines the treatment randomisation - the combination protects against bias (my real job is in neuroscience - I’m just a wannabe musician!).
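The two-step blinding-then-randomisation protocol described above can be sketched in a few lines of Python. This is only an illustrative sketch: the filenames and helper names are hypothetical, not part of the actual process.

```python
import random

# Step 1 (coordinator): map each submission to a pseudolabel
# (library_A, library_B, ...) and keep the key private until the unblinding.
def blind(submissions):
    labels = [f"library_{chr(ord('A') + i)}" for i in range(len(submissions))]
    # The coordinator keeps this mapping; only the labels are passed on.
    return dict(zip(labels, submissions))

# Step 2 (randomiser, who never sees the key): shuffle the pseudolabels and
# assign a posting order (library_1, library_2, ...) so no file benefits
# from a primacy/recency effect.
def randomise(labels, seed=None):
    rng = random.Random(seed)
    shuffled = list(labels)
    rng.shuffle(shuffled)
    return {f"library_{i + 1}": label for i, label in enumerate(shuffled)}

submissions = ["CSS.mp3", "SCS.mp3", "HZS.mp3"]  # hypothetical filenames
key = blind(submissions)            # coordinator is unblinded to names
order = randomise(sorted(key))      # randomiser is blinded to names
```

Because the coordinator fixes the pseudolabels but the randomiser fixes the posting order, neither party alone can steer a known library to a favourable slot, which is the double-blind property the post describes.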


----------



## Garry (May 30, 2018)

ka00 said:


> If it simplifies things, when you get all the files, just list them for yourself here and hit 'randomize', then adhere to that random order. If we can't trust you to do that then the whole thing is null and void anyway. PS: I trust you!


Yes, tools like that are useful when we want to randomise (although they typically have more structure than you’d think), but transparency is important in any trial, and the elimination of doubt is key to the acceptance of the results. You trust me now, but what about when your favorite library, that you paid top dollar for, comes bottom of the pile!! 



ka00 said:


> Then you know we're all just going to Yanny and Laurel the results anyway!


What are you talking about, there’s only Laurel!


----------



## Arviwan (May 30, 2018)

Hi everyone,
excellent idea!
I've just downloaded the MIDI file and will try and record examples next weekend.
Just one thing I wanted to say: since the file includes 7 different phrases, it might be relevant to compare phrase by phrase, 'cause I think 1'33 for each bank is a little too long... or at least it is for my ears! 
Cheers


----------



## Garry (May 30, 2018)

Arviwan said:


> Hi everyone,
> excellent idea !
> I've just downloaded the MIDI file and will try and record examples next weekend.
> Just one thing I wanted to say: since the file includes 7 different phrases, it might be relevant to compare phrase by phrase, 'cause I think 1'33 for each bank is a little too long... or at least it is for my ears!
> Cheers


Yes, I know what you mean: Saxer's file perfectly addresses a concern that was raised that a 10 second file doesn't give a true impression of a library, but on the other hand, 1'33 makes it a lot harder for people to compare - it becomes a memory test as much as anything! We could compare phrase by phrase, but then people would need to send 7 files, rather than one. Hmmm... I'm not sure - what do people think? We want it to be meaningful, but we want it to be practical as well. If it all just feels like too much trouble, people won't participate.

Also, you mentioning looking at this at the weekend raises another question: we need to set a deadline - does 1 week from today sound reasonable?


----------



## Garry (May 30, 2018)

fretti said:


> +another thing I just realized which might be a problem: section sizes.
> Do we allow all sizes (depending on what the participant thinks sounds best with that exact library) or do we want limits?
> What do you all think: limits or just whatever sounds best for the one making the file?
> Just asking because with all the libraries out there right now we have like unlimited combinations and section sizes in those libraries (from 4 in SCS to 60 in HZS...)


Personally, I'm ok with this: does it actually sound better with 60 violins, or not? Of course, that may depend on the requirements of the music, and people will have to generalise their ratings there, since we can't have a competition for every type of articulation for every type of music. But I'm not convinced people will pick out the 60 violins - to my ears, they're not sounding as big as one might have expected. If 60 violins sounds awesome, and stands out above the rest, then Spitfire have a winner and will make a lot of money from their library, but I'm eager to see the results, and whether people can actually pick them out in a blind test.


----------



## Saxer (May 30, 2018)

If I may add my opinion: 
Don't complicate everything by rules and methods. If possible: no external plugins. Use the software that is needed to run the library. Use the whole midi file or a part of it. Or make your own music. If you use something else there's probably a reason. If so, it would be kind to tell us why. Send the files to Garry and let him sort the things. I think that should be it. And there are definitely no reasons to drive this thread into the drama zone.

I made the test example with all problems in mind that came across using different libraries. So there will probably problems coming up sooner or later. It would be interesting how the libraries or the users of it (you/me/we) can handle the examples. I could imagine that often more than a single patch is needed to get good results. The borders of libraries will be interesting too. If everything runs fine without problems with most libraries I will have to learn lot how you did it! Probably all my problems were home made then.


----------



## Garry (May 30, 2018)

Saxer said:


> If I may add my opinion:
> Don't complicate everything by rules and methods. If possible: no external plugins. Use the software that is needed to run the library.


I was originally of this opinion too, but I think it's a fair point that dry libraries would likely not fare well, so it's useful information to know what it takes to make them sound good, by disclosing the plugins used and comparing with/without - and I still like Fretti's suggestion for dealing with this.


Saxer said:


> Use the whole midi file or a part of it. Or make your own music.


Hmm... I have to strongly disagree here. This is only a useful test when we hold one thing constant. If even this is variable, then we will be in no better a situation to compare libraries than we are now, just listening to what's randomly out there. Your midi file is the perfect remedy to this: it standardises the melodic line, but provides enough variation and complexity to be meaningful. It was perfect - don't lose me now! 

So, with your permission, I'm going to put this in the drawer labelled 'ideas for future tests', and not change the rules of _this_ test, otherwise, I fear we'll never get off the ground.


Saxer said:


> I could imagine that often more than a single patch is needed to get good results.


I agree, but that is the answer to a different question I think. This test doesn't claim that using only 1 library will get you the best result - it just provides a basis to compare what each library individually is providing.


----------



## ka00 (May 30, 2018)

It’s smart to keep things simple. All I’m personally looking for is the same passages rendered with as many of the top end libraries as possible. That has inherent value whether it’s a secret which library is used or not.

Regardless, in the end, the appreciation of this will be more art than science.


----------



## Pianolando (May 30, 2018)

Cool thread! 

I do believe that the "dry" libraries should be entered as dry, as having external reverb will add too many possibilities. Sure, the dry ones will sound dry, but that is not a problem in my book; everyone worth their salt will know how you can use dry libraries and how flexible that is.

Different volume on the recordings will be a problem, but maybe OP can take care of that?

I also think that the example in the OP sounds very good!


----------



## Saxer (May 30, 2018)

I just thought: if a legato isn't done with a legato patch alone and needs some short-note overlay from the same library, it should be possible to do this. 
And if some parts don't work with a library (maybe there is no note repetition and portamento), it doesn't make sense to use the MIDI file at that point. But if somebody finds a workaround by editing the MIDI, it would be good to know how he/she achieved it. Something like adding blurred samples to runs.


----------



## Vik (May 30, 2018)

Hearing a dry library with some reverb could be interesting, but if I had to choose between a relatively dry version of a library and one with added reverb, I'd definitely want to hear the one without external reverb (in this context). Or even better; both a dry version and one with a modest amount of reverb.


----------



## Garry (May 30, 2018)

Vik said:


> Or even better; both a dry version and one with a modest amount of reverb.


Thanks to @fretti's solution to this problem, you will hopefully get your wish, if we get submissions from users of dry libraries.


----------



## Vik (May 30, 2018)

Garry said:


> Thanks to @fretti's solution to this problem, you will hopefully get your wish, if we get submissions from users of dry libraries.


Sounds good! 
In the ideal world, I'd personally like to hear the driest possible version of any library included in this shootout, especially for libraries recorded in Air and other wet locations. And for the dry libraries out there, it will also be useful to hear the least dry version that can be done with internal mics. But this way, we could easily end up with three versions of each of the libraries - or at least of the dry ones.


----------



## Arviwan (May 30, 2018)

Garry said:


> Yes, I know what you mean: Saxer's file perfectly addresses a concern that was raised that a 10 second file doesn't give a true impression of a library, but on the other hand, 1'33 makes it a lot harder for people to compare - it becomes a memory test as much as anything! We could compare phrase by phrase, but then people would need to send 7 files, rather than one. Hmmm... I'm not sure - what do people think? We want it to be meaningful, but we want it to be practical as well. If it all just feels like too much trouble, people won't participate.
> 
> Also, you mentioning looking at this at the weekend raises another question: we need to set a deadline - does 1 week from today sound reasonable?



About the deadline, I'll let you have your say...
And dividing into 7 phrases is probably too drastic, but 1'33 is too long, so maybe divide into... 3 parts? -> "Sustained", "Medium" and "Fast"?


----------



## Garry (May 30, 2018)

Arviwan said:


> About the deadline, I'll let you have your say...
> And dividing into 7 phrases is probably too drastic, but 1'33 is too long, so maybe divide into... 3 parts? -> "Sustained", "Medium" and "Fast"?


I think it's an interesting suggestion: @Saxer, what do you think? Could you select 3 lines from the current 7 for sustained, medium and fast? If so, could you send me new midi, audio & notation, and I'll change the current files? I think @Arviwan has a point, in that in our attempt to avoid 10 seconds being insufficient, we may have gone too long with 1'33 (I hadn't realised it was that long to be honest). This might be a great compromise?


----------



## Arviwan (May 30, 2018)

Maybe you don't even have to change the MIDI file... if you take the time to cut all the audio files you receive at the same timings... but it's maybe too much work...


----------



## Garry (May 30, 2018)

Arviwan said:


> Maybe you don't even have to change the MIDI file ... if you take time to cut the audio files you'll receive all at the same timing ... but it's maybe too much work ...


Let's just cut once in the beginning, then we don't have to do it multiple times.


----------



## Garry (May 30, 2018)

ka00 said:


> I wonder if you could release the results as multitrack stems? If it’s not too much work to align them all. That way we could toggle different tracks on and off to compare as short or as long a section as we want. Kind of like bypassing a plugin to gauge its effect. Just a thought.


I like it, but this rookie isn't the man for the job! If we have a volunteer amongst you pros, you got the job!


----------



## Garry (May 30, 2018)

ka00 said:


> Come on, Garry, it’s not neuroscience!
> 
> Just kidding. I'll volunteer if you send me a folder with each of the finished tracks.


You're hired! Thanks ka00. And of course, if you need any neuroscience doing, I will happily return the favour! 

What do you think @Arviwan - this seems a great solution to the issue you raised?

This is turning out to be a great collaboration - thanks guys.


----------



## Saxer (May 30, 2018)

Garry said:


> I think it's an interesting suggestion: @Saxer, what do you think? Could you select 3 lines from the current 7 for sustained, medium and fast? If so, could you send me new midi, audio & notation, and I'll change the current files? I think @Arviwan has a point, in that in our attempt to avoid 10 seconds being insufficient, we may have gone too long with 1'33 (I hadn't realised it was that long to be honest). This might be a great compromise?


As I wrote in the previous thread the examples have different tasks:

A - Expressive legato melody
B - Softer dynamic
C - Agile bowing with slurs into short notes
D - Legato ostinati and slur grouping
E - Note repetitions and portamentos
F - Fast arpeggios
G - Runs

That is already a selection of things that interest me personally. I'd say: if it's too long, leave out the things that don't interest you or don't work. But if I should choose, I'd choose C, D and F - just because all libraries can do slow movement.


----------



## Garry (May 30, 2018)

Saxer said:


> As I wrote in the previous thread the examples have different tasks:
> 
> A - Expressive legato melody
> B - Softer dynamic
> ...


With @ka00's suggestion, we may not need to reduce anything - as I understand it, the way he would set it up with stems, you can quickly turn on/off each, and so easily switch between. I have to admit, I'm out of my depth here, and happy to be guided by you guys, but this sounds great to me, and the best of all worlds.


----------



## mc_deli (May 30, 2018)

nice idea

(just posting my sig really)


----------



## korruptkey (May 30, 2018)

Paul T McGraw said:


> So, in your mind being childish is holding someone responsible for accuracy. Meanwhile being wrong and refusing to admit it and apologize, that is not childish in your mind? How old are you, Garry? At this point, I do not care what you decide are the "rules" of the Garry contest. I will not be participating in the Garry contest. But I will certainly be reading the forum posts.



Don't even try reasoning with him, he picked on me once because I had a lower post count than him and that supposedly gave him more authority. I suggest backing away.


----------



## fiestared (May 31, 2018)

Number 01 of the shootout sent to Garry...


----------



## fiestared (May 31, 2018)

Number 02 of the shootout sent to Garry...


----------



## fiestared (May 31, 2018)

Number 03 of the shootout sent to Garry...


----------



## fiestared (May 31, 2018)

Number 04 of the shootout sent to Garry...


----------



## Garry (May 31, 2018)

Great, thanks @fiestared

Ok guys, please get your entries in, and let's make this a really interesting shootout. The more entries we have, the better. Remember to take a look at the rules before sending them in. 

Have fun!


----------



## Garry (May 31, 2018)

Entries are coming in nicely - several libraries are represented already, but many are not yet. Don't let your favourite library miss out! Get your entry in, and see how it does in the blind shootout. You can submit as many libraries as you wish.

I'm listening to these (unblinded), and it's absolutely fascinating! I can't wait to post them, and see what everyone thinks when you hear them blinded!

Remember, the deadline is *next Wednesday*.

Thanks to everyone for making this really fun - I think you're going to find the results really interesting, and especially if you have a horse in the race!


----------



## Garry (Jun 1, 2018)

Lots of libraries still not represented.

Can you help make this effort a really useful contribution to the community by adding 1 or more of your violin libraries? It’s very straightforward to do: just download the midi file from the beginning of this thread, load up your favorite violins patch, add adjustments to taste, and PM me with the audio file.

This could become a way to benchmark new libraries, to see how they differentiate from what’s already out there. Can you help to make it as broad a comparison as possible?

This is truly musicians helping musicians.


----------



## fiestared (Jun 1, 2018)

Garry said:


> Lots of libraries still not represented.
> 
> Can you help make this effort a really useful contribution to the community by adding 1 or more of your violin libraries? It’s very straightforward to do: just download the midi file from the beginning of this thread, load up your favorite violins patch, add adjustments to taste, and PM me with the audio file.
> 
> ...


Do you have HZStrings?


----------



## Garry (Jun 1, 2018)

fiestared said:


> Do you have HZStrings?


I can’t say!  I don’t want to unblind anything!

Let’s just say, if I do, I plan to include multiple, because that will be helpful for people to see different examples from different users; if I don’t, then it would be awesome to have HZS included, as it would be one people would want to see whether it stands out in a blind test.

Bottom line: if you have HZS, or any other violin library, please send along an entry, as duplicates will not be discarded, and we want to avoid gaps where libraries are not represented.


----------



## brek (Jun 1, 2018)

Maybe this is obvious, but didn't see it explicitly stated - can we tweak the midi with CCs, velocities, etc?


----------



## hsindermann (Jun 1, 2018)

And alongside brek's question: Can we just play it in from scratch or should we use the provided midi file?


----------



## Garry (Jun 1, 2018)

brek said:


> Maybe this is obvious, but didn't see it explicitly stated - can we tweak the midi with CCs, velocities, etc?


That's a good question, and you're right, it wasn't explicitly stated. I'm going to amend the description to clarify this.

But yes, we want people to be able to make the libraries sound as good as they can, so you can do anything within the constraints of the library (apart from adding other instruments/harmonic lines), or you can do anything outside of the library, as long as you state what it was, screenshot it, and submit versions with and without the 3rd-party effects.


----------



## Garry (Jun 1, 2018)

hsindermann said:


> And alongside brek's question: Can we just play it in from scratch or should we use the provided midi file?


Please use the midi file - we want to be able to have one constant, so that we can make it a reasonable comparison.

But again, that's a good question, and I'll update the description to reflect it.


----------



## Garry (Jun 1, 2018)

Ok, just added the following, under 'melodic line' to address @brek's and @hsindermann's questions.

_Please do not write the MIDI in yourself - we want to make this the constant on which to base comparisons; however, you can add CCs, velocities, etc, to enrich the performance._

Thanks again.


----------



## Vik (Jun 1, 2018)

Personally, I think it would be easier to get more contributions if the list of libraries already in the comparison were public.


----------



## Garry (Jun 1, 2018)

Vik said:


> Personally, I think it would be easier to get more contributions if the list of libraries already in the comparison were public.


You could well be right, and perhaps that's a better way round to do it next time - let's see... There's definitely more than one way to do this, and we'll learn from this time around what works and what doesn't. I don't pretend to know the answer at this stage.

The key difference on this occasion is that _everything_ is blinded, and we'll see what value/fun this adds to the exercise. Some potentially interesting aspects of this approach - no one knows:

- which libraries were contributed (potentially biasing their vote towards their pre-existing favorite),
- who wrote the entries (potentially biasing them towards those contributors they already know to be highly skilled),
- whether there are multiple entries for a library (it will be interesting to see if there are disparities in votes for one library played by multiple contributors, allowing us to ask post-hoc whether the higher-rated one was just played better (giving an indication of what _can_ be achieved with that library by a skilled user) or whether they were similarly played (indicating the difficulty in separating the libraries consistently, suggesting many are highly similar, causing arbitrary and inconsistent ratings)),
- whether some libraries are missing, and whether expectations are met as to what was and was not included,
- what other people voted (potentially biasing towards affirmation of the consensus).

So Vik, will you be contributing? You've pioneered this up to now, and have multiple string libraries you could incorporate.


----------



## hsindermann (Jun 1, 2018)

Hey Garry, another question!  I only just noticed that all files are named 'Legato test'. Does that mean we are only supposed to use the legato/sustain articulations? At least for a couple of notes in phrase C I'd have used a staccato one.


----------



## Garry (Jun 1, 2018)

hsindermann said:


> Hey Garry, another question!  I only just noticed that all files are named 'Legato test'. Does that mean we are only supposed to use the legato/sustain articulations? At least for a couple of notes in phrase C I'd have used a staccato one.


No problem 

That just relates to the origin of the file, which came from Saxer. You should feel free to use whichever articulations you feel are most suitable.


----------



## fiestared (Jun 1, 2018)

Garry said:


> No problem
> 
> That just relates to the origin of the file, which came from Saxer. You should feel free to use whichever articulations you feel are most suitable.


As I said before, for the moment I'm only using the Vi legato. Please don't forget to post when you send a file to Garry; that's the best way to keep this thread "alive".


----------



## Vik (Jun 1, 2018)

Garry said:


> So Vik, will you be contributing? You've pioneered this up to now, and have multiple string libraries you could incorporate.


Thanks for your response, Garry. I'm unfortunately too busy to participate this time, but if there aren't enough contributions around the deadline, maybe extending it a little would help?


----------



## Garry (Jun 1, 2018)

ka00 said:


> Garry, would you be okay with posting a list of which prominent libraries you *haven't* received entries for yet at this point? That way, people can feel more confident that their contribution will fill a genuine need. In the end, you don't have to say if you eventually received the ones you called out or not.


That doesn’t avoid unblinding (you would then know which libraries are definitely in), and it would come at the cost of discouraging those who might submit entries for libraries we already have represented; those entries are also useful.

Let’s go with a middle approach: let’s see where we are after the weekend, and then, if there are still key ones missing, we can put a call out for several, and I could add 1 or 2 that we’ve already got. That way, people still don’t know exactly what’s in the list, but we’re making efforts to plug our gaps without compromising what we set out to achieve in the beginning.

It’s funny, you know, this feels very much like a treatment study, where people are eager to take an early peek at how things are going, and worry about recruitment. It’s something of a busman’s holiday for me!!  Then again, I don’t usually play the role of the one barring entry at the door!


----------



## fiestared (Jun 1, 2018)

Number 05 of the shootout sent to Garry...


----------



## fiestared (Jun 1, 2018)

Number 06 of the shootout sent to Garry...


----------



## Przemek K. (Jun 3, 2018)

Just one question regarding the midifile. It does not have the same tempo as the original mp3. Is that intentional or not important to the test?


----------



## Saxer (Jun 3, 2018)

Przemek K. said:


> Just one question regarding the midifile. It does not have the same tempo as the original mp3. Is that intentional or not important to the test?


The tempo should be included and is the same as in the mp3; it loads fine here in different hosts when opening the MIDI file. As written in the score, it's 80 for examples A and B, 95 for C, D, E, and 140 for F and G.
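For anyone re-entering the tempos by hand, it may help to know that a standard MIDI file stores each tempo as a `set_tempo` meta event measured in microseconds per quarter note, rounded down to an integer. A minimal sketch of the conversion in plain Python (the function names are just illustrative):

```python
# MIDI set_tempo events store microseconds per quarter note (as an integer),
# so a BPM from the score converts as follows.
def bpm_to_midi_tempo(bpm: float) -> int:
    """Convert beats per minute to a MIDI set_tempo value (microseconds per quarter note)."""
    return int(60_000_000 / bpm)

def midi_tempo_to_bpm(tempo: int) -> float:
    """Convert a MIDI set_tempo value back to beats per minute."""
    return 60_000_000 / tempo

# The three tempos from the score:
for bpm in (80, 95, 140):
    print(bpm, "BPM ->", bpm_to_midi_tempo(bpm), "us/quarter")
```

So the score's 80/95/140 BPM correspond to `set_tempo` values of 750000, 631578, and 428571 microseconds per quarter note respectively.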


----------



## Przemek K. (Jun 3, 2018)

Saxer said:


> The tempo should be included and is the same as in the mp3; it loads fine here in different hosts when opening the MIDI file. As written in the score, it's 80 for examples A and B, 95 for C, D, E, and 140 for F and G.



Strange, here in Cubase it didn't load the tempo changes. But thanks for the tempo changes for the different examples; I will fit them manually to match the original version.


----------



## fiestared (Jun 3, 2018)

Saxer said:


> The tempo should be included and is the same as in the mp3; it loads fine here in different hosts when opening the MIDI file. As written in the score, it's 80 for examples A and B, 95 for C, D, E, and 140 for F and G.


I made everything (6 tracks) at the same tempo: 80. I downloaded only one MIDI file and everything in it was at 80...


----------



## JeeTee (Jun 3, 2018)

Przemek K. said:


> Strange, here in Cubase it didn't load the tempo changes. But thanks for the tempo changes for the different examples; I will fit them manually to match the original version.


This is a Cubase option: go to Preferences/MIDI/MIDI File and untick 'Ignore Master Track Events on Merge'. This will overwrite your project's current Master Track settings with those contained in the MIDI file.

Actually, I made the same error....


----------



## Garry (Jun 3, 2018)

JeeTee said:


> This is a Cubase option: go to Preferences/MIDI/MIDI File and untick 'Ignore Master Track Events on Merge'. This will overwrite your project's current Master Track settings with those contained in the MIDI file.
> 
> Actually, I made the same error....


Yikes!! I hadn’t listened to them yet. If they’re at the wrong tempo (especially 80 instead of 140!), it will make comparisons difficult. Could I ask those who haven’t incorporated the tempo changes to resend a corrected version? Sorry to ask! More time will be given if needed...


----------



## kimarnesen (Jun 3, 2018)

Just double-checking: We are not supposed/allowed to record the melody lines, only import the midi and change nothing, right?

And what about libraries that have just "high strings" patches, are they accepted?


----------



## Garry (Jun 3, 2018)

kimarnesen said:


> Just double-checking: We are not supposed/allowed to record the melody lines, only import the midi and change nothing, right?
> 
> And what about libraries that have just "high strings" patches, are they accepted?


Thanks for your questions. 

Yes, no melodic lines - only the notes in the midi file. As for 'change nothing', you can change articulations, mic positions, velocities, reverb, EQ, just not the notes themselves - this is the constant across entries.

High string patches will include primarily/exclusively violins, so yes, these are allowed.


----------



## Saxer (Jun 3, 2018)

kimarnesen said:


> Just double-checking: We are not supposed/allowed to record the melody lines, only import the midi and change nothing, right?


I don't want to undermine Garry's thread rules... but in the end it's about the musical output. To get the best results you have to edit or even re-record the midi. Some libraries need more or less overlapping time to recognize legato, some have different timing (CSS!) or attack times depending on note velocity, some might need a short-note overlay on a separate track... So I think the goal should be to make the musical output work (and not to show what doesn't work if you don't edit the midi file).
It just doesn't make sense to record other melodic material, as you then can't compare it any more.


----------



## nik (Jun 4, 2018)

Hey, this is a really cool idea!! It's very interesting. On the one hand, I want to contribute a library that is missing, so a list would be really cool... on the other hand, it's a great moment to just do it with all my string libraries and learn more about their particular strengths and weaknesses. Thanks for the effort, guys!!


----------



## fretti (Jun 4, 2018)

I think the general idea of the midi file should be kept (note lengths etc.).
So yes, if you edit a note so it begins a little earlier to trigger the legato script and make it sound better: absolutely. But if some start to interpret and change it so much that it's not really the same anymore, because they feel that a 4-bar melody should be lengthened to 6 bars and the quarter notes should be played as eighths or whatever, then I'd have to say no, because the idea of the whole thing would be lost.
But if you're a good piano player and can "stay in the sheet music" on page one from @Saxer, and it's easier to do the automation while playing, then why not.


----------



## kimarnesen (Jun 4, 2018)

Saxer said:


> I don't want to undermine Garry's thread rules... but in the end it's about the musical output. To get the best results you have to edit or even re-record the midi. Some libraries need more or less overlapping time to recognize legato, some have different timing (CSS!) or attack times depending on note velocity, some might need a short-note overlay on a separate track... So I think the goal should be to make the musical output work (and not to show what doesn't work if you don't edit the midi file).
> It just doesn't make sense to record other melodic material, as you then can't compare it any more.



That makes a lot of sense, as I tried this with a pretty well-regarded library yesterday and it sounded crap without changing some of the midi :/


----------



## kimarnesen (Jun 4, 2018)

...and to compare this with competitions among musicians, of which there are a lot in classical music: there is always variation in interpretation, even for the mandatory pieces.


----------



## Steve Martin (Jun 4, 2018)

Hi Garry,

do we get to know at the end, after the votes are in, the identity of the string library behind each particular example, besides the winner of the main vote?

thanks,

Steve.


----------



## Garry (Jun 4, 2018)

nik said:


> Hey, this is a really cool idea!! It's very interesting. On the one hand, I want to contribute a library that is missing, so a list would be really cool... on the other hand, it's a great moment to just do it with all my string libraries and learn more about their particular strengths and weaknesses. Thanks for the effort, guys!!


I’m going to take a look today at what we have, and will send out a list of libraries that we’re missing (along with a couple thrown in as decoys, so that I don’t give away what we DO have, to maintain the blinding).

BUT, please note: duplicates will be REALLY interesting and useful in this shootout. There is an element of user skill in this, so the more examples of a library we have from different contributors, the better sense we’ll get of it, and of how a library can shine in skilled hands.

So PLEASE DO feel free to just throw ALL your libraries at it. Thanks for your effort too.


----------



## Garry (Jun 4, 2018)

Steve Martin said:


> Hi Garry,
> 
> do we get to know at the end, after the votes are in, the identity of the string library behind each particular example, besides the winner of the main vote?
> 
> ...



Yes, we’ll unblind after the votes are in. The ‘winner’ is something that makes the competition interesting and fun, but, at least for me, the true value is being able to have a means to directly compare the libraries. Your ‘winner’ may not be the same as mine, and that’s fine, because maybe it helped you make a decision as to which libraries you need for your purpose. Still, it will be fun to crown a ‘King of the Violins’ won’t it?!!


----------



## Garry (Jun 4, 2018)

kimarnesen said:


> That makes a lot of sense, as I tried this with a pretty well-regarded library yesterday and it sounded crap without changing some of the midi :/


I don’t want to be too prescriptive about it - the ‘rules’ are just there to get some consistency, so we have a basis for comparison. If you feel that, for your particular entry, it is better to omit or extend certain notes, then I leave that to the sensible choices of sensible individuals. We shouldn’t take ourselves too seriously here: the ‘rules’ of the ‘competition’ are just there to provide some sort of structure to facilitate our comparisons.


----------



## kimarnesen (Jun 4, 2018)

Garry said:


> I don’t want to be too prescriptive about it - the ‘rules’ are just there to get some consistency, so we have a basis for comparison. If you feel that, for your particular entry, it is better to omit or extend certain notes, then I leave that to the sensible choices of sensible individuals. We shouldn’t take ourselves too seriously here: the ‘rules’ of the ‘competition’ are just there to provide some sort of structure to facilitate our comparisons.



Sounds very reasonable.
I don’t think omitting notes should be necessary, but I had issues due to note lengths, attack, etc.


----------



## Garry (Jun 4, 2018)

Ok, so here's an update of what we need to make this comparison really useful. Below, I've taken the list of string library section sizes that @Vik compiled on May 2nd here, so this should be up to date and comprehensive. I've highlighted in RED those for which we don't currently have an entry (as you'll see, there are LOTS of gaps, so plenty of opportunities for you to make a unique contribution).

*PLEASE NOTE:* just to maintain the blinding, SOME of the red entries may or may not be missing, and SOME of the ones listed as entered may or may not be currently included! The reason for this is so that we all remain blinded as to exactly what is currently in the database. It's of course not completely false, so you can take this as a meaningful list, but let's say it could be 90% true, so that you don't know for sure!

If you have a library that is already listed, and would like to enter it, PLEASE DO SO - for 2 reasons: (i) it may be a decoy below, and we don't actually have it, and (ii) multiple entries of 1 library will be really useful for comparison in the results.

Ok, here goes (have to post in multiple posts due to character limit, so please see following post for list):


----------



## Garry (Jun 4, 2018)

8dio Adagietto *(11-8-6-4, no violin 2)*
8dio Adagio *(11-7-7-7, divisi: 6-3-2-2)*
8dio Agitato *(11-8-6-6, divisi 3-3-3-?)*
8dio Anthology
8dio Cage Strings
8dio Century Strings *6-4-6-4-4*
8Dio Majestica/8W *(20-20-30-30)*
Aria Sounds London Symphonic Strings
Auddict: The United Strings of Europe (20 players)
Audio Imperia Jaeger/Essential Modern Orchestra *(16-10-6-4)*
Audiobro LA Scoring Strings (LASS) *16-16-12-10-8 *(+ First Chairs)
Audiobro Legato Sordino Strings
Audio Modeling SWAM Violin
Chris Hein String Ensemble
Cinematic Strings 2 *12-8-7-7-6*
Cinematic Studio Strings *10-7-7-6-5 *(+ solo strings as an option)
Cinematique Instruments Ensemblia
Cinesamples CineStrings *16-12-10-10-7*
Cinesamples CineStrings Pro
East West Hollywood Strings *16-14-10-10-7*
East West QL Symphonic Orchestra *18/4-11-10-10/3-9: *several different sized section patches:18 Violins, 11 Violins and 4 Violins/10 Celli and 3 Celli
Embertone Joshua Bell (solo)
Frozen Plain Arctic Strings
Garritan Instant Orchestra
Garritan Personal Orchestra 5 *(12-10-10-8-7)*
Garritan Personal Orchestra 5 Ensembles *(9-3-9-3)*
Heavyocity Novo
IK Multimedia: Miroslav Philharmonik (1/2)
Impact Soundworks Furia Staccato Strings
Impact Soundworks Rhapsody Orchestral Colours
Kirk Hunter Chamber Strings 3 (*4-4-4-4-4*, can choose from 2-4 for each section)
Kirk Hunter Diamond Symphony Orchestra Chamber Section* (4-4-3-2)*
Kirk Hunter Concert Strings 2 *(16-12-12-6, *half division* 8-6-6-3, *Quarter Division* 4-4-3-2)*
Kirk Hunter Concert Strings 3* (16-16-16-16-16, *can choose from 4-16 for each section)
Kirk Hunter Diamond Symphony Orchestra* (18-18-10-9-6)*
Kirk Hunter Diamond Symphony Orchestra Studio Section* (9-9-6-5-3)*
Kirk Hunter Pop Rock Strings
Kirk Hunter Spotlight Strings
Light and Sound Chamber strings *(6,5,3,3,1)*
Miroslav Philharmonik 1* (23-4-10-9)*
Miroslav Philharmonik 2* (14/4-8-5-4)*
Miroslav String Ensembles 2.0
Musical Sampling Adventure Strings
Musical Sampling Soaring Strings
Musical Sampling Trailer Strings *18 VI, 16 VA, 14 VC, 12 B *(60 players (Vln1+2 recorded as one section; no legatos)
NI Symphonic String Ensemble* (16-12-12-8-8, divisi 8-6-6-4-4)*
Native Instruments Action Strings: 36 Players in High (*22-8-6* no Basses) and 24 Players in Low (*10-8-6* no Violins)
Native Instruments Emotive Strings
NI Session Strings *4-3-2-2* times 4 (as they have sections 1,2,3 and 4 each with these settings)
Novo Intimate Textures
Orchestral Tools Berlin Strings *8-6-5-5-4 *(5 extra players with optional First Chairs)
Orchestral Tools Berlin Orchestra Inspire *8-6-5-5-4 *(+ optional First Chairs)
Orchestral Tools Metropolis Ark:
-Metropolis Ark 1 *(14-10-8-10)*
-Metropolis Ark 2 *(24-10-16-12)*
-Metropolis Ark 3 *(21-14-10-9)*
Orchestral Tools Orchestral String Runs
Orchestral Tools Symphonic Sphere *(16-10-8-6)*
Orchestral Tools First Chair
Output Analog strings
Performance Samples Con Moto* (8 violins, 6 violas, 6 cellos, and 6 basses)*
Peter Siedlaczek String Essentials (Complete Orchestral Collection)* 14 Vi, 10 Va, 8 Vc and 6 B*
Project SAM Orchestral Essentials
Project SAM Orchestral Essentials 2
Project SAM Symphobia
Project SAM Symphobia 2
Project SAM Symphobia 3: Lumina
Project SAM Symphonic Colors Orchestrator
Red Room Audio Palette Symphonic Sketchpad:
-Strings, Full *(12-10-8-6-4)*
-Strings, Chamber *(6-5-4-3-2)*
Sampletank 3 Alleged Strings
Solid State Symphony by Indiginus
Sonic Implants Symphonic Strings *8-6-6-5 or 6-4*
Sonivox Orchestral Companion Strings
Sonokinetic Capriccio
Sonokinetic Da Capo
Sonokinetic Espressivo
Sonokinetic Grosso
Sonokinetic Maximo
Sonokinetic Sotto
Sonokinetic Tutti
Sonokinetic Vivace
Soundiron/Native Instruments Chamber Strings
Spitfire - Bernard Herrmann Composer Toolkit
Spitfire Albion 1 *(11/9/7/6/4)*
-Albion One (37 players - only high/low string sections available)
-Albion 2 *(8-6-4-4-3)*
-Albion 3 *(0,0,0,24,8)*
-Albion 4
-Albion 5 *(38-0-12-6)*
Spitfire Chamber Strings / Sable *4-3-3-3-3*
Spitfire Evo Grid
Spitfire Hans Zimmer Strings
Spitfire London Contemporary Strings
Spitfire Masse
Spitfire Olafur Arnalds Chamber Evolutions
Spitfire Olafur Arnalds Evolutions
Spitfire Symphonic Evolutions
Spitfire Symphonic Strings / Mural *16-14-12-10-8*
Strezov Cornucopia String Ensembles* (6-5-4-3-2)*
Versilian Studios VSCO 2 *(5-4-3-2)*
VSL Appassionata Strings *(20-14-12-10)*
VSL Chamber Strings *(6 Vi, 4 Vc, 3 Va, 2 B)*
VSL Dimension Strings *(8-6-6-4)*
VSL Orchestral Strings *(14 Vi, 10 Va, 8 Vc, 6 B)*
VSL Special Edition (*14* *violins, 10 Violas, 8 cellos, 6 basses)*
VSL Synchron Strings *(14-10-8-8-6)*
Zilhouette Strings by Cinematique Instruments* (7-2-7-2)*


----------



## Garry (Jun 4, 2018)

ONE MORE THING: *please note the tempo issue mentioned earlier*: there are 3 tempos in the midi file, so please incorporate the tempo changes when importing the file.

If you have already submitted an entry and didn't incorporate the tempo changes (or set them manually), then *please could you send a revised version.*


----------



## Przemek K. (Jun 4, 2018)

Maybe I missed the info, but in which audio format should the files be sent? WAV, AIFF, or MP3? 44.1kHz/16-bit or 48kHz/24-bit?


----------



## Garry (Jun 4, 2018)

Przemek K. said:


> Maybe I missed the info, but in which audio format should the files be sent? WAV, AIFF, or MP3? 44.1kHz/16-bit or 48kHz/24-bit?


MP3 if it’s easier, otherwise, whatever you choose should be fine.


----------



## Garry (Jun 4, 2018)

Quick question: having received some entries, a point I think was raised earlier has emerged, which is that the files are not normalised for volume. I hadn't planned to do this, since I wanted to leave it open to each contributor; however, there does seem to be quite some disparity, and I think people generally feel that louder entries will be rated higher, all else being equal.

So, what are the group's thoughts on this: should the files be normalised, and if so, is there an easy, objective way to do so? I guess one of us could just listen and make changes by ear, but is that the best way? Thanks for your thoughts.
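For what it's worth, one objective option would be RMS (average-level) normalisation rather than peak normalisation, since perceived loudness tracks average level far more closely than peak level. A minimal sketch in plain Python with hypothetical sample data; real entries would of course be decoded to raw samples first:

```python
import math

def rms(samples):
    """Root-mean-square level of a list of samples in the range [-1.0, 1.0]."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def normalize_rms(samples, target_rms=0.1):
    """Scale all samples so the clip's RMS level matches target_rms.
    (In practice you would also check that the gain doesn't push peaks
    above 1.0, which would clip.)"""
    gain = target_rms / rms(samples)
    return [s * gain for s in samples]

quiet = [0.05, -0.05, 0.05, -0.05]   # hypothetical quiet entry (RMS 0.05)
loud  = [0.4, -0.4, 0.4, -0.4]       # hypothetical loud entry (RMS 0.4)
print(rms(normalize_rms(quiet)))     # both end up at the same RMS level
print(rms(normalize_rms(loud)))
```

A more rigorous route would be loudness normalisation to a common LUFS target, as streaming services do, but even simple RMS matching removes most of the louder-sounds-better bias.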


----------



## Jeremy Spencer (Jun 4, 2018)

Garry said:


> MP3 if it’s easier, otherwise, whatever you choose should be fine.



But this will affect sound quality, will it not?


----------



## Saxer (Jun 4, 2018)

Garry said:


> Therefore, what is the group's thoughts on this: should they be normalised, and if so, is there an easy, objective way to do so? I guess one of us could just listen and make changes by ear, but is that the best way?


I think just normalizing is ok. Mp3 too.


Wolfie2112 said:


> But this will affect sound quality, will it not?


Make it 320 kBit/s. I never ever heard a difference. Do you?


----------



## Mike Greene (Jun 4, 2018)

This looks like a fun project, so I'm going to add this thread to the newsletter tomorrow, but all the drama at the start is very distracting, so I moved those posts to the Drama Zone. Usually I would just delete the posts, but in this case, I didn't read very carefully, so some posts may have been moved mistakenly, in which case let me know and I will move those particular posts back.


----------



## Garry (Jun 4, 2018)

Mike Greene said:


> This looks like a fun project, so I'm going to add this thread to the newsletter tomorrow, but all the drama at the start is very distracting, so I moved those posts to the Drama Zone. Usually I would just delete the posts, but in this case, I didn't read very carefully, so some posts may have been moved mistakenly, in which case let me know and I will move those particular posts back.


Great, always good to get rid of the drama, and thanks for highlighting it - it might encourage some more entries, just before the deadline.


----------



## Jeremy Spencer (Jun 4, 2018)

Saxer said:


> I think just normalizing is ok. Mp3 too.
> 
> Make it 320 kBit/s. I never ever heard a difference. Do you?



It does if you're converting down from 48kHz/24-bit.


----------



## DavidY (Jun 4, 2018)

Garry said:


> Ok, so here's an update of what we need to make this comparison really useful. Below, I've taken the list of string library section sizes, that @Vik compiled on May 2nd here, so this is probably up to date and comprehensive.


It occurs to me that the new Spitfire LABS strings aren't on that list? It might be an interesting product to add into the comparison?


----------



## Garry (Jun 4, 2018)

DavidY said:


> It occurs to me that the new Spitfire LABS strings aren't on that list? It might be an interesting product to add into the comparison?


Yes, good catch, that would be great if someone wants to contribute that (wow, will that cause trouble if it wins!!!).


----------



## Casiquire (Jun 4, 2018)

I was going to sit this out but there are three libraries I can make examples of that don't seem to be included...I may jump on this tonight


----------



## I like music (Jun 4, 2018)

What'll be very interesting is when the same library gets multiple contributors, but each with different reverb/EQ. I'm wondering how much of a disparity in results we might see for those libraries that have 2+ entries.


----------



## Garry (Jun 4, 2018)

I like music said:


> What'll be very interesting is when the same library gets multiple contributors, but each with different reverb/EQ. I'm wondering how much of a disparity in results we might see for those libraries that have 2+ entries.


Yes, exactly - this is why I'd like to encourage people to submit their libraries, irrespective of whether it appears there may already be a submission.


----------



## Garry (Jun 4, 2018)

*EXCITING NEWS: *so happy to say that we have now received an entry to the competition from one of the HIGH PROFILE developers themselves, using their own libraries! Due to the blinding, I can’t yet say who it is, but you’ll find out once the results are unveiled. I have to say, I am extremely gratified by their contribution. To me, this is precisely the sort of openness that shows a developer’s confidence in the quality of their own products, and it makes me pay particular attention to their libraries.

Thank you to the developer for getting involved (you know who you are) - I hope you receive the kudos from the community this deserves for being the first to do so, and I hope this inspires other developers to get involved directly.

Deadline tomorrow!


----------



## Garry (Jun 4, 2018)

So, for fun: can you guess who it is? Which developer do you think has sufficient confidence in their own product’s quality, sufficient openness and transparency, and sufficient engagement with their customers, to get involved and be prepared to have their product compared in a randomised, double-blind shootout!


----------



## Vik (Jun 4, 2018)

Garry said:


> I hope this inspires other developers to get involved directly


+1


----------



## fretti (Jun 5, 2018)

Garry said:


> So, for fun: can you guess who it is? Which developer do you think has sufficient confidence in their own product’s quality, sufficient openness and transparency, and sufficient engagement with their customers, to get involved and be prepared to have their product compared in a randomised, double-blind shootout!


Orchestral Tools (/Hendrik Schwarzer) is my guess.


----------



## N.Caffrey (Jun 5, 2018)

Garry said:


> So, for fun: can you guess who it is? Which developer do you think has sufficient confidence in their own product’s quality, sufficient openness and transparency, and sufficient engagement with their customers, to get involved and be prepared to have their product compared in a randomised, double-blind shootout!


Chris Hein. He usually does that.


----------



## lucor (Jun 5, 2018)

My money is also on Chris.


----------



## Vik (Jun 5, 2018)

If any of the major libraries are missing, maybe it would be a good idea to contact the developers and ask if they are interested in providing a file? Maybe the non-major manufacturers would also be interested in being part of this, and in providing files?


----------



## Casiquire (Jun 5, 2018)

Sent in a few examples!


----------



## I like music (Jun 5, 2018)

Garry said:


> So, for fun: can you guess who it is? Which developer do you think has sufficient confidence in their own product’s quality, sufficient openness and transparency, and sufficient engagement with their customers, to get involved and be prepared to have their product compared in a randomised, double-blind shootout!



They probably don't trust us with their product, so want to do the job correctly themselves! Cynical bastards! Joking...


----------



## Garry (Jun 6, 2018)

Last day for submissions - we do have a great line up now for comparison, but if you'd like to get your favourite in, today is your last day to do so.


----------



## Arviwan (Jun 6, 2018)

Hey @Garry, I've got a bunch of submissions ready to send, but I can't find who to send them to!!?
Help!


----------



## Garry (Jun 6, 2018)

Arviwan said:


> Hey @Garry, I've got a bunch of submissions ready to send, but I can't find who to send them to!!?
> Help!


Please just send me a PM with attachments


----------



## Garry (Jun 7, 2018)

Thanks to everyone who entered submissions. We ended with a really impressive number and quality of contributions, so this should be a fun competition. I’ll compile the results later into a file, and then ka00 has kindly agreed to line up the stems and blindly randomise the order. After that, I’ll send out the files, and the voting can begin. Just bear with us while we get everything sorted first...


----------



## Chris Hein (Jun 7, 2018)

Before I even know the results of this great shootout,
I'd love to see this turning into an arranger competition.

"What, this is done with that library? - I could do better..."

Or: "Wow, this is done with that library? - Please send me the project file so I can study how you made it..."



Chris Hein


----------



## Garry (Jun 7, 2018)

Chris Hein said:


> Before I even know the results of this great shootout,
> I'd love to see this turning into an arranger competition.
> 
> "What, this is done with that library? - I could do better..."
> ...


Yes, absolutely - no reason this needs to be a static database. Others can add, and try to reproduce or improve on the original submissions, as you suggest. An additional direction could be:

"We're releasing this new library" - "Great, can you release it with the VI-C Violin Competition MIDI file, so I can hear how it directly compares to all the others?".

In this way, this competition can become a way to benchmark current and future libraries, and for that, no reason that the original database of entries couldn't be continually updated (with new libraries, as well as with new arrangements of old libraries), as you suggest.

Hopefully this will be a great resource for the VI-C community.


----------



## Chris Hein (Jun 7, 2018)

Argh, I'm trying to steer this away from being a developers' competition.
It's always the library that gets the blame.
Still, before we hear any results: if genius VI-Controllers like Guy Bacos or Carles did all the mockups, we'd probably get different results.
Can't wait to hear something.

Chris Hein


----------



## Garry (Jun 7, 2018)

Chris Hein said:


> Argh, I'm trying to steer this away from being a developers' competition.
> It's always the library that gets the blame.
> 
> 
> Chris Hein


Of course, there's nothing stopping developers (or any representative of their choosing) from releasing renditions of the MIDI file with their own libraries. I appreciate you may differ, but I don't at all want to get away from a developers' competition: it's what we consumers need, since we don't get to try the libraries ourselves before we buy them; it's developers' products we're evaluating and basing purchasing decisions on.

Ideally, for benchmarking purposes, we'd have a way to take out the performance aspect and allow each library to stand on its own merits. That could mean the same 'genius VI controllers' doing multiple libraries, so that the performer is the constant - that's one way (but it puts a heavy burden on a small number of individuals). In this first iteration of the competition, the burden problem is avoided by distributing it more broadly, and for some (not all) libraries we'll have multiple contributors, so the value of the library will be evident in more than one pair of hands, mitigating (to some extent) the poor-player explanation. That's not intended to be an ultimate solution, but it's a starting point, and if, on the basis of the results, people find it useful, hopefully we can improve next time around.


----------



## Garry (Jun 7, 2018)

Yup, we can already say, before the results are in, that this won't answer every question we have about these libraries. Could the performance of library A be due to a poor player, or more than one? Sure! But it's more information than we had previously, and that's a start. Could a developer subsequently say: hey, my library A didn't do so well because of poor player X; check out this version of the same midi file from player Y? GREAT! We've moved things even further forward. We're now incrementally building a database that we can all use for future comparisons, one that works towards the best version of the same benchmark midi file for every library, current and future. This is to everyone's benefit.

Now, will the unblinded, post-hoc contribution of player Y be compromised, compared to the blinded comparison? Yes! But all developers had an opportunity to contribute their own best version on the first round, and credit to those that did so, because they gave their library the best shot. 

But, I suggest that we wait until the results are out shortly, and then we'll start to know whether we have a player problem, and whether that needs fixing. Either way, we've all moved the ball forward here, and I think that's a great thing.


----------



## Garry (Jun 7, 2018)

OK guys, so we have a problem. Though it's a nice problem to have, it's still a problem!!

We were successful in soliciting entries. Perhaps a little TOO successful! We have *78* entries!!! So, how do we reduce this to something manageable that people can review? I have some suggestions below, but welcome your thoughts:


*Wet vs dry*: not all entries are unique: some were submitted as both a dry and a wet version. I suggest we group these together (i.e. Library A (wet), Library A (dry)) as one entry. But this will reduce the list by less than 10.
*Multiple entries*: we have >12 contributors, and some libraries were submitted by more than one contributor. We could group these together (i.e. Library A (comp1), Library A (comp2), Library B (comp1), Library B (comp2)).
pros: there was concern expressed about whether the competency of the contributor could negatively impact voting for that library. By grouping them together, people will get a better sense of how much difference there is across players for a given library, and can weight that accordingly.
cons:
a benefit of having multiple contributors of the same library is that it would potentially show how consistent/arbitrary the voting is: if all votes for Library A are high, irrespective of the contributor, then this likely reflects a strong library, whereas if voting for the library is inconsistent across contributors, with little discernible difference in the composers' performances, then the differences between the libraries are likely minimal, forcing people to make arbitrary preference choices. This feature will be lost if we unblind this aspect of the data.
it wouldn't reduce things by that much, since people have used different aspects of the library (e.g. different patches, different articulations, different ensemble combinations) and it wouldn't be meaningful to combine these, so it only makes sense where they have used the same patch from the same library (which isn't SO often).


*Changed Format*: the original plan was just to include all of them and ask people to rate them against each other. However, with so many, this might need a different format, so we could consider (as Christian Henson did for his reverb shootout) a tournament format, like World Cup soccer: i.e. Group A (consisting of, say, 8 randomly chosen files per group), Group B, Group C, etc. The winners of each group (there would be 8-9 groups if we reduce wet/dry) then go into a 2nd round, then quarter-final, semi-final, final - winner.
pros: would reduce each voting round to a reasonable number
cons:
voting would be prolonged over several days/weeks, and people may lose interest.
the 'winner' isn't necessarily the best, it may just have had an easier path through.


*Status Quo*: the only other alternative I can think of, whilst maintaining the competition element, is the one we had already planned: release these not as individual files, but as one file with multiple tracks, so you can audition them dynamically - mute all, solo individual tracks, and switch quickly between them without having to listen to every file in full. We could still go this route, and just accept that there's a LOT of tracks to compare. We could then just ask people to vote for their top 10.
*Abandon the Competition*: just let people have access to the files (blinded for a while), and then speculate & comment, followed by an unblinding.
Pros: we still get the database, and this was always the most meaningful output of this. We now have a database of many, if not all, of the violin libraries, and people can use this to directly compare when making their purchasing decisions. This was always the strongest aspect of the 'competition', and the winning library was just for fun.
Cons: where's the fun in that, we want a shootout! If we are to do this again (say with cellos next), will people still want to contribute their entries, without the fun of the competition?

What do you think?

Since this is more work than we originally envisaged (again, a nice problem to have!), and I'm out of town until next Tuesday, things will be delayed until I get back and can get all the files imported into one file, so we have some time to decide.
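For what it's worth, the World Cup grouping option above is simple to mechanise: shuffle the entries and deal them into groups. A hypothetical Python sketch (group size and seed are arbitrary illustrative choices):

```python
import random

def make_groups(n_entries=78, group_size=8, seed=7):
    """Shuffle entry numbers, then deal them into World-Cup-style groups."""
    rng = random.Random(seed)
    entries = list(range(1, n_entries + 1))
    rng.shuffle(entries)
    # slice the shuffled list into consecutive groups
    return [entries[i:i + group_size] for i in range(0, n_entries, group_size)]

groups = make_groups()
# 78 entries at 8 per group gives 9 full groups plus a final group of 6
```

The group winners would then feed the knockout rounds described above.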


----------



## bigcat1969 (Jun 7, 2018)

Could you have several subsets? All the library wets in one contest, all the user dampened in another and all the drys in a third. All the solo violins in one and the small sections in another and the big sections in a third? I don't know how big the 'it came that way' wet solo violin category is of course or the dry huge section, but this might make it more manageable. I think it is fair to split as it is hard to compare one set of cat gut recorded in a studio with a close mic to HZ strings Violins with every violinist in the city playing in a monster hall.


----------



## Garry (Jun 7, 2018)

bigcat1969 said:


> Could you have several subsets? All the library wets in one contest, all the user dampened in another and all the drys in a third. All the solo violins in one and the small sections in another and the big sections in a third? I don't know how big the 'it came that way' wet solo violin category is of course or the dry huge section, but this might make it more manageable. I think it is fair to split as it is hard to compare one set of cat gut recorded in a studio with a close mic to HZ strings Violins with every violinist in the city playing in a monster hall.


Definitely an interesting idea, worth thinking about. I'll take a look over the files, and see if that grouping would break out reasonably evenly.


----------



## brek (Jun 7, 2018)

I'm personally less interested in the "competition" aspect, but if you want to pick a winner, how about we each rate each library and then average the results? Don't know if that's more work or less. 

I think another aspect of this that could be fun is a "Guess the Library" competition amongst the forum.


----------



## Vik (Jun 7, 2018)

Imo it's best if we hold off on the guessing part of this. Some people will guess correctly from the start, and if they publish their guesses, the whole blind test will soon become less blind.


----------



## Casiquire (Jun 7, 2018)

I say release them as a competition, all of them. It will actually help a lot if some libraries were submitted by multiple people; it will make it clear whether it's the library or the person recording. Then, after a decent amount of votes, group them by library and reveal the libraries. Leave the person who made the mockup blinded; it would be distracting otherwise. I know some of us are better than others, and the topic of who had the best ones will absolutely come up. However, keeping it blind, if there's one library with four fantastic mockups, that's a sign.

Of course with that many examples not everyone will hear every mockup and people are sure to skim through the examples but that's fine for this purpose. The only real question I have left is, how many votes should each member get? One for each example so we can pick all the ones we like, even if we like them all, or only enough for half the demos to force a bigger gap?


----------



## fretti (Jun 7, 2018)

I am no pro, so my contribution probably won't stand against that of a professional (given that the used library has multiple entries). 
But I think we should (at least in the beginning) compare all of them.
Because if we compare all, maybe there is one entry you really like and one you don't like at all, from the same library but from different people, so the surprise will be even bigger than just „ok, that's library x, that was definitely better than library y“ or so. Could be more fun imo.


----------



## Pianolando (Jun 7, 2018)

Choose the entry you think sounds best from each library and skip the rest, at least for now. There is very little to gain from having ten different versions of CSS, all sounding subtly different due to small tweaks and different mic choices; it's just confusing. Just pick a good one that lets the library shine and move on to the rest. 

How many different libraries have entered?


----------



## kriskrause (Jun 8, 2018)

Garry said:


> OK guys, so we have a problem, though it's a nice problem to have, it's still a problem!!
> 
> We were successful in soliciting entries. Perhaps a little TOO successful! We have *78* entries!!! So, how to reduce to something manageable that people can review? I have some suggestions below, but welcome your thoughts:
> 
> ...


I like what you describe as the Status Quo. But how would we be sent the entries? Would it be in a DAW project file? If so, which DAWs would be supported?


----------



## Vik (Jun 8, 2018)

Pianolando said:


> Choose the entry you think sounds best from each library and skip the rest, at least for now.


I'd start that way as well; you can always post more entries later. You can even hold off on the dry versions.


----------



## N.Caffrey (Jun 8, 2018)

Pianolando said:


> Choose the entry you think sounds best from each library and skip the rest, at least for now. There is very little to gain from having ten different versions of CSS, all sounding subtly different due to small tweaks and different mic choices; it's just confusing. Just pick a good one that lets the library shine and move on to the rest.
> 
> How many different libraries have entered?


I don't agree. What is good for some, is bad for others. It's too subjective.


----------



## Garry (Jun 8, 2018)

Thanks for the comments so far. On reflection, I think what's best is the following:

*Publish all of the results*. It's the only open and transparent thing to do.
Good question about DAWs though - I only have Logic. Are there others who can save & distribute in multiple DAW formats? What's the best way of doing this?

*Anonymise all contributors* (unless any contributors specifically contact me and state that they DO NOT want to be anonymous). Several people have expressed concern that, since there will be a question as to whether a poor rating reflects a bad library or a bad player, they don't want to be personally blamed for this. I think this is totally reasonable, so we'll publish only 'contributor 1', 'contributor 2', etc., so you can then see if there is a pattern across a contributor's files (e.g. this contributor didn't use CC), which is important to know, but doesn't identify anyone.
*Voting will be top 10 only*. With 78 entries, we can't have people rank 1-78! So to make both the voting, and the collation of results manageable, people will just vote for their top 10.
How does this sound? We were always going to learn as we go with this, so minor adjustments along the way are to be expected. No problem.


----------



## fiestared (Jun 8, 2018)

Garry said:


> Thanks for the comments so far. On reflection, I think what's best is the following:
> 
> *Publish all of the results*. It's the only open and transparent thing to do.
> Good question about DAWs though - I only have Logic. Are there others who can save & distribute in multiple DAW formats? What's the best way of doing this?
> ...


+ 1


----------



## Saxer (Jun 8, 2018)

Ok for me


----------



## eli0s (Jun 8, 2018)

Garry said:


> How does this sound? We were always going to learn as we go with this, so minor adjustments along the way are to be expected. No problem.


I Agree!


----------



## Garry (Jun 8, 2018)

Ok, great. Let's go with that. 

As I mentioned, there'll be a delay as I'd hoped to have wrapped this up by now, but with so many files and me being out of town for a few days, this will have to be next week now. 

@ka00 - I'll send you the Logic file once I've imported and renamed everything, probably by Wednesday.

Thanks for your help with this issue everyone.


----------



## DavidY (Jun 8, 2018)

Garry said:


> *Voting will be top 10 only*. With 78 entries, we can't have people rank 1-78! So to make both the voting, and the collation of results manageable, people will just vote for their top 10.


I think that even choosing a top 10 out of 78 options would be too much for me to cope with (I realise others would be better at this than me).
I'm wondering if scoring out of 10 would be better?
Also, if there were some way to pick a random sample of, say, 15 out of the 78 (I wonder if Google Docs would do it?), people could mark a sample rather than trying to listen to all of them in one go. If enough people did this (and that might be a challenge), the scores could be aggregated. 
Just my musings though...

Edit: and as I was typing that I see Garry's posted saying "let's go with that" on a different method.


----------



## Garry (Jun 8, 2018)

DavidY said:


> I'm wondering if scoring out of 10 would be better?



You know, I really like that - the problem with the 'pick your top 10' method is that the first few and last few tracks in the list will be disproportionately selected (the well-established primacy/recency effect in psychology). It also becomes a very difficult memory task - was no. 53 really better than no. 6?!! This way, you can just give a score as you go, and don't have to remember too much. I like it!

Anyone disagree?


----------



## Garry (Jun 8, 2018)

As for the Google Docs (or some other alternative), I think it becomes too cumbersome. Also, given we don't know how many participants there will be, we don't know how many randomisations/groupings are needed to ensure all tracks are seen at least once, or that each is seen by the same number of people. All a bit tricky for our purposes I think.

If you have a single DAW file, with all 78 entries lined up as tracks, you can mute all, and then quickly solo each individually, so you don't have to listen to the whole track to make your rating. I think this will work well, combined with your scoring suggestion.
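For the record, if anyone ever wants to revisit the random-subset idea, the even-coverage problem has a simple answer: a greedy draw that always hands each listener the least-covered tracks first keeps every track's view count within one of every other's. A hypothetical Python sketch (track, subset, and listener counts are invented):

```python
import random

def assign_subsets(n_tracks=78, subset_size=15, n_listeners=40, seed=1):
    """Give each listener a random-looking subset while keeping coverage even.

    Tracks are drawn least-covered-first (ties broken randomly), so the
    number of listeners assigned to any two tracks never differs by more
    than one.
    """
    rng = random.Random(seed)
    counts = {t: 0 for t in range(1, n_tracks + 1)}
    assignments = []
    for _ in range(n_listeners):
        # sort tracks by how often they've been assigned so far
        tracks = sorted(counts, key=lambda t: (counts[t], rng.random()))
        subset = sorted(tracks[:subset_size])
        for t in subset:
            counts[t] += 1
        assignments.append(subset)
    return assignments, counts

assignments, counts = assign_subsets()
```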


----------



## Pianolando (Jun 8, 2018)

Seriously, how on earth can anyone have time to listen to 78*10(?) different snippets? And if anyone does, how can they remember what the first few hundred examples sounded like?

Listening tests show that even remembering how something sounds 10 seconds later is extremely hard.

I really admire this initiative and the work behind it, but it’s getting way out of hand imho.

How many “big” libraries are there? About 10 professional ones with symphonic size? Probably a few less with a chamber setting. Something like that could be manageable if split up into two different tests imo (one symphonic, one chamber), but 78 in the same test? I don't really get it, and cannot for the life of me find the time or the energy to participate, but maybe I'm in a majority here. It would be very interesting to see how many actually complete the test.


----------



## Casiquire (Jun 8, 2018)

78 times ten? Where did the times ten come from?


----------



## Pianolando (Jun 8, 2018)

I meant 78 entries*7 musical examples per entry (I wrongly guessed 10). That would make 546 musical phrases to listen to.

But maybe I misunderstood and there actually are “only” 78 phrases in total? In that case it's much less overwhelming.


----------



## Garry (Jun 8, 2018)

Each MIDI file is 83 seconds. I don't think you need to listen to all of each recording to make a judgement, and the beauty of having them all as tracks in your DAW means you can quickly flip between them. If you spend half an hour doing this, you'll spend about 23 seconds on each. Some you'll spend more on, some less, but half an hour to evaluate almost all of the violin libraries out there - seems reasonable?


----------



## I like music (Jun 8, 2018)

And it may end up becoming the go-to resource (or at least a frequently cited source) when someone new to sample libraries asks the inevitable "which library..." question. It may in some cases mean that people spend £££ (or ££££) on the right libraries. For those people, I sure hope they'd listen through this. That said, I understand that there are lots of other variables and points to consider, but at least this states very clearly what it is trying to compare.


----------



## Casiquire (Jun 8, 2018)

Pianolando said:


> I meant 78 entries*7 musical examples per entry (I wrongly guessed 10). That would make 546 musical phrases to listen to.
> 
> But maybe I misunderstood and there actually are “only” 78 phrases in total? In that case it's much less overwhelming.



Oh, I figured most people wouldn't listen to all of them, and that's fine! People will skim through, listen to just the parts that are important in their eyes, and we'd wind up with enough people hearing enough parts to get a good feel.


----------



## bigcat1969 (Jun 8, 2018)

I still think 78 is just too many. It goes from being an interesting enjoyment to a work project, hence my theory of separation into smaller related chunks. As it stands, many will listen to 10 minutes' worth, pick a couple of favorites and move on, so you'd better hope your samples are in the first 10 to 20, or maybe in the last 10 if folks jump to the end. Keep track of where the favorites sit within the 78 - I would strongly expect few votes for tracks 25-60. Maybe try to put one of each of the libraries in the first 20, and lesser variants later, if you must release all 78 in one batch.


----------



## robgb (Jun 8, 2018)

Casiquire said:


> It will actually help a lot if some libraries were submitted by multiple people; it will make it clear whether it's the library or the person recording.


This, for sure. I know that the one I submitted won't pass muster because I was under the impression we shouldn't add anything to the midi file, including dynamics/expression. So I just recorded it all flat with a little vibrato. Then, of course, I went back and listened to the example and realized I should have put a little more effort into my entry... And, unfortunately, I don't have the time to redo and resubmit.


----------



## NoamL (Jun 8, 2018)

This needs to be re-organized into some kind of tournament or qualifiers structure where we're only listening & deciding between 6-7 files at a time. 

BTW, congratulations on the high number of entries.


----------



## I like music (Jun 9, 2018)

Can we also do one of these for harmonica libraries?


----------



## kriskrause (Jun 9, 2018)

I understand Garry not wanting to use his personal preference to weed out competing versions of the same library. But for libraries with multiple entries, would it be possible to do preliminary blinded tournaments per library to determine which version the community likes best?

So if Library A has 6 entries, two versions get first-round byes, and it is otherwise set up like an 8-entry tournament, done in 3 rounds until there is one representative for Library A. 

Would that make significant headway in cutting down the number of entries?
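The bye arithmetic here generalises: round the field up to the next power of two, and the shortfall becomes first-round byes. A small sketch of that sizing rule (the function name is mine):

```python
import math

def bracket(n_entries):
    """Single-elimination sizing: next power of two; shortfall = byes."""
    size = 1 << math.ceil(math.log2(n_entries))  # smallest power of 2 >= field
    byes = size - n_entries                       # entries skipping round one
    rounds = int(math.log2(size))                 # rounds to a single winner
    return size, byes, rounds

# 6 versions of one library -> an 8-slot bracket, 2 byes, 3 rounds
size, byes, rounds = bracket(6)
```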


----------



## markleake (Jun 9, 2018)

I've been watching this with interest. But as much as I do appreciate all the hard work and people's contributions, it just seems like it won't give much useful information to anyone, new or old users alike.

I was hoping for some blind comparisons of different _qualities_ of the entries, rather than some kind of "what is the best library?" ranking, which I think will be at best meaningless, or at worst misleading. The problem with a simple ranking or scoring is this...

_If each person ranks (or scores) each entry, we each end up providing averages(1) of averages(2), which is then averaged(3).

The 1st average is where we are asked to take into account all the musical phrases within each submission.
The 2nd average is where we are asked to take into account all the qualities for which we could evaluate the submissions.*
The 3rd average is where each score/rank is then put into the pool to be averaged with other scores._

The 3rd averaging is scientific and why you want to do this kind of test. The 1st and 2nd introduce a whole raft of measurement issues that confound the result. That's without even considering the issues that comparing such a large number of submissions introduces.

So, putting my statistics hat on, the result will have no validity due to inaccuracies in the measurement/survey method, and the measurement is more likely testing people's patience with getting through the list, or the other psychological factors already mentioned, not any actual worthwhile quality comparison. Plus, it automatically biases the sample pool (even beyond where it is already biased) to only those people who are willing to put a lot of effort into ranking the submissions. This really is not desirable at all.

Basically, going through all the effort of a blind test is likely nullified by the above, I'm sorry to say. 

I don't want to say that, as Garry especially has put a fair bit of effort in. If there was some way to reduce/simplify or make the testing more targeted (eg. by library types, by specific quality*, by just one musical phrase), I'd say go with that, rather than do all as one big ranking.


----------



## markleake (Jun 9, 2018)

* By _quality_ in the above post, I mean rating libraries by things like (many of which are very subjective):
- overall tone
- detail of sound
- agility and speed
- lack of artifacts
- balance and consistency within/between notes
- dynamic range
- how flowing or smooth the sound
- realism
- emotional feel
- room tone
- is it a cinematic/classical/romantic/whatever sound
- etc.

Plus the big concern I have is even if a library ranks highly on a whole heap of the above qualities (if that is possible), it still says nothing about usability of a library - how easy the library is to use, and how well it would fit in with your workflow.

[sorry, I'll shut up now - I don't want to be seen as a hater! I really do like the idea of this, I'm just realising it may not be very useful.]


----------



## ModalRealist (Jun 10, 2018)

@markleake, I really don't think anyone set out to give a "scientific" ranking of the libraries. In fact, I have absolutely no idea what would even be involved in a "scientific" ranking of libraries. The idea seems entirely inchoate. Surely the objective of a blind test of this sort is to get listening participants to focus on _hearing_ the libraries, and taking a 100% subjective measure of which _recordings of the agreed music_ they like the most?

With regard to the first two averages, I really don't see why it's a problem, because the first two averages force voters to combine (a) how a given entry handles a variety of phrases, and (b) what aspects of a library matter to them as a listener, into an overall impression of the library. While this will vary hugely from voter to voter, that's _part of the point_ in this case, because we're measuring something extremely subjective in the first instance. It's like saying that one cannot measure consumers' preferences for different brands of orange juice, because drinking orange juice muddles too many chemicals together at once, and also asks drinkers to judge flavour and texture together!



markleake said:


> Plus the big concern I have is even if a library ranks highly on a whole heap of the above qualities (if that is possible), it still says nothing about usability of a library - how easy the library is to use, and how well it would fit in with your workflow.



Again, I just don't understand the complaint. Of course listening to other people's renditions of a phrase or phrases with a given library won't tell you anything about workflow. How could it?

It sounds to me that, since you were:


markleake said:


> hoping for some blind comparisons of different _qualities_ of the entries


you've done your best to trump up some charges


markleake said:


> the result will have no validity





markleake said:


> Basically, going through all the effort of a blind test is likely nullified by the above, I'm sorry to say.


on why the proposed scheme ought to be abandoned.


----------



## markleake (Jun 10, 2018)

@ModalRealist.

Er, OK, well... I'll try not to use statistical terms or reasoning then. My concern is that the point of the double-blind test could be negated if the test tries to evaluate all submissions at once. It is a double-blind test after all, so it's worth trying to get it as right as it can be. I wouldn't want to see the effort go to waste. My suggestion: break the test up into smaller parts (that was what I said above, if you re-read).

I know some people are not so familiar with scientific methods, so hopefully that helps you.

You may have missed my comments in my above posts... I'm not hating on this, it's just my 2c to try and help get it right.


----------



## markleake (Jun 10, 2018)

@ModalRealist. Also, just a quick answer to your question about science... statistics sit in that odd space between a science and the arts. Normally I wouldn't say they are in and of themselves a science though. You can measure a huge number of things in a scientific way, but the things you are measuring don't have to be easy concepts. In other words, what you are measuring can be people's thoughts, ideas, opinion, emotion, etc., and it can still be measured scientifically.


----------



## ModalRealist (Jun 10, 2018)

@markleake, and my concern was that you misconstrued what was valuable in the methodology used here. You seem to think we can get a statistical measurement of the _libraries'_ qualities, whereas all that's being measured here are the expressed responses of individuals' judgments about those qualities. Since the value of a library to a composer (and to the people listening to the composer's work) is ultimately in the gestalt of the finished mockup,* and since the experience of listening to a mockup (or any music) is one in which by default the various qualities are heard as a unified whole, there's no problem in measuring people's judgment of, and measuring it in terms of, that gestalt impression.

*Speed and ease of use, etc., are also obviously factors, but not ones that can be investigated through a listening test, blind or otherwise.

Your core claim was that the range of phrases and the indeterminate qualities being assessed:


markleake said:


> introduce a whole raft of measurement issues that confound the result.


But this just isn't the case, _unless you are trying to measure the individual aspects._ Compare a case in which we are examining oil paintings rather than sample libraries. If we asked "which is the best painting?", it's obvious that the results wouldn't tell us (for example) which painting had the _best colours_. Similarly, this test won't tell us which example people think has _the best rendition of fast passages_. The weighting that participants give to colour/fast passages is unknown, as is how they judge what makes for good colour/fast material. But that's part of the point of a test like this: we're not measuring against a standard set down by an individual or a committee, but rather allowing listeners to respond idiosyncratically to the material they hear. At the end of the day, the reason we're _blinding_ here is to remove bias from the listeners' perspective. The double-blinding is borderline superfluous (and if you think it isn't, spell out _exactly_ why, and exactly what the _double_ blind has added).



markleake said:


> @ModalRealist. Also, just a quick answer to your question about science... statistics sit in that odd space between a science and the arts. Normally I wouldn't say they are in and of themselves a science though. You can measure a huge number of things in a scientific way, but the things you are measuring don't have to be easy concepts. In other words, what you are measuring can be people's thoughts, ideas, opinion, emotion, etc., and it can still be measured scientifically.



I didn't ask any questions about science or the status of statistics. I asked a largely rhetorical question about what this exercise is trying to measure: namely, the subjective opinions of VI-C users on what sounds best when it comes to violin recordings (or, more precisely, faked violin recordings). What we'll have is _objective_ data about these _subjective_ judgments. There's nothing wrong with the test format for this purpose. Thus my tongue-in-cheek point that your criticism ultimately targets what we're measuring, and not really the method of measurement. Now, I haven't assumed that you aren't an emeritus professor of statistics; equally, perhaps you shouldn't assume that I need your gestural account of "where" statistics is, and so on. Even more so if you're not going to distinguish between measuring "people's thoughts" and measuring "people's reports of what they are thinking."

Anyway, I maintain that there's nothing deeply flawed at all in Garry's exercise here. To the contrary, there's more effort than is really necessary being put in, given the aims that were set out. If there's any sticking point of note, it's the volume of submissions, not the passage being listened to or the qualities being judged. A score-based voting system (as opposed to a top 5/10/n system) almost entirely bypasses this issue. If randomised presentation of the entries is impractical (which it surely is), the simplest solution to "too many to listen to" is for Garry to arbitrarily divide the master set into subsets with roughly equal representation of given libraries in each set, and for these to be released as separate batches for scoring. But frankly, if the results (average score) can be presented alongside the raw number of responses (not scores, just the quantity of responses per rendition), one could pretty safely ignore the issue and just release the whole lot as one batch.
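The "average score alongside raw response count" reporting suggested above is a very small tally; a hypothetical Python sketch with made-up ballots (listener names and scores are invented):

```python
from collections import defaultdict
from statistics import fmean

# each ballot: (listener, entry_id, score out of 10); listeners may skip
# entries, so every entry carries its own response count
ballots = [
    ("anna", 1, 7), ("anna", 2, 4),
    ("ben",  1, 9), ("ben",  3, 6),
    ("cara", 1, 8), ("cara", 2, 5), ("cara", 3, 8),
]

def tally(ballots):
    scores = defaultdict(list)
    for _listener, entry, score in ballots:
        scores[entry].append(score)
    # report the mean score next to the number of responses, so a high
    # average from 3 votes isn't mistaken for one from 30
    return {entry: (round(fmean(s), 2), len(s)) for entry, s in scores.items()}

results = tally(ballots)
```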


----------



## markleake (Jun 10, 2018)

@ModalRealist. Well, all that is fine and dandy. But I feel you are being a bit defensive here? I think you miss my points by getting lost in the detail. To simplify what I am saying further...

The end result may not mean all that much, because it is effectively mashing together lots of different personal tastes about lots of different/competing factors. (Which is fine, BTW, as long as people know.) It's the old question of... OK, so it's the best, but the best at _what_?

And testing all submissions as one big lot is probably testing people's concentration or willingness to participate as much as anything else.


----------



## Casiquire (Jun 10, 2018)

I think we actually get a great feel for the usability of a library if there's more than one entry and they all sound great.


----------



## Garry (Jun 12, 2018)

*** UPDATE ***

An update on the ‘competition’. Given all the issues lately that have been raised on VIC, I’ve decided not to proceed personally with the competition, but do want to make sure that the files that were promised are made available for everyone’s use, as originally intended. I also want to ensure there is an opportunity for others to take this over, if they choose to do so. I’ll explain my reasons below, but before that, I’ll outline how I plan to proceed that enables people to continue with the competition, or just to receive the files, as they choose.

*PLAN*:

Tomorrow, I'll release a Logic file containing all 78 tracks that I received. These will not be normalised or perfectly lined up in time, but they WILL be blinded, and this should suffice for people's own personal review (you all have volume faders!). I can only release it as a Logic file, as that's the only DAW I have. I'm sure others who have more than one DAW will be happy to save in other formats, so that it's available to everyone. If after this there are still people whose needs cannot be met, I will make available all of the raw mp3 files (the raw files will be UNblinded, hence the reason for doing this later).
If there are people who are particularly keen for the competition/blind voting part of this to go ahead, please contact me, and I will send the Logic file only to you; you can then go ahead with organising the remainder of the competition, with whatever rules/additional criteria you feel would be best.
Assuming no one identifies as wanting to take over the competition/voting part, then the day after tomorrow I'll release the unblinding. Everyone can then choose to review the files blinded, or unblinded - whatever you prefer.

*REASONS*:

For me, the competition, with all its flaws, was the least important part of this. The ‘competition’ was just fun to motivate engagement.
What was meaningful was to have as many of the major libraries as possible used to produce a diverse set of melodic lines, giving a meaningful, if imperfect, means of comparing them directly, stripped of prior biases. For this purpose, it matters only what criteria YOU personally prioritise as important, and it doesn't matter how, or whether, this lines up with the choices others make. We achieved that, and I hope this is useful to people, now and in the future. It can easily be improved: if some people feel a library wasn't well represented by the entries submitted, there is nothing to prevent anyone (ideally adhering to the same rules) from sharing a new and better version with this community.
I expect this competition to be contentious. People will have their own expectations, biases, requirements, and some members will express these productively. Others will not, and will instead blame everyone involved, will focus on any detail they consider to render the whole process meaningless, or will jump on tribal loyalties from previous battles that have little to do with the ‘competition’. Whilst I would like to think that the competition would stimulate open, transparent discussion, this is the internet, and whilst this forum is no worse than others for flame-throwing (and considerably better than most), the likelihood of it degenerating is high, and I’ve decided that personally, I don’t wish to participate in this.
I think this forum needs a contentious discussion right now like it needs Trump to take over as moderator!

*THANK YOU*

To all those that contributed, in terms of productive discussion of ideas, submission of entries, and general support and encouragement, thank you. I think we have collectively produced something that is useful to the community. I hope it can be the start of something the community can improve upon. Indeed, to those who submitted criticisms, there is now an opportunity to actively redress any of the problems you identified, and act upon them by doing it better. If you are able to do it better, and go on to do so, then I reserve the biggest ‘thank you’ to you.

*Peace*.


----------



## markleake (Jun 12, 2018)

Oops... sorry folks and Garry. I didn't intend that at all, if it was me. I'll hang my head in shame and retreat into the corner.


----------



## bigcat1969 (Jun 12, 2018)

An unfortunate but understandable quasi-end to a fun idea. Hopefully someone will decide to pick up the ball and jog with it. It might be interesting to have the mp3s posted unblinded so we could hear them and make our opinions with maybe less controversy than voting.


----------



## boxheadboy50 (Jun 12, 2018)

As a newcomer to both this forum and the world of composing with samples/composing to picture, I'm bummed that this has fizzled out.

However, I completely understand why it's taken this new direction. I was less interested in a "competition" and more interested in learning the sound of different libraries in context.

I'm still very excited for the Logic file. I own very few libraries, and I think a blinded test will prove beneficial both for me personally/educationally and for my wallet.

Thanks for putting this together, @Garry!


----------



## rsampaio (Jun 12, 2018)

As a passive observer I agree with you @boxheadboy50. Reading this thread, and also looking through all of the other things going on in tandem in the forum, has led me to decide to remain quiet and just check in every once in a while.

Thanks @Garry for the time you've put into this so far. Too bad the bickering and nitpicking took a nice idea and sucked all the fun out of it.


----------



## eli0s (Jun 12, 2018)

@Garry , I suggest the use of an OMF file export, so we may use it in different DAWs.


----------



## Arviwan (Jun 12, 2018)

+1 for OMF !


----------



## markleake (Jun 12, 2018)

I do feel partly responsible for wearing Garry down, given my comments further above. Maybe? I have to read between the lines on his post to draw that conclusion. After re-reading the thread, I can't see that anyone said anything very controversial. There were some early argumentative posts, but they were removed, so that looks to be all sorted. The rest to me just looks like a friendly exchange of ideas. Of course there are the recent drama threads, but that stuff didn't seem to creep into this thread, so I don't think, @rsampaio, that you should feel intimidated into not participating; it's just normal forum noise. Anywho... apologies again if I contributed to derailing the thread by suggesting that the ranking-of-libraries part would be a bit "meaningless" to some people, to quote Garry. I tried my hardest to say I think the exercise itself is NOT meaningless.

I'm very happy to help also. I have limited experience with this stuff, but I'm at least able to help balance the stems, group them into logical comparisons, etc.

+1 to exporting to OMF also.


----------



## Garry (Jun 12, 2018)

Seems I should clarify my reasoning around the change of direction. Whilst recent comments about how this "just seems like it won't give much useful information to anyone" were mildly irritating coming at this late stage, they weren't the cause; to my mind, @ModalRealist had already effectively addressed the concerns raised, and needed no further comment from me.

I was disturbed by the recent events on the other thread with Daniel, in which I was directly involved. I completely stand by my comments in discussion with him, and felt that doing what I could to prevent rumours starting against a developer whose contributions to the community I valued, was a worthy cause to try to defend. However, the impact on him and how he responded should be upsetting to anyone, but particularly those directly involved, and it certainly was to me. I'm extremely glad he's received the show of support from the community that he has since, and it was good to see people unite behind this; a difficult situation which I thought @Mike Greene handled well. But the outcome on Daniel still shook me up, and as a result also made me reflect on this thread, and to consider how contentious the results of the competition were likely to be, however well intentioned, and the potential implications of that.

Inevitably, some people would not have been happy with the results once revealed, and at that point, I have no doubt how quickly things could degenerate. Not just because this is the internet and so this is invariably how things go, but because Daniel's situation served as a reminder that there are lives behind these posts; that there can be real impacts, and that our actions, however well-intentioned and individually justifiable, can have consequences beyond what is apparent to us as forum contributors. The potential implications of a developer coming at or near the bottom of any user comparison list are real, and have the potential for tangible consequences in the real world. Given the experience with Daniel (again, I stand by my comments with him, but that doesn't mean I don't regret the impact on him), I decided I don't want to be responsible for lighting the flame that leads to further drama here, but more importantly, further consequences that I can't predict. This was not the purpose of the competition. I find the current situation with developers perplexingly lopsided: for the most part, customers cannot experience the product before purchase, have to rely on vicarious demos, and cannot return or resell products that can add up to thousands of dollars. This was an attempt to redress that balance a little: so that we the customers are armed with a little more information prior to purchase. But on reflection, my feeling is that we can do that without the unintended effect being to single out individual products, and the stress this could cause those developers and their employees, which recent events sensitised me to.

So, it was with this in mind that I opted to discontinue the competition and voting, and simply provide the information that was collated, so that users can choose to use it, or ignore it, on an individual basis. I felt this was the best way to be true to the initial intention, and the interest and support that was shown by the community in this effort, whilst avoiding any negative consequences, as much as possible. After posting the files, I plan to take a long break from the forum, as I was truly shaken by this recent experience.

Finally, with regards to practical questions about the format, my (limited) understanding is that OMF is not supported by Logic Pro X (according to this), and would anyway lose track names (according to this). I don't see OMF as an option in the export options in my version of Logic Pro X, so I'll plan to share the Logic file as well as the AAF file. Again, if this isn't enough, I'll later share the raw files, but since these are unblinded, I'll do this after releasing the blinded information, so that those who wish to can still review the files blinded.

Hope this clarifies.


----------



## Chris Hein (Jun 12, 2018)

Doesn't a Logic project include a folder with the plain audio files?
What's the problem with importing that audio into any other DAW?
The files just have to be named correctly, like 01, 02...

Thanks Garry for investing so much time on this.

Chris Hein


----------



## markleake (Jun 13, 2018)

Fair enough @Garry. I'm looking forward to the files then.


----------

