# I created two mockups of the same piece, which is "better"?



## blaggins (Nov 4, 2022)

I recently wrote a thang for the Ryan Leach October 2022 Scoring Competition, entirely in Dorico and using NotePerformer as the only playback engine. I then used it to "test" two different libraries by creating a mockup in each, to see which one had the better sound, better workflow, etc. I realize it's only one piece and won't give me the full picture of either library's pros and cons, but I still thought it was an interesting exercise.

If anyone is willing, I'd love some thoughts on the two mockups. Which one is better? More convincing? (Thoughts on the composition itself are also welcome. It's certainly not going to win any awards, but it's a bit of an improvement over what I've written so far, at least... I've still got a lot to learn, I must say, and I wouldn't mind some feedback.)

I did re-use the same basic MIDI for both mockups, but a great deal of tweaking has gone into each. I drew custom velocity and CC curves for each independently, in some cases nudged notes here and there for timing, and chose from among whatever articulations each library had to get as close as possible to the original composition. I also have slightly different processing chains on each, depending on what I thought they needed.

The two versions, "Version 1" and "Version 2", are posted as MP3s in this thread. I've also posted the original NotePerformer version to YT in case anyone wants a reference for the composition's "intent":


----------



## liquidlino (Nov 4, 2022)

I've already said before how much I love this piece. I'm guessing one of these is Synchron Prime.

Version 1
- Bit muddy/dark
- Woodwinds sound a bit stale/lifeless
- Brass crescendos are good
- Woodwind runs are meh
- Strings stick out a bit during runs/arpeggios.

Version 2
- More integrated sound
- Woodwinds and strings merge together nicely
- Woodwinds sound nice
- Very clear and clean sounding
- Woodwind runs / arps sound great
- String runs sit better in the mix
- Brass sounds way better than Version 1 overall - less mud and more grandiose power

Gosh. I hope Version 1 isn't VSL, that will kill me. I'm pretty sure V2 is VSL though.


----------



## Rob (Nov 5, 2022)

No doubt, Version 1 sounds better and more natural to me. More like I expect an orchestra to sound. Nice composition, by the way...


----------



## mybadmemory (Nov 5, 2022)

Great composition! But neither of the mock-ups sounds particularly believable or lifelike to me. They both have a certain old-school MIDI-ish quality to them. What libraries were used?


----------



## Henrik B. Jensen (Nov 5, 2022)

I’d start by using much less reverb.

Try using so little that it _only just_ weaves the individual tracks together.


----------



## blaggins (Nov 5, 2022)

liquidlino said:


> I've already said before how much I love this piece. I'm guessing one of these is Synchron Prime.
> 
> Version 1
> - Bit muddy/dark
> ...


Thanks @liquidlino! I hear what you mean about the issues with the runs and strings in Version 1. There might be a few more sensible tweaks I can do; I might work on it some more. (I have actually already spent quite a bit of time tweaking the fast passages, but I agree that they are not amazing yet.)


----------



## blaggins (Nov 5, 2022)

mybadmemory said:


> Great composition! But neither of the mock-ups sound particularly believable or lifelike to me. They both have a certain old school midi-ish quality to them. What libraries were used?


All shall be revealed soon. I didn't want to say upfront, so as not to bias how people vote.

Also I tend to agree with you about the MIDIsh vibe. Do you have any suggestions of things I could try to improve?


----------



## blaggins (Nov 5, 2022)

Henrik B. Jensen said:


> I’d start by using much less reverb.
> 
> Try using so little that it _only just_ weaves the individual tracks together.


Funnily enough, I was actually afraid that I had used too little extra reverb. I'll try using less and see what happens. Right now I have bus sends to Cinematic Rooms at somewhere between -6 and -8 dB.


----------



## Henrik B. Jensen (Nov 5, 2022)

blaggins said:


> Funnily enough, I was actually afraid that I had used too little extra reverb. I'll try using less and see what happens. Right now I have bus sends to Cinematic Rooms at somewhere between -6 and -8 dB.


I bypass the reverb, then add it back in and check whether it's audible to me. If so, I dial it back a bit. I repeat this until the reverb is at the minimum needed to tie the tracks together.


----------



## blaggins (Nov 5, 2022)

I made some minor adjustments (improvements, I hope) to Version 1, mainly to fix a few balance issues where either the strings or the brass were poking out too much. I've also decreased the reverb sends across the board per @Henrik B. Jensen's suggestion. I updated the MP3s in the original post.


----------



## Henrik B. Jensen (Nov 5, 2022)

It’s much easier to hear clearly what’s going on now, IMO.

The composition is strange to me. There are parts here and there that I like, but it's chaotic overall - constantly changing and, to me, seemingly lacking structure.


----------



## Living Fossil (Nov 5, 2022)

Hi @blaggins, I voted option 3, and here's in short why:
The main problems are:

1) the balance between the instruments. Without going into details, the different sections aren't balanced.

2) the positioning. Honestly, working with spatialisation and reverb is a hugely complex thing.
Simple answers and recipes won't work. You really have to merge the different sound sources into a plausible picture.

Right now, the sound of the mockup reminds me of a Cubist painting by Picasso, where the proportions are pretty much off. But I write this as absolutely well-spirited advice...
Unfortunately, there aren't (real) shortcuts in life.

You should start with some reference tracks of orchestral recordings you like.
Then I would start with two sections and balance them against each other - with regard to volume and to spatial appearance. Then add a third one, then a fourth one, etc.
And modify things as you go.

With regard to reverb: for this combination of sound sources, I'd guess you will need at least three reverbs with different depths that you combine in different ways for different stems.
One overall reverb won't do it, for sure.


----------



## Henrik B. Jensen (Nov 5, 2022)

I think it can be easy to get confused here. First I suggest using as little reverb as possible, then @Living Fossil suggests using at least three reverbs with different depths in the post above. But the two aren't contradictory:

For depth, you can create 3-4 buses, each with a different reverb setting on them. This is what will give the illusion of depth and @Beat Kaufmann has an explanation of how it is set up here:






Link: Orchestra Music with Samples, Tutorials and Presets for VSL (www.beat-kaufmann.com) - "Again and again the question arises how to distribute instruments of an orchestra on stage - especially in the depth..."





What I talked about is reverb (tail only, no early reflections) on the *master bus*, to glue the different tracks and the different sample libraries in the piece together. For me - and this is personal preference; others differ - as little as possible, because that way it's easier to hear exactly what is going on in the piece, and you can't as easily "hide" bad programming when things are clearly audible, in contrast to when too much reverb is masking them.
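For concreteness, the 3-4 depth buses could be laid out something like the sketch below. This is purely illustrative Python-as-config; the bus names, section assignments, and predelay/wet numbers are invented placeholders (not Beat's actual settings), just to show the idea of one reverb setting per depth row:

```python
# Hypothetical depth-bus layout: each row of the orchestra feeds its own
# reverb bus. Farther rows get shorter predelay and a higher wet level,
# which is one common way to suggest distance. All numbers are made up.

DEPTH_BUSES = {
    # bus name:    (predelay_ms, wet_percent)
    "row1_front": (30, 10),  # e.g. first violins, harp
    "row2_mid":   (20, 18),  # e.g. woodwinds
    "row3_back":  (10, 26),  # e.g. brass
    "row4_rear":  (5,  34),  # e.g. percussion
}

SECTION_TO_BUS = {
    "violins_1": "row1_front",
    "flutes":    "row2_mid",
    "horns":     "row3_back",
    "timpani":   "row4_rear",
}

def reverb_for(section: str):
    """Return which depth bus a section feeds and that bus's settings."""
    bus = SECTION_TO_BUS[section]
    return bus, DEPTH_BUSES[bus]

bus, (predelay_ms, wet_percent) = reverb_for("horns")
```

The point of the structure is simply that depth comes from *which bus* a section feeds, while the single tail-only glue reverb on the master stays separate from all of this.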

Hope this post is understandable, otherwise ask and I will try to explain better.


----------



## Living Fossil (Nov 5, 2022)

Henrik B. Jensen said:


> What I talked about is reverb (tail only, no Early reflections)* on the master bus *to glue the different tracks together/glue the different sample libraries in the piece together.


At this point in the mockup, it makes no sense to even think about "glueing" things together.
As I wrote, the balance, positioning, etc. are completely off.

First, you have to get things into perspective.

Second, you have to see whether it's a good idea to use an additional reverb at all.
And – third – if yes, whether it's not a better idea to leave some elements out of this reverb.
That's not about "personal preferences" but about getting a mix that works.

In this case some elements are overly wet and some are overly direct.


----------



## blaggins (Nov 5, 2022)

Living Fossil said:


> Hi @blaggins , i voted option 3, and here's in short why:
> The main problems are:
> 
> 1) the balance between the instruments. Without going into details, the different sections aren't balanced.
> ...


Thank you for the feedback @Living Fossil, and please no worries. I am very much taking this all as well spirited advice/constructive criticism. It's extremely helpful to me to have this kind of direct feedback highlighting what is wrong. It's how I can learn.

I have read what you wrote a few times and I *think* I can hear what you are talking about. There is a pretty obvious realism gap between either of my mockups and pretty much any real orchestral recording I pull up (been listening to a lot of Poledouris and Kilar lately for reference). This much I already knew but I was also assuming it had to do with the samples and/or the programming I've done. But having said that, I haven't really been able to put my finger on what the biggest differences are, or how to fix them in my mockups. 

To me, real recordings (referencing Conan and Dracula soundtracks here) have this sense of the instruments being at once more clear but somehow also further away. Like there is a sense of a larger space, but it's not awash in reverberation, and I can still hear a lot of definition of each instrument/section in most real recordings... I'm trying to describe the things I hear to see if you think I'm on the right track for hearing what you are hearing. One of the hardest things for me at this stage is just being able to hear the right things so I can go make a correction to the mockup (you can't hear what you can't hear right?). Am I on the right track here? 

This brings me to a huge question though. In both mockups I'm using orchestral libraries that advertise themselves as "recorded in place", "pre-panned", "multi-mic", "pre-balanced", etc., and I'm using a blend of the various mic positions available in the player, just adding a bit of extra reverb on top of everything to "blend things together", as is so often suggested. Are these Decca tree plus wide/outrigger/whatever mic positions not actually enough to get the right soundstage/imaging? Is it common to do a huge amount of processing even on libraries that have a lot of hall baked in (OT, BBCSO, Synchron, etc.)? Or have I done some other terrible thing to ruin the placement of everything? For the record, I have not touched any of the default panning as provided out of the box.


----------



## blaggins (Nov 5, 2022)

Henrik B. Jensen said:


> I think it can be easy to get confused here. First I suggest using as little reverb as possible, then @Living Fossil above suggests using at least 3 reverbs with different depths in the post above  But it is not contradicting:


I will admit to slight confusion on this front  I see what you are saying though...



Henrik B. Jensen said:


> For depth, you can create 3-4 buses, each with a different reverb setting on them. This is what will give the illusion of depth and @Beat Kaufmann has an explanation of how it is set up here:
> 
> 
> 
> ...


Am I right in assuming that Beat wrote this mainly about the Silent Stage VSL instruments? Is this kind of workflow still the best one to use with samples that are recorded "in situ" - I mean, recorded in stereo with the correct positioning of each orchestral instrument on the stage? I am assuming that a stereo downmix of the Decca tree *should* capture the correct spatial character of each instrument, so long as the players sat where they usually do in a recording session - or no?


----------



## Living Fossil (Nov 5, 2022)

blaggins said:


> This brings me to a huge question though. In both mockups I'm using orchestral libraries that advertise themselves as "recorded in place", "pre-panned", "multi-mic", "pre-balanced", etc., and I'm using a blend of the various mic positions available in the player, just adding a bit of extra reverb on top of everything to "blend things together", as is so often suggested. Are these Decca tree plus wide/outrigger/whatever mic positions not actually enough to get the right soundstage/imaging? Is it common to do a huge amount of processing even on libraries that have a lot of hall baked in (OT, BBCSO, Synchron, etc.)? Or have I done some other terrible thing to ruin the placement of everything? For the record, I have not touched any of the default panning as provided out of the box.


That's a good question, and not an easy one to answer.
Sometimes the pre-panning works fine, sometimes it doesn't, even within the same library.
Also, one important thing: a balance that is even slightly off may change the perception in a dangerous way.
That's why I would start there. If the level of the strings is 2 dB too low in a place, that can be enough to damage the overall perception of the different depths.
Also: the perception of reverb – no matter whether baked in or added – is unfortunately highly dependent on the context. Personally, I prefer to focus on those elements of a reverb that are really relevant to the perception. I often like to work with ultra-transparent additional reverbs (like Nimbus) if I think the positioning of the different groups relative to each other is off, and I also often use additional positioning plug-ins (like Precedence) and lower the room/ambience mics.
Which of course involves lots of experimentation, given the context dependency.
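To put that "2 dB" in perspective: decibel changes are multiplicative on amplitude, so even a small-looking figure is a real shift in level. A quick, purely illustrative bit of arithmetic:

```python
# Convert a level change in dB to a linear amplitude ratio.
# (20 in the exponent because dB here measures amplitude, not power.)
def db_to_amplitude_ratio(db: float) -> float:
    return 10 ** (db / 20)

dip = db_to_amplitude_ratio(-2)    # a 2 dB dip: roughly 0.79x amplitude
double = db_to_amplitude_ratio(6)  # +6 dB: roughly a doubling of amplitude
```

So a section sitting 2 dB low is playing at about 80% of its intended amplitude - easily enough to shift how far back it seems to sit.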

In the case of your track you could solo some string parts and some brass parts as a starting point.

The amount of overall reverb is on the rather wet side at the moment; at the same time, I'm somehow missing the information in the reverb that would allow the instruments to be positioned in space.

But it's great that you are putting effort into it, because in my experience, while it takes time, it's also beneficial – both to your perception of reverb and to the results.


----------



## Living Fossil (Nov 5, 2022)

P.S.: While I voted for option 3, I don't think both "suck"...
"There's room for improvement" would be more accurate, since there are also many things in these mockups that are really well done...


----------



## mybadmemory (Nov 5, 2022)

blaggins said:


> Also I tend to agree with you about the MIDIsh vibe. Do you have any suggestions of things I could try to improve?


Hard to say without knowing which libraries were used (since they're all so different and handle different things well). In contrast to Living Fossil, I don't really have a problem with the sound or balance, at least not in mockup 1 (no. 2 sounds less natural to me), but rather with the fact that certain phrases (mostly the fast melodic ones) either sound like you played something the library or articulation couldn't handle, or that you simply used the wrong, or too few, articulations for the phrase. You also obviously chose a style that is very hard for sample libraries to replicate - most of them are much better at repeating ostinatos and slow lyrical legatos than at this kind of faster, agile playing.


----------



## Henrik B. Jensen (Nov 5, 2022)

blaggins said:


> I will admit to slight confusion on this front  I see what you are saying though...
> 
> 
> Am I right in assuming that Beat wrote this mainly about the silent stage VSL instruments? Is this kind of workflow still the best one to use with samples that are recorded "in situ"? I mean recorded in stereo with the correct positioning of each orchestral instrument on the stage? I am assuming that a stereo downmix of the decca tree *should* capture the correct spatial character of each instrument so long as the player sat where they usually do in a recording session, or no?


VSL Silent Stage instruments, Samplemodeling instruments and similar are a special case. I don’t know enough about how to position them, so I will concentrate on samples with reverb “baked in”.

Examples are Spitfire‘s AIR Lyndhurst stuff, VSL’s Synchron instruments, Cinematic Studio Strings and so on.

There you can use algorithmic reverb to push the instruments back in the room, even though they already have reverb "baked in".

Check this video by Beat Kaufmann:



He takes a bunch of different algorithmic reverbs and demonstrates which parameters to adjust to push stuff back.


----------



## blaggins (Nov 5, 2022)

I don't mind revealing it (I think it's been long enough anyway).



Spoiler: Libraries revealed...



Version 1 = BBCSO Pro. Currently a blend of a little bit of close mic, lots of Tree, and a bit of Outriggers on most instruments. Perc has no close mic, I think, and the strings I just left on Mix 1 because I thought the more direct, in-your-face sound would be good. I am currently re-evaluating whether I chose the best mic positions, though...

Version 2 = VSL Synchron Prime (on demo at the moment). The microphone positions are much more limited, but I have opted to use the "wide" perspective on most everything.



I didn't have much experience with either library before this. Even though I've owned one of them for a long time, I haven't actually used it much, having done pretty much everything I've ever mocked up in SSO up till now.


----------



## mybadmemory (Nov 5, 2022)

I guess my primary feedback would be that it sounds like you're playing the fast melodic phrases with a sustain patch rather than with an appropriately accentuated legato, making the transitions sound a little too soft and blurry for a fast, aggressive piece like this. This goes for strings, winds, and brass. I'd try to use articulations with a more accentuated attack, like a marcato legato, if one is available in the libraries you use. And if not, just overlay a short patch on top of the longs/legatos, and find a balance between the two where they gel together to fake a more accentuated attack.
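That overlay trick can be sketched in plain Python over a simple note list. This is only an illustration of the idea - a real version would operate on your DAW's MIDI, and the 0.6 velocity scale and 60-tick short length here are just starting guesses to tweak by ear:

```python
from dataclasses import dataclass, replace

@dataclass
class Note:
    start: int     # position in ticks
    length: int    # duration in ticks
    pitch: int     # MIDI note number
    velocity: int  # 1-127

def overlay_shorts(legato_notes, scale=0.6, short_len=60):
    """Duplicate each legato note as a short, quieter note for a
    staccato track, so the short patch's attack sharpens the soft
    legato transition without dominating it."""
    shorts = []
    for n in legato_notes:
        shorts.append(replace(
            n,
            length=min(short_len, n.length),
            velocity=max(1, int(n.velocity * scale)),
        ))
    return shorts

phrase = [Note(0, 240, 60, 100), Note(240, 240, 62, 110)]
stacc_layer = overlay_shorts(phrase)  # route this to the staccato patch
```

The balance between the two layers (the `scale` factor) is where the "gel together" judgment happens - too high and every note becomes a hammer, too low and the attack stays mushy.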


----------



## blaggins (Nov 5, 2022)

mybadmemory said:


> I guess my primary feedback would be that it sounds like you're playing the fast melodic phrases with a sustain patch rather than with an appropriately accentuated legato, making the transitions sound a little too soft and blurry for a fast, aggressive piece like this. This goes for strings, winds, and brass. I'd try to use articulations with a more accentuated attack, like a marcato legato, if one is available in the libraries you use. And if not, just overlay a short patch on top of the longs/legatos, and find a balance between the two where they gel together to fake a more accentuated attack.


The faster passages were 100% the hardest and most annoying to get "right". Everything was either way too mellow using legatos, totally unrealistic using longs, or not at all what I intended if I just used staccatos. I'm just going to go ahead and talk openly about the libs; at this point, anyone who has read this far will probably also have read the spoiler...

BBCSO didn't have anything reasonable for fast, aggressive runs, so I opted to layer the staccato (at pretty low velocity) under the legatos, just in one or two places in the violins. I didn't try this trick with any other section, though, so maybe I should go back and layer more staccato-type arts in places where the legato attacks need a bit of a boost.

VSL had a marcato-start legato, but I didn't feel like it made much of a difference - though the instruments were much more agile anyway, so I didn't think the overall effect was that bad.

How do you usually approach faster passages that need a bit of bite?


----------



## liquidlino (Nov 5, 2022)

blaggins said:


> I don't mind revealing it (I think it's been long enough anyway).
> 
> 
> 
> ...


I was right! Shame that the first one was BBC though... Normally I really like mockups from BBC...


----------



## mybadmemory (Nov 5, 2022)

Hah! I thought the first one was BBCSO Pro! And yes, these faster lines, with a lot of variation between note lengths, jumping between spicc, stacc, marc, legato, and longs, are certainly always a challenge.

With BBCSO Pro in particular I’d use the extended legatos for almost everything, only overlap certain notes, and make great use of the baked in stacc overlay for the rest of them. And then use a lot of close mic, layer soloists on top of sections, and perhaps layer even more short notes on top of legatos where needed.


----------



## blaggins (Nov 5, 2022)

liquidlino said:


> I was right! Shame that the first one was BBC though... Normally I really like mockups from BBC...


Yeah, me too (or at least I like mockups that other people create using BBCSO; I haven't had the best luck with it myself). I made a few changes from when I originally posted it that I think improved things quite a bit, and I haven't quite given up on it either. I'm going to try to re-balance the sections and see if I can get more coherent positioning out of it by using different mic positions. Maybe that will bring it closer together?

Now the Spitfire player and the inconsistent timing... hoo boy. That I do not like.


----------



## Living Fossil (Nov 5, 2022)

mybadmemory said:


> With BBCSO Pro in particular I’d use the extended legatos for almost everything, only overlap certain notes, and make great use of the baked in stacc overlay for the rest of them. And then use a lot of close mic, layer soloists on top of sections, and perhaps layer even more short notes on top of legatos where needed.


The problem with fast figures is that you need lots of not-perfect intonation, which BBCSO (as well as the other SA libs inside of their player) don't offer. The right amount of detuning (which e.g. Modern Scoring strings let you control via CC) often helps a lot...


----------



## blaggins (Nov 5, 2022)

Living Fossil said:


> The problem with fast figures is that you need lots of not-perfect intonation, which BBCSO (as well as the other SA libs inside of their player) don't offer. The right amount of detuning (which e.g. Modern Scoring strings let you control via CC) often helps a lot...


Iiiinteresting, this is the first I've heard of this but it makes 100% sense. The Synchron player has a tuning humanization setting which I have not tried to play with yet, maybe that can get me there. I am also wondering if I should crank the timing humanization for VSL (maybe just here and there) to get a less "precise" performance out of it...


----------



## Living Fossil (Nov 5, 2022)

blaggins said:


> Iiiinteresting, this is the first I've heard of this but it makes 100% sense. The Synchron player has a tuning humanization setting which I have not tried to play with yet, maybe that can get me there. I am also wondering if I should crank the timing humanization for VSL (maybe just here and there) to get a less "precise" performance out of it...


For timing imperfections I prefer not-perfectly-quantized MIDI... I don't like it when samples have too much rhythmic freedom (Spitfire...  )
But detuning is sometimes a game changer, also when it comes to fast(er) melodic lines in octaves.
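The combination being discussed - small timing shifts plus per-note detuning - could be sketched like this. Purely an illustration: the ranges are guesses to tune by ear, and players like Synchron apply something similar internally with their own humanization settings:

```python
import random

def humanize(notes, max_shift_ticks=8, max_detune_cents=7, seed=None):
    """Return one (start_offset_ticks, detune_cents) pair per note:
    a small random timing shift plus a small random detune, the two
    imperfections that keep fast lines from sounding machine-perfect."""
    rng = random.Random(seed)  # seedable for reproducible renders
    out = []
    for _ in notes:
        shift = rng.randint(-max_shift_ticks, max_shift_ticks)
        cents = round(rng.uniform(-max_detune_cents, max_detune_cents), 1)
        out.append((shift, cents))
    return out

offsets = humanize(range(16), seed=42)
```

In practice the detune values would drive a per-note pitch-bend or tuning control, and the timing offsets would be added to note start positions - or, per the preference above, left to imperfect manual quantization instead.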


----------



## liquidlino (Nov 5, 2022)

blaggins said:


> Now the Spitfire player and the inconsistent timing... hoo boy. That I do not like.


Which is why I don't buy Spitfire any more. Love the sound (except for BHCT/SStW), hate the usability.

I'm secretly hoping Spitfire will nail it with the new Abbey Road collection, get both great sound and great editing...


----------



## liquidlino (Nov 5, 2022)

Living Fossil said:


> The problem with fast figures is that you need lots of not-perfect intonation, which BBCSO (as well as the other SA libs inside of their player) don't offer. The right amount of detuning (which e.g. Modern Scoring strings let you control via CC) often helps a lot...


Ah! Is that why the VSL works so well - the humanization of tuning in Synchron player that changes for every note... makes sense!


----------



## blaggins (Nov 5, 2022)

liquidlino said:


> Ah! Is that why the VSL works so well - the humanization of tuning in Synchron player that changes for every note... makes sense!


To be totally clear I haven't done any timing or tuning humanization for Version 2 (Synchron Prime) just yet. I'm thinking I might take another pass at both mockups though to implement some of the stuff that's been discussed in this thread.


----------



## blaggins (Nov 6, 2022)

Henrik B. Jensen said:


> VSL Silent Stage instruments, Samplemodeling instruments and similar are a special case. I don’t know enough about how to position them, so I will concentrate on samples with reverb “baked in”.
> 
> Examples are Spitfire‘s AIR Lyndhurst stuff, VSL’s Synchron instruments, Cinematic Studio Strings and so on.
> 
> ...



I finally had a chance to watch through the video. Thanks for the link - that's actually an incredible set of resources for knowing which parameters to adjust in order to push instruments back. Lots of different reverbs were covered, too!

I'm still a little bit confused about whether that's something I should be doing with my libraries, though. The violin and percussion samples in @Beat Kaufmann's video are very closely recorded and sound pretty dry to me. I have the equivalent close-ish microphones, of course, but I also have room mics, and with BBCSO lots of even further-away perspectives: outriggers, ambient mics, gallery mics, etc. Could I not achieve the same effect of pushing the instruments back into the room by decreasing the amount of close mics I use and increasing the amount of the others? It's a serious question - I don't know how much reverb processing I should be doing with a library like BBCSO, which already includes so many microphone perspectives. I think I've read that some pretty famous mock-ups created with BBCSO don't use any kind of additional reverb at all, although now that I'm searching for it I can't actually find the reference...


----------



## Henrik B. Jensen (Nov 6, 2022)

blaggins said:


> I finally had a chance to watch through the video. Thanks for the link that's actually an incredible set of resources for knowing which parameters should be where in order to push instruments back. Lots of different reverbs were covered too!
> 
> I'm still a little bit confused whether that's something that I should be doing with my libraries though. The violin and percussion samples in @Beat Kaufmann's video are very closely recorded and sound pretty dry to me. I have the equivalent of close-ish microphones of course but I also have room mics and with BBCSO lots of even further away perspectives, outriggers, ambient mics, gallery mics, etc. Could I not achieve the same effect of pushing the instruments back into the room by decreasing the amount of close mics I use and increasing the amount of the others? It's a serious question. I don't know how much reverb processing I should be doing with a library like BBCSO which already includes so many microphone perspectives. I think I've read that there are some pretty famous mock-ups that were created with bbcso that don't use any kind of additional reverb at all, although now I'm searching for it I can't actually find the reference...


Microphone positions can indeed be used to create depth (if suitable mic positions are available in the library, of course).

Always do that if available instead of adding “fake stuff” like a reverb plugin.

BBCSO Pro literally has a ton of mic positions; just beware that they quickly eat up RAM and increase your project's load times too.

PS. Yes, it’s a great video by Beat 🙂👍


----------



## Henrik B. Jensen (Nov 6, 2022)

There is some pretty stunning stuff out there which people have done with BBCSO Core or Pro as the sole library.

This is probably my favorite:


----------



## mybadmemory (Nov 6, 2022)

With BBCSO Pro I’d start with either Mix 1, Mix 2, or Tree as a base, then add close mics where you want detail, and outriggers, ambient, and spill where you want to push things back.

And you can of course add a tiny amount of tail reverb as glue on top of everything, but there is no need to create your own space or sense of depth with a library like this.
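As a toy model of that mic-blending strategy: depth becomes a per-section recipe of mic gains rather than a reverb plugin. The mic names below follow BBCSO's labels, but the sections and gain values are arbitrary placeholders, not a recommended mix:

```python
# Hypothetical per-section mic blends (linear gains).
# More Close = more detail up front; more Outriggers/Ambient = pushed back.
MIC_BLENDS = {
    "violins_1": {"Close": 0.30, "Tree": 1.00, "Outriggers": 0.20, "Ambient": 0.10},
    "horns":     {"Close": 0.05, "Tree": 1.00, "Outriggers": 0.40, "Ambient": 0.35},
}

def mix_sample(section: str, mic_signals: dict) -> float:
    """Weighted sum of one audio sample across the mic positions -
    i.e., what the player's mic-mixer faders are doing under the hood."""
    blend = MIC_BLENDS[section]
    return sum(gain * mic_signals.get(mic, 0.0) for mic, gain in blend.items())

sample = mix_sample("horns", {"Close": 1.0, "Tree": 1.0,
                              "Outriggers": 1.0, "Ambient": 1.0})
```

The design point is that the "room" already exists in the far mics, so pushing a section back is mostly a matter of shifting weight from `Close` toward `Outriggers`/`Ambient` rather than reaching for an algorithmic reverb.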


----------



## QuiteAlright (Nov 6, 2022)

I thought both sounded artificial, and while I can't necessarily explain how to fix either of them, I can say that both sounded disconnected to me, and the timing of everything didn't sound very human.


----------



## blaggins (Nov 7, 2022)

That Batman mockup is astounding, @Henrik B. Jensen. I hadn't come across George Soulas's channel before; there are quite a few amazing BBCSO mockups he's done. Pretty inspirational as far as what BBCSO Pro is capable of... For quite a while now I had been regretting buying BBCSO, since I immediately ran into computer resource issues, timing issues with the shorts, super-long load times for the player, and overall it was much more of a PITA to use than I was expecting. The demos sold me on it, but then the experience of using it was a major letdown.

All that being said, I don't want to give up so easily on a $500 library, and I've been working more on my "Version 1/BBCSO" mockup. At this point I'm feeling like there is a life and realism there that is missing in the Synchron Prime version. I'll probably revisit the Synchron version again to see what else I can do to breathe more realism into it, but IMO right now the BBCSO mockup is much better (although I hate to admit it, because I REALLY preferred the Synchron Player workflow). I'm very happy to entertain suggestions on how to improve the Synchron version; to me it feels a bit dull next to the BBCSO one.

Changes for the "improved" Version 1 mockup (uploaded to the original post as well):
* Edited a few passages to be more idiomatic for the instruments playing them
* Added a little staccato/staccatissimo under some of the legato runs for more definition
* Reworked the dynamics of the woodwind runs to have bigger swings and breathe more
* Changed practically ALL of the microphone blends
* Removed pretty much all external reverb; all that is left is the mic positions

I am using a blend of (sometimes) a tiny bit of Close mic, lots of Tree, a fair bit of Outriggers, and a little bit of Ambient. I had also added a little bit of Gallery to most instruments, although it was sounding very "big", so I may have overdone it a bit.


EDIT: Actually, I took out the Gallery mics; in retrospect they were adding a bit too much reverb.


----------



## Henrik B. Jensen (Nov 7, 2022)

blaggins said:


> That Batman mockup is astounding @Henrik B. Jensen. I had not come across George Soulas's channel before, there are quite a few amazing BBCSO mockups that he's done. Pretty inspirational as far as what BBCSO Pro is capable of... I had for quite a while now been regretting buying BBCSO since I immediately ran into computer resource issues, shorts timing issues, super duper long load times for the player, and overall it just seemed to be much more of PITA to use than I was expecting. The demos sold me on it but then the experience of using it was a major letdown.
> 
> All that being said, I don't want to so easily give up on a $500 library, and I've been working more on my "Version 1/BBCSO" mockup. At this point I'm feeling like there is a life there and realism that is missing in the Synchron Prime version. I'll probably revisit the Synchron version again to see what else I can do to breathe more realism into it, but IMO right now the BBCSO mockup is much better (although I hate to admit it because I REALLY preferred the Synchron Player workflow)  I'm very happy to entertain suggestions on how to improve the Synchron version, to me it feels a bit dull next to the BBCSO one.
> 
> ...


Those BBCSO Spitfire recordings really are something else. My goodness that library sounds good!

I listened to this new example and, while I'm just a hobbyist etc., I think it's starting to sound like an SO playing in a real acoustic space. That's the impression I get, although I haven't listened closely for instrument positions (too tired). I just listened to the example like I was listening to a “normal” classical piece, and thought what I mentioned before.

It is very wet though, as you yourself also suspect.

Yes, Batman BBCSO composer did a great job! There are some other really good people on Youtube. I might post some links later.

PS. It's good to see that you're taking in the advice people give you and then using whatever of it you think will help your composition.


----------



## mybadmemory (Nov 7, 2022)

blaggins said:


> That Batman mockup is astounding @Henrik B. Jensen. I had not come across George Soulas's channel before, there are quite a few amazing BBCSO mockups that he's done. Pretty inspirational as far as what BBCSO Pro is capable of... I had for quite a while now been regretting buying BBCSO since I immediately ran into computer resource issues, timing issues with shorts, super duper long load times for the player, and overall it just seemed to be much more of a PITA to use than I was expecting. The demos sold me on it but then the experience of using it was a major letdown.
> 
> All that being said, I don't want to so easily give up on a $500 library, and I've been working more on my "Version 1/BBCSO" mockup. At this point I'm feeling like there is a life and realism there that is missing in the Synchron Prime version. I'll probably revisit the Synchron version again to see what else I can do to breathe more realism into it, but IMO right now the BBCSO mockup is much better (although I hate to admit it because I REALLY preferred the Synchron Player workflow). I'm very happy to entertain suggestions on how to improve the Synchron version; to me it feels a bit dull next to the BBCSO one.
> 
> ...


Just a first quick listen on my phone, but so far it sounds a LOT better! Will get back to it on proper headphones after dinner!


----------



## blaggins (Nov 7, 2022)

I wanted to be fair to the two libs, and since I spent a ton more time tweaking Version 1 (BBCSO) I also went back through the VSL Synchron Prime mockup (Version 2) and tried to improve it some more. I ended up back on the Classic perspective but added a bit of SP2016 on each bus to create a bit more depth. I also removed the "strings doubler" trick since I actually think it was messing up the positioning of the instruments, and with it on I felt like I lost that clarity of the wide perspective that the Synchron stage seems to naturally have (which I like).

At this point I've probably tweaked these two mockups about as much as I've ever tweaked any I've done so far. A good exercise, but I don't even know anymore which one I prefer.


----------



## liquidlino (Nov 7, 2022)

blaggins said:


> I wanted to be fair to the two libs, and since I spent a ton more time tweaking Version 1 (BBCSO) I also went back through the VSL Synchron Prime mockup (Version 2) and tried to improve it some more. I ended up back on the Classic perspective but added a bit of SP2016 on each bus to create a bit more depth. I also removed the "strings doubler" trick since I actually think it was messing up the positioning of the instruments, and with it on I felt like I lost that clarity of the wide perspective that the Synchron stage seems to naturally have (which I like).
> 
> At this point I've probably tweaked these two mockups about as much as I've ever tweaked any I've done so far. A good exercise, but I don't even know anymore which one I prefer.


Where are the updated versions?


----------



## blaggins (Nov 7, 2022)

liquidlino said:


> Where are the updated versions?


Oops, sorry. I meant to say that I just updated the original post at the start of this thread with the latest versions.


----------



## liquidlino (Nov 7, 2022)

blaggins said:


> Oops, sorry. I meant to say that I just updated the original post at the start of this thread with the latest versions.


To my ears now with latest two versions:

1. Much more natural performance; runs etc. sound good. Still sounds a bit muddy, and there's a sort of hyper-width to the sound, like some instruments are hard-panned left and right. But way better than I remember overall.

2. Not as much life as (1), especially during the intro sections where instruments are more exposed. But no mud, and more clarity overall than (1), and to my ears more natural instrument placement. 

Once it gets busy, I prefer 2 over 1, but I prefer the intro of 1, and now prefer the runs of 1.

I dunno. I might be leaning towards "they both suck" after all.  In particular, the trumpet around 1:04 just doesn't work for me in either example.


----------



## blaggins (Nov 8, 2022)

liquidlino said:


> To my ears now with latest two versions:


Thanks for taking the time to listen through again!!



liquidlino said:


> 1. Much more natural performance, runs etc sound good. Still sounds a bit muddy, and there's a sort of hyper-width to the sound, like some instruments are hard panned left and right. But way better than I remember overall.


I agree it's got a nice element of realism. I'll be honest, I can't quite hear the hyper-width thing (but also I'm not sure I'm picking up on as many of the placement nuances as I could be in general; I need to train my ears more). I haven't done any processing there; it's all just microphone positions out of the box (no stereo enhancers or any such mumbo jumbo). Might be that the outriggers are super wide in BBCSO?



liquidlino said:


> 2. Not as much life as (1), especially during the intro sections where instruments are more exposed. But no mud, and more clarity overall than (1), and to my ears more natural instrument placement.


It does feel more straightforward and precise. I tried to bring as much dynamic movement into it as possible, maybe there are tricks I haven't learned yet but I did what I could. For example, there are massive swings of CC1 and CC11 all over the place, lots of micro velocity adjustments to get more realistic lines, and I have the timbre adjust automated throughout (swings of 0-127 in quite a few places). The end result doesn't feel as dynamic as the BBCSO mockup but I'm not sure what else to try. The timing humanization settings and tuning humanization didn't seem to have much of an effect when I tried those.
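To make those "massive swings of CC1 and CC11" concrete: a hand-drawn crescendo is just a dense ramp of controller values between two dynamic levels. A rough illustrative sketch follows (plain Python, no DAW API; the 480-ticks-per-quarter resolution is an assumption):

```python
# Illustrative: generate the (tick, value) points of a linear CC ramp,
# like a hand-drawn CC1/CC11 swell, clamped to the 0-127 MIDI range.

def cc_ramp(start_val, end_val, start_tick, end_tick, step=10):
    """Return (tick, value) pairs interpolating a CC curve linearly."""
    points = []
    span = end_tick - start_tick
    for t in range(start_tick, end_tick + 1, step):
        frac = (t - start_tick) / span
        val = round(start_val + frac * (end_val - start_val))
        points.append((t, max(0, min(127, val))))
    return points

# A crescendo over one 4/4 bar, assuming 480 ticks per quarter note:
swell = cc_ramp(40, 110, 0, 1920, step=120)
```

In practice you'd draw these curves in the DAW's CC lane, of course; the point is just that a dynamic "swing" is many intermediate controller events, not a single jump, which is also why a lone CC11 point can get disregarded where a curve works.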



liquidlino said:


> Once it gets busy, I prefer 2 over 1, but I prefer the intro of 1, and now prefer the runs of 1.
> 
> I dunno. I might be leaning towards they both suck after all.  In particular the trumpet around 1:04, just doesn't work for me in either example.


The trumpet is terrible in both. I think the VSL Prime one edges out the BBCSO one for me, but neither of them can play convincing super-fast notes. The trills in both are... not great... which is especially bad for BBCSO, since they recorded a trill articulation, so it should be decent! Anyway, I've been listening to a lot of mockups lately and some of my favorites seem to use modeled brass instead of sampled brass, but that's a whole can of worms.

In terms of workflow, though, there is no contest. The BBCSO mockup, using 4 microphone positions (but no real microphone automation), takes 9 minutes to load, and Cubase ends up using almost 40GB of RAM for the 22 instruments in the mockup. I crashed Cubase 3x while making the mockup by accidentally trying to change the microphone blend via automation on too many BBCSO tracks at once, and each time I had to restart my computer to unfuggle my soundcard drivers. The player randomly disregards CC11 automation, and I sometimes have to draw in a curve instead of a single point in order to set the expression. I can't play back the whole piece without bouncing at least some of the tracks. All those beautiful microphone positions, but in the end I'm afraid to use them because I don't want to lock my computer up for minutes at a time waiting for the player to do its thing.

The VSL Prime version uses like 8GB total, and although the time stretching causes audio dropouts, without stretching there are zero issues with playback. Cubase loads the project in like 30 seconds.


----------



## blaggins (Nov 8, 2022)

> The timing humanization settings and tuning humanization didn't seem to have much of an effect when I tried those.


Actually, I have to take this back. I was making assumptions about how it should work, so I didn't hear when it *was* working. Anyway, tuning humanization can have a very big effect (it's quite easy to overdo it, I think). So I went back AGAIN and added some tuning and a tiny bit of timing modifications (CC25 and 26) to the VSL version. Sigh. This will never end. Uploaded to the original post.
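For what it's worth, per-note tuning/timing humanization of this kind boils down to small, bounded random offsets applied to each note. Here's a toy sketch of the idea (not Synchron Player's actual implementation; there the humanization depth is set via CCs as described above, and the offsets happen inside the player):

```python
# Illustrative: bounded random timing (ticks) and tuning (cents) offsets,
# the basic idea behind "humanize" controls. Seeded for reproducibility.
import random

def humanize(notes, max_tick_offset=10, max_cents=8, seed=42):
    """notes: list of (tick, pitch) -> list of (tick, pitch, cents_detune)."""
    rng = random.Random(seed)
    out = []
    for tick, pitch in notes:
        jitter = rng.randint(-max_tick_offset, max_tick_offset)
        cents = rng.randint(-max_cents, max_cents)
        out.append((max(0, tick + jitter), pitch, cents))
    return out

# A simple three-note line (ticks assume 480 PPQ):
line = [(0, 60), (480, 62), (960, 64)]
performed = humanize(line)
```

The "too big of an effect" failure mode is easy to see here: push `max_cents` much past a handful of cents and the ensemble stops sounding loose and starts sounding out of tune.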


----------



## swinkler (Nov 9, 2022)

Fascinating comparison here, and timely for me. I've been a VSL/SE user for quite some time and recently got Spitfire BBCSO Core because of the "broadcast quality" sound. I do like how it sounds, despite the marketing terminology. Anyway, I'm painfully discovering some of the same trade-offs: lightness on system resources and easy playability (VSL) vs. a richer sound but less playability and poorer synchronization of the parts (BBCSO).
So I'm wondering how a blend of the two would work. Have you considered combining these libs at all?


----------



## blaggins (Nov 9, 2022)

swinkler said:


> So I'm wondering how a blend of the two would work? Have you considered combining these libs at all?


I bet with some work it could be done pretty convincingly. I haven't tried it myself, though.

I did accidentally enable both the VSL and the BBCSO mixes playing simultaneously at some point while working on these mockups, and aside from hitting the limiter pretty hard and everything getting suddenly loud, it actually sounded kinda good. I think it would take work to make it convincing though.


----------



## swinkler (Nov 10, 2022)

blaggins said:


> I bet with some work it could be done pretty convincingly. I have not tried myself though.
> 
> I did accidentally enable both the VSL and the BBCSO mixes playing simultaneously at some point while working on these mockups, and aside from hitting the limiter pretty hard and everything getting suddenly loud, it actually sounded kinda good. I think it would take work to make it convincing though.


In the meantime I found a lengthy thread where someone is combining 3 or 4 libraries in an elaborate Dorico template. One of his demos sounded fantastic and made me rethink (again) my workflow of using Dorico exclusively. I suspect it's a ton of work to tweak expression maps and such, but it may be worthwhile.

EDIT: Check out the discussion in the Notation forum: "Dorico playback: samples vs noteperformer"


----------

