# RAM and orchestral libraries



## Virtual Virgin (Jul 12, 2017)

How much RAM would it take to load up the entire Orchestral Tools' Berlin line of instruments (Strings, Brass, Woodwinds, Percussion)?

I love the sound of these sample libraries, but is a template full of them useable on 32 or 64GB of RAM?

How do EastWest Hollywood, CineSymphony and Spitfire bundles compare in terms of RAM usage for an entire orchestra?


----------



## Ruffian Price (Jul 12, 2017)

By default, Kontakt loads the first 60kB of every sample and tries to stream the rest from disk once playback starts (you can increase that up to 240kB in the options, and I recommend that for higher voice counts to reduce HDD strain). So if you know the number of individual samples in a library, you can make an educated guess at its RAM footprint.
Hollywood Orchestra uses the PLAY engine; version 5 introduced a "sample cache" function which allows you to set the streaming/RAM balance for every drive. I've got it set to 0 (PCI-E drive) and the RAM footprint for a simple orchestral template (full string section, percussion set, legato instruments for all brass sections and two solo woodwinds) is a little over 1GB. Once I set it to 3 (I think that's the default) and reloaded the sample pools, the plugin reported 1.5GB (the streaming monitor got crazy high once I started playing something though - it would definitely murder a standard HDD), but the DAW display showed around 2.5GB.
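As a sketch of that educated guess - the sample count and overhead factor below are hypothetical, not from any real library:

```python
def kontakt_preload_ram_gb(num_samples: int, preload_kb: float = 60.0,
                           overhead_factor: float = 1.2) -> float:
    """Rough estimate of Kontakt's preload RAM footprint.

    Kontakt keeps the first `preload_kb` of every sample resident in RAM
    and streams the remainder from disk; `overhead_factor` is a made-up
    fudge factor for engine and scripting overhead.
    """
    preload_bytes = num_samples * preload_kb * 1024
    return preload_bytes * overhead_factor / 1024 ** 3

# e.g. a hypothetical library with 200,000 samples at the 60kB default:
print(f"{kontakt_preload_ram_gb(200_000):.1f} GB")  # prints "13.7 GB"
```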


----------



## Architekton (Jul 12, 2017)

I've read somewhere that when you load all Berlin articulations, you'll use nearly 64GB of RAM. But take that with a grain of salt...


----------



## bbunker (Jul 12, 2017)

Just reporting my own observations on RAM in Play - I would go over that 1.5GB with the string legatos alone, at about 400MB per iteration for the 6-layer versions. What I'd call a 'normal' articulation set would be well over 12-15GB. And that's using relatively frugal versions when possible.

Also - Kontakt libraries have other overhead besides the sample pool. The Capsule-equipped series like OT's Berlin sets seem to be pretty luxurious in how much RAM they eat up in a purged idle.

In terms of comparison, a lot has to do with what you actually load up - the Cinesamples ones are equipped with fewer articulations - so would you mean comparing only the articulations (and equivalents) that are in CS's, or using a 'full load' of each one?

I have definitely used a whole orchestra's worth of Cinesamples and Spitfire on a 32GB rig without problems. EW Hollywood stuff works in that rig, RAM-wise, but dies cataclysmically from the CPU load even at idle. Whether that issue has to do with it running from an SSD at the 0 Sample Cache level, someone smarter than me would have to tell you.

Berlin I've only got the Woodwinds so I can't comment, but the Multi articulations there eat up RAM like tourists at a Vegas Buffet. I'd go there with 32GB only with pretty severe trepidation.


----------



## TintoL (Jul 12, 2017)

Well, for Spitfire, I can tell you that it's the most RAM-efficient instrument I have after VSL. I have the whole Symphonic series with all articulations loaded, plus all the percussion redux, harp, harp swarm, Albion Legacy, all the Chamber Strings articulations, and the solo violin, loading the tree mic on everything, for about 80GB or so - with no adjusting of the memory footprint in Kontakt (so I get the best real-time performance) and no purging at all.

I am also wondering how much RAM a full OT symphonic template would take, because I have seen threads where they talk about the OT engine and samples taking huge amounts of RAM and CPU power. That's the main reason I haven't gone for it.

The Gold versions of Hollywood Brass and Strings are not that bad in terms of memory footprint. They both take about 15GB with almost all articulations loaded.


----------



## Virtual Virgin (Jul 12, 2017)

Ruffian Price said:


> By default, Kontakt loads the first 60kB of every sample and tries to stream the rest from disk once playback starts (you can increase that up to 240kB in the options, and I recommend that for higher voice counts to reduce HDD strain). So if you know the number of individual samples in a library, you can make an educated guess at its RAM footprint.



Increasing the sample preload amount eases stress on the HDD by stressing the RAM? If I am running samples off of multiple SSDs would my ideal setting be lower?


----------



## Virtual Virgin (Jul 12, 2017)

bbunker said:


> In terms of comparison, a lot has to do with what you actually load up - the Cinesamples ones are equipped with fewer articulations - so would you mean comparing only the articulations (and equivalents) that are in CS's, or using a 'full load' of each one?



If I understand you correctly, I would want the comparison of a "full load" of each one to know how much RAM it takes to load an entire library of articulations, though knowing the "per capita" efficiency of each would be nice too.


----------



## colony nofi (Jul 12, 2017)

The settings for Kontakt are not always intuitive... but Virtual Virgin, you are somewhat correct. If you are running samples off an SSD, then a lower preload is very possible. (Forget multiple SSDs for now - unless it's set up in a very specific way it won't hugely impact performance... SSDs and CPUs will both choke on high voice counts - but the way the SSD is connected to your system will have a greater impact than the actual SSD itself.)
So - a 12 to 18kB preload is extremely possible given a well-set-up system using SSDs for all samples. This can impact RAM use (in a positive way).
Each instance of Kontakt takes up RAM as well - which is worth considering if you have massive templates with one articulation per instance. However, there are also ways to reduce that (a lot of the RAM usage is in the internal Kontakt database, which I personally get rid of entirely!)
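A rough sketch of what the lower preload buys you (the sample count here is hypothetical):

```python
def preload_pool_gb(num_samples: int, preload_kb: float) -> float:
    """RAM consumed by preload buffers alone (excludes engine overhead)."""
    return num_samples * preload_kb * 1024 / 1024 ** 3

samples = 150_000  # hypothetical library size
for kb in (60, 18, 12):
    print(f"{kb:>2}kB preload -> {preload_pool_gb(samples, kb):.1f} GB")
```

At the 60kB default that's about 8.6GB of buffers; at 12kB it drops below 2GB, which is why SSD users can afford such low settings.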
Does that help?


----------



## Virtual Virgin (Jul 12, 2017)

colony nofi said:


> The settings for Kontakt are not always intuitive... but Virtual Virgin, you are somewhat correct. If you are running samples off an SSD, then a lower preload is very possible. (Forget multiple SSDs for now - unless it's set up in a very specific way it won't hugely impact performance... SSDs and CPUs will both choke on high voice counts - but the way the SSD is connected to your system will have a greater impact than the actual SSD itself.)
> So - a 12 to 18kB preload is extremely possible given a well-set-up system using SSDs for all samples. This can impact RAM use (in a positive way).
> Each instance of Kontakt takes up RAM as well - which is worth considering if you have massive templates with one articulation per instance. However, there are also ways to reduce that (a lot of the RAM usage is in the internal Kontakt database, which I personally get rid of entirely!)
> Does that help?



It does, though I would like more specific information on the RAM required for full symphonic templates depending on the library. For example, the CineSymphony orchestral bundle comes out to about 264GB of samples, but EastWest Hollywood Diamond bundle comes to about 680GB. Given that scripting/coding is not equal from library to library, I don't know how much RAM is needed to run one library or the next.


----------



## X-Bassist (Jul 14, 2017)

I'd like to hear others' experiences as well. I used to think 64GB of RAM would be enough for a decent-sized orchestral template, but recently I calculated 80 to 100. Now, after going through my templates and all the stuff I like, I'm wondering if 128GB is enough. Once you find different sets of strings and brass that work for different colors, it's hard to judge how much is needed until the computer chokes. I'd like to prepare for future (larger) instruments on my next computer upgrade.


----------



## markleake (Jul 14, 2017)

I've found myself with CPU problems using Play/Hollywood, whereas I've not had that issue with Kontakt libraries. And I would agree that Spitfire libraries can be very RAM efficient compared to other libraries when using low preload settings or when fully purged.

Not mentioned yet is CSS... I find that with all articulations loaded on separate instrument tracks (so each has its own Kontakt instance), and all samples purged, it still uses up a fair few GB of RAM. It's not very RAM-efficient at all.


----------



## erica-grace (Jul 14, 2017)

Berlin Brass:

Here, with SATA drives, with Kontakt set to 60kB, with the full brass ensemble loaded plus two extra articulations each that are not in slots by default, and with two mic positions, RAM usage is 32.6GB. If you want a third mic position (which you probably do), RAM usage would be at 48.9GB. A fourth mic position (OK, now we're getting into pro territory) would be at 65.2GB. And that's just for brass.
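Those figures scale linearly - each extra mic position loads a complete additional sample set. As a sketch (deriving the per-mic cost from the two-mic figure above):

```python
# Each mic position is a full duplicate sample set, so RAM grows
# linearly: derive the per-mic cost from the two-mic figure.
PER_MIC_GB = 32.6 / 2  # 16.3GB per mic position

def berlin_brass_ram_gb(mic_positions: int) -> float:
    return PER_MIC_GB * mic_positions

for mics in (2, 3, 4):
    print(f"{mics} mics: {berlin_brass_ram_gb(mics):.1f} GB")
```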


----------



## Virtual Virgin (Jul 14, 2017)

erica-grace said:


> Here, with SATA drives, with Kontakt set to 60kB, with the full brass ensemble loaded plus two extra articulations each that are not in slots by default, and with two mic positions, RAM usage is 32.6GB. If you want a third mic position (which you probably do), RAM usage would be at 48.9GB. A fourth mic position (OK, now we're getting into pro territory) would be at 65.2GB. And that's just for brass.



Which brass library are you referring to here? Berlin Brass?


----------



## Virtual Virgin (Jul 14, 2017)

markleake said:


> I've found myself with CPU problems using Play/Hollywood, whereas I've not had that issue with Kontakt libraries. And I would agree that Spitfire libraries can be very RAM efficient compared to other libraries when using low preload settings or when fully purged.



Are you using Play 5 or an older version?


----------



## erica-grace (Jul 14, 2017)

Virtual Virgin said:


> Which brass library are you referring to here? Berlin Brass?



Yes - sorry. I'll add that to my post now.


----------



## Piano Pete (Jul 14, 2017)

I feel the need to ask this as well, but are you planning on taking into account mic positions? Furthermore, you could try to squeeze more instruments in by messing with the voice count, streaming from disk, and other shenanigans. Ultimately, what are you trying to achieve?

If you are trying to load everything they have, full tilt, you may also want to ask whether you need an additional CPU to help smooth everything out. In my experience, it is sometimes better to have two slaves with 64GB than one with 128GB, depending on use. Depending on the cost of materials, it can even be more cost-efficient.

You may wish to contact Orchestral Tools directly, as they would know the answer to your initial question of what their entire library impact is.


----------



## markleake (Jul 14, 2017)

Virtual Virgin said:


> Are you using Play 5 or an older version?


Play 4. I'd mostly given up on using a large orchestra of EW products by the time Play 5 came out.


----------



## Piano Pete (Jul 14, 2017)

markleake said:


> Play 4. I'd mostly given up on using a large orchestra of EW products by the time Play 5 came out.


I have found it to be much more efficient than it was previously. Load times are respectable as well. They used to be horrendous before.


----------



## agarner32 (Jul 14, 2017)

Virtual Virgin said:


> How much RAM would it take to load up the entire Orchestral Tools' Berlin line of instruments (Strings, Brass, Woodwinds, Percussion)?


I only have OT Woodwinds and Brass (plus Muted) and they do require more RAM than any other library I have - way more than Spitfire that's for sure. Even with purged samples you'll need 32 GB for just the regular brass library if you load everything (according to someone at OT). I kept wondering why I was using so much RAM when I purged all samples and finally discovered it was OT libraries.

I'm not overly concerned at this point because I have 128 GB of RAM on my slave machine, but if I were on a single machine with 32 or even 64 I'd have to limit how much I loaded.

Even with the high RAM footprint, both libraries are terrific so it's worth it for me.


----------



## pderbidge (Jul 14, 2017)

colony nofi said:


> The settings for Kontakt are not always intuitive... but Virtual Virgin, you are somewhat correct. If you are running samples off an SSD, then a lower preload is very possible. (Forget multiple SSDs for now - unless it's set up in a very specific way it won't hugely impact performance... SSDs and CPUs will both choke on high voice counts - but the way the SSD is connected to your system will have a greater impact than the actual SSD itself.)
> So - a 12 to 18kB preload is extremely possible given a well-set-up system using SSDs for all samples. This can impact RAM use (in a positive way).
> Each instance of Kontakt takes up RAM as well - which is worth considering if you have massive templates with one articulation per instance. However, there are also ways to reduce that (a lot of the RAM usage is in the internal Kontakt database, which I personally get rid of entirely!)
> Does that help?


Great thread here, guys - you made me realize I never use the Kontakt database feature, so now I can save even more RAM. Thanks!


----------



## markleake (Jul 14, 2017)

pderbidge said:


> Great thread here, guys - you made me realize I never use the Kontakt database feature, so now I can save even more RAM. Thanks!


I only just turned this feature off the other day also. I never realised it used up memory until I removed all the DB contents and saved a bit more RAM.


----------



## ctsai89 (Jul 14, 2017)

Haha, interesting how I keep saying I regret going for Spitfire instead of Berlin, but whenever RAM is mentioned I don't regret it at all.

With 1 mic for almost all the legato patches, plus staccato, pizz, and maybe 1 more articulation for every instrument/patch, you will likely survive with 32GB of RAM if you aren't using any other applications while composing.

Berlin... 32GB for just brass.


----------



## MatFluor (Jul 14, 2017)

I don't regret going for Spitfire - for different reasons - but RAM is a big one.

I don't know myself, but for OT you'd better go 128GB rather than 64. Or get some nice PCIe SSDs and set Kontakt's preload low.


----------



## ctsai89 (Jul 15, 2017)

MatFluor said:


> I don't regret going for Spitfire - for different reasons - but RAM is a big one.
> 
> I don't know myself, but for OT you'd better go 128GB rather than 64. Or get some nice PCIe SSDs and set Kontakt's preload low.



Actually, I don't regret it at all. Nothing sounds more realistic than Spitfire - except for the inconsistencies of the brass and the amount of delay the legatos and note starts have. Of course, nothing about it is as slow as CSS's legato delay.


----------



## VinRice (Jul 15, 2017)

I'm running about 220 tracks in Logic, which is enough for a basic orchestra plus toys, in 32GB. This is mostly Spitfire, individual articulations. I have separate templates, however, for Chamber, Symphonic, Bernard Herrmann, LCO, Band and Electronic. I end up swapping stuff in and out on most things, though, so having everything in one template would be the goal. At the moment this would require a slave machine, but I'm thinking that the next wave of Mac Pros (and PCs, of course) should be able to handle 1000-track templates internally with all SSDs and 128GB - assuming you don't go the full 'Berlin'.


----------



## Architekton (Jul 15, 2017)

But does Berlin's huge number of samples result in better quality and realism than competitor products?


----------



## MatFluor (Jul 15, 2017)

Architekton said:


> But does Berlin's huge number of samples result in better quality and realism than competitor products?



Quality-wise I would say OT and SF are pretty equal - they sound different, but both are high quality for sure.
OT has a different route, and arguably the better Mock-ups if you use it right. I've heard breathtaking mock-ups from both companies - and I heard very bad ones too - SF tends to sound a bit better on the bad ones.

Point is - if you can handle OT, you can make glorious stuff. If you can handle SF, you can make glorious stuff. Is OT more realistic? Might be - but only as realistic as the composer is able to program it in.

It's the cook, not the kitchen. Both products are top of the line.


----------



## ctsai89 (Jul 15, 2017)

MatFluor said:


> Quality-wise I would say OT and SF are pretty equal - they sound different, but both are high quality for sure.
> OT has a different route, and arguably the better Mock-ups if you use it right. I've heard breathtaking mock-ups from both companies - and I heard very bad ones too - SF tends to sound a bit better on the bad ones.
> 
> Point is - if you can handle OT, you can make glorious stuff. If you can handle SF, you can make glorious stuff. Is OT more realistic? Might be - but only as realistic as the composer is able to program it in.
> ...



I think it's very easy to get Spitfire to sound realistic on the level of a university orchestra sight-reading for the first time. But to get it to sound like a fully rehearsed professional orchestra, make sure you have a lot of time.


----------



## Piano Pete (Jul 15, 2017)

To my ears, OT edges out when it comes to quality (this is also reflected in price/computer demands); however, there is going to be a point where a mock-up is going to sound like a mock-up. They are all great libraries, all capable of making fine quality mock-ups, each have their strengths and weaknesses, and most importantly, they are all dependent on knowing how to use each one. Straight out of the box, none of the libraries on the market are going to sound immaculate. Some assembly is required. 

Between the libraries you mentioned, OT is going to be the most taxing on your system. If you are just getting started, or do not have enough money to build an adequate setup to run it, you may wish to consider something that is a little lighter. If you really feel that OT is the best sounding library to you, you may also wish to set up a general template that has your go-to articulations; then, you can always add additional instruments or articulations when the situation arises, thus saving your setup some strain. What was just stated holds true to EW, Cine Symphony, Spitfire et al. If you decide that you can survive without having everything loaded at once, thus purchasing whatever library you feel to be most suitable to you, you can always add to your computer farm down the road while using and learning said library in the meantime. This could save you some buyers remorse and allow you to acquaint yourself with your tools. 

You are the only one that can determine the quality of a sample library. Go listen to some demos, reviews, and tutorials that showcase them individually. It shouldn't need to be said, but there are some great and horrible representations of all the products: do your homework. 

Ultimately, whatever you decide to purchase and however you work, the music is what is important, not the sample library you are using. Yes, there is a difference between having great tools to work with and crap; none of the libraries listed in this thread are the latter. They will all get the job done. Anyone worth their salt can listen through the shortcomings of a mock-up, and as for the average listener - if you understand how to use your equipment and are shipping a final product with samples, they will not really care whether you used Spitfire, OT, or any other library.


----------



## ctsai89 (Jul 15, 2017)

Piano Pete said:


> To my ears, OT edges out when it comes to quality (this is also reflected in price/computer demands); however, there is going to be a point where a mock-up is going to sound like a mock-up. They are all great libraries, all capable of making fine quality mock-ups, each have their strengths and weaknesses, and most importantly, they are all dependent on knowing how to use each one. Straight out of the box, none of the libraries on the market are going to sound immaculate. Some assembly is required.
> 
> Between the libraries you mentioned, OT is going to be the most taxing on your system. If you are just getting started, or do not have enough money to build an adequate setup to run it, you may wish to consider something that is a little lighter. If you really feel that OT is the best sounding library to you, you may also wish to set up a general template that has your go-to articulations; then, you can always add additional instruments or articulations when the situation arises, thus saving your setup some strain. What was just stated holds true to EW, Cine Symphony, Spitfire et al. If you decide that you can survive without having everything loaded at once, thus purchasing whatever library you feel to be most suitable to you, you can always add to your computer farm down the road while using and learning said library in the meantime. This could save you some buyers remorse and allow you to acquaint yourself with your tools.
> 
> ...



Wouldn't you think that if OT had recorded their stuff in the AIR hall, it would be overkill? I mean, those guys program really well.


----------



## Fleer (Jul 15, 2017)

Piano Pete said:


> I have found it to be much more efficient than it was previously. Load times are respectable as well.


True. Since EastWest launched Play 5, many users have expressed their happiness.


----------



## Vik (Jul 16, 2017)

I have both OT and Spitfire strings, and sometimes I remind myself that I don't have two real orchestras, but two fake orchestras, with all the limitations that implies. And since these limitations vary from library to library, many of us don't go for only one library... library X may do a good job where Y fails, and vice versa. I mainly agree with what ctsai89 has written earlier, but would like to add (regarding "to get it to sound like a fully rehearsed professional orchestra, make sure you have a lot of time") that even with Spitfire (which this was about), enough time won't solve all situations, for a number of reasons - the main one being that while SSS shines in a lot of areas, it's also one of the few major libraries that doesn't offer proper manual control over portamento volume and length/speed - a feature that's a must-have for some of us.

Berlin Strings can't do "everything" either, but it's closer. So the bottom line, when discussing all this (including RAM), is IMO: what kind of samples are most important to have? Convincing legatos? A good variety of short notes? Are you mainly making quiet adagios, music for action movies, or both? Will melodic string lines (especially slow ones) play a major role in what you do, or do you mainly use long strings as backgrounds? Will your mockups end up in the final results, or are they only used for the composing/arranging process? Do you use a DAW which can freeze and unload the Kontakt samples, or one which cannot? Do you tend to work with only one or max two mic positions, or end up using a lot more? And so on.

SSS specs:

147.6 GB UNCOMPRESSED .WAV
101.1 GB DISK SPACE REQUIRED
(That's with just the main microphones).



BS specs:

129 GB of samples in NCW format (268 GB uncompressed)
The BS specs are only for the main library without the expansions, but include all mic positions. I have all the BS expansions except one, and my BS samples folder uses 223GB on that SSD - but wait... that's including both the first chairs and the Nocturne violin, so that figure probably isn't of much help, actually. Just remember, when comparing the two libraries (how much RAM and drive space each of them needs), to make sure you don't compare apples and oranges.


----------



## ctsai89 (Jul 16, 2017)

Vik said:


> I have both OT and Spitfire strings, and sometimes I remind myself that I don't have two real orchestras, but two fake orchestras, with all the limitations that implies. And since these limitations vary from library to library, many of us don't go for only one library... library X may do a good job where Y fails, and vice versa. I mainly agree with what ctsai89 has written earlier, but would like to add (regarding "to get it to sound like a fully rehearsed professional orchestra, make sure you have a lot of time") that even with Spitfire (which this was about), enough time won't solve all situations, for a number of reasons - the main one being that while SSS shines in a lot of areas, it's also one of the few major libraries that doesn't offer proper manual control over portamento volume and length/speed - a feature that's a must-have for some of us.
> 
> Berlin Strings can't do "everything" either, but it's closer. So the bottom line, when discussing all this (including RAM), is IMO: what kind of samples are most important to have? Convincing legatos? A good variety of short notes? Are you mainly making quiet adagios, music for action movies, or both? Will melodic string lines (especially slow ones) play a major role in what you do, or do you mainly use long strings as backgrounds? Will your mockups end up in the final results, or are they only used for the composing/arranging process? Do you use a DAW which can freeze and unload the Kontakt samples, or one which cannot? Do you tend to work with only one or max two mic positions, or end up using a lot more? And so on.
> 
> ...



I haven't used the old legato performance patches, but I believe there is control over portamento speed and legato speed in those patches as well, just as Mural had.

But I mainly like the performance legato patch, which doesn't have manual control options for those things. I like it because the start of every legato note is as loud as you want it to be and corresponds easily to how hard you hit the keyboard to trigger the start of that note and legato phrase. It's been quite wishful thinking to want manual control for portamento, a 1~50~100% vibrato slider, legato speed, etc. in the new performance patch.

But the main problem with not being able to achieve the fully rehearsed professional orchestral sound is that the "Spitfire way" is in conflict with what we need in order to MIDI-program a track into sounding like a professional, fully rehearsed orchestra. Even if I had the legato speed slider, the legato transition lengths seem to all have been recorded differently for each note. Christian Henson believes that a library sounds "great" and "real" because he asks the players to play each note or each transition a little bit differently from each other. So as you can see, it's completely intended. And I do agree with you: because of the Spitfire way, it may be extremely difficult or nearly impossible to create a 100% rehearsed, realistic-sounding professional orchestral MIDI mockup. I'd say it's good enough at 85%. Maybe we are encouraged to have our pieces rehearsed and recorded by real players after all.....


----------



## Vik (Jul 16, 2017)

ctsai89 said:


> I haven't used the old legato performance patches, but I believe there is control over portamento speed and legato speed in those patches as well, just as Mural had.


The parameters are there, but they do little or nothing to the portamento volume/speed.



> It's been quite wishful thinking to want manual control for portamento, a 1~50~100% vibrato slider, legato speed, etc. in the new performance patch.


If they implement proper control over portamento speed and volume, even I'll probably update from Mural to SSS!



> the legato transition lengths seem to all have been recorded differently for each note. Christian Henson believes that a library sounds "great" and "real" because he asks the players to play each note or each transition a little bit differently from each other


Well, the problem with randomly different transitions, attacks, and even timing is that it becomes a lot more difficult to get things right. That's because you both need to deal with the natural variations in your own timing etc., and also need to deal - when editing stuff - with each note in a different way. So while I'm all for keeping human sloppiness and variations, I certainly don't want to have to deal with my own sloppiness, the players' sloppiness, and sloppy (or absent) manual editing of samples. In other words: I totally agree (with you).

More and more I find myself thinking that I'd rather stick to companies which spend a lot of time getting things right before they release something, and which sort things out ASAP if they find they have released troublesome samples/presets. That's why I stopped buying Spitfire stuff, at least until they get these things together. Mural simply doesn't "work as advertised", and it's too bad they haven't improved this after all these years/with the SSS release(s).


----------



## JohnG (Jul 16, 2017)

Running an entire orchestra from one computer may be possible, but the CPU limits are going to bite at some point. I hear of people running 128 GB on a single computer but I wonder whether they can run everything at once with a dense orchestration using just one machine?

I like flexibility. Compared with the investment of time we put into music, buying an extra slave computer is cheap. For less than $2,000 one can set aside concerns about RAM, and one can also have the ability to add more than one mic position, which can make a meaningful difference in orchestral samples.

That's a lot less than the cost of a single university course. Well worth it.


----------



## Piano Pete (Jul 16, 2017)

JohnG said:


> Running an entire orchestra from one computer may be possible, but the CPU limits are going to bite at some point. I hear of people running 128 GB on a single computer but I wonder whether they can run everything at once with a dense orchestration using just one machine?
> 
> I like flexibility. Compared with the investment of time we put into music, buying an extra slave computer is cheap. For less than $2,000 one can set aside concerns about RAM, and one can also have the ability to add more than one mic position, which can make a meaningful difference in orchestral samples.
> 
> That's a lot less than the cost of a single university course. Well worth it.


I tried using a single slave for everything, and I found it more optimal to just build additional slaves. Each one can hold 128GB, but 64GB is the norm. What I found most efficient was identifying the heavy-hitting libraries and splitting them amongst my entire network; everything else is logically spread out to fill up the memory. Having 128GB in some machines is nice in that I can have things loaded and not think about them; however, what is loaded on those systems is never going to be running all at once. To decrease load times, I modularized my setup so that I can add VEP metaframes quickly when needed.

I do not have a reliable enough backup system to keep my slaves running 24/7, so they stay offline except when I am working. Oddly enough, I have found that keeping my synths on my master computer is more efficient workflow-wise, and I have not been hindered at all. Diva has been playing nice, I guess. (I have been too lazy to set up the automation/MIDI parameters in VEP to host them on a slave computer, although I need to do this so I can automate mic positions more quickly.)


----------



## ctsai89 (Jul 16, 2017)

Vik said:


> More and more I find myself thinking that I'd rather stick to companies which spend a lot of time getting things right before they release something, and which sort things out ASAP if they find they have released troublesome samples/presets. That's why I stopped buying Spitfire stuff, at least until they get these things together. Mural simply doesn't "work as advertised", and it's too bad they haven't improved this after all these years/with the SSS release(s).



You said exactly what I wanted to say about why I regret having spent on Spitfire instead of Berlin. They seem to have spent time getting things right, and better than most, but they didn't put in that extra time to polish and perfect their programming. One of the biggest disappointments was that they messed up the volume levels of some patches, which had me worrying every time I pulled up a patch, "is this at the right volume level?", and losing my initial inspiration.

Anyway, I still like SSS quite a lot. I'm not sure I'd like Berlin's sound as much as Spitfire's, but I'd certainly get much less riff-raff from MIDI mockups with the Berlin stuff, given enough RAM.


----------



## NoamL (Jul 16, 2017)

You better have 80+GB for running Orchestral Tools.

It depends how many microphones you want to load and how your template is set up.

I'm working on a mockup now that has the following loadout -

Cinematic Studio Strings - 16 GB
Berlin Woodwinds - 30-40 GB
Berlin Brass - 50-80 GB

Unsure of the exact totals because I have to load the brass sections one by one on my poor laptop to bounce stems!

Of course, the new Berlin Inspire fits on a laptop and you can load everything inside 8GB. It's also a far more comprehensive set of samples than Albion, and better quality overall than EWQLSO/HWO/Symphobia/TheOrchestra. So if you're starting out, and have RAM concerns, I think Berlin Inspire is really a killer starting library.

The advantage of Berlin Brass: there are so many microphones you can really craft your own sound. Spitfire's microphone set up is very basic. You can adjust the close-tree-ambient mix to create the amount of depth you want (I believe on recent libraries they have just introduced a "distance slider" that automatically remixes the CTA mics), and you can use the close and outrigger microphones to pan things around a bit while keeping the main stereo image from the tree. But you can't change the _character_ of the sound that much: it's gonna sound like AIR. With Berlin Brass, you can get everything from a close mic'd brass band to a scoring stage to a hall sound, with different combos of the six mics.
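As a toy illustration of that close-tree-ambient idea, a "distance" control can be thought of as nothing more than a crossfade across the three mic gains. All numbers here are made up for illustration; real libraries use their own (unpublished) gain curves:

```python
def distance_mix(close, tree, ambient, distance):
    """Blend one audio sample from three mic positions.

    distance in [0, 1]: 0.0 = all close mic (dry), 0.5 = all tree,
    1.0 = all ambient. A simple linear crossfade for illustration only.
    """
    g_close = max(0.0, 1.0 - 2.0 * distance)    # fades out by distance 0.5
    g_ambient = max(0.0, 2.0 * distance - 1.0)  # fades in after 0.5
    g_tree = 1.0 - g_close - g_ambient          # fills the middle
    return g_close * close + g_tree * tree + g_ambient * ambient
```

Pulling such a slider toward 1.0 trades the dry close signal for room sound, which is roughly what a remix-style "distance" control automates for you.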

The disadvantage of Berlin is not the RAM, it's that the workflow is A BEAR. Programming 12 woodwind instruments and 11 brass instruments...


----------



## storyteller (Jul 16, 2017)

There are a number of recent threads where we discussed the details of RAM requirements for different libraries. The summary of those posts is that it boils down to workflow.

*Workflow 1: *To have everything loaded up, ready to play by clicking on your desired track, you will need quite a bit of ram. @NoamL gave a good example in the post above. You can run your template via host and slaves, or on one DAW. Either way, you will also potentially run into CPU issues with the voice count depending on complexity of passages, libraries used, and number of mic positions used.

*Workflow 2: *To have everything loaded up and ready to play, but stored in a disabled template will significantly reduce the ram requirement. In this workflow, you will enable the tracks you want to use, then when finished, freeze/render them (which unloads the ram footprint). The rendering is usually processed really quickly (as an offline process) and only adds a small nuance to the workflow. You also have the benefit of not having as many potential CPU/latency issues (due to rendered audio vs realtime sample playback), and can get by with very minimal ram (probably 16gb minimum, 32gb is terrific here, 64gb is perfect here) but will run into a few additional considerations. These are:

*Consideration 1: *Depending on DAW, your ability to setup a full disabled template may still be track limited (e.g. ProTools). Cubase has been improving here and may not suffer the bugs it had in the previous version (I haven't checked the latest version). Reaper will handle disabled templates flawlessly. Logic, however, is still missing some features that free up ram when tracks are disabled. I'm unsure of the disable capabilities of the latest versions of Sonar and Studio One. I converted over to Reaper last year, largely for this reason.

*Consideration 2: *Depending on your template, you will need to consider how you prefer to enable/disable portions of your template in order to make sure you have enough RAM. For example, I prefer to work in sections. So, I try (when possible) to enable all of the tracks in a particular section (e.g. strings, brass, winds, etc.) when working on those parts. Afterwards, I freeze them and move on to the next section. Depending on how you set up your template, this can still consume a significant amount of RAM, but it is not nearly as daunting as workflow 1.

I'd wager that most people here (including many of the professionals with multi-slave rigs) still use some form of a disabled template to help keep latency low. Hope that helps a bit!


----------



## galactic orange (Jul 16, 2017)

storyteller said:


> *Consideration 1: *Depending on DAW, your ability to setup a full disabled template may still be track limited (e.g. ProTools). Cubase has been improving here and may not suffer the bugs it had in the previous version (I haven't checked the latest version). Reaper will handle disabled templates flawlessly. Logic


Were you going to add more info about a Logic setup here? The downside for Logic users such as myself is that freezing saves CPU but doesn't relieve the RAM load. So if I'm using OT libraries I'm basically SOL until I get a new PC.


----------



## storyteller (Jul 16, 2017)

galactic orange said:


> Were you going to add more info about a Logic setup here? The downside for Logic users such as myself is that freezing saves CPU but doesn't relieve the RAM load. So if I'm using OT libraries I'm basically SOL until I get a new PC.


Whoops! I updated my post. Looks like those last few sentences got deleted when I was formatting the post! Ha. Thanks for catching that.


----------



## Virtual Virgin (Jul 17, 2017)

ctsai89 said:


> But the main problem with not being able to achieve the fully rehearsed professional orchestral sound is that the "Spitfire way" is in conflict with what we need in order to MIDI-program a track into sounding like a professional, fully rehearsed orchestra. Even if I had the legato speed slider, the legato transition lengths seem to all have been recorded differently for each note. Christian Henson believes that a library sounds "great" and "real" because he asks the players to play each note or each transition a little bit differently from each other. So as you can see, it's completely intended.



This baked-in humanization seems like it could go either way depending on what it is that's being written. 
The control freak in me certainly doesn't like the idea of mismatched tails or random articulation performance. 
At the same time, if things get too robotic they tend to betray their synthetic construction. 
I do however tend to think it is easier to work backwards from a clean source to make it dirty, rather than a dirty source to make it clean. 

The type of comments I've seen so far about Spitfire give me the impression of an actual sloppiness in dynamic mapping, editing and looping, not just an aesthetic choice.

I have noticed that the demos I've found on the Spitfire Symphony Orchestra are a bit broad and the material chosen does not showcase precision. I haven't found Spitfire demos for classical mockups. I'd love to hear some Holst or Dvorak.


----------



## Virtual Virgin (Jul 17, 2017)

NoamL said:


> With Berlin Brass, you can get everything from a close mic'd brass band to a scoring stage to a hall sound, with different combos of the six mics.



Have you programmed small chamber music with Berlin? How does it fare (both in using solo articulations and for a more intimate mic mix)?


----------



## ctsai89 (Jul 17, 2017)

Virtual Virgin said:


> This baked-in humanization seems like it could go either way depending on what it is that's being written.
> The control freak in me certainly doesn't like the idea of mismatched tails or random articulation performance.
> At the same time, if things get too robotic they tend to betray their synthetic construction.
> I do however tend to think it is easier to work backwards from a clean source to make it dirty, rather than a dirty source to make it clean.
> ...



Yes indeed, there's a lot of truth in what you said.

I'll share my Holst mockups using the Spitfire orchestra, though I swapped in Chris Hein's solo trumpets, and sometimes trombones and horns as well. I also used Albion ONE for the fast staccatos in the beginning (which sounded quite thin compared to SSS).

These are mockups I did a few months back, so I haven't posted the final versions yet; the solo horns are currently too loud in the slow section, and there's too much reverb on the trumpets, I believe. I also did not use bowed legato where needed.






Using only the tree mics, I was using up about 25GB in Logic alone (I have 32GB RAM total), and the rest goes to Google Chrome and other apps.


----------



## MatFluor (Jul 17, 2017)

ctsai89 said:


> Yes indeed very much lots of truth in what you said.
> 
> I'll share you my Holst's mockups using Spitfire Orchestra but I swapped solo trumpets for Chris Hein's solo trumpets and sometimes trombones and horns as well. I also used Albion ONE for the fast staccatos in the beginning (which sounded quite thin compared to SSS)
> 
> ...




I would say that a lot of massaging is needed - it sounds like you pulled in a MIDI file. If you hadn't written that it's Spitfire, I would not have guessed it - my orchestra is full Spitfire and it sounds very different from this.

On the note of "clean vs. dirty as the starting point" - I like the Spitfire approach. Of course, you get it the way the Spitfire team thought was good - fortunately, it's also the way I would've done it (if I had more experience it might be different). I like the sound the way it is, and I don't think it's sloppy at all - I don't know SCS for that matter, but it also depends on what is meant by "precision". You can achieve great things with e.g. VSL, you just have to invest time in it. Spitfire and OT have different philosophies, I believe - Spitfire built for when deadlines are approaching, OT for when you're in the middle of them. In that sense, when you have time to work on a piece, OT or VSL would probably serve you better; if you need to deliver in a certain timeframe, I would go for Spitfire.

Yes, I'm not a fanboy, but I am very happy with it. I'll see if I have time to make a classical mockup (a few bars, that is) to show off my template and full-Spitfire orchestra, since I really don't think the examples above represent how the Spitfire Symphony Orchestra sounds.


----------



## ctsai89 (Jul 17, 2017)

MatFluor said:


> I would say that a lot of massaging is needed - It sounds like you pulled in a MIDI file. If you hadn't written it's Spitfire I would not have guessed it - My Orchestra is full Spitfire and it sounds very different to this.
> 
> On the note of "clean vs. dirty as the starting point" - I like the Spitfire approach, of course, you get it the way the Spitfire Team thought it was good - fortunately it's also the way I would've done it (if I had more experience it might be different. I like the sound the way it is, and I don't think it's sloppy at all - I don't know SCS for that matter, but it also depends what is meant by "precision". You can achieve great things with e.g. VSL, you just have to invest time to it. Spitfire and OT have different philosophies I believe, Spitfire on the side of "deadlines approaching" and OT in the middle of them. In that sense - When you have time to work on it, Spitfire would be my choice, but I guess OT or VSL would be far better. If you need to deliver in a certain timeframe, I would go for Spitfire.
> 
> Yes, I'm not a fanboy, but I am very happy with it. I'll see if I have time to make a classical mockup (a few bars that is) to show off my template and full-spitfire orchestra. Since I really think these examples above are not how the Spitfire Symphony Orchestra sounds



I played everything in with a keyboard and didn't do much MIDI editing except drawing in CC#1. Thanks! What library would you have guessed I was using if I hadn't said it was mostly Spitfire with Chris Hein brass? Don't tell me it sounded like Sibelius/Finale playback... or I'll


----------



## MatFluor (Jul 17, 2017)

ctsai89 said:


> I played everything in with a keyboard, didn't do much dragging the midi editting except I drew the CC#1 in. Thanks! What library would you have guessed that I was using if I didn't say I used mostly SPitfire with Chris Hein Brass? Don't tell me it sounded like sibelius/finale play back.. or I'll



Honestly, it really sounded a bit mechanical 

I'll try the first few measures of Mahler's 6th since I have the score at hand atm.


----------



## ctsai89 (Jul 17, 2017)

MatFluor said:


> Honestly, it really sounded a bit mechanical
> 
> I'll try the beginning few measures of Mahlers 6th since I have the score at hand atm.



yes please post when you're done  can't wait


----------



## ctsai89 (Jul 17, 2017)

@MatFluor Does this one also sound mechanical to you? I used all 3 mics in this one.


----------



## TintoL (Jul 17, 2017)

I don't know if you guys ever heard or saw this. This video created a big hype on the forum when it came out; it was actually on the main page of Spitfire Audio's blog. The video has the audio matched to a real concert video of Holst's Mars.

It was all done with Spitfire's tree mics. OF COURSE I DIDN'T DO THIS. NO WAY I CAN DO THIS.

It's Carles Piles' awesome work. So yes, there are classical pieces done with Spitfire.


----------



## ctsai89 (Jul 17, 2017)

TintoL said:


> I don't know if you guys ever heard or saw this. This video made a big hype in the forum when it came out.
> It was actually in the main page of the spitfire audio's blog page.
> The video has the audio matched to a real concert video of Holst's Mars.
> 
> ...




Yeah, I saw this video about a year ago and it got me obsessed with Spitfire.

Carles also did a mockup of Jupiter with EW Hollywood; I didn't like that one very much.

But this Mars with Spitfire - pretty good.


----------



## MatFluor (Jul 17, 2017)

ctsai89 said:


> @MatFluor Does this one also sound mechanical to you? I used all 3 mics in this one.




This sounds better - but I think it's less the mics (as shown two posts above) and more the programming side - quantization etc.


----------



## TintoL (Jul 17, 2017)

ctsai89 said:


> Yea I saw this video about a year ago and it got me obsessed with SPitfire.
> 
> Carles also did a mockup with EW hollywood on Jupiter, I didn't like that one very much.
> 
> But this Mars with Spitfire, pretty good.


Yeah, haha, this video got me obsessed with it too, which is one of the main reasons I chose Spitfire. Your versions are quite good as well. It's so much work to play in each part for such a demanding piece. IMHO they sound probably a bit too wet; I don't think they sound mechanical, but there is some expression missing from the dynamics.

I remember Carles and Blakus saying that a ton of the realism in a performance is in the mod wheel. I am still trying to learn it, so don't take my opinion as a professional one.


I really don't know how the hell Carles was able to achieve such clarity in that piece with just the tree mic. Why didn't he do a screencast? He should. Haha...


----------



## ctsai89 (Jul 17, 2017)

MatFluor said:


> This sounds better - but I think it's less the mics (as shown 2 posts above ) but more the programming side - quantization etc.



thanks! haha I did that one much later than I did the Jupiter. Yes I had gained experience and dragged the midis to better spots on this one. And this was overall a lot less to deal with because it's only strings.


----------



## TintoL (Jul 17, 2017)

thereus said:


> Incredible mockup of Mars. I can't watch it with the video, but the sound is fantastic if I look away


Watching it makes me feel how mortal I am. Crap. And he is a super mega accomplished CG artist working at Weta Workshop.

It is quite impressive. I am not even sure the Berlin series has something like this to show - I've never seen it on the forum. It would be good to know if someone has done it.


----------



## ctsai89 (Jul 17, 2017)

TintoL said:


> Watching it makes me feel how mortal I am. Crap. And he is a supper mega accomplish CG artist working in weta workshop.
> 
> It is quite impressive. I am not even sure if the Berlin series have something like this to show. I've never seen it in the forum. It will be good to know if someone has done it





Ok, so after listening to that YouTube video again, I have finally remembered why I chose Spitfire over Berlin initially... because Berlin hadn't released their brass yet at the time, and I was craving a full orchestra at my fingertips... oh, and I didn't want to spend too much on RAM and more CPU cores either.

The strings in this, though, are very well done imho. I just didn't like how he raised the woodwinds' volume when they had the melody; I would've kept it naturally balanced and only used CC#1.


----------



## TintoL (Jul 17, 2017)

ctsai89 said:


> Ok so after listening to that youtube video again, I have finally remembered why I chose Spitfire over Berlin initially.... because Berlin hadn't have had their brass released at the time I was craving a full orchestra at my finger tip...... oh and I dind't want to spend too much on RAM and more cores of CPU either.
> 
> The strings in this though, very well done imho. I just didn't like how he raised woodwind's volume when they had the melody parts. I would've kept it naturally balanced as it is and only used cc#1.



Wao, I hadn't seen this before. It's quite impressive. Nevertheless, the sound of the two is very comparable.

I still like the results of that Spitfire sound a bit more. But of course, those are completely different pieces.


----------



## Piano Pete (Jul 17, 2017)

ctsai89 said:


> @MatFluor Does this one also sound mechanical to you? I used all 3 mics in this one.




Nice job. Two things stood out to me that may help you down the road:

1) Quantize seems to be hit a little hard. If you must quantize, try letting some doubled instruments play a little early or late. You can do this either by not hitting the button (whilst quantizing other instruments), by adjusting the randomization/quantize strength percentage (depending on the DAW), or by manually bumping things around. The same can be said for doubled instruments' CC data. -- Edit -- I forgot to add this, but some samples need to be started earlier than where the beat actually occurs. This is no different than asking a French horn player to play a little before the beat due to the length of pipe and the nature of the instrument.

2) More shaping could be done to give the notes some more life. The celli and bass in your Tchaikovsky stood out to me instantly. Their articulations seem very jagged and straight. Even with the most furious marcato, due to the nature of using a bow, there are going to be nuanced cresc. or dim per note especially at bow releases or when it would otherwise stop (staying on the string or changing direction). (For moments that you would have a really jagged stopped noise, some tailored reverb helps adding the tails created by a room; otherwise, it will get synthy very quickly). My suggestion would be to try layering articulations, adjusting note lengths, and tailoring volume/expression changes per note. Once you do that, you can even try changing them up and then ordering them according to a musical phrase. Add direction whenever possible! For my mockups, I usually spend more time on string programming than anything else. It takes a bit more time, but it is definitely worth it. You usually start doing similar things for specific sounds, and once you start figuring that out, the process speeds up. The note fidgeting is library specific, as they all have different commands.

I do not know what musical instruments you have available to you physically, but if you get the chance, try listening to someone play or find a chamber group to listen to. Even now, when working on string parts, I grab my violin to listen to what I am doing, feel what I am doing, and try to figure out what parameters I need to adjust to match that in a DAW.


----------



## ctsai89 (Jul 17, 2017)

TintoL said:


> Wao, I haven't seen this before. It's quite impressive. Nevertheless the sound of both are very comparable.
> 
> I still like the results of that spitfire sound a bit more. But, of course those are completely different pieces.



Haha, I'm surprised you didn't see that one. I was hunting for realistic VI mockups last year and also found this one:


@Piano Pete Cool that you play the violin. I play the cello, and yeah, Spitfire's performance legato patch is the closest to making me feel like I'm actually playing a string instrument when I use it with a keyboard.

One of the reasons I got into the whole midi mockup business was because I wanted to make my own midi mockups for cello concertos with myself actually playing the cello accompanied by realistic sounding VI's.


----------



## Virtual Virgin (Jul 17, 2017)

TintoL said:


> I don't know if you guys ever heard or saw this. This video made a big hype in the forum when it came out.
> It was actually in the main page of the spitfire audio's blog page.
> The video has the audio matching a real concert video of Holst's Mars.
> 
> ...




Quite impressive, especially the ending. A couple spots give it away for me: @2:29 with the flute/violin runs, and @0:00 coming from niente. The noise floor seems too low for the dynamic that we are supposed to be hearing.

What library are the drums from? I don't see an orchestral percussion library in with the Spitfire Symphony Orchestra.


----------



## Virtual Virgin (Jul 17, 2017)

ctsai89 said:


> I'll share you my Holst's mockups using Spitfire Orchestra but I swapped solo trumpets for Chris Hein's solo trumpets and sometimes trombones and horns as well. I also used Albion ONE for the fast staccatos in the beginning (which sounded quite thin compared to SSS)
> 
> This is the mockups I did few months back so I haven't posted the final versions of them yet, solo horns are too loud as of now for the slow section, and too much reverb on the trumpets I believe. I also did not use bowed legato whenever needed.
> 
> ...




I like the first one here best, and I like the dynamics you pulled in the repetitions (which have to be metered so as not to lose the effect of the intensity).

The introduction, however, does have that mechanical sound to it. The phased 3-note grouping does seem like a mechanistic device in the first place, but with the strings hitting here it does sound quantized, and I think it could use some offsetting to make it flow better. You picked a tough one, though! The excitement of that syncopated line in the brass with timpani is hard to capture. I still can't perform that theme and its subsequent permutations properly.


----------



## TintoL (Jul 17, 2017)

Virtual Virgin said:


> Quite impressive, especially the ending. A couple spots give it away for me: @2:29 with the flute/violin runs, and @0:00 coming from niente. The noise floor seems too low for the dynamic that we are supposed to be hearing.
> 
> What library are the drums from? I don't see an orchestral percussion library in with the Spitfire Symphony Orchestra.




If I remember correctly, he used Spitfire's redux percussion. By the way, he used a MIDI file, if I remember correctly, and literally drew every automation curve. Awesome work.

I don't hear the moments you mention. I guess my ear is not that trained.

By the way, what do you mean by the noise floor, if I may ask?

Thanks in advance.


----------



## TintoL (Jul 17, 2017)

ctsai89 said:


> haha I'm surprised you didn't see that one. I was mass looking for realistic VI mockups last year and I have also found this one:
> 
> 
> @Piano Pete Cool you play the violin. I play the cello and yea Spitfire's performance legato patch is the closest to what's able to make me feel like I am actually play a string instrument when I'm using the patch with a keyboard.
> ...



Wao, awesome. Quite moving. Thanks for sharing.


----------



## Virtual Virgin (Jul 17, 2017)

TintoL said:


> If i remember correctly he used spitfire redux percussion. By the way, he used a midi file if i remember correctly and literally drew every curve for automation. Awesome work he did.
> 
> Those moments you mention i don't hear them. I guess my ear is not that trained.
> 
> ...



You can usually hear some "air" just before the orchestra starts, like a breath, some light noise.


----------



## Virtual Virgin (Jul 17, 2017)

TintoL said:


> By the way, what do you mean with the noise floor, if i may ask?
> 
> Thanks in advance.



"Noise floor" is the technical term for the sum of all noise (unwanted sound, to engineers). This includes self-noise in the signal path made by the electronics themselves (mics, cables, ground hum, preamps, tape hiss, etc.) as well as room noise.


----------



## X-Bassist (Jul 17, 2017)

galactic orange said:


> Were you going to add more info about a Logic setup here? The downside for Logic users such as myself is that freezing saves CPU but doesn't relieve the RAM load. So if I'm using OT libraries I'm basically SOL until I get a new PC.



Actually, GO, you're not SOL. Have you considered hosting the instruments in VE Pro inside Logic? You can disable the channels after you freeze a track and it will free up RAM, all on one machine. On my system it also allowed me many more instruments before filling up RAM, since VE Pro uses RAM and cores more efficiently. For about $200 it really helped my system, and now that I'm finally adding a slave it's an easier expansion process, adding to my host machine.


----------



## galactic orange (Jul 17, 2017)

X-Bassist said:


> Actually GO your not SOL. Have you considered hosting the instruments in VE Pro? You can disable the channels after you freeze a track and it will free up ram, all working within logic on one machine. On my system it also allowed me many more instruments before filling up ram, since ve pro uses resource ram and cores more efficently. For about $200 it really helped my system, and now that I'm finally adding a slave it's an easier expansion process, adding to my host machine.


Thanks. Yeah, I'm considering going the VE Pro + slave route since it's the cheapest way for me to expand my setup and keep Logic. I'm still on the fence about whether to do that or stay all Mac with a new system loaded with RAM. Either way, I'll look into VE Pro for use even on one system. Some accounts from VE Pro users claim that getting everything set up and working is a headache. I'll have to look into it more to see whether that will be true for me too.


----------



## Piano Pete (Jul 17, 2017)

galactic orange said:


> Thanks. Yeah, I'm considering going the VE Pro + slave route since it's the cheapest way for me to expand my setup and keep Logic. I'm still on the fence about whether to do that or stay all Mac with a new system loaded with RAM. But either way, I'll look into VE Pro for use even on one system. Some accounts from VE Pro users claim that getting everything setup and working is a headache. I'll have to look into it more to see if that will be true for me too or not.


It took a bit for me to iron out some kinks, but there are plenty of resources available to get you going. Even without using any slaves, I found performance improved in Logic, Cubase, and Pro Tools when using it on my PCs and Mac.


----------



## X-Bassist (Jul 17, 2017)

galactic orange said:


> Thanks. Yeah, I'm considering going the VE Pro + slave route since it's the cheapest way for me to expand my setup and keep Logic. I'm still on the fence about whether to do that or stay all Mac with a new system loaded with RAM. But either way, I'll look into VE Pro for use even on one system. Some accounts from VE Pro users claim that getting everything setup and working is a headache. I'll have to look into it more to see if that will be true for me too or not.



If you have a large template already set up, it can take some time, but multis in Kontakt can be saved and opened in VE Pro. I also took the time to switch everything to Komplete Kontrol. It has made all my instruments very efficient, and I'm able to purge (unload) instruments that were unpurgeable before, like Play, Omnisphere, or the Best Service Engine. Probably the best performance increase for the price.


----------



## Piano Pete (Jul 17, 2017)

X-Bassist said:


> If you have a large templete already set up it can take some time, but multis in Kontakt can be saved and opened in VE Pro. I also took the time to switch everything into a komplete kontrol. It has made all my intruments very efficient and I'm able to purge (unload) instruments that were unpurgable before like play, omnisphere, or best service engine. Probably the best performance increase for the price.



I would recommend making a blank, pre-routed metaframe template so that it is easy to fill out and add things as needed down the line. I also figured out the maximum number of outputs/patches I can load per library per metaframe. It has helped.


----------



## NameOfBand (Jul 24, 2017)

Hey all, I looked through this thread and can't seem to find an answer: how much RAM is required to run the Berlin orchestra? Will a 128GB machine be able to handle it? I did some research and it seems it might just work, but it seems ridiculous that less than 500GB of samples (albeit compressed) requires 128GB of RAM. I tried to work out a ratio for how much RAM one needs per GB of samples. Memory-wise, for the OT orchestral libraries it seems to be around 10-20% of the compressed sample size (at a 6kB buffer in Kontakt). So 100GB of samples needs roughly 10-20GB of sample memory at 6kB; the percentage depends on the kind of instrument, of course. Add VEP/DAW/other programs, Kontakt instances, and Capsule instances (!), and it all adds up to a lot! I'm thinking of buying a slave, but is it worth it just to run the Berlin orchestra on it (if I can even fit all of it in RAM)?
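That back-of-the-envelope math can be written out explicitly. A rough sketch, using the 10-20% preload ratio observed above plus guessed overheads for the sampler engine, DAW, and OS (all figures are ballpark assumptions, not measurements of any specific library):

```python
# Rough RAM estimator for a streaming sample library template.
# The 10-20% preload ratio is one user's observation for the OT
# libraries at a 6 kB Kontakt buffer; treat everything as a ballpark.

def estimate_template_ram_gb(compressed_samples_gb,
                             preload_ratio=0.15,     # observed 0.10-0.20
                             engine_overhead_gb=2.0, # Kontakt/Capsule instances (guess)
                             daw_os_gb=8.0):         # DAW + OS headroom (guess)
    """Return an estimated resident RAM figure in GB."""
    return compressed_samples_gb * preload_ratio + engine_overhead_gb + daw_os_gb

# e.g. a 500 GB install at the midpoint ratio:
print(round(estimate_template_ram_gb(500), 1))  # 85.0
```

At the 20% end of the range the same 500GB install lands around 110GB, which is why a 128GB machine "might just work" but leaves little headroom.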


----------



## Virtual Virgin (Jul 25, 2017)

I'd also like to see some comments on Cinesamples for RAM usage. 
Does anyone have an estimate on loading the entire Cinesymphony bundle?


----------



## Sami (Jul 25, 2017)

One thing I don't completely get: if you have enough SSDs, is there still eventually going to be a point where you run out of RAM? I.e., is it the size of the template that causes the RAM issue, because eventually you load so many instruments that the RAM overflows?


----------



## TintoL (Jul 26, 2017)

Virtual Virgin said:


> And "noise floor" is the technical term for the sum of all noises (unwanted sound to engineers). So this includes self-noise in the signal path made by the electronics themselves (mics, cables, ground hum, pre-amps, tape hiss etc.) as well as room noise.


Thanks so much for the answer. It would be great to add noise samples like this to a piece to increase realism.


----------



## storyteller (Jul 26, 2017)

Sami said:


> One thing i don't completely get: if you have enough ssds, is there still eventually going to be a point where you run out of ram? I.e. Is it the size of the template that is causing the ram issue because eventually you load so many instruments that the ram overflows?


The way every sampler engine presently works, a small portion of each instrument must be preloaded into RAM; the remainder is streamed from the SSD through the RAM buffer as it is played in realtime. So, in short, no matter how fast the drive is, RAM must be used with the present incarnation of samplers (e.g. Kontakt, UVI, etc.). The hope is that in the future, samplers will stream directly from the fastest SSDs available instead of buffering a portion of every sample into RAM.

So, as a real-world example: if you have an instrument that takes up several gigs of drive space due to articulations, round robins, velocity layers, etc., the sampler must load the basic components of each instrument (in Kontakt this is the scripting, images, knob graphics, etc.) in addition to a small block of each sample it expects to play while the instrument is open. Even in a fully purged state, this is the minimum loaded into RAM. In the Berlin Series by Orchestral Tools, the woodwinds will consume somewhere in the neighborhood of 12-13 GB (BWW main + Exp A) with every articulation loaded as a separate NKI, fully purged. When you play a note, additional RAM is used beyond this baseline.
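The purged baseline described above can be sketched numerically: each sample keeps a fixed preload chunk resident regardless of its full length, so the purged footprint scales with the sample count, not the library's size on disk. The 40,000-sample count below is hypothetical, not a BWW spec:

```python
# Purged-baseline sketch: preload chunk size x number of samples.
# The sample count is hypothetical; the 60 kB figure is Kontakt's
# default preload buffer mentioned earlier in the thread.

def preload_mb(num_samples, buffer_kb=60):
    """Resident preload RAM in MB for a given Kontakt buffer size."""
    return num_samples * buffer_kb / 1024

print(f"{preload_mb(40_000):.0f} MB at the 60 kB default")  # 2344 MB
print(f"{preload_mb(40_000, 6):.0f} MB at a 6 kB buffer")   # 234 MB
```

This is also why lowering the preload buffer shrinks a template so dramatically: the resident footprint drops in direct proportion, at the cost of heavier realtime disk streaming.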

Also, don't forget your OS RAM usage. OS usage scales with the RAM available, so on a 32 GB machine your OS could consume between 5 GB and 9 GB. Your DAW also consumes RAM: ProTools consumes the most among DAWs, Reaper the least. ProTools (depending on its mood) could consume up to 4.5 GB in a blank template; Reaper, about 300 KB. Big difference. Every other DAW is somewhere in between. This leads to the choice between disabled templates and having everything loaded on a slave....
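Putting those overheads together, a whole-machine budget for a 32 GB system might look like the sketch below. Every figure is illustrative, chosen only to show how quickly the headroom disappears:

```python
# Whole-machine RAM budget sketch for a 32 GB system, using the
# kinds of figures discussed above. All numbers are illustrative.

budget_gb = {
    "OS": 7.0,                 # scales with installed RAM
    "DAW": 2.0,                # somewhere between Reaper and ProTools
    "samples (purged)": 13.0,  # e.g. BWW main + Exp A, fully purged
    "playback headroom": 5.0,  # extra RAM consumed once notes play
}
total = sum(budget_gb.values())
for item, gb in budget_gb.items():
    print(f"{item:>18}: {gb:4.1f} GB")
print(f"{'total':>18}: {total:4.1f} GB of 32 GB")
```

Even with only one orchestral section purged and idle, the hypothetical machine is already near its ceiling, which is the case for disabled templates or a slave.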

In that scenario, a slave (or even your main DAW machine with enough RAM) can have everything preloaded and active, which will use a very sizable chunk of RAM depending on your instruments. In a disabled template, you can right-click a track to activate it when needed and deactivate or freeze it when it is not. When a track is loaded but disabled, its RAM is released entirely, though the DAW may use a small amount of memory to remember that the track is empty. Reaper does this extremely well. I think Cubase has now worked out the bugs in this functionality. Logic does not empty the RAM. ProTools purges the RAM but is buggy about the process, though there are workarounds. I'm unsure how DP, Studio One, or Sonar handle this functionality these days.

Before loading up on RAM in hopes of solving a template problem, you may want to explore your bottlenecks more thoroughly. Even in a huge slave setup, it can still require freezing and disabling tracks for it to run at a low enough latency for realtime composition. Hope this helps!


----------



## Count_Fuzzball (Jul 26, 2017)

storyteller said:


> Even in a huge slave setup, it can still require freezing and disabling tracks for it to run at a low enough latency for realtime composition. Hope this helps!



Legit question, and I'm probably missing something, but if you have a bunch of slave machines, loaded to the gills with pre-loaded samples in RAM, where does the additional latency come in?
As you said, the biggest bottleneck there would be the SSDs not being able to stream data fast enough, but that would require midi being sent to trigger the sampler to stream from hdd.

I fail to see how having a bunch of samples in RAM can contribute to RT latency? :o


----------



## storyteller (Jul 26, 2017)

Count_Fuzzball said:


> As you said, the biggest bottleneck there would be the SSDs not being able to stream data fast enough, but that would require midi being sent to trigger the sampler to stream from hdd.



This is not what I said at all (assuming you are referring to my last post), but I do want to clarify, because I can see how it could have been interpreted that way. In my post I was talking about why samplers require RAM, and why even the fastest SSDs available today don't eliminate that requirement. In theory there is still a speed difference between RAM and the fastest SSDs, but it is minimal. In many ways Kontakt already sort of bypasses RAM when you purge a template, but not completely: it still loads and plays back samples in realtime. For every note played, it resolves where the sample is stored and then streams it back, and for efficiency it "remembers" each sample played by keeping a portion of it in RAM for subsequent requests. So it is really a question of how many of these lookups Kontakt is being asked to service simultaneously.

That said, most newer SSDs can stream everything back adequately at Kontakt's lower buffer settings as long as the track count is not too complex. However, SSDs can become a bottleneck depending on how demanding your composition is in IOPS per disk, or, more likely, on how efficiently Kontakt can service sample-playback requests from the disk in realtime. If you are running numerous tracks of one VI (e.g. Cinemorphx or something similar in Kontakt) alone on one drive, you will certainly have to increase your buffer at some point to compensate for dropouts from samples playing back off a single SSD. Even seemingly simple VIs like eDNA Earth require Kontakt's buffer to be increased on the more CPU-intensive patches, especially as track counts creep up, and when that happens it increases the buffer across all Kontakt instances. I'd argue that is buggy programming in eDNA, but it is an issue in numerous Kontakt instruments. To mitigate IOPS issues, you can spread your sections out across SSDs, or architect your disk layout around your own workflow. I personally run four SSDs with the orchestral sections divided across those disks over Thunderbolt 2.
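For a sense of scale on the streaming side, here is a rough throughput estimate, assuming uncompressed 24-bit/48 kHz stereo samples (real libraries use lossless compression, so actual figures are lower). As noted above, random IOPS rather than sequential MB/s is usually the real SSD limit:

```python
# Sequential-throughput sketch for streaming voices from one SSD.
# Assumes uncompressed 24-bit / 48 kHz stereo samples; this is an
# upper bound, since libraries ship losslessly compressed.

def stream_mb_s(voices, sample_rate=48_000, bytes_per_sample=3, channels=2):
    """Aggregate streaming bandwidth in MB/s for a given voice count."""
    return voices * sample_rate * bytes_per_sample * channels / 1e6

print(f"{stream_mb_s(500):.0f} MB/s for 500 voices")    # 144 MB/s
print(f"{stream_mb_s(2000):.0f} MB/s for 2000 voices")  # 576 MB/s
```

Even 2,000 voices stay within a single SATA SSD's sequential rating, which is why the practical failures show up as random-access dropouts at low buffers rather than raw bandwidth exhaustion.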



Count_Fuzzball said:


> Legit question, and I'm probably missing something, but if you have a bunch of slave machines, loaded to the gills with pre-loaded samples in RAM, where does the additional latency come in?...I fail to see how having a bunch of samples in RAM can contribute to RT latency? :o



It really depends on your setup. CPU becomes the bottleneck at some point, and that will affect latency. I'm sure there is a variety of setups among composers here who will be happy to share their success/challenge stories. For example, if you are using VEPro, you will be sending MIDI out over LAN and likely routing each track's audio back into your DAW. VEPro introduces a small amount of latency in each direction, since MIDI goes out over the LAN and audio must be returned, but it is the most usable option for composers with slave setups.

However, at high track counts your DAW still has a limit on the number of simultaneous audio tracks it can keep open and streaming. Using Reaper as an example: if a track is muted, Reaper removes part of the CPU overhead of having "potential audio" flow through it. Most DAWs do not have that feature and will consume CPU cycles whenever audio is routed back in. So many composers have to choose how to route audio back. Does it travel back through a single stereo channel from VEPro and function like a typical Kontakt instance? Does your VEPro setup submix to reduce the number of audio streams before they return to the DAW (e.g. sending only a stereo strings bus back instead of V1, V2, Viola, Cello, Bass, etc.)? Does your slave output to another destination entirely (e.g. a ProTools stem rig for mixing)? What about surround tracks? Do you want realtime effects on playback? Does your DAW also handle playback of your video? Yes? That eats up CPU cycles. No? You'll need to add another PC to your rig. These are among the questions to ask when planning the setup.

So really, you can have a great rig without introducing crazy latency, but you have to genuinely understand your routing and make peace with the decisions that keep that latency low. As I'm sure you know, lower latency means higher CPU usage. Offloading your instruments will definitely reduce the CPU overhead during playback on your DAW machine, but if you plan on having a template of thousands of tracks live and active in a DAW with the audio routed back in per track, you will run into the issues discussed above. Hope that helps clarify.
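The VEPro round trip can be sketched as a buffer calculation: the plugin adds a user-configurable number of extra audio buffers on top of the interface's own buffer, and each buffer costs buffer_size / sample_rate seconds. The buffer counts below are illustrative:

```python
# Round-trip latency sketch for a DAW + VEPro slave setup. The
# extra-buffer counts are illustrative; VEPro exposes this as a
# per-instance setting.

def latency_ms(buffer_samples, sample_rate=48_000, vep_extra_buffers=2):
    """Approximate added latency in ms: interface buffer + VEP buffers."""
    one_buffer_ms = buffer_samples / sample_rate * 1000
    return one_buffer_ms * (1 + vep_extra_buffers)

print(f"{latency_ms(256):.1f} ms")                        # 16.0 ms
print(f"{latency_ms(128, vep_extra_buffers=1):.1f} ms")   # 5.3 ms
```

This is the trade-off described above in miniature: dropping the buffers lowers latency but forces the CPU to service audio callbacks more often.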


----------



## Sami (Jul 27, 2017)

Does anyone have experience running a full symphony orchestra from CineSamples or Spitfire (that'd be CineStrings/Brass/Woods/Perc or Spitfire Strings/Brass/Woods/Perc respectively)? How much RAM would a full set of articulations and mics use?


----------



## Piano Pete (Aug 8, 2017)

Count_Fuzzball said:


> Legit question, and I'm probably missing something, but if you have a bunch of slave machines, loaded to the gills with pre-loaded samples in RAM, where does the additional latency come in?
> As you said, the biggest bottleneck there would be the SSDs not being able to stream data fast enough, but that would require midi being sent to trigger the sampler to stream from hdd.
> 
> I fail to see how having a bunch of samples in RAM can contribute to RT latency? :o


Storyteller kind of hit the nail on the head here, but the other large factor can be the network connecting everything together. Cables and routers can cause issues, although that's very unlikely; the biggest pain in the rear for me tends to be the network drivers/ports within the computers themselves. That is why, whenever I work on networking, I always make sure to have some incense burning, just in case.


----------



## Piano Pete (Aug 8, 2017)

NameOfBand said:


> Hey all, I've looked through this thread and can't find an answer: how much RAM is required to run Berlin Orchestra? Will a 128 GB machine be able to handle it? From my research it seems it might just work, but it seems ridiculous that less than 500 GB of samples (albeit compressed) would require 128 GB of RAM.
> 
> I tried to work out a ratio of RAM needed per GB of samples. For the OT orchestral libraries it seems to be around 10-20% of the compressed sample size (at a 6 kB preload buffer in Kontakt), so 100 GB of samples needs roughly 10-20 GB of sample memory. The percentage depends on the kind of instrument, of course. Add VEP, the DAW, other programs, Kontakt instances, and Capsule instances (!), and it all adds up to a lot!
> 
> I'm thinking of buying a slave, but is it worth it just to run Berlin Orchestra on it (assuming it all even fits in RAM)?



I cannot remember if I had posted this earlier or not, so I will add this.

This has been stated a lot, but RAM is not the only factor to take into account. For the sake of argument: if you load the entire OT library, or really any high-end sample library, and have it blasting away on a single computer, you are most likely going to hit a bottleneck somewhere it really counts: your CPU. For the price of building one killer machine with 128 GB of RAM, you are most likely better off, at about the same cost, building two separate slaves totaling 128 GB (you can also build them so each can later be upgraded to 128 GB, or have just one capable of holding 128 GB to save some cash). That gives you two additional CPUs to spread the strain across. In this scenario, what I have found great results with is dividing your libraries/samples based on use and impact, which can prevent issues down the road. Strings and brass are the things I prefer to separate across CPUs from the start: they are used a lot and can be extremely taxing with extensive use. Divided this way, they each take up the bulk of their own CPU without stepping on each other's toes.

If you are dead set on a single whammy computer, which is perfectly fine, loading every instrument with every mic position most likely will not work. Again, I do not know the exact numbers off the top of my head, and I am currently too strapped for time to do that number crunching; I believe someone previously posted Spitfire's and OT's impact on their setup. If you load the majority of your go-to instruments/patches, a single 128 GB machine will most likely serve you very well (with a decent CPU). I highly doubt you are going to have 128 GB worth of instruments slamming away at your CPU for extended periods of time, so with that kind of use 128 GB is most likely going to be very comfortable. Even more likely, you are probably not going to use every single articulation for each instrument: you will find the set you use constantly and load situational ones when needed. In that case you have plenty of growing room with a single 128 GB computer. Loading the entire library, mic positions and all, and having everything going at once will most likely cause issues at the CPU even if you could load it all. And at that point, how helpful is it to have everything loaded?

I am a big advocate for people having the tools they need to work efficiently. I'll admit that in my template I have every orchestral instrument I could want at my fingertips; however, I do not have every articulation for those instruments loaded at the start! I find I can do most things with at most six articulations per instrument (not counting divisi or doubling patches), and if I need a specific articulation, I know where to find it. This is where a modular setup helps you quickly add things when needed! I also do not keep every single ethnic instrument loaded from the get-go, but I do have presets set up to quickly add them to a given project. This saves room for additional mic positions.

Finally, I feel it should be said, although it has been heavily suggested here and elsewhere, that you may not want to sacrifice the quality of a slave's CPU just to afford the RAM that squeezes an additional 64 GB into a single box. It would be horrible to have a computer that can load all of these samples but only play back a fraction of them. I am of the opinion that you are better off purchasing two slave computers, and it is okay if one has a beefier CPU than the other. I would rather have two CPUs and two sets of 64 GB than a single CPU and 128 GB.


--Edit---

Heck, even a single decent slave with 64 GB of RAM is going to be fine for most situations. You can keep all of your meat-and-potatoes stuff loaded.


----------



## Virtual Virgin (Aug 8, 2017)

Piano Pete said:


> I cannot remember if I had posted this earlier or not, so I will add this.
> 
> This has been stated a lot, but RAM is not the only factor to take into account. For the sake of argument: if you load the entire OT library, or really any high-end sample library, and have it blasting away on a single computer, you are most likely going to hit a bottleneck somewhere it really counts: your CPU. For the price of building one killer machine with 128 GB of RAM, you are most likely better off, at about the same cost, building two separate slaves totaling 128 GB (you can also build them so each can later be upgraded to 128 GB, or have just one capable of holding 128 GB to save some cash). That gives you two additional CPUs to spread the strain across. In this scenario, what I have found great results with is dividing your libraries/samples based on use and impact, which can prevent issues down the road. Strings and brass are the things I prefer to separate across CPUs from the start: they are used a lot and can be extremely taxing with extensive use. Divided this way, they each take up the bulk of their own CPU without stepping on each other's toes.
> 
> ...



How do you have your SSDs configured? Are you using any M.2 PCIe drives?


----------



## Piano Pete (Aug 8, 2017)

Virtual Virgin said:


> How do you have your SSDs configured? Are you using any M.2 PCIe drives?



It is actually pretty funny: in all of my computers I just use 850 EVO Pros... shocker. They are fairly inexpensive per gig, and I haven't had any funny business with them, always a plus. I bought them in bulk, so whenever I needed to add another one (at this point I believe they are all in use) everything stayed uniform and I knew what the impact on my setup would be. The builds are effectively identical, so maintenance is a breeze; that is the main reason I stuck with the EVOs. I had some negative experiences with more cost-friendly SSDs, and the financial savings were not worth it. Furthermore, the EVO Pros are on sale quite a bit. All in all, it is one of those "if it is not baroquen" moments...

I did not do anything fancy with them; I literally just plugged them in. I found no benefit to RAID 0 or any other cheeky option: whatever benefit was to be gained was not worth the setup time, and the possibility of failures down the road proved too detrimental.

I have danced with the idea of getting PCIe SSDs, but the prices are not worth the performance increase for me at the moment; my money is better invested elsewhere. I have not even bothered with M.2, since the price per gig is ludicrous compared to something as vanilla as an 850 EVO Pro.

Just as separating heavy-hitting, extended-use samples across CPUs has been effective for me, doing the same with my SSDs has proved worthwhile. Nothing super-intensive is stored on the same SSD as another intensive library; I filled in the rest of the storage with less intensive samples and evenly distributed my remaining libraries. When my strings and brass were on the same drive, things did not run as smoothly as when they were separated, and I made further divisions based on publisher and how often I use each library. With SSDs in particular I have never been a fan of filling them to the brim, but at this point everything is as efficiently divvied up as I can make it before the diminishing returns of time spent tweaking outweigh the relatively small performance gains.

My current mental fantasy is whether or not I should try to aggregate my network links, possibly with beefier network cards; however, I have a love-hate, abusive relationship with anything network related, and presently I do not wish to tempt fate when I have deadlines. Maybe I will try it later, on computers that aren't required to be operational.


----------

