
VEPro with Logic routing...?

Thanks for sharing all this great info. I too run my studio from a MacBook Pro (2013), and it's definitely an underestimated machine. Colleagues have actually thought I was kidding until they saw it in action. I have my ethernet connected to one of the two T-bolt connections ($30 adaptor), and it's a solid connection. Now that I've seen your setup, I'm looking into a big 4K display!
 
Main Mac = MacBook Pro 2014 Retina w/16 GB RAM
(4) SSDs in an OWC TB chassis (2 project drives, 2 samples)
(2) UA Apollos, Philips 40" 4k monitor all via TB
Running OSX 10.11.3, Logic X 10.3.1
VE Pro (32 bit) running EastWest Play for older perc libraries
VE Pro (64 bit) running (9) 16-channel Kontakt multis (144 instruments)
...mostly perc libraries, a bunch of misc. guitars & ethnic instruments

Slave 1 - www.studiocat.com turnkey VE Pro PC slave (4.16 GHz quad i7, 64 GB RAM, 4 sample SSDs)
VE Pro running (28) 16-channel multis (Kontakt, PLAY) - 448 insts total, all demanding libraries; uses about 40 GB RAM fully loaded

Slave 2 - 2009 iMac Core2Duo, 16 GB RAM
VEP running 4 x 16ch older Kontakt libraries (64 insts total)

Slave 3 - 2012 Mac Mini quad i7, 16 GB RAM
VEP running 11 x 16ch Kontakt multis (176 insts total)

Slave 4 - 2010 MacBook Pro 2.28 Core2Duo
VEP running 8 x 16ch older Kontakt libraries (128 insts total)

So, that's 976 instruments loaded across all 5 machines, all in VE Pro 16-channel instances. I also run another ~100 EXS instruments in Logic, and then any project-specific stuff like Omnisphere, Nexus, other VIs, etc. As for processing, my template contains 8-12 effects buses: mostly reverb options (Lex PCM Random Hall, 2C B2, Slate VerbSuite Bricasti, NI RC48, some custom Space Designer stuff), and of course tons of EQ, compression as needed, etc.

All runs flawlessly!

Thanks for sharing your setup. I’m researching and putting together a portable rig, and your post is helpful. Do you use any additional hubs or TB docks? I'm trying to figure out how to route all the Ethernet slaves to one MBP. Is your TB connection flimsy like mine? If mine is touched even slightly, it disconnects. Appreciate any input you can offer, thank you!
 

Sure thing! No TB hubs/docks - each port is daisy-chained to its respective gear: 2 UA Apollos and a 4K display on one, and a 4-bay SSD enclosure on the other. I do have a ton of USB ports though via a pair of hubs - one of which has a USB 3.0 > Ethernet adapter to talk to the VE Pro network. It all works flawlessly. I got some good quality TB cables from OWC when I set it all up, and I never touch it other than when I hit the road/come home, so it’s been great!
 
Hi Whinecellar, would you mind sharing what you use to switch articulations? I'm rebuilding my orchestral template and really like the sound of what you've described. I've previously used VE Pro outputs to Logic auxes, which is a very messy way of doing things that doesn't suit Logic at all. I think I understand the structure you're describing; I'm just wondering how you trigger the different articulations in your V1 multi, for example, from the single V1 track in Logic?
 

Well, it depends on the library and how I have the respective multi set up, but for the most part I use custom TouchOSC layouts I designed for each library, or actual hardware keys that my aging brain happens to remember ;)

I recently mapped a Behringer X-Touch Compact to be a custom controller for my most-used libraries (Spitfire, CSS) - and I'm loving that. I also remapped several libraries to respond to the same controllers, so for example, CC1, 2, and 11 are on the first 3 faders, and they control the same things in all my libraries now. And the bottom row of 9 buttons all trigger the same articulations, which covers most of my needs.
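The controller-unification idea above can be sketched as a small remapping table. This is a hypothetical illustration, not the actual X-Touch Compact mapping: the hardware fader CCs (70-72) and button notes (40-48), and the keyswitch targets, are all assumed values for the sake of the example.

```javascript
// Sketch: unify hardware controls so every library sees the same CCs
// and keyswitches. All numbers here are assumptions, not the real mapping.

// Hardware fader CC -> standard CC (assumed faders send CC 70-72)
const faderMap = { 70: 1, 71: 2, 72: 11 };

// Button note -> keyswitch note (assumed buttons send notes 40-48,
// mapped down to keyswitches at MIDI notes 0-8)
const buttonMap = {};
for (let i = 0; i < 9; i++) buttonMap[40 + i] = i;

function remap(event) {
  if (event.type === "cc" && event.number in faderMap) {
    // Translate the hardware fader to the standard controller number
    return { ...event, number: faderMap[event.number] };
  }
  if (event.type === "note" && event.pitch in buttonMap) {
    // Translate the button press to the library's keyswitch note
    return { ...event, pitch: buttonMap[event.pitch] };
  }
  return event; // everything else passes through untouched
}
```

With a table like this per library, the same three faders and nine buttons drive CC1/CC2/CC11 and the common articulations everywhere, which is the gist of the remapping described above.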

Hope that helps!
 
That's great, it does help and thanks for replying so quickly. I have composer tools pro on an ipad which I keep meaning to set up properly. I think that might be a good start. Just have to get my head around it!
 
Thanks for this discussion. New VEP user here, with Logic. Trying to figure out the best strategy to set up templates. I don't own a zillion orch libraries yet, so my case may be simpler than others', but I want to set it up right to expand into the future.

One thing about Logic is that multi-timbral handling can be a PITA. AUX channels have different latency than the instrument channels during offline bouncing, for example. Offline bouncing can often only be done by region, etc. AUX channels can't use the track freeze feature, which is very handy. So I am leaning towards this mode of one "mixable instrument" per VEP instance in order to avoid any AUX channels coming back from VEP into Logic. I'm assuming that's the only way to avoid the multi-out AUX channels, right?

Is there some other operating mode I'm not aware of where fewer instances can be used to group different instruments together on VEP, that brings the audio into LPX without AUX channels? Whinecellar says he is just mixing everything over on VEP to reduce the number of instances, but it's not clear to me how he's mixing stuff. It's one thing to mix all the articulations from one instrument; it's another thing to premix the entire string section, for example, which I'd rather not do. I want to mix in Logic.

Of course this seems like it will lead to a lot of VEP instances... I don't have nearly as many libraries as many of you do, but I can easily see needing a few hundred in order to represent each "mixable instrument" available on my soon-to-be VEP server. In terms of Logic's 256-instrument limit, that will quickly become a limitation in terms of setting up a template to work with all available libraries.

So does it come down to simply: if I want more than 256 "mixable instruments" available in a LPX template, I'm going to have to group them together into multis on VEP?

What can be done in VEP to manage a long list of instances better, so that it's not difficult to find each one you want when working on a project?


For me, my slave PC runs the EW Hollywood orchestra only, 1 instance in VE Pro per instrument with 5-16 articulations in each, addressed by 1 track in Logic Pro, triggered by the SkiSwitcher 3. In VE Pro 6 on my iMac I run Kontakt orchestral stuff only, 1 instance in VE Pro per instrument with 5-16 articulations in each or only 1 if it is a keyswitch patch, which most of them are, addressed by 1 track in Logic Pro, triggered by the SkiSwitcher 3. Instrument tracks of the same family are nested in folders. No auxes, except for hosting reverbs. No Event Inputs. No bloody Multiport layer.
All the rest is directly in Logic Pro.

I am not the workflow police so I am not going to tell anyone they must work this way. But my computers are relatively modestly powered and yet I have yet to see anyone's Logic Pro-VE Pro rig work more smoothly than mine. Also, I have helped a bunch of other composers set up their templates this way here in LA and some over Skype, some of whom are here, and almost all of them like it and stick with it.
 
IME with Logic & VEP, keep it simple with routing: no aux outs from multis - just a stereo out of each. Do all your premixing in VE Pro. If you need to further treat a part separately, just bounce it in place as audio (takes mere seconds) and go to town.

As I've detailed elsewhere, my template is 1000+ tracks, almost all VE Pro multis with just stereo outs, and it runs like butter even on a MacBook Pro with 16 GB RAM thanks to a few slave machines.

Where did you detail this elsewhere?
 
So one question: using the approach suggested by Whinecellar, how do you manage the MIDI tracks in LPX, particularly in such a way that offline bouncing is possible? Whinecellar, can you tell us some more about how you set up your LPX project with the various VEP instances you've described earlier in this thread?
 

Just real quick (plate is overflowing at the moment!) - all I do is have a 16-channel multitimbral instance of the VEP plugin in Logic, which connects to the appropriate VEP instance on whatever slave. Since each of the 16 sub channels is tied to that plugin, offline bounces are no problem. My template is almost all VEP multis; the rest are custom EXS instruments and whatever specific VIs are needed for a particular cue (Omnisphere, Zebra, etc.). Hope that helps. I promise I'll get a video tour of my template done at some point sooner than later!
 
I'm actually wondering what kind of LPX tracks you use for MIDI regions, when all 16 channels will be submixed in VEP and returned to LPX as a stereo pair?
 
In addition to the above, another question I have is whether you maintain a ginormous LPX template with pre-created tracks for all of your 1000 online instruments - so that you can instantly audition any one of them - or whether you create tracks on demand, perhaps using an LPX patch, to connect to the instances you specifically want to work with as you build up a project?
 
@Dewdman42 - forgive me if I'm telling you what you already know - when you create a 16-channel multitimbral instrument, you will see 16 tracks, each of which looks like a discrete channel strip, but they all address the same plugin; that's why when you move one of their faders, they all move together. However, they're each on their own MIDI channel. So when you record a part on, say, channel 4, that MIDI region can be bounced in place by itself regardless of whatever's happening on other channels of that multi. Obviously if you just want to bounce that specific part, you'd solo it first.
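The one-plugin/16-track relationship above can be modeled in a few lines - a toy sketch with a hypothetical event shape, not Logic's actual internals: the 16 "tracks" are just different MIDI channels feeding a single plugin instance, so bouncing a soloed part amounts to filtering events by channel.

```javascript
// Toy model: all events from a multi's 16 tracks flow into one plugin,
// distinguished only by MIDI channel (1-16). Event shape is made up.
const events = [
  { channel: 1, pitch: 60 }, // violins sustain
  { channel: 4, pitch: 67 }, // celli part
  { channel: 4, pitch: 72 }, // celli part
  { channel: 9, pitch: 48 }, // basses
];

// Soloing channel 4 before a bounce means only its events reach the
// render, regardless of what the other channels of the multi are doing.
function soloChannel(allEvents, ch) {
  return allEvents.filter(e => e.channel === ch);
}
```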

And yes, my template is about 750 active tracks, all talking to my VEP network across 5 machines. I hate stopping the creative process to load samples, so everything is ready to go for my entire orchestra, with all options/instruments I'd possibly want from all libraries. It would drive me nuts to create tracks on demand ;)
 
Right, gotcha. So the multi-timbral setup created by the New Tracks wizard actually creates 16 fader objects inside the environment, all of them pointing to the same underlying instrument channel (one of the 254 that are available). Then it creates 16 tracks assigned to those 16 faders, each fader with a different MIDI channel. I wasn't sure if you were using that mode or a multi-instrument from the environment. Submixing to stereo in VEP does not support the AUX-track approach to handling multi-timbral, as far as I can tell. Which is unfortunate, since that mode of multi-timbral handling is much cleaner in certain ways, but it depends on using multi-out AUX channels.

Do you get this weird problem? If you create, say, a 7-channel multi-timbral setup, you get the 7 tracks, but the last track is the one associated with the instrument mixer channel, which for some silly reason leads to some inconsistent behavior. The icon on the mixer will never match what you want it to be; it will always match whatever is configured for the 7th track. The name of the track in the inspector, on the other hand, will match what you edit in the mixer strip (which probably is not what you want), while the track header itself can be overridden with something meaningful.

Look at the following screenshot (bug.jpg):

Note that I am unable to set the icon of the submix to something that makes sense; it will be whatever the icon is for the 7th track, and if you change either one, the other changes too. Note that the track inspector for track 7 shows the name of the mixer channel strip for the submix and NOT the name on the track 7 header. We aren't doing it here, but if we did use a multi-out plugin on the instrument track, the first mixer channel would play the first audio pair (1/2), yet the labels would always match the last multi-timbral track as above, and selecting the mixer channel selects the last track header as well... I find it very confusing, actually. That's why I normally like the AUX-track approach for multi-timbral MIDI tracks - but that ends up requiring all the AUX channels, which once I get a slave will mean a lot more streaming channels... which probably isn't wise...

I don't know if you've come across the labeling and icon issue or dealt with it in some way - or whether there's a way to associate that mixer channel with, say, the first MIDI track rather than the last one, or at least keep the labels and icons independent of the MIDI tracks.
 
One workaround I have found for the above is to create one extra multi-timbral track beyond what I need, which can't be done for 16 channels directly with the New Tracks wizard - but you can just select the last track and duplicate it, then change its MIDI channel to "All" (it doesn't really matter). It's just a dummy track that will never hold any regions and should never be record-enabled while working, but it can be labeled and given an icon properly, and the mixer strip will then take those label attributes rather than those of channel 16. I don't know if you or anyone has a better workaround for this LPX bug; I haven't been able to find one.

I'm looking forward to building a slave machine so that I can have some always-on orch templates with everything ready to audition as you have whinecellar!
 
I want to also comment on the one-instance-per-instrument approach. I messed around with it for a while, mainly for the ability to freeze tracks. And it's true, it does provide that option, which makes it very convenient to quickly freeze tracks. In the end, though, I feel I would need to bounce them all to audio tracks sooner or later anyway, so I'm not totally sure I care about track freezing.

It seems to me that most of the people attempting that approach are doing so in order to use SkiSwitcher, which is not multi-timbral (currently). I ended up writing my own articulation-ID handling scripts for Kirk Hunter, which are multi-timbral, so I don't have that problem. I will probably do the same for EWSO and anything else I purchase in the future.
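As a rough illustration of what such an articulation-handling script does - this is a self-contained sketch, not the actual Kirk Hunter script mentioned above, and the ID-to-keyswitch table is invented - it translates the articulation ID carried by a note into a keyswitch sent just ahead of that note:

```javascript
// Hypothetical table: articulation ID -> keyswitch note for one library.
// Real values depend entirely on the library's keyswitch layout.
const keyswitchForArt = { 1: 24, 2: 25, 3: 26 };

// Given a note event carrying an articulationID, return the events to
// emit: a short keyswitch note first (if the ID is mapped), then the note
// itself on its original channel, so the script stays multi-timbral.
function translate(noteEvent) {
  const ks = keyswitchForArt[noteEvent.articulationID];
  const out = [];
  if (ks !== undefined) {
    out.push({ type: "noteOn", pitch: ks, velocity: 1, channel: noteEvent.channel });
    out.push({ type: "noteOff", pitch: ks, velocity: 0, channel: noteEvent.channel });
  }
  out.push(noteEvent); // unmapped IDs just pass the note through
  return out;
}
```

Because the channel is carried through untouched, one script instance can serve all 16 channels of a multi, which is what makes this approach multi-timbral where per-track switchers aren't.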

I do feel that having several hundred instances is much more difficult to manage, versus having 10 or 20 instances, one for each collection of instruments that makes sense. I am also not even slightly convinced that hundreds of VEP instances will perform as well as a few dozen. But I am still testing out 4 basic scenarios and looking for performance comparisons.

  1. Single instance per instrument
  2. Multi-timbral, multi-out instance, separate sends per instrument to LPX AUX's, aux track for midi regions
  3. Multi-timbral, multi-out instance, separate sends per instrument to LPX AUX's, new track wizard tracks
  4. Multi-timbral, multi-out instance, submix in VEP to avoid AUX returns, new track wizard tracks.

It hasn't always been consistent, sometimes I am noticing that VEP starts to take a chunk of CPU power if you try to mix stuff there versus just streaming to the AUX's. On the other hand, streaming a lot of channels from a slave will impact network performance. That includes for the single-instrument-per-instance approach. Anyway, just throwing that out there.
 

Hey Jim,

I've seen you discuss this in multiple threads - very impressive.

One question that I'm not sure has been asked of you yet about your setup - but I'd love to know the answer - in this 1000+ tracks template, how many tracks can you use simultaneously with your rig? I think you mentioned somewhere that you froze/bounced as you worked - as opposed to having the whole cue firing away midi regions for all of your various tracks employed in a cue.

I've built some fairly ridiculously large templates that would launch, load, and be "online" - but the complexity and sheer "mass" of resources used in having that many routings seemed to hamper the template's success at depth; I could use anything available, but only up to a certain point - and too early in the cue's development - before I hit core issues.

Just wondering how many midi tracks playing into these Kontakt instances in real time (at the same time) you have going reliably.
 

Well, the whole idea is that spreading the load among a handful of slave machines makes realtime playback viable. I haven't used the track freeze option since it was first made available in the early 2000s. I do "bounce in place" quite a bit as I tend to commit things to audio, but that's generally more about being able to process those regions as separate audio tracks; it's not really about realtime resources.

Now, obviously if I went absolutely nuts and had regions on hundreds of tracks at once, I might run into some trouble - but since I've put a lot of thought and planning into how my resources are allocated, I rarely have trouble playing large cues in realtime. Wish I could give you some specifics, but in general, I'm always amazed at the amount of firepower I have at my disposal in realtime!

Cheers,

Jim
 
Thanks Jim for the clarification. Very helpful. Over here we’re not really strangers to the whole distributed resource concept - going back to a dozen Gigastudios in the machine room back in the day and more recently as many as six beefy VEP slaves for rigs similar to yours. Still, your performance experience seems particularly impressive. Thanks for sharing your expertise with the community!

I’d really like to mimic your setup over here and see if we get comparable performance. So If I may, can I bug you for a few more bits of experience/knowledge?

- In your current 2018 version of all of this (I’m assuming Logic 10.4.1 and VEP6), where are you at with multiprocessor assignment inside your Kontakt settings page? I know you’re doing the defined number of cores in Logic itself (we’ve got ours setting Logic to six cores) and each VEP instance is set to 2 threads. But how many cores are you assigning to Kontakt itself - or do you currently have Kontakt multiprocessor option set to off? In the past, heavily scripted Kontakt programs (like big Spitfire patches) seemed to play much better here with the multiprocessor setting set to “4 cores.”

- What are you currently using for Kontakt preload buffer settings? I’m assuming you’re on all SSDs like we are - but we still find that Kontakt has a few bugs that prevent us from using ultra-low streaming buffers; our happy balance seems most stable at 48k.

- We’re at 256 main buffer in Logic and our VEP instances set to “1 buffer” which seems to be the best balance between latency and horsepower. Where are you currently at in your rig with these?

Many many thanks again!
 
@shsCT - you're most welcome. Here's my current setup:

1. Logic 10.4.0, VE Pro 5.4.16181, various versions of OSX on my 4 Macs, Win 10 Pro on my PC slave
2. Multiprocessor support OFF in Kontakt plugin mode; preload buffer = 12.00 kb (all SSDs)
3. Logic buffer 256, processing threads AUTOMATIC, buffer range MEDIUM, playback & live tracks, 64-bit summing
4. VE Pro Server default thread count per instance: 1 Thread (host machine), 2 Threads (slaves), 2 audio in/out ports on all
5. VE Pro plugins in Logic, Latency = 2 buffers

I think that's all the pertinent stuff... let me know if I missed anything!

Cheers,

Jim
 