# Why Route Into Pro Tools?



## RichiCarter (Feb 8, 2019)

Hey guys,

Another potentially noob question to try and plug my knowledge gaps... I've had a dig around online and on the forum but have yet to find a definitive answer. Probably because it's so obvious and frankly something I should know...

I know that it is very common in film scoring to slave or sync Pro Tools to the main composing DAW (Logic, Cubase, Ableton etc.). I know composers like Trevor Morris and Junkie XL both compose in Cubase, but then route into Pro Tools.

What is the purpose of this? Why do so many film composers route into Pro Tools from their main DAW, and why can't they print tracks, mix, and sync video in Logic, Cubase etc.? I've heard that Pro Tools is better to mix in, however even guys that do not do their own mixing still seem to sync their session with PT...

Again, apologies for the question, but any guidance would be greatly appreciated.

Cheers!

Rich


----------



## chillbot (Feb 8, 2019)

Very curious about this myself. I always attributed it to one of those things that it may be 2% better at 20x the cost, but if money is not an issue this is the kind of stuff you do.


----------



## Jdiggity1 (Feb 8, 2019)

RichiCarter said:


> ...even guys that do not do their own mixing still seem to sync their session with PT...


Most mix engineers still use Pro Tools to work in, so if you are sending your music to get mixed you'll need to deliver a Pro Tools session.
I imagine that printing into Pro Tools feels like the most efficient start to this process.
It's also likely to be a mental/workflow thing for some - a process that many were 'raised into' as being the way you do things.
I know that I too prefer to use Pro Tools for certain tasks, knowing full well Cubase can do it too. Something about 'switching gears' in the brain.


----------



## chillbot (Feb 8, 2019)

Jdiggity1 said:


> It's also likely to be a mental/workflow thing for some - a process that many were 'raised into' as being the way you do things.


I don't know anyone who does things like that, that's weird.


----------



## KerrySmith (Feb 8, 2019)

Pretty simple. There are a few reasons. 

1 - You host the video in Pro Tools. This frees up CPU (assuming it's a different computer) for your composing DAW to do its thing. Pro Tools (Ultimate?) can also host multiple video tracks, and edit them, so if the editor sends over just one new scene, you can slot it in, while keeping the main movie file intact.

2 - You also have the audio from the video, as well as supplied VO, SFX, temp music, etc... in the Pro Tools session. You can route this back into your DAW to monitor with, without having a bunch of "orphan" VO/SFX tracks that you don't need cluttering up your DAW session every time the editor sends a new cut. 

3 - You have mixes of your other cues (or demos) in Pro Tools, as well. That way if you're working on cue 1m04, which is totally different from 1m03 and 1m05, but which dovetails across both transitions, you can listen to 1m03 and 1m05, coming from the Pro Tools session, and hear how what you're working on fits in. 

All of this stuff feeds into the idea that you can have the whole film and materials open and available, while you switch between cues on your DAW, without having to close, re-open and wait for those elements to become available in each cue that you load up. 

Then we get into the mix thing. 

If you are creating groups of instruments and pre-baking your stem mixes via auxes, buses or outputs, you route those directly into Pro Tools, and when the cue is approved, you just hit record in Pro Tools, and all of your stems record into their assigned Pro Tools tracks together in real-time. No doing one stem at a time. Move on to the next cue, keep the same Pro Tools session and set of tracks open (maybe you have 2 sets of tracks if the cues dovetail), record the new cue into the same PT session. Boom. Much more efficient. 

Then you can export the waveforms, or send the whole session to the Mixer. Even if they're not using YOUR Pro Tools session, they can use Pro Tools' "Import Session Data" feature to import all of your time-stamped and sync'd stem tracks into their session. WAY faster for the mixer than sending dozens of files which have to be individually imported and synchronized. Since they charge by the hour, this will make your Producer happier as well. 

Note, this workflow could be applied to DAWs other than Pro Tools as well, but Pro Tools IS a standard. 

This is also all reliant on you providing Timecode and Machine Control sync between your DAW and your Pro Tools-or-other DAW.


----------



## dgburns (Feb 8, 2019)

chillbot said:


> Very curious about this myself. I always attributed it to one of those things that it may be 2% better at 20x the cost, but if money is not an issue this is the kind of stuff you do.



@KerrySmith pretty much sums it up. I wouldn't use PT at all if I didn't need to score to picture.

I like-

-Entire project in one PT session, but many Logic projects, one for each cue. I get nervous when a cue runs too long in Logic, too many tempo and signature changes to deal with when the picture timeline is altered and I need to resolve the music. (yes, no music editor here)
-I can put markers in PT and see where I need to score, outside of any Logic project.
-I end up delivering a PT session to the mix, and they simply use the track import into their mix session, no worrying about sync or manually importing each cue by its TC at the mix end (or editor)
-I like that I get to confidence monitor the music in PT so I hear it the way it goes to mix, no surprises.
-I like that any given Logic project cue can have a complete new set of sounds, as I sometimes need to score 'source' music, aka beat based songs outside of the main score template palette.

Wouldn't do this if I was just writing standalone songs.


----------



## JohnG (Feb 8, 2019)

I do route into Pro Tools. There is absolutely no reason to do this if:

1. You are just starting (and not working on big budget movies / TV or something); 
2. You don't plan to record much live material into your sessions; or
3. You are a songwriter.

Old, arguably obsolete reasons to separate the DAW and recording onto separate machines were:

1. It reduced the strain on the DAW machine;
2. There were more and better mixing tools in Pro Tools than in some DAW software;
3. Some dub stages looked askance on importing files from DAWs, even if they were time-stamped.

I think these three reasons have largely gone away because your computer is probably a ton more powerful, DAW software is better than before, and because some pretty well-known guys don't bother with PT to write now.

Reasons that it still makes sense:
1. All the engineers you are likely to hire know PT best;
2. PT offers zero-latency recording -- players really need that and not every DAW setup can do it;
3. Other reasons that @KerrySmith and @dgburns offered.

PT is a very expensive luxury and (arguably) completely unnecessary if you're just getting started. Definitely better to allocate resources elsewhere for most beginners.


----------



## charlieclouser (Feb 8, 2019)

All of the reasons listed above are spot on. As @KerrySmith said, having a "whole-project" timeline (or at least "whole-reel" or "whole-act") makes it easier to hear how overlapping cues will sound at the overlap point without doing stuff like importing a chunk of the end of the previous cue into the current cue in your main DAW so you can massage the overlap, or other fiddly workarounds - but there's another good reason for having the "whole-project" timeline in ProTools:

Previewing. If directors, producers, etc. want to hear the work in progress at your studio, or if you need to make QuickTime movies to send off for approval, it's quick and easy to do it when you have all of the mixes at the appropriate points in your ProTools timeline. When a director is sitting there, you don't want to be fiddling with your main DAW, stopping and starting and loading each cue one at a time.

Of course, you could make a "whole-project" project in your main DAW for doing previews, and indeed I still do this sometimes on smaller projects when I don't feel like booting up ProTools, but that doesn't give you the ability to hear how the current cue (in your main DAW) will crossfade with the previous / next cues (in ProTools on a separate machine) - so it doesn't solve as many problems as having a separate ProTools rig does.

Another bonus to having a separate ProTools setup is that you can record all of your rough versions into it as you go, so if the director / producer asks, "I seem to remember that demo #2 had a less cluttered feel. Could we hear that real quick?" then you can quickly toggle between a stack of earlier versions, all at the appropriate point on the timeline, all being played against picture.

Yet another handy reason for having an expensive ProTools rig just sitting there is to facilitate recordings done elsewhere. Let's say you're going to record a thousand-piece orchestra at Air Lyndhurst, or just go down the street to a cheap-o studio with a nice drum room to bang on trashcans. You probably won't be doing either of those sessions on Logic / Cubase / Ableton / Studio One / etc., because at the other studio their key commands might be different, your plugins and sound libraries won't be on their machines, and even if you endeavor to bring all that crap with you, the level of customization and personalization available in almost every DAW other than ProTools usually means it just "won't feel like home" when you boot Logic or whatever on someone else's rig. Even if you just bring your whole primary DAW computer with you, when you plug into their audio interfaces there will be i/o setup differences, you'll need to install the drivers for their audio system, some crap like that which will hold up the action. So now what, are you going to bring your audio interfaces too? Hassle.

But if you just plan on doing all your outside tracking sessions on ProTools, chances are that every studio you look at will have it up and running, hooked to their audio interfaces, patch bays, etc., and ready to go. And, since ProTools *doesn't* let you change key commands, make screen sets, etc. it takes about zero minutes to become familiar with their rig. Usually the worst aspect of that process is finding out they use a mouse instead of the trackball you're used to (so maybe bring one along?). You could be in Dubai and rent a ProTools rig to record in your hotel room and you'd be up and running in minutes.

Also, if you're going to be too busy telling the musicians what to do and how hard to beat on their violins or dulcimers or whatever, you can have the studio's ProTools operator drive the rig while you go out in the room and yell at people (or bang on trash cans). If you try to do the session in your primary DAW on their machine, either you or the studio's engineer will be operating in a "foreign key-command environment" so it won't be all that easy to just say, "step aside and let me drive for a minute", and if you just bring your primary DAW on your own machine, they'll likely be inside your user account and able to read your emails and use your PayPal account to sign up for PornHub Premium memberships for all the studio personnel.

So much smoother to just bounce your reference tracks and a click over to ProTools and just bring those sessions on a portable drive. Record your trash-can session, and bring it home. Now we get to another great feature of having dual rigs: you can do some editing and mixing of those out-of-house recordings inside ProTools and then bounce them across to your main DAW. I love doing this because I can record a whole bunch of takes and mic positions that I'm not sure I'm going to use, but won't be able to decide about until I'm back home and able to hear everything in my room, in context, and alongside my full range of tracks in my main DAW. Then I can do things like:

- Edit together (comp) a few takes of the outside recordings in ProTools before bouncing them back across.

- Combine a few mics into a single stereo pair inside ProTools and bounce them across to a single stereo track in the main DAW.

- Stack takes inside ProTools and bounce them to a single track in the main DAW.

- Keep every one of the original recordings intact and unedited on disabled tracks in ProTools without cluttering up your main DAW projects, so you can go back and grab other takes / mic positions / etc. somewhere down the road after you've done more work in your main DAW.

Etc. Once you have a setup like that you'll think of a ton of things that you can do which wouldn't be easy or fast if you were working on a single DAW setup.


----------



## JohnG (Feb 8, 2019)

lols @charlieclouser

another tour de force.


----------



## Nick Batzdorf (Feb 8, 2019)

The other argument for having Pro Tools is if you're used to working in Pro Tools.

I use it for everything other than sequencing.


----------



## NYC Composer (Feb 8, 2019)

Am I correct in saying that Pro Tools can only achieve zero (or close enough) latency recording if you use their expensive hardware?


----------



## Living Fossil (Feb 8, 2019)

charlieclouser said:


> Of course, you could make a "whole-project" project in your main DAW for doing previews, and indeed I still do this sometimes on smaller projects when I don't feel like booting up ProTools, but that doesn't give you the ability to hear how the current cue (in your main DAW) will crossfade with the previous / next cues (in ProTools on a separate machine) - so it doesn't solve as many problems as having a separate ProTools rig does.



That's not a real issue, even in multi-session projects. If you have to check out how a cue from another session crossfades with one in the current session, just import a bounce and place it where it belongs.

Aside from that, while I usually work in different sessions for films, I have one Logic session where I just import the bounces to get the overview.
Unlike the regular sessions, this one has only 1 constant tempo (120bpm) and some (very) few instruments that allow sketching ideas "on the fly".

But of course, the pro-Pro Tools arguments are still valid, and a good option to have.
Just not a necessary one.


----------



## charlieclouser (Feb 8, 2019)

NYC Composer said:


> Am I correct in saying that Pro Tools can only achieve zero (or close enough) latency recording if you use their expensive hardware?



Pretty much. With HDX cards you're looking at something like 0.7 milliseconds (that's just the trip in and out through the converters, unavoidable on any system), while best-case scenario with the smallest buffer on HD-Native is like 1.7 milliseconds. So not a massive difference - but of course the latency in an HDX system is always the same, regardless of track count and CPU load, while in a native system everything's got to be just right on a powerful system to get down to 1.7msec. But my PT rig is a bare-bones Mac Pro cylinder 6-core with nothing on it except PT - no plugins even. So I can get pretty low on the buffers. It easily plays back 48 tracks while recording 48 more to the boot drive (!) on the lowest buffer setting. I've even recorded 96 tracks while playing 96, but I can't remember if I needed to change anything. CPU and disk meters are all at like one bar most of the time. With no plugins and just a bunch of 48kHz audio coming off the 900MB/sec boot drive it's not even breaking a sweat.

In my world (HD-Native Thunderbolt with Avid MADI interface on 2013 Mac Pro cylinder 6-core) I'm only monitoring "through" ProTools when printing stems or listening to previous cues overlapping with the current one loaded in my Logic rig, so in practice it's a non-issue. In the olden days some folks would always monitor through PT and/or use it as a mixer for their various stems, but I don't do that.


----------



## charlieclouser (Feb 8, 2019)

Living Fossil said:


> That's not a real issue, even in multi-session projects. If you have to check out how a cue from another session crossfades with one in the current session, just import a bounce and place it where it belongs.



That's a drag for me because I only leave 4 or 8 bars of dead air at the top of each cue in Logic, so if I want to import a mix of the previous cue to check overlap I usually bounce just the end chunk so I'm not trying to place the previous cue into the current one at like bar minus 200 or something. So it's a little bit of a hassle.





Living Fossil said:


> Aside from that, while I usually work in different sessions for films, I have one Logic session where I just import the bounces to get the overview.
> Unlike the regular sessions, this one has only 1 constant tempo (120bpm) and some (very) few instruments that allow sketching ideas "on the fly".
> 
> But of course, the pro-Pro Tools arguments are still valid, and a good option to have.
> Just not a necessary one.



I do this as well, especially on TV series, so I can just avoid booting up the PT rig and also be able to add overlays on top of existing cues really quickly. My whole-episode Logic project is at 120bpm like yours, but mine has the full template of instruments in it, the same template as used in the series, so when I need to add just two more notes of piano it's the same piano sound with the same reverbs, stem compressors, etc. Then I can just bounce that tiny piece and call it an "overlay" and ship it to the dub stage for quick drop-in and it matches right up. Works great.


----------



## gsilbers (Feb 8, 2019)

I did help with and see the setups for Trevor Morris, Heitor Pereira, and Steve Jablonsky back in the day at Remote Control, and they had Pro Tools. And like 25 slave PCs due to the 32-bit RAM limitation. 
Back before VEP it was very useful. Nowadays as well, but not totally needed. It's always an extra step, but if you are very organized and use templates then it's very useful. These guys not only used Pro Tools as a slave print machine, they also had an extra Pro Tools setup for mixing. So it goes out of the slaves into Pro Tools 1 with a bunch of plugins for mixing and doing 5.1, and then into another Pro Tools to print the stems. Pro Tools 1 would only be in input mode, just to process the incoming audio and balance all those slaves. 
So not only does that print the stems and keep track of long sessions on one timeline of a 2hr movie, you can also do spot fixes. Say some section of a 2min cue has too much brass dissonance: instead of a whole export, you can just do a punch-in for the mix, crossfade it and export it in Pro Tools. Which is not bad on one cue, but imagine several cues with different fixes - then it makes it easier to do these and make them sound good fast. 
So IMO it's just one way of working. I have several friends who do famous TV shows without Pro Tools and they get by as well. So it's just one way of working, based on how Hans Zimmer developed his technology and its integration into post production.


----------



## Nick Batzdorf (Feb 8, 2019)

charlieclouser said:


> Pretty much. With HDX cards you're looking at something like 0.7 milliseconds (that's just the trip in and out through the converters, unavoidable on any system), while best-case scenario with the smallest buffer on HD-Native is like 1.7 milliseconds.



That must be at high SRs, because most converters have about 3ms of latency.

But a lot of audio interfaces have direct monitoring, which is one workaround for people who don't have the PT cards.


----------



## charlieclouser (Feb 8, 2019)

Nick Batzdorf said:


> That must be at high SRs, because most converters have about 3ms of latency.
> 
> But a lot of audio interfaces have direct monitoring, which is one workaround for people who don't have the PT cards.



Well, that number I posted is just what Avid claims - I haven't tested it. Similarly, MOTU claim 14 samples round-trip from analog in > on-board DSP mixer and direct monitoring path > analog out, and that works out to a lot less than 3ms. Again, not tested but in any case, it's all fine by me.
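For anyone who wants to sanity-check those figures (my own back-of-envelope sketch, not from Avid's or MOTU's docs), a fixed latency quoted in samples converts to milliseconds by dividing by the sample rate:

```python
# Convert a fixed latency measured in samples to milliseconds.
# The 14-sample figure below is MOTU's claimed round trip, used
# here purely as illustration.
def samples_to_ms(samples: float, sample_rate_hz: float) -> float:
    return samples / sample_rate_hz * 1000.0

print(round(samples_to_ms(14, 44100), 3))  # 0.317 ms at 44.1kHz
print(round(samples_to_ms(14, 96000), 3))  # 0.146 ms at 96kHz
```

Either way it lands well under the ~3ms often quoted for converters alone, which is presumably why nobody notices it.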

Seems like the only time HDX cards really make sense are when you need one or more of the following:

- Absolutely MASSIVE i/o and voice count. Avid's HD-Native rigs are limited to 64 i/o and 256 voices, and when using third-party interfaces it's only 32 i/o. HDX can go to 192 channels of i/o and 768 voices if you've got the bread - triple what HD-Native can do.

- Fixed, consistent, ultra-low latency figures that are not dependent on cpu load or anything else really, so you can do overdubs on even the most complex projects without fear.

- The lowest possible latency for working with vocalists / musicians who might be thrown off by even the slightest latency in their monitor path - even when the project is already zillions of tracks deep.

HDX is a monster for sure, but really seems to be crucial only if you're Alan Meyerson or outfitting a dub stage, scoring stage, or top-shelf tracking room.


----------



## benmrx (Feb 8, 2019)

@charlieclouser - Fantastic post!


----------



## KerrySmith (Feb 8, 2019)

charlieclouser said:


> All of the reasons listed above are spot on. As @KerrySmith said, having a "whole-project" timeline (or at least "whole-reel" or "whole-act") makes it easier to hear how overlapping cues will sound at the overlap point without doing stuff like importing a chunk of the end of the previous cue into the current cue in your main DAW so you can massage the overlap, or other fiddly workarounds - but there's another good reason for having the "whole-project" timeline in ProTools:
> 
> Previewing. If directors, producers, etc. want to hear the work in progress at your studio, or if you need to make QuickTime movies to send off for approval, it's quick and easy to do it when you have all of the mixes at the appropriate points in your ProTools timeline. When a director is sitting there, you don't want to be fiddling with your main DAW, stopping and starting and loading each cue one at a time.
> 
> ...



LOL! Very well put (and each subsequent reply). I was trying to not write that much.


----------



## Nick Batzdorf (Feb 8, 2019)

charlieclouser said:


> - The lowest possible latency for working with vocalists / musicians who might be thrown off by even the slightest latency in their monitor path - even when the project is already zillions of tracks deep.



Yeah. As I'm sure you know, there are drummers who hate tracking through digital consoles for that reason.

I can't tell the difference - but then I did the world a big favor when I stopped playing drums.


----------



## Nathanael Iversen (Feb 8, 2019)

Avid's HDX claims are completely believable. As a comparison, SSL's Live consoles are pure digital @ 96kHz - Dante or their own protocol. You can go in a mic-pre, through the mix engine with any amount of bussing/FX/etc, and back out an analog port in .7msec - even with hundreds of inputs simultaneously. How? DSP with fixed latency running true real-time code with none of the "shared CPU" stuff our DAWs do. They use this at the Synchron Stage at VSL to run hundreds of tracks with less than 1 msec monitoring delay for the musicians - truly impressive. You can't do this with USB interfaces on a laptop (but it does cost a few hundred grand more)....

Avid has been offering that same kind of DSP engine for years with their HD/HDX cards. Yes, expensive. Yes, proprietary. Yes, locked in. But constant, invisible latency on even a full scoring session? There isn't another way that is as good. There's no reason why you couldn't get .7msec latency even with a dozen plugins running in the DSP. 

DSP is one of those things that is just better for audio than a multi-purpose CPU when you need to do a whole lot of tracks, or run imperceptible latency EVERY time, no matter what is going on with processing. Not everyone needs this (maybe not even that many). 

There would be zero financial justification, but running an SSL Live console in my studio would allow me to have ridiculously low latency if all the slaves and DAW just output audio to it for all mix and FX - effectively removing everything but MIDI and tape deck functions in the DAW. With zero processing in the DAW, it could run massive channel counts at a 64-sample buffer. The cost would be in the range of HDX plus a nice mix surface from Avid, which some people do use. So these things are possible - with the law of diminishing returns ever present.

But I totally get why Charlie uses PT.


----------



## Nathanael Iversen (Feb 8, 2019)

Nick Batzdorf said:


> That must be at high SRs, because most converters have about 3ms of latency.
> 
> But a lot of audio interfaces have direct monitoring, which is one workaround for people who don't have the PT cards.


The converters all have less than 3ms of latency. It's the drivers, buffers, and all the rest that get the audio in and out of a general purpose CPU that take up the rest.

DSP is fixed latency, and "always on". If you configure a DSP to route and process 32 channels of audio in a digital mixer, it has dedicated, non-shared, interrupt-free calculation for every one of those paths. So it is "real-time" - all the time. No buffers, no drivers. Just constant audio processing. The conversion time swamps this fixed latency, so that is what drives the overall latency.

All the new 96kHz live mixers like the Allen & Heath dLive and the SSL Live consoles have fixed .7 msec latency in and out of analog I/O with full processing. The DSP latency is so low, it is negligible, and swamped by the converter latency.

Even my Focusrite Dante card has to have drivers and buffers to deal with the CPU in my DAW - but the Dante network itself, apart from that, runs .7ms analog in-to-out. Digital to digital transfer is fractions of a ms. My Dante network runs at a max latency of 250 microseconds (.25 ms), for example, up to 128ch simultaneously. Way below any human thresholds. This is all handled by dedicated chips on the Dante I/O cards in my Midas M32 mixer, the converters, and the PCIe card. It's all the dedicated hardware that runs below the CPU and its interrupt cycle.
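As a rough illustration of why the host buffers, not the converters or the network, dominate a native round trip (hypothetical buffer sizes here, and real systems add converter and driver overhead on top):

```python
# Minimum buffer-induced round-trip latency for a native DAW:
# one buffer of samples on the way in, one on the way out.
# Converter and driver overhead are deliberately ignored.
def buffer_round_trip_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    return 2 * buffer_samples / sample_rate_hz * 1000.0

print(round(buffer_round_trip_ms(32, 96000), 2))   # 0.67 ms: best-case tiny buffer
print(round(buffer_round_trip_ms(256, 48000), 2))  # 10.67 ms: a typical working buffer
```

Against a fixed .25 ms on the network itself, even a modest CPU buffer is an order of magnitude larger - which is the whole argument for keeping the processing in dedicated hardware.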

We all value all the features and software things. But if anyone wanted to build a pure audio processing station from scratch, it is doubtful they would use a multi-purpose CPU. They'd keep the audio in DSP or FPGAs and only do the UI in Windows/macOS... which is kind of what ProTools HDX is.

But those general purpose CPUs have made it possible for almost anyone to have a very capable studio - even if it is not as fast or scalable as the pure DSP approach.


----------



## charlieclouser (Feb 9, 2019)

The only drag about HDX is that the number of third-party plugins that have been coded to actually run on the dedicated DSP chips on the card is pretty small, compared to the zillions that run on the host CPU. AAX-DSP is the format in question, and while every plugin under the sun is available in AAX (native) format, not nearly so many are in AAX-DSP format, severely limiting the choices for plugs that can run at the HDX card's fixed, low latency. This is a drag. 

Of course, the mix engine itself is running on the DSP chips, so all of the routing, i/o, mixing, etc. all happens at the best latency, but many of the more wild, creative plugins are only on AAX and therefore are calculated on the host CPU.

That said, there are tons of "normal" plugins like compressors, eq, etc. which are AAX-DSP compatible, but it's something to be aware of for anyone considering stepping up to the big league HDX systems - it's not an automatic "awesome, every one of my crazy plugins is now running at .7msec latency!" situation. 

Then again, do you really need to record vocals through GRM Shuffler and hear it in your headphones? 

Well, one friend who got HDX just because he's "that guy" *does* want to do that, and he complains about the lack of AAX-DSP plugins all the time.... but partly because he's "that guy", and partly because he (like me) remembers the good ole days of TDM and HD when *everything* ran on the dedicated DSP, so having that caveat suddenly appear on the switch to HDX was a disappointment.


----------



## Dietz (Feb 9, 2019)

Slightly OT sidenote:



Nathanael Iversen said:


> SSL's Live consoles [...] They use this at the Synchron Stage at VSL to run hundreds of tracks with less than 1 msec monitoring delay for the musicians.



Just to avoid misunderstandings: The SSL Live console at Synchron Stage A is indeed used for monitoring only. The actual recording takes place on one of the two analogue SSL Duality-consoles and/or Neve and other boutique pre-amps.


----------



## JohnG (Feb 9, 2019)

There are a lot of ways to get it done.

I guess the main reasons I love PT (even though I find Avid's aggressive pricing grating) are two:

1. When I bring in a player, nobody complains about latency; and

2. Any engineer knows how to use it instantly. That makes the sessions speedy and surprise-free.

That said, if I were starting out, I'd use DP, Cubase, Logic, or something else and devote the extra $$/££/¥¥ to hiring players.


----------



## Nathanael Iversen (Feb 9, 2019)

Dietz said:


> Slightly OT sidenote:
> 
> 
> 
> Just to avoid misunderstandings: The SSL Live console at Synchron Stage A is indeed used for monitoring only. The actual recording takes place on one of the two analogue SSL Duality-consoles and/or Neve and other boutique pre-amps.


It is a beautifully designed setup. Just as good as it gets in every way. I have a lot of respect for the team, the dedication, and the extreme effort that went into creating that facility. Truly world-class, without any exaggeration or artifice. That is an extreme example of just how powerful state-of-the-art facilities can be. The absolute best of digital, used where it makes sense, and analog, again used exactly in its best way. 

The stunning quality of the new Steinway-D illustrates just how good this setup sounds. Those are the finest piano recordings I've heard from a technical perspective. People can like or not like the samples, the instrument, or the programming, but the raw recorded sound is exceptionally good.


----------



## ZenFaced (Feb 9, 2019)

To sum up the reasons stated above: for mixing, mastering, and post-production, Pro Tools is more suited if you are sending out to a commercial studio. If you are not sending your project out and you are doing that stuff at home, your current DAW is perfectly fine.


----------



## Nick Batzdorf (Feb 9, 2019)

Nathanael Iversen said:


> DSP is one of those things that is just better for audio than a multi-purpose CPU when you need to do a whole lot of tracks, or run imperceptible latency EVERY time, no matter what is going on with processing. Not everyone needs this (maybe not even that many).



You'd think every good audio interface could have a direct monitoring path just like that? In any case, my 18-year-old Metric Halo 2882 has a direct monitoring path with its own DSP-based plug-ins; it probably doesn't have sub-ms latency, but no one has ever complained - even singers wearing headphones.

As I've posted before, around 2006 I'd given Digidesign my credit card to update to... it must have been the 24-bit PCI TDM system that time. While they were backordered I got cold feet.

Really, really good move, and I have to credit an extremely nice woman at VSL who I won't name for giving me the final push off the train. That was also around the time I got rid of my digital mixer (Panasonic DA7).

With the exception of SoundToys Soundblender and Pitchblender, I've never missed any of it.

Now, it would be a totally different situation if I were a big recording studio like you (Nathan) are talking about, rather than a project studio/office. Then it would make complete sense.

Actually, I bet that same TDM system is now less expensive than a roll of paper towels...


----------



## Nick Batzdorf (Feb 9, 2019)

ZenFaced said:


> For the reasons stated above, to sum up: mixing, mastering, and post-production - Pro Tools is more suited for that if you are sending your project out to a commercial studio. If you are not sending it out for mixing/post-production and are doing that work at home, your current DAW is perfectly fine.



Or, once again, if you want to use Pro Tools for what it's really good at: audio production of all kinds.

I can't imagine doing a heavy-duty music editing or sound effects project in Logic, for example. Actually, I once started an SFX project in Logic - and moved to Pro Tools after about ten minutes.

Same with editing voiceover. It's not that you *can't* do that in other programs, it's that Digidesign really got the interface right.


----------



## givemenoughrope (Feb 9, 2019)

I don’t have a fancy dual rig and have only ever used PT as delivery or editing samples. I understand wanting to bang stems into PT and send it off but missing the plugins/editing audio in Cubase (or whatever DAW) seems like a big thing to give up. Also, all the fancy digital routing seems overwrought when you can just bounce stems in the DAW and drag them into PT.


----------



## enyawg (Feb 9, 2019)

I’m using HD2 (Accel) with Digidesign i/o 192 on Macs and it’s been a great rig for recording musicians in studio and live for many years. Started out with the 888/24 i/o’s.

In the composer DAW world my Reaper/ VEP6 host/ 3x slave rig is perfect for all midi film and chamber stuff.

I’m an old dog who still monitors everything through PT and/or uses it as a mixer for various stems when it’s critical to the director. In hotels it’s headphone monitoring with my host MBP and SSD drives. I’m also a seasoned mastering engineer and use HD2 with TDM plugs, which still sounds great.

I agree with what others have stated: Pro Tools is ideal when submitting work to commercial studios, as the PT file is a universal standard. Obviously audio stems are also fine, independent of a PT file; I do both depending on the mix engineer’s request.

So why route into Pro Tools?
I have delivered for film purely in Reaper with good results and no complaints from the commercial studio or director. So it comes down to preference and affordability, I guess. I come from a rock/metal background and already owned Pro Tools HD2 before working in midi composition.

Today I would just use Reaper (with a nice i/o) if composing for film unless I won the lottery as HDX is ridiculously priced.


----------



## charlieclouser (Feb 9, 2019)

givemenoughrope said:


> I don’t have a fancy dual rig and have only ever used PT as delivery or editing samples. I understand wanting to bang stems into PT and send it off but missing the plugins/editing audio in Cubase (or whatever DAW) seems like a big thing to give up. Also, all the fancy digital routing seems overwrought when you can just bounce stems in the DAW and drag them into PT.



All good points. I set up my dual-rig configuration about 20 years ago when the Logic native side was much less powerful and we were still using lots of outboard hardware sound sources, and bouncing to stems inside Logic was but a fever dream. Back then it was ADAT cables linking the Logic rig to the ProTools rig, and it was really the only way to capture multiple stems in a single pass.

Now, of course, we have enough internal busses, and CPU/disc power, in our main writing DAWs to accommodate just doing a real-time record within that DAW to a stack of empty tracks, capturing all of the stems in one pass, but when I first set up my configuration this was unimaginable. Even as recently as early versions of Logic X that only had 64 busses, I couldn't do this because I'd used up all of my busses already and had none available to route stem sub-masters to audio tracks post-processing. (Now that Logic has 256 busses this is possible, but it sure makes a mess in my Environment window!) Because of all the other great things mentioned above that you can do on a dual-rig setup, I've just kept updating both rigs as time went on. 

But for most who don't already have two rigs, it's certainly not a "must-have" in order to do serious work. Having two rigs does make a few things possible that couldn't happen any other way, and it makes a whole lot of other things just plain easier, but it's by no means mandatory.


----------



## Fitz (Feb 13, 2019)

I’m looking into setting this up as well. Does anyone have a walkthrough on how to do this with cubase? Is there a separate program to use in addition to Pro Tools to make it possible?


----------



## JohnG (Feb 13, 2019)

Fitz said:


> I’m looking into setting this up as well. Does anyone have a walkthrough on how to do this with cubase? Is there a separate program to use in addition to Pro Tools to make it possible?



There are many ways to do it, some more efficient and less expensive than others, so I would definitely ask a lot of questions before you spend your money.

For example, if you are constantly recording live players you might want an HDX (card-based) system. If not, maybe the HD Native Thunderbolt will be ok for you. Between those two, there is a substantial difference in configuration and price.


----------



## benmrx (Feb 14, 2019)

FWIW the project I'm currently working on, I'm using a single computer and going the other way around by slaving PTHD to Cubase. It's a bit of an odd duck kind o' workflow as this is for a children's 'mindfulness' app and I'm basically adjusting the VO timing on the fly depending on what's going on at that particular moment with both the music and the VO..., and also adding some light sound design in PT as I go. Because of the nature of this project I can't space out the VO and THEN do the music, and I can't do the music first and then space the VO based off that. It's got to work hand in hand. When I first started doing these I would bounce the VO out of PT and import the .wav into Cubase and work that way. However, it meant that when I was done with a particular script I'd have to bounce the new VO edit out of Cubase and conform the original VO in PT to match the new timing. Huge PIA.

Also, for anyone looking at the idea of a separate PT system who wants surround capabilities..., you don't need the latest version of PT Ultimate. Just buy an old HD license and set up a 'frozen system'. The point here is you don't need a bunch of modern plugins, etc., and PT is one of THE BEST applications out there when it comes to working with different versions. Example..., last week at work I had to pull an old project for a Seattle Mariners baseball spot. The project was from 2008, done on PT 7.4HD. It opened like a charm, and I was able to import the needed tracks into a current project (Import Session Data) with zero hassle. Point being, you could use an old PTHD license running on an older frozen system and 'up your game' by being able to deliver surround PT sessions with relatively minimal cost, and enjoy the benefits of a dual system when needed.

I got my PT10/11/12 HD combo license for $500. I mostly got it so I can work from home when need be (we're PTHDX at work). When my new iMac Pro gets here (10 days!!!!) I'll take my current machine (2009 Mac Pro) and use it as a dedicated PT machine. If you need low latency for tracking purposes, just grab an old 192. If you're patient you could probably set up an older PTHD system for around $1K, and that would include the computer, a PTHD license, and a 192.


----------



## Geoff Grace (Mar 4, 2019)

This vlog from @christianhenson seems worth adding to this thread if for no other reason than it provides a video example of Pro Tools being used in conjunction with Logic Pro X:



Best,

Geoff


----------

