
[VIDEO] CPU Performance vs. Real-Time Performance in Your DAW

And to the hardware purists out there - someone pointed out that EFI is used in place of BIOS these days. Yes, it is - but I'll probably be calling it BIOS for a long time to come :)
If you want to be really particular, it's UEFI (Unified Extensible Firmware Interface), and yes, I am sad enough to know that off by heart :/

For Macs, until recently (I believe they now use the full specification), Apple only used a custom EFI, as they were only implementing a subset of the EFI standard.

I do understand, however, that with Windows 8 they began to implement more of it, and as of the release of Windows 10 they use the full spec to support Secure Boot for Windows under Boot Camp.

Sad techy here, who lives in the basement ;)
 
What an awesome video Richard!
Thanks for making this subject clear enough for thick headed guys like me to understand!
 
Still the best information on the subject!!! I saw the videos on YouTube in the past, but I want to thank the author again for the insight.

Unfortunately, in real life - unless you have the budget/time (or a technician) to experiment - it's not so easy to implement:
1) You can't base serious decisions on specs alone - and that's assuming you can even get real specs for all the subassemblies/parts and how they're integrated.
2) Products appear and disappear continuously, and manufacturers change part sourcing/implementation during product lifecycles, etc.

The monitoring tools are a help - unfortunately Windows-only, AFAIK :(

BTW: I'd be willing to pay for an OS X real-time monitoring tool similar to those mentioned in the article/videos!!
 
One of the best technical explanation videos I have ever seen! Hope the author will make more on other topics.

Regarding OS X Macs, it's important to understand that OS X uses an entirely different mechanism to handle low-level drivers than Windows does. Windows uses something called the Deferred Procedure Call (DPC), which is tied to the fact that the Windows kernel is interrupt-driven. There are pros and cons to being interrupt-driven, but one of the cons is the symptom known as DPC latency. OS X does not use that mechanism at all. I'm not exactly sure what OS X does instead, but it does not use DPCs. So there is no point worrying about whether your Mac has high or low DPC latency, and no need for a DPC latency checker utility - it's simply not relevant. There is no such thing as DPC in OS X; it works a little differently.

That being said... there actually is a built-in command-line utility in OS X for monitoring latencies. When I run it, the numbers are all low enough not to cause any concern, so I don't know what OS X is doing differently - but knock yourselves out: it's called /usr/bin/latency. From the command line you can type 'man latency' to read more about it.

Much of this video is still extremely good information for musicians, regardless of whether you're using OS X or Windows, in terms of understanding that neither platform is a true real-time computer. The computer operates on buffers and gives an illusion of real-time operation, with the buffer latency in the sound card being what makes that possible. We make music in real time, but the computer is processing things and time-sharing different components of the system. Very well presented video, and applicable to both platforms, except for the DPC section.
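To put numbers on that "illusion of real time": the latency contributed by one audio buffer is just buffer size divided by sample rate. A minimal sketch (plain arithmetic, not anything from the video; the 44.1 kHz default is an assumption):

```python
# One buffer's worth of audio is the time window the CPU has to
# produce the next block of samples: buffer_size / sample_rate.

def buffer_latency_ms(buffer_size: int, sample_rate: int = 44100) -> float:
    """Duration of one audio buffer, in milliseconds."""
    return buffer_size / sample_rate * 1000.0

for size in (64, 128, 256, 512, 1024):
    print(f"{size:5d} samples @ 44.1 kHz -> {buffer_latency_ms(size):6.2f} ms per buffer")
```

So at a 1024-sample buffer the machine has roughly 23 ms of slack per block, while at 64 samples it has under 1.5 ms - which is why small buffers expose scheduling hiccups that large buffers hide.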
 
Amazing! Thank you so much for this. There has been talk in this thread of modifying one's RAM speed - but I watched the entire video and saw nothing on that topic.

Could someone enlighten me as to the "slow your RAM" theory?
 
RAM speed is not an issue for audio. Newer Ryzen CPUs may benefit from faster RAM, but the effect on real-time audio is marginal.
 
Finally got around to watching this. Totally great! It explains why my 10-year-old machine still runs circles around a lot of other machines: the DPC latency is way lower on that machine than on my newer ones.

Great reference to start building my new DAW.
 
Again, pointing out that several tests from multiple users (including a long Gearslutz thread containing a Logic project to test RTP) and a Sound On Sound article, etc., disprove the theory that multiple cores do not increase RTP. This is especially true for script-heavy Kontakt patches that use multi-core processing to handle non-audio-related instructions. Also, this video is even less relevant today, now that plugin and DAW manufacturers have optimized their software for multi-core use.

Would Apple and all specialized PC DAW Building companies lie about this without anyone discovering it? No.

Are there other factors that can affect RTP? Yes, there are. But disregarding those, multiple cores significantly increase Real Time Performance as long as the system doesn't have any other problems.

I linked the SOS article several years ago:

Also this, a mix between old and new stats:

The tests are conclusive.

Also, as several others in this thread have pointed out: this video has nothing to do with OS X. I did an extensive test with MainStage while programming sounds for a musical theatre show last year, which concluded that Kontakt inside MainStage (running single-core because of multi-core utilization conflicts between MainStage and Kontakt) roughly doubled CPU usage compared to running Kontakt in multi-core mode outside MainStage - with the exact same patches. This also disproves the "multi-core doesn't affect performance" theory, and MainStage is very real-time sensitive since it's live playback. This was confirmed on 4 different systems.

I don't agree with the conclusion of this video, and there are plenty of sources to back that up. I can't say what the video author's specific template does for this theory, but in all my cases of building orchestral setups over the years, there is simply one conclusion:

More Cores = Better Performance.
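For what it's worth, the structural reason extra cores *can* help is that independent tracks are largely parallel work: each track's plugin chain can be rendered on its own worker and the results summed into the output buffer. A toy sketch of that shape (the `process_track` "plugin chain" here is just a hypothetical gain stage, not any DAW's actual scheduler):

```python
from concurrent.futures import ThreadPoolExecutor

def process_track(samples):
    # Stand-in for one track's plugin chain (here: a simple -6 dB gain).
    return [s * 0.5 for s in samples]

def render_buffer(tracks, workers=4):
    # Render each track on its own worker, then mix (sum) sample-by-sample.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        rendered = list(pool.map(process_track, tracks))
    return [sum(column) for column in zip(*rendered)]

tracks = [[1.0, 2.0], [3.0, 4.0]]
print(render_buffer(tracks))  # -> [2.0, 3.0]
```

Note this only shows the structure: Python threads won't actually speed up CPU-bound work because of the GIL, whereas real DAW engines use native threads, so the speedup there is real - up to whatever serial portions (buses, master chain) limit it.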
 
Unless you need low latency live - especially with a dense plugin stack or a really powerful synth plugin - then it doesn't.
 
Richard/anyone: I understand all the basic concepts in the video, and it makes sense. My question was in regard to the audio interface.

Is it possible that swapping in a newer interface using the same buffer settings (with DAW and all other hardware not changing) may improve how fast audio is processed and sent out of the speakers? That is, does the audio interface have a "clock speed" that may limit its ability to keep up with the amount of data coming in? Or is any working interface "fast" enough to process incoming buffer data such that the limiting factor will always be another component earlier in the chain locking up the CPU?

I run a Mac Pro 2010 2.93GHz 12-core with Digital Performer 10.11, 64GB RAM, internal SATA-2 SSDs doing most of the streaming, the stock GPU (Radeon 5770), and an old MOTU audio interface (828 mkIII hybrid with USB2/FireWire 800). On a large project (60 instrument tracks + 20 aux tracks, 8 VIs with ~4-6 instruments each, ~40 plugins, mostly EQ), using a 1024 buffer size for mixing, I will get spikes/audio pops occasionally. I'd be curious whether a newer interface would be faster/more efficient at processing incoming data, or whether the problem is more likely one of the SSDs, or maybe the USB2 spec not being enough to carry all the data. I also have a PCIe card for the SSDs and a PCIe card with NVMe available to use, but I'm not sure that would be the first thing to test. Would love to hear some opinions. Great thread!
 
When you're talking about running at a 1024 buffer size, I doubt the audio interface has anything to do with the CPU spikes and audio drops you're getting.

Do you run in USB2 or FireWire 800 mode, by the way? Did you try FireWire to see if it makes any difference? I doubt it does, though.

MOTU makes very good drivers for their interfaces. When you're talking about a large buffer, it's really not plausible that the audio card is the bottleneck. When you start going to smaller buffer sizes, then the bus itself could make a difference (i.e., USB vs. FireWire vs. PCI vs. Thunderbolt, etc.). The faster buses (PCI) can go to smaller buffer sizes without swamping the CPU because they are more efficient at getting the data to the audio interface itself. Pretty much any mainstream audio interface, such as the MOTU, can absolutely keep up with whatever USB is throwing at it - THAT (the MOTU) isn't the bottleneck. USB2 might be at lower buffer settings, but at 1024... I don't think it has anything to do with the CPU spiking and dropouts you're experiencing.

That would most likely have more to do with whatever plugins you're using.
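The large-vs-small buffer point above comes down to arithmetic: the driver must deliver each buffer before the previous one finishes playing, so any fixed per-transfer bus/driver overhead eats a much bigger share of the deadline at small buffer sizes. A quick illustration (the 0.5 ms overhead figure is made up purely for the example):

```python
SAMPLE_RATE = 44100
BUS_OVERHEAD_MS = 0.5  # hypothetical fixed per-buffer transfer cost

for buffer_size in (64, 256, 1024):
    deadline_ms = buffer_size / SAMPLE_RATE * 1000.0
    share = BUS_OVERHEAD_MS / deadline_ms * 100.0
    print(f"{buffer_size:5d} samples: {deadline_ms:6.2f} ms deadline, "
          f"overhead is {share:4.1f}% of it")
```

With those (assumed) numbers, the same overhead is about a third of the deadline at 64 samples but only a couple of percent at 1024 - which is why the bus matters at low latencies and is irrelevant at mixing-sized buffers.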
 
Your explanation seems to make sense, and I have always had good experiences with MOTU units. After posting, I actually tried playing back the project at a 512 buffer size (normally, once I'm in the mix stage I switch to 1024 out of habit, but I wanted to compare) and it still plays back with only one or two stutters over the course of a 10-minute track. Note that these are not necessarily clicks/pops but just playback errors, most likely due to too many VI tracks playing at the same time. This is especially true with Falcon, which loves eating CPU cycles. The only real problem is that sometimes making a simple change within the DAW software itself (e.g. turning the playback loop on/off) is enough to instantaneously spike the CPU and create a loud, short burst of static that overloads the master fader. With SPAN open, it freezes the display in a "white-out" state until I reset it by bypassing and re-enabling. Aside from being annoying, I worry that this might damage the speakers, even though I usually keep the monitor level fairly low.

It seems that this specific problem may have more to do with the software and the DAW being pushed to their operational limit, the result being a sharp audio burst. I am running lots of Ozone 9 EQ and dynamics inserts, and I first experienced these spikes when bypassing/enabling a particular module, so I thought it might be an issue with the Ozone software. Overall (even at a 512 buffer) the performance meters sit around 50%, but a change of state (bypass/enable, loop, etc.) is enough to momentarily spike everything. But it may just be too many VIs and plugins running rather than the fault of any single piece of software or hardware.

(And the interface is using FireWire, with nothing else sharing the bus.)
 