
PC specs for lowest possible latency?

I've got a D15 and a 9900K running at 4.7 GHz on 1.20 V, with idle temps at 35°C. Case is a BeQuiet Dark Base 900. It sits closer to me than my old cheese-grater Mac Pro and runs just as quietly. I'm sure I could bump up the voltage and get it a few hundred MHz higher, but 8 hyper-threaded cores at 4.7 GHz? That's not really something to complain about, and these are definitely safe, long-term voltages. I ran a bunch of Cinebench runs and some other CPU-crusher type tests to make sure it was stable at full load. The fans get noisy and temps climbed from the high 70s to just over 80°C as the load persisted, but it stayed stable. Used as a DAW, it never hits 100% utilization. These CPUs run out of real-time capacity long before they run out of absolute capacity. I can hear the fans kick up when it is rendering a full mockup, but outside that, it just does its thing. If I took some things off of it and put them on sample servers, I'd get back some of my real-time headroom. It is a lot to ask one box to process a ton of dense MIDI data, soft synths, mixing, networking, etc. Eventually it just says, "Enough!" no matter how many GHz you have.
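That distinction between real-time and absolute capacity comes down to the audio callback deadline: the DAW has to finish processing every buffer within one buffer period, every single time, so average CPU utilization is a poor guide. A minimal sketch of the math (48 kHz and the listed buffer sizes are illustrative assumptions, not anyone's actual settings):

```python
# Sketch: why a DAW runs out of real-time capacity before absolute capacity.
# Each audio callback must finish within one buffer period, every time;
# a single overrun causes a glitch even at low average CPU utilization.

def buffer_period_ms(buffer_samples: int, sample_rate_hz: int = 48_000) -> float:
    """Hard per-callback deadline, in milliseconds."""
    return 1000.0 * buffer_samples / sample_rate_hz

for size in (64, 128, 256, 512, 1024):
    print(f"{size:5d} samples -> {buffer_period_ms(size):6.2f} ms deadline")
```

The point of the sketch: the deadline is set entirely by buffer size and sample rate, so no amount of extra GHz changes it; extra speed only changes how much work fits inside it.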

That said, Mishabou's report is the second I've heard of someone running giant Xeon systems with 512GB RAM. One of the composers on Spitfire's "Cribs" in Hollywood had also switched to that setup.

The trouble, of course, is that none of us knows how other people work, how their templates lay out, and how much load they really put on the box, and in what way. How dense is the CC data? How many plugins, nested busses, etc.? These things all really matter to the loads that sample servers and DAWs experience. The benchmarks give directional guidance, but I think there is a lot of variation in how people use this stuff, even when it sounds like we are doing similar things with similar vendors. For example, chillbot and I both have digital mixers. He mixes on them. I just do headphone mixes and auxiliary I/O on mine, and VEP audio comes back into the box. This puts a very different load on my DAW CPU than his. I think it is easy to talk in general, and expensive to get specific experience (it takes a while to get a template and VEP boxes working in optimal condition).
 
So with a D15 and a good case, would it be doable?
Also, does any good 2666 memory work for this, or do you have to go higher?

I'd go with good 3000 MHz memory, like a HyperX CL14 kit. It can probably be overclocked to 3200.

The D15 is a great cooler for sure (I have one, with both 12cm fans mounted), and a good silent/soundproofed case like a Fractal Design should do well.
 
IP over Ethernet, or just plain Ethernet (pick your poison), is the most promising way to get high channel counts at low latency at a reasonable cost. MADI is expensive and complex, PCIe cards are old news, and Thunderbolt and USB-C are promising but are even smaller niche players than Dante.

In a few years, the idea of using multiple Lightpipe-based solutions will seem as silly as magnetic tape seems today.

...you mean, the way I work? ["old man" chuckle, followed by extensive choking and phlegm / hacking]

This has been a great thread -- combined with some very helpful PMs from @Nathanael Iversen -- it's reminded me that I am putting up with latency that really is unnecessary, so I'll be on track to do something about it very soon.

Thanks everyone.
 
I run a large and demanding orchestral template hosted in VEP alongside Cubase on my master computer. A good chunk of my ASIO headroom is consumed simply by connecting to each VEP instance. (As an experiment for those running a similar-sized template, watch the Cubase ASIO meter drop as you disconnect each instance of VEP.) I have to run at 512 to use the number of insert effects etc. that I need.
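For a sense of scale, a 512-sample buffer translates into a fixed latency cost that no CPU upgrade removes. A rough sketch (the sample rates and the couple of milliseconds of converter/driver overhead are assumptions for illustration, not measured figures):

```python
# Sketch: approximate round-trip latency implied by an ASIO buffer size.
# Round trip = input buffer + output buffer, plus an assumed fixed
# overhead for converters and drivers (a made-up 2 ms here).

def round_trip_ms(buffer_samples: int, sample_rate_hz: int,
                  converter_overhead_ms: float = 2.0) -> float:
    one_way = 1000.0 * buffer_samples / sample_rate_hz
    return 2 * one_way + converter_overhead_ms

for rate in (44_100, 48_000):
    print(f"512 samples @ {rate} Hz -> ~{round_trip_ms(512, rate):.1f} ms round trip")
```

That 20-plus milliseconds is roughly the threshold where playing soft instruments live starts to feel sluggish, which is why dropping from 512 to 256 or 128 matters so much for tracking.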

I can’t help but think I would get better latency (and the ability to use more CPU intensive VST effects) by offloading *all* of my VST instruments / VEP instances to slaves, and just use my master computer for effects and audio etc, not hosting any instruments.

What do you all think of this hypothesis?

Has anyone flirted with Intel NUCs as a cost-effective way of setting up slaves purely to stream samples from VEP?

Thanks
 
It is almost certainly a true hypothesis. Insert effects trash the CPU on the DAW. The other thing you can do is write with a minimal set of things, then render out to audio and pull it back into a version of your template with all the plugins in. Then your system doesn't have to do it all.

Some people pre-mix stuff out on the VEP slave machines for exactly this reason and load their plug-ins there.

I don't. The slaves are as simple as possible, and I'll buy more CPU if I need it, or even mix separately on the DAW.

I don't know what "large and demanding" implies. I've got ~600 tracks here, but I know there are others with 2-7x that amount. You may be crushing your DAW much more than I am, though I do give it a workout.

Haven't done NUCs. Have thought about it, but I've got the cases and power supplies already, so it's just motherboards and RAM to upgrade, and I never end up doing it.
 
Thanks Nathan

I know that NUCs typically don't have the same CPU grunt, but if all they are doing is streaming, they wouldn't need to, I expect.

I figure, instead of upgrading my Master comp, for the same price I could get 2 or 3 NUCs, offload all VST instruments, drop my latency and delay upgrading my main computer by a couple more years.

Or so the theory goes...
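The theory can be sanity-checked with back-of-envelope numbers: streaming samples is mostly a storage and RAM problem, not a CPU one. A rough sketch (the voice count, stereo voices, and 24-bit/44.1 kHz format are illustrative assumptions):

```python
# Sketch: raw data rate of continuously streaming sample voices.
# Streaming load is dominated by disk/RAM throughput, not CPU cycles,
# which is the case for modest machines as dedicated sample servers.

def streaming_mb_per_s(voices: int, sample_rate_hz: int = 44_100,
                       bit_depth: int = 24, channels: int = 2) -> float:
    bytes_per_sample = bit_depth // 8
    return voices * channels * sample_rate_hz * bytes_per_sample / 1_000_000

# Even a heavy moment -- say 512 stereo voices at once -- stays well
# within what a single modern SSD sustains.
print(f"{streaming_mb_per_s(512):.0f} MB/s")
```

This ignores disk-cache preload, seek patterns, and per-voice envelope/filter work (which do cost some CPU), so treat it as a lower bound rather than a spec.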
 
Strings, winds, and percussion are all on slaves. SampleModeling Brass is on the DAW. Soft synths are on the DAW. I do have some ancillary libraries on the DAW (Metropolis I & II, etc). DAW is 64GB - with the full template loaded it has about 30GB used.
 
The 9900k is a beast! I have been finding it hard not to recommend it based on its price, core count/speed, and performance.
 
To directly answer your question, "Specs for low latency":
- I turn off all ASMedia controllers in the BIOS and only use the ports supported by the Southbridge chip.
- I turn off all onboard sound in the BIOS, and any other hardware controllers that I can.
Beyond that, it's the CPU/MOBO/RAM dimension.

Question: how is the performance of your rig affected with the ASMedia controllers and other sound drivers enabled/disabled? While I am all for disabling extraneous processes and devices if they are not going to be utilized, with the recent gear on the market, I have found myself hard pressed to go through these sorts of steps lately. Recently, I have been mulling over some of the performance stats from recent builds, and I have found myself asking, "Do I really need to be going through all these extra steps?" With budget builds, yes, I still see enough of a gain to hit the tweaking and optimization train hard; however, when the budget allows for good kit, I just have not been noticing enough of a gain, at this time, to justify the extra work.

Curious about others' experiences in this regard.
 
I've spent so many hours troubleshooting extraneous stuff that I don't even test it anymore - I do it before the first time I boot the system into Windows. Besides, if I'm not going to use it anyway, only good things can happen. One less driver is one less driver.
 
Insert effects trash the CPU on the DAW

Nathanael, do you just mean they eat CPU because you're not sharing them, or is there something about inserting an effect that causes it not to use cores efficiently?

There are issues like that (with workarounds) in Logic, but I've never noticed that in VE Pro.
 
I've spent so many hours troubleshooting extraneous stuff that I don't even test it anymore - I do it before the first time I boot the system into Windows. Besides, if I'm not going to use it anyway, only good things can happen. One less driver is one less driver.
Ya, that's pretty much what I do, haha. I was just curious. Maybe I'll find someone OCD enough to document all that stuff in detail.
 
Nathanael, do you just mean they eat CPU because you're not sharing them, or is there something about inserting an effect that causes it not to use cores efficiently?

There are issues like that (with workarounds) in Logic, but I've never noticed that in VE Pro.

Only that as you add longer paths for the audio to go through, more work has to be done before the audio is ready to go out to the interface. I have a template for "rock band" kind of work. There's a lot of bussing, FX, etc., but relatively few tracks compared to my orchestral template. It runs without any issue in that context.

If I import those tracks and busses to my orchestral template, my CPU load doubles - even with nothing playing.

Obviously, modern processors are extremely capable multi-taskers. If you open Task Manager, you'll find over 10k threads all running "at once" on your DAW. It is amazing that it works at all. But if you have a lot of heavy FX and complicated bussing, you can definitely lower your load by simplifying the routing and FX layers. That's my only point.

In part, this has to account for the significant success of UA and their DSP accelerator products....
 
as you add longer paths for the audio to go through, more work has to be done

Right.

As to UA, this sounds like a redux of the big early-2000s issue: whether host-based audio could replace systems with add-on DSP. I'd actually given Digidesign my credit card to update to the latest Pro Tools TDM system, but I got cold feet while they were backordered and decided against it. It was a very good decision.

(No dis to UA, of course! I'm just saying that we've been in a totally different world for over 15 years.)
 
Also, I have read that each "audio path" has to run as a single thread, so the more you buss things and then buss the busses, the harder that thread has to work, and that thread runs on ONE core at a time by definition. This is why the single-core speed of a CPU is so important to us. We run "demanding" threads. If any one thread gets too complex, its real-time needs can't be serviced anymore. I'm not a coder, so I can't say this is true, but it makes intuitive sense.
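A tiny sketch of that intuition (the per-plugin costs and the 512-sample/48 kHz deadline are invented numbers, purely to illustrate why the depth of one signal path, not the total core count, can be the constraint):

```python
# Sketch: effects on a single signal path run serially, so their
# per-buffer costs add up on one core; extra cores don't shorten
# this path, only a faster core (or a shallower path) does.

BUFFER_DEADLINE_MS = 1000.0 * 512 / 48_000  # ~10.67 ms per callback

def chain_time_ms(plugin_costs_ms):
    """Total per-buffer processing time of one serial effects chain."""
    return sum(plugin_costs_ms)

path = [1.5, 2.0, 3.0, 2.5]   # channel insert -> buss -> buss -> master
used = chain_time_ms(path)
print(f"{used:.1f} ms of {BUFFER_DEADLINE_MS:.2f} ms budget -> "
      f"{'OK' if used <= BUFFER_DEADLINE_MS else 'dropout'}")
```

With these made-up numbers the chain just fits; add one more heavy plugin anywhere in the path and the callback blows its deadline, no matter how many idle cores the machine has.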
 
each "audio path" has to be run as a single thread,

That's what I was getting at when I mentioned Logic's quirks.

And it's why I'm intuitively skeptical of machines that have fewer cores, or more cores at lower GHz.

And why I'm skeptical of computer benchmarks in general, even when they're testing the number of reverbs you can run (because we use computers in a unique way).
 