# Do I actually need more CPU cores / threads for music software??



## barteredbride (Apr 10, 2020)

I mean, do CPU-hungry soft synths, DAWs, 1,000-track templates, multiple reverb effects, sample libraries, sound cards etc. actually utilize the extra cores and threads in CPUs?

I always intended to upgrade my AMD Ryzen 1700 CPU (8 cores, 16 threads) and I was thinking of swapping it for the AMD Ryzen 9 3900X (12 cores, 24 threads), but now I'm not so sure.

Obviously this is a big upgrade in CPU terms with the newer technology, but is it likely to make that much difference with music software??

Does having a super fast CPU bring any other benefits to music production??

I'm on Cubase 10 with 64 GB RAM, using M.2 drives.

Thanks!!


----------



## d.healey (Apr 10, 2020)

> I mean, do CPU hungry soft synths... actually utilize the extra cores and threads in CPUs ?



Some do, some don't. Kontakt does.


----------



## dzilizzi (Apr 10, 2020)

Whenever you ask yourself "Do I need....?" you know the answer is "Yes"  

That said, I know there's currently a limit to how many cores DAWs and VI players can utilize. If you buy right up to that limit, you may be limited in the future, so maybe go slightly higher? I read that Cubase's limit was 24?


----------



## Wunderhorn (Apr 10, 2020)

Most DAWs - or at this point I would dare say pretty much all major DAWs - utilize multiple cores, and so does Kontakt. So yes. And if you are hosting plugins in VE-Pro the load sometimes gets distributed even better.
For large templates a stronger CPU is definitely going to be of great benefit. That, plus adding RAM and using (fast) SSD drives.


----------



## barteredbride (Apr 11, 2020)

Cooool. Cheers all for your thoughts.

Overall I think it's gonna make the PC run much faster, and as developers look to utilise more cores (to keep up with competitors more than anything), it's gonna be worth it over the next year or so.

This latest Ryzen series still works with the AM4 socket.


----------



## rgames (Apr 11, 2020)

barteredbride said:


> actually utilize the extra cores and threads in CPUs ?


Yes. But, in general these days, CPU is not the bottleneck.

Do you currently hit 100% CPU usage (not ASIO usage)? That's really rare, but if so, more cores might help. If not, they're unlikely to help.

I think the sweet spot for a DAW these days is around 8 cores at 4+ GHz. I haven't seen much benefit beyond that.

In truth, I haven't seen much benefit for 8 cores over 6 (or even 4 for nearly all projects). I have some intense benchmark projects that I've kept over the years and they all run at the same latency (~6 ms) on a 4-core, 6-core, 10-core and 14-core processor. The CPU usage goes down with number of cores but I'm not CPU-limited on any of them, I'm ASIO (real-time) limited. With the computer in a machine room, you can't tell which one you're using while working.
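For reference, the latency figures above follow directly from buffer size and sample rate. A minimal sketch, where the 44.1 kHz rate and the buffer sizes are illustrative assumptions:

```python
# One-way latency of an audio buffer in milliseconds:
# latency = buffer_samples / sample_rate.
def buffer_latency_ms(buffer_samples: int, sample_rate: int = 44100) -> float:
    return 1000.0 * buffer_samples / sample_rate

for buf in (64, 128, 256, 512):
    print(f"{buf:4d} samples -> {buffer_latency_ms(buf):.2f} ms at 44.1 kHz")
```

At 44.1 kHz a 256-sample buffer works out to roughly 5.8 ms, which is in the ballpark of the ~6 ms figure quoted above.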

In my experience, people hit ASIO bottlenecks long before CPU bottlenecks these days.

rgames


----------



## Nick Batzdorf (Apr 11, 2020)

This is on my Mac Pro 12-core (24 virtual cores). I don't have a lot of FX plug-ins active, but I do have a whole lot of instrument ones playing inside Logic:

*[screenshot attachment]*


----------



## dzilizzi (Apr 11, 2020)

What would be interesting to see is comparisons with number of tracks being used, number with effects, and number with active VI's to see how these compare. I currently have never used more than maybe 30 active tracks, though most have been VI. But I know some here may use over 100, so that might make a difference in how things run. Just a thought to include that info.


----------



## Nicola74 (Apr 11, 2020)

rgames said:


> I'm not CPU-limited on any of them, I'm ASIO (real-time) limited


Could you explain a little bit more what the difference is? Thanks!!


----------



## dzilizzi (Apr 11, 2020)

Nicola74 said:


> Could you explain a little bit more what the difference is? Thanks!!


I think he means his audio interface is the bottleneck for him. Neither CPU nor RAM is maxing out, but his audio interface can't handle all the tracks. Probably something I wouldn't notice, since I've never had the chance to run one of those big boards that can handle a lot of tracks.


----------



## shomynik (Apr 11, 2020)

Yeah you do.


----------



## Nick Batzdorf (Apr 11, 2020)

The point I and others have made is that some software uses those cores.

Richard is saying that the resource he runs out of isn't CPU, it's streaming voices.

I personally don't run out of either no matter how hard I try, but I don't load lots of mic positions.


----------



## rgames (Apr 11, 2020)

Nicola74 said:


> Could you explain a little bit more what the difference is? Thanks!!


It's like driving a car through Manhattan. Will you drive faster in a Ferrari or a Toyota? Neither - you'll travel at the same speed because you're not limited by what the car can do. You're limited by all the crap that gets in your way.

CPUs don't handle all the work in a computer and they spend a huge portion of their time waiting for other elements of the system to do their jobs. Handling network traffic, updating the screen with mouse movements, moving the audio buffer into your audio card and thousands of other tasks all "get in the way" of the CPU crunching numbers.

Load up a really taxing project and play it. Now take a look at the CPU usage meter (not the ASIO meter in your DAW, the CPU meter provided by the OS). Odds are it's pretty low unless your computer is 10 years old. It's low because CPUs are so fast nowadays that the "crap in the way" is the limiting factor and your CPU spends most of its time doing nothing.

If you add more cores, they'll spend more time doing nothing as well 

It's certainly true that more cores provide value for some tasks. Video rendering and computational physics are good examples. The difference there is that those are not "real-time" tasks, and real-time tasks in consumer computers (like keeping an audio buffer full) are rarely limited by CPU power these days.

You can certainly create DAW benchmark projects that max out the CPU but they have something like 500 compressors, so it's not a reflection of how we actually use a DAW. So I'm not sure what meaning that provides. I've not seen an actual DAW project that maxes out a CPU in years.

But it doesn't matter what I've seen - you can check for yourself. If you constantly run into CPU limitations (not ASIO limitations) then more cores might help. But that's pretty rare these days.

rgames


----------



## Dewdman42 (Apr 11, 2020)

More is always better than less.

But don't compromise clock speed in order to have more cores. Prioritize clock speed over core count.

So more cores can be helpful. I'm quite sure I am benefiting from 12 cores on my 2012 updated MacPro. But the clock speed and instruction set on my older Mac are not that fast, even after upgrading the CPU to the fastest one that was ever available. So I really need as many cores as I can get.

More cores means more potential processing. We don't always need all that much for audio work, but what we do need is a faster clock speed in order to get lower latency and handle CPU intensive plugins on a live track.

So prioritize clock speed....then if you want more cores...and you have the $$ for it...then knock yourself out...more is better... But only if you are not compromising clock speed in order to get it.


----------



## Nick Batzdorf (Apr 11, 2020)

rgames said:


> It's certainly true that more cores provide value for some tasks. Video rendering and computational physics are good examples. The difference there is that those are not "real-time" tasks, and real-time tasks in consumer computers (like keeping an audio buffer full) are rarely limited by CPU power these days.




Well, look at the screen dump I posted above. That's real time, most of the cores are working, and more will when I add plug-ins while mixing.

By the way, that's on an 11-year-old computer, the same one Dewdman has: 3.46 GHz x 12 cores. It's set at a 64-sample buffer.


----------



## Nicola74 (Apr 12, 2020)

Then which component (or components) of the DAW manages the real-time side? I presume not only the audio interface, nor only the CPU... thanks again


----------



## Tim_Wells (Apr 12, 2020)

Nicola74 said:


> Then which component (or components) of the DAW manages the real-time side? I presume not only the audio interface, nor only the CPU... thanks again


If the CPU is not really the bottleneck, then this ^ is the big question. 

I don't know the answer, but I would imagine things like the motherboard, the graphics card & drivers (if applicable), and the audio interface & drivers would be critical.


----------



## dzilizzi (Apr 12, 2020)

I have a not so new Scarlett using a USB 2. I see now they have USB 3. I wouldn't be surprised to find it is a bit of a bottleneck, even with proper drivers. I'm also going to guess that using ASIO4ALL as a driver, which I have in the past, has contributed to a lot of the issues I used to have.


----------



## onebitboy (Apr 12, 2020)

dzilizzi said:


> I have a not so new Scarlett using a USB 2. I see now they have USB 3. I wouldn't be surprised to find it is a bit of a bottleneck


USB 2.0 is not a bottleneck. It supports 480 Mbit/s. A 24 bit 96 kHz stereo stream is only around 4.4 Mbit/s.
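The arithmetic behind that figure, as a quick sketch (computed in decimal megabits; the ~4.4 Mbit/s figure above comes from using binary megabits instead):

```python
# Bit rate of an uncompressed PCM stream:
# channels * bit_depth * sample_rate, here expressed in decimal Mbit/s.
def stream_mbit_per_s(channels: int, bit_depth: int, sample_rate: int) -> float:
    return channels * bit_depth * sample_rate / 1e6

stereo_24_96 = stream_mbit_per_s(2, 24, 96000)
print(f"24/96 stereo: {stereo_24_96:.2f} Mbit/s "
      f"({stereo_24_96 / 480:.1%} of USB 2.0's 480 Mbit/s)")
```

So a 24-bit/96 kHz stereo stream uses about 1% of the USB 2.0 budget, even before considering that real interfaces add some protocol overhead.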


----------



## dzilizzi (Apr 12, 2020)

onebitboy said:


> USB 2.0 is not a bottleneck. It supports 480 Mbit/s. A 24 bit 96 kHz stereo stream is only around 4.4 Mbit/s.


Did not know that. So no hurry to upgrade to one of the newer models. Thanks


----------



## rgames (Apr 12, 2020)

Nicola74 said:


> Then which component (or components) of the DAW manages the real-time side? I presume not only the audio interface, nor only the CPU... thanks again


The three biggest culprits in consumer PCs are audio cards, video cards and network cards (i.e. the things that operate in real time). After that, chipsets can have an effect but they're usually secondary. Dual-CPU setups are usually bad because there's a lot of real-time performance hit associated with managing traffic between the two CPUs. But those are pretty rare these days.

Components that operate in "real-time" are those that have some time limit on when they have to respond. Your monitor has to update once every 1/60 of a second (if you're running at 60 Hz). If the CPU doesn't crunch the numbers in that time period, you get garbage on the screen. Even if the computer could crunch 100 frames worth of data in a couple seconds it doesn't matter because the missing frame is a problem right now. And the next 100 frames can't be put out to the monitor any faster than 60 times a second. Likewise with audio - in order to keep the sound coming out of the speakers the CPU has to fill the audio buffer on a regular interval. If it gets interrupted (e.g. like the Ferrari moving through Manhattan) then you'll hear pops/clicks regardless of how powerful the CPU is because it's not the CPU that's holding you up.

Compare that to a non-real-time process like video rendering. The CPU can crunch the data whenever it gets the chance. If it can only render 10 frames in this second but 10,000 frames in the next second then that's great. It can dump those 10,000 frames to disk right away and keep rendering. Your audio card (or video card) can't do anything with the ability to sometimes render 10,000 audio buffers (even ignoring the fact that it sometimes has to wait for user input like from a MIDI keyboard). It has to render exactly one buffer in the time frame set by the sound card. It can't "make up" for a missed buffer by rendering 10,000 more in a second or two. With video rendering (and other non-real-time processes) you can "make up" lost time. But not with audio processing.
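The distinction can be sketched in a few lines; the buffer size, sample rate and per-buffer render times below are made-up numbers for illustration:

```python
# A real-time task has a hard deadline per buffer: each audio buffer must be
# rendered before playback needs it, every single time. Average speed is
# irrelevant; one late buffer is one audible glitch.
SAMPLE_RATE = 48000
BUFFER_SAMPLES = 256
deadline_ms = 1000.0 * BUFFER_SAMPLES / SAMPLE_RATE   # ~5.33 ms per buffer

render_times_ms = [1.2, 0.9, 6.1, 1.0]   # hypothetical per-buffer render times
glitches = sum(t > deadline_ms for t in render_times_ms)
avg_ms = sum(render_times_ms) / len(render_times_ms)
print(f"average {avg_ms:.2f} ms is well under the {deadline_ms:.2f} ms "
      f"deadline, yet {glitches} buffer still glitched")
```

The average render time here is far below the deadline, but the one slow buffer still pops, which is exactly why a CPU meter showing low average usage doesn't rule out dropouts.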

That's why CPU usage is so low these days - the CPU is mostly sitting around waiting for other things to do their jobs.

EDIT: this is also why CPU benchmarks are mostly meaningless for DAW use. They don't test real-time performance.

rgames


----------



## Dewdman42 (Apr 12, 2020)

Video cards and audio cards are not the bottleneck. As you said, they update on a regular clock, they are very good at doing so, and nothing in a video card will cause the rest of your system to be blocked. On the other hand, the CPU, memory and other components have to keep the audio and video buffers filled on a schedule. That schedule is not hard real time, but there is still a deadline: the CPU and other core processing need to deliver 1/60th of a second's worth of video content to the appropriate buffers in no more than 1/60th of a second, for example. If they can't, you might see garbage on the screen or get an audio dropout. But the bottleneck was definitely the CPU, not the video or audio card.

Your CPU is most definitely the most concerning bottleneck on your system today. Other components such as memory and SSD/HD storage can also become bottlenecks.

Regarding USB2 and USB3: USB3 is really not any faster than USB2, but it's wider - more data at once. So it takes just as much time to get the data in and out of a USB2 or USB3 interface, but USB3 can do more at once. It can handle 48 channels, etc., while USB2 might be limited to fewer simultaneous channels. One of the reasons a lot of audio manufacturers have not bothered with USB3 is that the vast majority of audio users don't need so many channels at once. Some studios perhaps, but most home studios don't need it.

On the PC platform there can be certain devices which don't play well with the interrupt-driven system of Windows, and this can cause them to block other components in the system from doing their job in a smooth way. Those devices need to be avoided. This problem does not appear to happen on OSX.


----------



## Dewdman42 (Apr 12, 2020)

Regarding the CPU utilization...

As stated, if CPU utilization reads, for example, 70%, that means 30% of the time the CPU is sitting around waiting for other components to give it data to work on. That could be slow hard drives, slow memory, or an app that simply doesn't need to keep the CPU busy 100% of the time.

But nonetheless, you still have a situation where faster can help. Other components are also waiting on the CPU to do whatever it does every time it is active. So the faster your CPU, the less waiting other components will have to do. And it all helps. Yes, bringing 70% utilization down to 60%, for example, could bring a tangible improvement in what the system is capable of delivering in terms of low latency, track counts and the rest.

Think of it like this: race car drivers... one guy has an engine that can go from 0-60 in 3 seconds, and the other guy can go 0-60 in 2 seconds. Half the time they are braking in the corners, so the engine is only being used 50% of the time. But the guy with the faster acceleration will win the race.

A computer system has a lot of complex interactions - many components interacting in a non-real-time fashion, waiting on each other, communicating with each other when not blocked, etc. Basically, the faster your CPU can do its job and get out of the way, the faster the other components, whether they are slow or fast, can get on with whatever it is they need to do.


----------



## rgames (Apr 12, 2020)

Dewdman42 said:


> Your CPU is most definitely the most concerning bottleneck on your system today.


Then show us a project that will *not* run without dropouts on one CPU but will run without dropouts on another with otherwise identical hardware and settings. I and many others have tried to make that happen and we can't do it as of about 10 years ago.

Not being argumentative - I'm genuinely curious.

rgames


----------



## Nick Batzdorf (Apr 12, 2020)

rgames said:


> Then show us a project that will *not* run without dropouts on one CPU but will run without dropouts on another with otherwise identical hardware and settings. I and many others have tried to make that happen and we can't do it as of about 10 years ago.
> 
> Not being argumentative - I'm genuinely curious.
> 
> rgames



I had to read that a couple of times.

Are you saying that replacing the CPU with a more powerful one will not make a project that gags the machine run?

If so, that means everyone should buy the slowest CPU that works on their motherboard, right? Reductio ad absurdum, but there are instruments that at least at one point could gag a slower single core, at least in Logic.

I remember Eric Persing posting about that here re: maybe a 2.2 GHz Mac Pro. He recommended the one above that.


----------



## trumpoz (Apr 12, 2020)

Check out the Facebook group DAWBench. 

Vin has been building DAWs for 20+ years and knows a lot more than most. 

If you are on Gearslutz, he goes by TAFKAT.


----------



## trumpoz (Apr 12, 2020)

dzilizzi said:


> I have a not so new Scarlett using a USB 2. I see now they have USB 3. I wouldn't be surprised to find it is a bit of a bottleneck, even with proper drivers. I'm also going to guess that using ASIO4ALL as a driver, which I have in the past, has contributed to a lot of the issues I used to have.



Efficiency of drivers is more important than connection protocol for audio performance. I'm on an RME Babyface... RME drivers are famed for their efficiency.


----------



## rgames (Apr 12, 2020)

Nick Batzdorf said:


> Are you saying that replacing the CPU with a more powerful one will not make a project that gags the machine run?


No, I'm saying that a project that gags on a 6 core CPU will also (likely) gag on a 10 core CPU. That's been my experience, anyway. I have projects that go back 10 years that gagged at less than 5 ms latency. That was on a four-core machine. On a 14-core machine, they also gag at less than 5 ms latency. No difference that I can see.



trumpoz said:


> Check out the Facebook group DAWBench.


The problem I have with DAWBench is that I don't know how to relate it to an actual project. Who puts 300 compressors in a project? The point of a benchmark is to be a common-reference proxy for something else. But what's the something else? It's like measuring how loud a car is. Sure, it's something you can measure but does it relate to performance? Maybe. But probably not, certainly not if it's an electric car. You need to show the link and I've not seen that link made for DAWBench.

An actual project shows you the reality of how CPU makes a difference. Not a non-musical project with 300 compressors. So I don't know what to make of DAWBench. I can't relate it to reality.

rgames


----------



## rgames (Apr 12, 2020)

Also, there's a decade's worth of posts on this forum and others where people ask for help with a new computer that performs no better (sometimes worse) than the old one. I've had that experience as well. That's because the CPU is rarely the bottleneck these days (again, from what I've seen - and I've not seen everything yet!).


----------



## dzilizzi (Apr 12, 2020)

trumpoz said:


> Efficiency of drivers is more important than connection protocol for audio performance. Im on an RME Babyface.... RME drivers are famed for their efficiency.


Yes I bought one last holiday season and was planning on using it for my mobile kit. But since I haven't been doing my usual work travel (not much else to do in a hotel at night), I've been thinking of connecting it up instead of my 6i6. It's just a pain to move everything.


----------



## Dewdman42 (Apr 12, 2020)

I am a huge fan of PCIe audio. It's one of the main reasons I'm still on a 2012 MacPro. This is where it makes the difference between having 10 ms or 5 ms of latency from the audio hardware and related drivers.

CPU power is more related to when you start building up tracks and plugin-stacks...running video at the same time, having GUI's open at the same time, etc.. All of those things can cause a system to get drop outs sooner with smaller buffer sizes, forcing you to bump up the size of the audio buffer, and then have lots of latency.

Latency is not a problem while mixing, though, mainly only while tracking. So it's really the live channels that matter here... and that is more a factor of clock speed than core count.

But more cores will just give the CPU breathing room when you have many tracks, and as I said, other things like Video, GUI's open, etc.

A system can easily get audio dropouts way before the CPU meter shows 100% utilization, by the way. But that doesn't mean that a faster CPU is not helpful.

I don't happen to think that a 28-core monster machine is really ideal for audio production. I can't imagine what kind of monster mix you'd have to be doing to use all that CPU. But maybe if you're using some software tech that is yet to come out that just needs a lot of CPU to do what it does... then you'd be glad you had more processing power in order to avoid dropouts with smaller buffer sizes, particularly while tracking parts on a live channel.


----------



## Nick Batzdorf (Apr 12, 2020)

rgames said:


> No, I'm saying that a project that gags on a 6 core CPU will also (likely) gag on a 10 core CPU. That's been my experience, anyway. I have projects that go back 10 years that gagged at less than 5 ms latency. That was on a four-core machine. On a 14-core machine, they also gag at less than 5 ms latency. No difference that I can see.



I've never run that test, but again, look at the screen shot I posted earlier. More has to be more - up to a point, and assuming you're using software that balances the load between the cores (such as Logic and VE Pro).

But I'm sure you're right that there are other bottlenecks, and if that's the problem, then of course cores won't help.

I agree about DAWBench and all benchmark stuff like that. The main reason is just that I find it annoying. Or I should say that I find it annoying when people prattle on and on about trophy computers, thinking that the more meaningless specs they quote the more intelligent they sound. 

For heaven's sake, add a cheap slave computer to handle your overflow! It wasn't long ago when we were excited about being able to load 1.5GB in an entire machine.

These kids today. Get off my lawn.


----------



## Mishabou (Apr 12, 2020)

rgames said:


> Yes. But, in general these days, CPU is not the bottleneck.
> 
> Do you currently hit 100% CPU usage (not ASIO usage)? That's really rare, but if so, more cores might help. If not, they're unlikely to help.
> 
> ...




How would you get around the ASIO bottlenecks ?


----------



## Nick Batzdorf (Apr 12, 2020)

Dewdman42 said:


> I am a huge fan of PCIe audio. its one of the main reasons I'm still on a 2012 MacPro. This is where it makes the difference between being able to have 10ms or 5ms of latency from the audio hardware and related drivers.



It's true that PCIe has been around for a while, and that makes total sense.

But I'm old enough to remember when thousands of dollars would go out the window every time Steve Jobs farted and there was a new card slot variation. I've gone through that enough times to have sworn never to buy another expensive card again.

The FireWire interface I have (Metric Halo 2882) still works absolutely fine after what, 19 years? I run it at a 64-sample buffer most of the time, never more than 128, and when push comes to shove don't even use direct monitoring.

They have an optional hardware update that among other things makes it connect by USB 3 or Ethernet, and I admire them a lot for coming out with it... although not enough to buy it (because it's $500 and the interface is worth $500).


----------



## rgames (Apr 12, 2020)

Mishabou said:


> How would you get around the ASIO bottlenecks ?


Other than drivers, you're pretty much stuck because 99.9% of computer users don't care about real-time performance, so none of the hardware and OS manufacturers give it much thought (other than those dedicated to audio). Buffer sizes used in YouTube and video editors are huge and are not a practical issue. There are certain chipsets that are better than others (back in the day there was a certain TI Firewire chipset that showed somewhat better real-time performance than others). And certain audio drivers are better (RME comes to mind). Occasionally video/network drivers will screw things up temporarily but they usually get fixed in the next release, so it usually pays to avoid updates if you're in good shape. AMD/Radeon has a reputation for better drivers for audio computers but I've not had that experience. I've used both Radeon and NVidia and have had success and failure with both.

But the truth is that you pretty much get what you get. And pretty much all of it is "good enough" these days. CPU power and latency and other such issues were real problems back in the day. Not any more.

At least not that I've seen. Hopefully @Dewdman42 will post that example project that shows us otherwise 

rgames


----------



## rgames (Apr 12, 2020)

Nick Batzdorf said:


> I've never run that test, but again, look at the screen shot I posted earlier.


I guess I'm not clear what you're showing. It shows what I'd expect - you're not CPU limited. You have tons of headroom. That's consistent with what I've seen for many years.

rgames


----------



## Dewdman42 (Apr 12, 2020)

rgames said:


> Hopefully @Dewdman42 will post that example project that shows us otherwise



don't hold your breath


----------



## rudi (Apr 13, 2020)

As @rgames explained with a very good analogy, it's like fast cars being held up by bottlenecks in traffic. It all boils down to the slowest element in your chain. What that bottleneck is depends on a number of different factors:

- Graphics - modern computers have dedicated or built-in graphics processing units (GPUs) and DAWs are fairly undemanding in that regard, so it's unlikely to be a bottleneck.

- Disk drives - SSDs are pretty fast these days. Depending on how many tracks of audio or sample voices you need, you may hit an overload. Both audio tracks and samplers (like Kontakt) use chunks of computer memory (RAM). They pre-load parts of the audio or samples from disk into memory so the sound can play straightaway whilst the next chunk is loaded. Generally speaking, the more RAM you have, the more chunks of audio can be loaded and played at any one time. If you exceed the throughput of your disk drive, you will start to get gaps in the audio and bad distortion.

- Audio cards use a similar technique to the above. They use buffers (another name for a chunk of audio) to play the audio whilst receiving the next chunk of audio ready to play. Because most DAWs pre-mix all their tracks down to a stereo or surround mix, it is unlikely the card couldn't cope - with one exception... if they use a buffer that is too small, they might not be able to process the audio in real time and you'll get bad audio distortion. Some cards can cope with small audio buffers, e.g. 8, 16 or 32 samples, whilst others may need buffers as high as 128, 256, 512 or even 1024 samples to ensure a smooth flow. It also depends on the complexity of your project and how quickly your computer can process and send the audio out - see below:

- CPU - some tasks, like mixing audio, involve adding numbers together. Changing volume during a mix involves multiplying numbers together. Both these tasks are simple for modern CPUs and are unlikely to tax them too much. Other tasks such as virtual instruments and FX, e.g. EQ, reverbs (convolution reverbs more so) and a myriad other effects, require much more processing power. Some 'modelled' virtual FX or virtual instruments can be even more demanding.

So it depends very much on what your projects consist of:

- lots of audio tracks or sampler tracks - look for fast SSDs and sufficient RAM.
- lots of FXs and virtual synths - look for a fast CPU.

Of course all these elements interact, depending on what type of project you are working on.
The final link in your chain, the audio card, needs to be able to process whatever your DAW throws at it, however often it does so. It is unlikely to struggle unless its audio buffers are too small... in which case increasing the audio buffer size would lighten the load and ensure smooth audio, but at the cost of increased latency if you want to play virtual instruments or listen to FX in real time.
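A rough sketch of the disk-streaming arithmetic above. The voice counts, sample rate and bit depth are assumptions for illustration, and pre-load buffers and compression are ignored:

```python
# Back-of-envelope sustained disk throughput needed to stream sampler
# voices from disk: voices * channels * bytes_per_sample * sample_rate.
def stream_mb_per_s(voices: int, sample_rate: int = 48000,
                    bit_depth: int = 24, channels: int = 1) -> float:
    return voices * channels * (bit_depth // 8) * sample_rate / 1e6

for v in (100, 500, 1000):
    print(f"{v:5d} mono 24-bit/48 kHz voices -> {stream_mb_per_s(v):6.1f} MB/s")
```

Even 1,000 mono voices at 24-bit/48 kHz come to about 144 MB/s of sustained reads, well within a modern SATA SSD's reach but enough to choke a busy spinning disk.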


----------



## colony nofi (Apr 13, 2020)

There's a lot of food for thought here for OP. 

If I get the time in the next few days I'll chime in with some more info on real-time audio and what it means for CPU utilisation, DAW use and the like. Some comments here are close to how things work - but also don't quite hit the nail on the head. It turns out that it's complicated, and short of going deeper into how a CPU (and multi-core CPU architecture) works, it's hard to make it terribly clear.
It's all a dance. Some people, like @rgames, see little need to look for more cores. For some folk more cores really do help (but not at the expense of core speed!). It's also worth looking into the amount of work a single core can do in a single cycle. Different core depths and efficiencies can have a significant impact even on real-time audio usage. Thus one generation of CPU can see a 10% benefit over the previous at the same core count and frequency. However, as @rgames and others have said, the general benchmarks around rarely give the full story for DAW use. There have been occasions where a new CPU core architecture has seen worse real-time audio performance than previous generations, even if for most other things it is much faster. It doesn't happen *that* often - and generally over time there are ways around the limitations / new firmwares / updates that fix things to at least parity.

If you are really interested in this stuff, I do suggest following a lot of Vin's excellent commentary on the DAWBench Project. 

Also understand that only YOU will ever understand how you utilise a DAW. Many of us use things in very different ways, and this impacts greatly on how one experiences different types of computers. Some people see the 9900K as the bee's knees of a computer. Others need something more powerful (yes, it's not enough for many of my projects, even with all cores at 4 GHz). For those, something like a 10940 is a better chip, but probably not great value for many others here. Oh - and I still use a Xeon 8-core trashcan for many many projects which don't use some of my more esoteric workflows, and it's more than powerful enough... and a 9900K can beat it on almost all counts. Confusing yet understandable...


----------



## rgames (Apr 13, 2020)

Dewdman42 said:


> don't hold your breath


Don't worry I won't. I did the first 10 or 20 times I engaged in this conversation on this forum. But not for the last 100 or so.

All in good fun 

rgames


----------



## Dewdman42 (Apr 13, 2020)

Sure, it's all fun, though I think most people on this forum aren't that interested in the mumbo jumbo about computer architecture. They just want simple answers so they can get on with making music.

But still, the misinformation being spread here is not very cool. CPU speed should not be dismissed as irrelevant. It's very much relevant. Maybe you can mix your Kontakt projects just fine on 2010 technology and never need to upgrade, but it's only a matter of time before software upgrades create the need for more CPU power to keep up. Plenty of people out there run into audio dropouts that would be best solved with a more powerful CPU. That is not always the case, but it definitely can be the case.

Secondly, Personal Computers are not "real time" systems.

We get an illusion of real time performance only by virtue of the fact that their video and audio cards take buffers of information and feed them out to us at exactly the proper rate (ie, real time) of whatever frames per second video or sample rate audio. They do that very very well also...I think its highly unlikely that any video or audio card would not be able to do that at exactly the real time rate they are spec'd to deliver. Not faster and not slower. Real time is an exact replication of time as we experience it while watching a clock or listening to music.

You could say that the audio card itself is a "real-time" device. But it's also true that they all have ample real-time performance to do what they need to do. You are simply never going to find an audio card that can't keep up with the real-time performance it is spec'd for and capable of. If a card is spec'd for 192 kHz sample playback, it will dutifully do that every single time, with every buffer-load of data fed to it. The audio card and its real-time delivery of audio to your speakers is not the blocking factor, and the real-time performance of the audio card has absolutely no impact on the rest of the system. None.

Rather, it's always the rest of the system, the non-real-time part, that can starve the audio card by not filling the audio buffer fast enough.

Everything else that happens in a personal computer is NOT in real time. It's actually operating, most of the time, at rates that far exceed real time and have nothing to do with real time whatsoever, but in a start/stop fashion at lightning speed. And at that lightning speed of DSP and so on, various components do block each other, waiting for the other to respond. This happens routinely: every single microsecond the computer is running, there are components waiting on other components to send them information so they can process it. None of them is really sitting there running at 100%, full tilt, continuously.

And sometimes that includes the CPU, which causes other components to wait for it to crunch some data. Sometimes the CPU is waiting for other things; that's why you will rarely, if ever, see your CPU utilization pinned at 100%. But the CPU also makes other things wait for it while it does its job. So the faster your CPU, the shorter that wait time will be, which ultimately contributes to a system that fills the audio buffer in time more reliably.
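To put a number on that deadline: the time available to fill one audio buffer is just the buffer length divided by the sample rate. A toy sketch (the buffer sizes and 48 kHz rate are hypothetical settings, not tied to any particular DAW or interface):

```python
def buffer_deadline_ms(buffer_frames: int, sample_rate_hz: int) -> float:
    """Time available to compute one audio buffer before the card runs dry."""
    return 1000.0 * buffer_frames / sample_rate_hz

# Typical DAW settings: the smaller the buffer, the tighter the deadline
# the non-real-time side of the system has to hit, every single time.
for frames in (64, 128, 256, 1024):
    print(f"{frames:5d} frames @ 48 kHz -> {buffer_deadline_ms(frames, 48000):.2f} ms")
# 256 frames @ 48 kHz works out to about 5.33 ms per buffer.
```

Miss one of those deadlines, whatever the cause, and you hear a dropout; that is the whole game.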

It will depend entirely on how CPU-bound your computer activity is. If you leave GUIs open, they need more CPU. If you have lots of tracks and plugins, they need CPU. If you have other applications running in parallel, they all need CPU. The deeper your plugin stacks, the more likely the CPU is to starve out the other components, and then you need a faster CPU to keep working. And who knows what software upgrades will come out next year, or the year after, that require the CPU to do more work while it's taking its turn.

Just because your CPU doesn't say 100% on the meter doesn't mean that using a faster CPU won't improve the data throughput of the system. The meter is a vastly over-simplified view of what is happening, and a particularly bad one if it leads you to believe that CPU speed is irrelevant.

DPC latency is one interesting area, and it is also not real-time performance. It's an area where video cards, audio cards, and network cards can hog the PCI bus and block other things, causing them to wait too long. This has been an issue on MS Windows computers, not OS X; it's a fundamental part of the MS Windows interrupt-driven paradigm. Generally it becomes a problem when a poorly designed PCI card hogs too much time on the PCI bus and doesn't allow itself to be "interrupted" in a timely manner, causing other components to sit around waiting for it. All of that happens in the non-real-time realm of time and space too, but it is definitely the one way that a bad audio, video, or network card can leave the CPU blocked, waiting, and starved of data. In that situation a CPU would never hit its max utilization, and putting in a faster CPU would make no difference, because the DPC latency problem itself is the bottleneck. Some tests people have done in the past, where a faster CPU made no improvement, were probably hitting exactly this scenario. DPC latency was a particular problem in the past and is less so now, but it's always possible that someone will release hardware in the future that is poorly suited to audio work because it bottlenecks the PCI bus (on MS Windows).

However, if you choose the components for an MS Windows computer carefully, this should not exist as a "problem".
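As a crude illustration of the scheduling delays described above, you can measure how late a thread actually wakes up past a sleep it requested. This is only a rough proxy for the general idea of scheduling latency, not a real DPC measurement (which needs kernel-level tools like LatencyMon on Windows):

```python
import time

def worst_wakeup_overshoot_ms(interval_s: float = 0.002, iterations: int = 25) -> float:
    """Worst observed delay past a requested sleep, i.e. scheduler wake-up lag."""
    worst = 0.0
    for _ in range(iterations):
        start = time.perf_counter()
        time.sleep(interval_s)                      # ask the OS to wake us up
        lag = (time.perf_counter() - start) - interval_s
        worst = max(worst, lag)                     # keep the worst overshoot seen
    return worst * 1000.0

print(f"worst wake-up overshoot: {worst_wakeup_overshoot_ms():.2f} ms")
```

On a lightly loaded machine the overshoot is typically well under a millisecond; under heavy load, or with badly behaved drivers, it grows, and that growth is exactly the kind of delay that can make a buffer deadline slip.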


----------



## JohnG (Apr 13, 2020)

well, then; that's all settled.


----------



## Nick Batzdorf (Apr 13, 2020)

rgames said:


> I guess I'm not clear what you're showing. It shows what I'd expect - you're not CPU limited. You have tons of headroom. That's consistent with what I've seen for many years.
> 
> rgames



Yes, absolutely - I have tons of CPU headroom even on an 11-year-old machine with specs that people who run benchmark software rather than actual productivity programs laugh at. We agree 100%.

What I'm saying is that the load is being distributed among the cores, ergo there is a definite benefit to having them. That's the question that started this thread.


----------



## José Herring (Apr 13, 2020)

After reading this thread, I am more confused now than ever.

But from what I can gather: provided you have good components, more cores give you the ability to run more plugins simultaneously. Single-core speed, however, hasn't really improved terribly much, so if you have one instance of a resource-hungry plugin you will still get CPU spikes regardless of how many cores you have or how high the benchmark numbers are, because one instance of a plugin will still run on a single core.

So, if you're building a new computer, you have to weigh your needs based on how many plugins you want to run simultaneously and how many resources each instance of those plugins will take.

So, if you want to load up on, say, mixing plugins across hundreds of tracks, but each plugin instance is relatively light, then you'll want as many cores as you can afford, so that all your instances get spread out over as many cores as possible.

Now, I'm seeing a slight single-core performance hit as the number of cores increases. So if your jam is to make your one Omni patch as big and badass as possible for your entire track, then you may want more computing power per core, which would lead you to 8 cores rather than 16 or 32. Even the old i5 and i7 9600 series still have single-core performance that beats all AMD chips.

For composers that use a lot of samples, say with all your strings in one instance of Kontakt or Play, you'll want high single-core performance.

In conclusion, I would surmise that if you're not running a lot of samples or synths on your DAW machine, I'd go for lots of cores on that machine, because your DAW will be able to spread the load; and I would build one, two, or more slaves with VE Pro for samples and synths, using some of the older chips with high single-core performance, because it's more cost-effective and you'll get more bang for your buck.
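That trade-off can be sketched as a toy scheduling model (all the costs and the deadline are made up for illustration; real DAW schedulers are far more sophisticated): many light plugins spread nicely across cores, while one heavy instance is stuck on a single core no matter how many cores you buy.

```python
def mix_time_ms(plugin_costs_ms, n_cores):
    """Idealized per-buffer mix time: each plugin is placed on the currently
    least-loaded core, but a single plugin instance can never be split."""
    cores = [0.0] * n_cores
    for cost in sorted(plugin_costs_ms, reverse=True):
        cores[cores.index(min(cores))] += cost   # greedy least-loaded placement
    return max(cores)                            # slowest core sets the pace

DEADLINE_MS = 5.33                  # e.g. 256 frames @ 48 kHz

light = [0.5] * 64                  # 64 light plugins, 0.5 ms each
print(mix_time_ms(light, 6))        # 5.5 -> misses the deadline on 6 cores
print(mix_time_ms(light, 16))       # 2.0 -> fits comfortably on 16 cores

print(mix_time_ms([6.0], 6))        # 6.0 -> one heavy patch: cores don't help
print(mix_time_ms([6.0], 16))       # 6.0 -> still 6.0, only a faster core helps
```

Which is the whole argument in miniature: core count helps the "hundreds of light tracks" workload, single-core speed helps the "one monster patch" workload.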








PassMark CPU Benchmarks - Single Thread Performance (www.cpubenchmark.net): a chart of CPU single-thread performance built from thousands of PerformanceTest benchmark results, updated daily.


----------



## Nick Batzdorf (Apr 13, 2020)

Dewdman42 said:


> Just because your CPU doesn't say 100% on the meter, doesn't mean that using a faster CPU won't improve the data throughput of the system.



That is a good point, although there's a suspiciously high correlation between what you hear and the CPU meter spiking.

I don't recall having seen that on my current machine, but the reason I upgraded from my 3,1 Mac Pro three years ago was exactly that - I was maxing it out.


----------



## Nick Batzdorf (Apr 13, 2020)

josejherring said:


> But from what I can gather, provided you have good components, more cores gives one the ability to run more plugins simultaneously, but single core speed hasn't really improved terribly much, so if you have one instance of a resource hungry plugin you will still get CPU spikes regardless of how many cores you have or how high the benchmark numbers are, due to the fact that one instance of a plugin will still run on a single core.



I think that's right, although you have to remember that it's easy to say "gigahertz" without stopping to think that it's

ONE BILLION HERTZ

So one more is a billion more.

Now, the popular line is that GHz doesn't tell the whole story, but it seems to tell quite a lot of it.


----------



## Dewdman42 (Apr 13, 2020)

Nick Batzdorf said:


> What I'm saying is that the load is being distributed among the cores, ergo there is a definite benefit to having them. That's the question that started this thread.
> 
> I don't recall having seen that on my current machine, but the reason I upgraded from my 3,1 Mac Pro three years ago was exactly that - I was maxing it out.



You and I both needed a CPU upgrade. And we are both benefiting from all the cores too, because even our upgraded CPUs don't have the fastest single-core performance. But our old upgraded machines do quite well mixing down decent-sized projects precisely because we have 12 cores, and as fast as we could get them. Our machines don't compete well on the single-core side (i.e., the live channel). Most of the time it's fine, but every once in a while mine does get a little starved for CPU power. Recent updates to VSL instrument plugins have proven to be quite heavy on the CPU, for example.


----------



## rgames (Apr 13, 2020)

Nick Batzdorf said:


> What I'm saying is that the load is being distributed among the cores


Yes.



Nick Batzdorf said:


> , ergo there is a definite benefit to having them.


No.

If you already have plenty of margin, as you show, then more cores is just going to give you even more margin that, apparently, you don't need. So why bother?

Think of it like this: how strong does your coffee cup need to be? Answer: strong enough to hold coffee, with a bit of margin.

What if you designed a coffee mug that was strong enough to hold depleted uranium, which weighs about 20 times as much as your coffee? Well then you'd have even more margin for the coffee.

But who's going to put depleted uranium in a coffee cup? Answer: nobody. So why design in all that margin? It's a waste of time and money.

The talk of margins in DAWs always comes back to this mythical project that "might one day" require all that margin. But I've not seen anyone get there in an actual music production project for the last decade or so.

We keep pushing for more margin but I can't find anyone who's actually shown a need for it. Again, doesn't mean it doesn't exist. I just haven't seen it.

rgames


----------



## Dewdman42 (Apr 13, 2020)

sorry but that is wrong.

Nick's example was only using half the full count of Logic Pro threads because he didn't have that many tracks in whatever he was doing. A larger project would absolutely use all the cores, and with his slower CPU, like mine, it definitely benefits from more cores!

Also, Nick's meter image was not showing core usage! It shows the "threads" allocated by Logic Pro. At any given moment there are dozens of processes active on an OS X system, with dozens of threads, using all 12 cores. He should be looking at the system-wide CPU meter to get an impression of the actual core usage.

Any multi-threaded activity will benefit from multiple cores! DAWs themselves don't always fork as many threads as people think, particularly with only a few tracks playing.

But, like mine, his live-channel performance will never be "great", though it's quite decent most of the time, especially considering the age of the machine.


----------



## rgames (Apr 13, 2020)

Dewdman42 said:


> You could say that the audio card itself is a "real time" device.


It most certainly is!


----------



## Dewdman42 (Apr 13, 2020)

Yes, but it's also not a bottleneck.


----------



## rgames (Apr 13, 2020)

Dewdman42 said:


> Yes, but it's also not a bottleneck


Then how do you explain the fact that swapping audio interfaces often allows lower latencies? The rest of the system is identical...???


----------



## Dewdman42 (Apr 13, 2020)

To avoid writing you another long message explaining it, how about this simple explanation: an audio card has parts that operate on the real-time output, and parts that are very much not real time. The real-time part is essentially the same in nearly all sound-card designs, with a very low fixed latency of less than 1 ms; this is the same kind of latency you would see on a digital mixer. The part that causes all the big latency overhead, which differs between devices, has more to do with the non-real-time processes involved in filling the card's audio buffer. And that depends on numerous factors: PCI bus vs. USB, how well the drivers are written, and so on. None of that is "real time"; it's all part of the system architecture.

Another logical fallacy you make is assuming that because one sound card, and the system it's attached to, is slower than another, CPU speed is somehow irrelevant. That is fake news!


----------



## Nick Batzdorf (Apr 13, 2020)

rgames said:


> If you already have plenty of margin, as you show, then more cores is just going to give you even more margin that, apparently, you don't need. So why bother?



Why indeed. I'm not bothering because I'm not maxing out this computer - although I will use more of its horsepower when I add a bunch of plug-ins to this project.

Here's the Activity Monitor screen + the Logic meter, whatever they're actually telling you. It's showing 24 threads even though there are 12 cores because of hyperthreading, by the way.

Yes, there's still considerable overhead. But can we agree that there is a benefit to having cores?

Please?


----------



## Nick Batzdorf (Apr 13, 2020)

Also, why is Logic running more threads now than in the first screen shot?

Interesting.


----------



## Nick Batzdorf (Apr 13, 2020)

Mildly, anyway.


----------



## rgames (Apr 13, 2020)

Dewdman42 said:


> how about this simple explanation


The best explanation would be that example project where you show that extra cores enabled a project to run where it wouldn't on a CPU with fewer cores.

Simple, unequivocal, easy to interpret. A vs. B. This one works, this one doesn't.

Until I see that, I'm going to keep betting that the number of cores (over 6 or so) doesn't really matter.

I mean, unicorns might exist too. I can't prove that they don't.

rgames


----------



## Dewdman42 (Apr 13, 2020)

Logic Pro's use of threads is not as simple as the limited information we often see from users suggests; only engineers at Apple know the exact multi-threading methods they use. It's not always more efficient to fork a new thread for each task, and certain tasks can't really be done across multiple threads at all, or at least not easily, and may carry more overhead than they're worth. It's quite likely that Logic Pro will never spread one mixer channel across multiple threads, for example.

But we also know that Logic Pro seems to show a maximum of 24 threads (on your system). So if you have 100 tracks, those 24 threads are obviously servicing more than one mixer channel each: they have to queue up the tasks for each mixer channel and take turns doing the DSP for N mixer channels per thread. I have also observed through testing that Logic Pro does not always fork a new thread as each channel is added. Sometimes it does and sometimes it doesn't. Forking a thread has costs, so Apple will have determined the cases where it's worth forking a new one; sometimes it may just add a new mixer channel to an existing thread (just as it would anyway once your track count got over 24).

Also, plugins sometimes fork their own threads, and Logic Pro doesn't even report those on that meter.

And other parts of Logic Pro might fork threads to handle the GUI or something; I would presume Logic Pro would show that as thread activity, but who really knows.
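The queuing behaviour described above, where a capped number of worker threads services many more mixer channels in turn, can be sketched with a fixed-size pool. This is purely illustrative (the 24-thread cap mirrors the meter observation above; the channel function is a made-up stand-in, not Logic Pro's actual scheduler):

```python
from concurrent.futures import ThreadPoolExecutor

MAX_DSP_THREADS = 24   # hypothetical cap, mirroring the 24-thread meter reading

def process_channel(channel_id):
    """Stand-in for one mixer channel's per-buffer DSP work."""
    return f"channel {channel_id} done"

# 100 mixer channels share the capped pool: each worker thread services
# several channels in turn, just as a >24-track project would force.
with ThreadPoolExecutor(max_workers=MAX_DSP_THREADS) as pool:
    results = list(pool.map(process_channel, range(100)))

print(len(results))    # all 100 channels handled by at most 24 threads
```

The point of the sketch: the number of threads on a meter is a pool size, not a count of simultaneously running cores, and certainly not a per-channel figure.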


----------



## Dewdman42 (Apr 13, 2020)

Nick, the important thing to keep in mind is that Logic Pro's meter isn't exactly showing your cores; it's showing threads. Each thread does not dominate a core. Your system, across all apps currently running, might have 100 threads of execution all live at the same time; OS X then allocates time on the 12 cores for those 100 threads, and they take turns. Your system meter will give you a better idea of how well OS X is utilizing all the cores for whatever you're doing.

Logic Pro's meter shows you how much Logic Pro alone is using multiple threads, which spreads the work across more cores, but it's not a 1:1 correlation with core usage.


----------



## rgames (Apr 13, 2020)

Nick Batzdorf said:


> Yes, there's still considerable overhead. But can we agree that there is a benefit to having cores?


More than 6 or so, I don't think so. And I think that's what you're showing.

But here's the real answer: who cares what the CPU meter says? As I've been saying for the 10 years or so we've been having this discussion on this board, load up a project and run it on a 6-core machine and show that it won't run without dropouts. Then put it on a 10-core machine (or 14 cores, or whatever) and show that it does run without dropouts.

I've done that and I can't make it happen. If it craps out at 5 ms on 6 cores, it still craps out at 5 ms on 14 cores. The minimum latency (i.e., real-time performance) is the same.

rgames


----------



## Dewdman42 (Apr 13, 2020)

rgames said:


> I've done that and I can't make it happen. If it craps out at 5 ms on 6 cores it still craps out at 5 ms on 14 cores.



Define your test better, and what you mean by "craps out", and keep this in mind while doing it:

Latency really only matters for your live channel

Only your live channel is using the small buffer. It's quite customary for non-live channels to pre-process much larger buffers ahead of time; they do this behind the scenes. It's really only the live channel that is subject to the low latency you are trying to achieve without dropouts.

And that one channel is also typically limited to a single thread, which means single-core performance is what matters for that particular aspect of performance, and for that particular test.

The results you were getting above were constrained by numerous factors, including your sound card, but also including the single-core CPU performance of the machine(s) you were using. In that test, multi-core is irrelevant.
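That constraint can be put in numbers: the live channel's whole plugin chain has to fit inside one small buffer period on a single thread. A toy check (the buffer settings and chain costs are hypothetical):

```python
def live_channel_fits(buffer_frames, sample_rate_hz, chain_cost_ms):
    """A live channel's plugin chain runs on one thread, so its whole per-buffer
    cost must fit inside one buffer period; extra cores cannot shorten it."""
    budget_ms = 1000.0 * buffer_frames / sample_rate_hz
    return chain_cost_ms <= budget_ms

# 128 frames @ 48 kHz gives a budget of about 2.67 ms per buffer.
print(live_channel_fits(128, 48000, 2.0))   # True:  the chain fits on one core
print(live_channel_fits(128, 48000, 4.0))   # False: only a faster core (or a
                                            # bigger buffer) fixes this
```

Notice that the core count never appears in the formula, which is exactly why a 6-core vs. 14-core comparison at minimum latency comes out even.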


----------



## rgames (Apr 13, 2020)

Dewdman42 said:


> define your test better and what you mean by craps out, and keep this in mind while doing it


Below is an example of one of the benchmark projects I use, and what I mean by "craps out" (you hear crackles/pops). That video was made with a 6-core machine that could run no lower than about 6 ms (3 ms sound card + 3 ms VE Pro over Ethernet). I have since run that exact same project on 10-core and 14-core machines, and the minimum latency they could achieve was

_<drum roll>_

about 6 ms. 6 core vs. 10 core vs. 14 core made no difference in minimum latency. They all ran the same.


----------



## Dewdman42 (Apr 13, 2020)

Did you read what I just wrote? A test that compares different numbers of cores while shooting for low latency is a meaningless test; single-core performance is what matters for that test. Comparing different numbers of cores is irrelevant. That's why you got the same results!


----------



## rgames (Apr 13, 2020)

Dewdman42 said:


> Only your live channel is using the small buffer.


Not true in Cubase. I don't use ASIO guard.


----------



## rgames (Apr 13, 2020)

Dewdman42 said:


> A test that compares different numbers of cores while shooting for low latency is a meaningless test.


See above.


----------



## Dewdman42 (Apr 13, 2020)

We just don't agree, mate. CPU matters. Cores can definitely matter. Number of cores will not have an effect on latency. If you want to believe otherwise, that is your choice.


----------



## Dewdman42 (Apr 13, 2020)

Regarding your video, I have seen it before. Some excellent demonstrations of audio buffers, and a good explanation of MS Windows DPC latency problems. But it goes totally off the reservation describing real-time behaviour that simply does not exist in personal computers, and, as you have shown, that led to the false conclusion that CPU performance doesn't matter. I have to say, you made a nice, well-done video that is simply wrong in the end analysis.

Peace out.


----------



## rgames (Apr 13, 2020)

OK we'll all wait for you to show us that project that runs on 14 cores but not on 10 

I really *do* want to believe in unicorns.


----------



## rgames (Apr 13, 2020)

I think the next time this discussion comes up on this forum we should just replace the thread with the following:

OP: Does number of cores matter?
Person A: Yes.
Person B: OK - show me how.
Person A: <long explanation>
Person B: OK - but can you just show me A vs. B? It's easy to do.
Person A: <long explanation>
Person B: I still haven't seen the simple example.

Then the thread ends.


----------



## ed buller (Apr 13, 2020)

rgames said:


> The talk of margins in DAWs always comes back to this mythical project that "might one day" require all that margin. But I've not seen anyone get there in an actual music production project for the last decade or so.
> 
> We keep pushing for more margin but I can't find anyone who's actually shown a need for it. Again, doesn't mean it doesn't exist. I just haven't seen it.
> 
> rgames



talk to Chuck Choi at RCP

best

ed


----------



## Nick Batzdorf (Apr 13, 2020)

rgames said:


> But here's the real answer: who cares what the CPU meter says?



I cared when I was bringing my old machine to its knees and wanted to see why.

But of course there's not any reason to worry about computer specs if your machine is doing what you need it to do.

Now: how fast are your SSDs? You need M.2 or you're a total sissy with a tiny penis or large vagina.


----------



## colony nofi (Apr 14, 2020)

rgames said:


> OK we'll all wait for you to show us that project that runs on 14 cores but not on 10
> 
> I really *do* want to believe in unicorns.


Hey mate. Just chiming in to offer a screenshot or two... and sorry, this is 14 cores vs. 6; I can potentially test sometime with our 8-core Mac as well, but not right now.

This is an immersive installation composition project for the Museum of Victoria's new Indigenous Learning Lab, where I composed, recorded, mixed and outputted everything from Nuendo, using three different machines at different times. 14.1 custom Genelec immersive speaker system. Live vocal and string recordings. Loads of Kontakt as well, and tonnes of Kontakt that ended up as audio files to help with mixing.

5 mins long.

For the mix, I just opened it on my 6-core Mac Pro trashcan at 2048 buffers: it doesn't get close to running. The same project on the 14-core 10940X, with exactly the same base clock (3.33), doesn't scratch the sides; I can still run it at 512 buffers and perform new MIDI in.

I cannot share the final mix right now - I will as soon as the NDA lifts - but this is one of those situations where more cores REALLY helps. 




So this (above) is the top half of the project, and below is the second half.

And finally, other CPU readings while trying to play back: it plays, but not without stuttering.




So Nuendo is taking about 56% of the 6-core machine (12 virtual cores), and the system and other bits and pieces are at 16.5%. Oh, and the spatialisation software is turned off; on its own it runs at 65% on this system.

Now, on the 10940X (sorry, that machine is at work and I'm not going into the studio, which is under lockdown) the whole thing scales brilliantly across the cores. The Nuendo session, the SPAT Revolution session and other custom Max/MSP spat software all run at the same time, at about 60% at 512 buffers on 14 cores (28 virtual), with Windows taking around 6 or 7% on top of that. And I can even run an ambisonic fold-down in real time with head tracking, so I can mix without being in the actual space (a virtual replication of the room), although that uses another 15% or so of that machine's total CPU. It scales brilliantly.

This is a clear case where higher core count really DOES make a difference for music production.

So my take on it is: this REALLY comes down to your own personal workflow. Don't get me wrong, many of the things you are saying ring true, but it is not as cut and dried as you seem to believe.


----------



## Nick Batzdorf (Apr 14, 2020)

Devil's advocate: my understanding (via someone who talked to a Steinberg developer and knows the cousin of the nurse who was in the emergency room when Richard Gere showed up) is that Cubase/Nuendo isn't as efficient on macOS.

But I've been advocating the other side in this thread.


----------



## colony nofi (Apr 14, 2020)

Nick Batzdorf said:


> Devil's advocate: my understanding (via someone who talked to a Steinberg developer and knows the cousin of the nurse who was in the emergency room when Richard Gere showed up) is that Cubase/Nuendo isn't as efficient on macOS.
> 
> But I've been advocating the other side in this thread.


@Nick Batzdorf 
I've occasionally worked with the Hamburg Steinberg team. They're a lovely bunch of folk. And yes, regarding performance on PC vs. Mac, that's not a rumour at all.
From what I know about it, it's a complicated issue.
However, just for fun I'm going to boot my Mac into Windows and see what happens.
(I already know this session doesn't work on the 9990 iMac at work under OS X either.)

From testing I did, there was a 12-18% performance hit running Kontakt instruments under OS X on a hackintosh compared to Windows on the same machine. But that's going back a couple of years, so my memory may be hazy.


----------

