
Core Utilization

holywilly

Active Member
I have a quad-core Mac Pro 6,1 running an orchestral template (VEP 7) and a couple of synths (Omnisphere, U-he... as instrument tracks) without any voice drops or lag, and I'm on Cubase Pro 10.

Very happy with what my Mac Pro can do.
 
OP
Ashermusic

Senior Member
You're in luck! Because I did.

And yes, they crap out at different CPU usages. One might be 80%, another 50% and another 20%. But they all crap out below 100% usage. So unless you like staring at CPU usage graphs, who cares what the CPU usage is?

In my tests, the one that craps out at 80% CPU usage doesn't do so with significantly more FX loaded or voices playing. But maybe it does for you - and that's a meaningful metric that I don't see above: max number of FX/voices/whatever for a given CPU or DAW or whatever.

Pick a CPU type, or CPU speed or DAW or whatever and measure the point at which it no longer plays back smoothly. That's what we care about. So measure that.

Here's an example of a meaningful comparison: I have a pretty intense project that I've used as a DAW benchmark for 7 or 8 years. I've run it on 4-core, 6-core and 10-core machines. On all three machines the lowest latency I could achieve is around 6 ms. The 4-core did 6 ms at the highest CPU usage, the 6-core had the next highest CPU usage, and the 10-core had the lowest CPU usage.
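(As a sanity check on where a figure like ~6 ms comes from: a sketch, assuming a 44.1 kHz sample rate and counting only the buffer's own contribution to latency - real interfaces add driver and converter overhead on top.)

```python
def buffer_latency_ms(buffer_size, sample_rate=44100):
    """One-way latency contributed by the audio buffer, in milliseconds."""
    return 1000.0 * buffer_size / sample_rate

for size in (64, 128, 256, 512):
    print(f"{size:4d} samples -> {buffer_latency_ms(size):.1f} ms")
# 256 samples at 44.1 kHz works out to about 5.8 ms - right around
# the ~6 ms floor described above.
```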

Did the 10-core provide more FX than the 4-core? Nope. Did it achieve lower latency? Nope. The 10-core CPU is vastly more powerful (as evidenced by the lower CPU usage), but in terms of meaningful metrics like number of FX or voice count at dropout, all three CPUs were exactly the same.

Therefore, voice count, number of FX, etc. at dropout, based on my measurements, are not related to CPU usage. QED.

rgames

It's pretty simple and you are making it too complicated.

If, on the same machine with the same audio interface set to the same buffer size, I can run, e.g., 30 instances of Omnisphere in DAW A, 22 in DAW B, but only 15 in DAW C, without hearing problems or getting system overload messages, that's significant. It doesn't negate all the reasons one may still choose DAW B or C, but it certainly is a factor.
 

rgames

Collapsing the Wavefunction
If, on the same machine with the same audio interface set to the same buffer size, I can run, e.g., 30 instances of Omnisphere in DAW A, 22 in DAW B, but only 15 in DAW C, without hearing problems or getting system overload messages, that's significant.
It certainly is!

And none of the metrics you posted there is CPU usage :)

QED again (with help from Jay).

rgames
 

rgames

Collapsing the Wavefunction
But doesn't the lowest core utilization likely mean the least CPU usage?
Maybe - but who cares?

I think you're trying to relate CPU usage to number of synths/fx/whatever. But why bother? Just measure number of synths/fx/whatever.

A basic rule in quantifying something: if possible, measure what you care about. Don't measure something else and assume a relationship.

If you care about number of synths/fx/whatever, measure that (which is what you described above).

If you care about CPU usage, measure that.

rgames
 
OP
Ashermusic

Senior Member
Maybe - but who cares?

I think you're trying to relate CPU usage to number of synths/fx/whatever. But why bother? Just measure number of synths/fx/whatever.

A basic rule in quantifying something: if possible, measure what you care about. Don't measure something else and assume a relationship.

If you care about number of synths/fx/whatever, measure that (which is what you described above).

If you care about CPU usage, measure that.

rgames
Forgive me, Richard, you are a scientist and I am just a lowly composer :)

But is there any factor other than CPU usage that would account for a DAW being able to run more? Not being a wiseguy, just seeing if there is something I can learn.
 

zircon_st

Lead Developer
If core frequency / count were not a factor, then tests like DAWBench wouldn't reflect enormous differences (at different latencies) between one CPU and another... In fact, the simplest test would be to manually adjust your processor frequency & core count and see what happens. You can do this in Windows directly, or at the BIOS level. It would be quite easy to set up a real-world project and test this way. That way literally nothing about the system changes except the CPU.
 

rgames

Collapsing the Wavefunction
But is there any factor other than CPU usage that would account for a DAW being able to run more?
Yes - real-time performance. That's related to the efficiency of the code and is only weakly related to CPU power for recent-vintage systems. As of about 10 years ago real-time performance is the major factor, not CPU performance. That's why you see threads on this board with titles like "Dropouts with Low CPU Usage".

Back in the Windows 7 days on PC you could get decent measures of real-time performance with DPCLatencyChecker. But it's not supported in Windows 10 and not at all on Mac (I don't think). There's LatencyMon as well, which does work under Win10 but I've not found it to be as reliable as DPCLatencyChecker was.

But the best way to measure real-time performance is to just load up a bunch of FX/synths/whatever and drop the buffer until the machine starts to stutter/crackle/whatever.

It's like tire pressure: the shape of the tire is related to the pressure, sure. So you could measure the shape of the tire and infer/assume the pressure.

But why not just measure the pressure?

rgames
 
OP
Ashermusic

Senior Member
Yes - real-time performance. That's related to the efficiency of the code and is only weakly related to CPU power for recent-vintage systems. As of about 10 years ago real-time performance is the major factor, not CPU performance. That's why you see threads on this board with titles like "Dropouts with Low CPU Usage".

Back in the Windows 7 days on PC you could get decent measures of real-time performance with DPCLatencyChecker. But it's not supported in Windows 10 and not at all on Mac (I don't think). There's LatencyMon as well, which does work under Win10 but I've not found it to be as reliable as DPCLatencyChecker was.

But the best way to measure real-time performance is to just load up a bunch of FX/synths/whatever and drop the buffer until the machine starts to stutter/crackle/whatever.

It's like tire pressure: the shape of the tire is related to the pressure, sure. So you could measure the shape of the tire and infer/assume the pressure.

But why not just measure the pressure?

rgames

OK, good. So I DID learn something. Thanks.
 

rgames

Collapsing the Wavefunction
the simplest test would be to manually adjust your processor frequency & core usage and see what happens
I did that with Omnisphere a while back - I posted the results somewhere on this board. I manually underclocked the CPU frequency and measured the max number of voices for a given buffer size on 4- and 6-core CPUs.

Drops in CPU speed had much more impact than the number of cores, i.e. dropping from 4 GHz to 2.5 GHz (or thereabouts...) caused a huge drop in the number of voices, but dropping from 6 cores to 4 cores had a pretty minimal effect.

So CPU speed was a major factor but number of cores was not.

rgames
 

Dewdman42

Senior Member
Mac includes a command-line utility called "latency" which gives you the same information as the Windows DPC latency checker. But because of the way OS X and UNIX are architected, it's a non-issue there. DPC latency is a Windows-specific problem that some people run into on their PCs; Macs do not suffer from it.

There is really no such thing as "real-time performance" on a PC. Nearly everything operates on buffers and takes turns using the CPU's time to get things done. We have only an illusion of real-time performance.

A given system has various hardware factors and low-level drivers... You have memory access, disk access, USB access - all of this is ultimately serviced at some level by the CPU. Meanwhile, the operating system and software are also taking turns using the CPU. It's very complicated and way beyond the scope of this forum to explain how all of that works, but it's simply not true to say that the CPU has no influence on getting the data out.

Every audio dropout happens because the software was not able to process everything it needed to in the space of time allocated for one "buffer", so a partially filled buffer gets sent to the sound card. The problem is not that data isn't sent to the sound card - that always happens! The problem is that the software doesn't always fill the buffer in time, which is a CPU-bound activity. Inefficient software can fail to fill the buffer in time and produce dropouts sooner.
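The "space of time allocated for one buffer" is just buffer_size / sample_rate, and a dropout happens whenever rendering takes longer than that. A toy sketch of the deadline math (the render times below are made-up numbers, not measurements):

```python
def buffer_deadline_ms(buffer_size, sample_rate):
    """Time the software has to fill one buffer before the card needs it."""
    return 1000.0 * buffer_size / sample_rate

def count_dropouts(render_times_ms, buffer_size=256, sample_rate=48000):
    """Any buffer that takes longer to render than its deadline is a dropout."""
    deadline = buffer_deadline_ms(buffer_size, sample_rate)
    return sum(1 for t in render_times_ms if t > deadline)

# 256 samples at 48 kHz gives a ~5.33 ms deadline. Average load here is low
# (~2 ms per buffer, well under the deadline), but two spikes blow it:
# low CPU usage overall, dropouts anyway - which is the point being argued.
renders = [2.0, 1.9, 2.1, 8.0, 2.0, 1.8, 7.5, 2.2]
print(count_dropouts(renders))  # -> 2
```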
 

rgames

Collapsing the Wavefunction
Real-time performance is definitely not an illusion! It's a major area of study in computer science.


You are correct that the computer fills buffers. And it must do so within a given time period to ensure that the analog stream going to your monitors is uninterrupted.

That "within a given time period" part is the real-time performance part. If the system can't get the buffer filled in time, you hear crackles/pops/dropouts because there's garbage going out to your monitors.

rgames
 

Dewdman42

Senior Member
Wrong, it is an illusion. We rely on that illusion in order to hear real-time music come out of the speakers, but your PC does not operate in true real time.

The misinformation in this thread is getting tiring.
 

Dewdman42

Senior Member
You are correct that the computer fills buffers. And it must do so within a given time period to ensure that the analog stream going to your monitors is uninterrupted.

That "within a given time period" part is the real-time performance part. If the system can't get the buffer filled in time, you hear crackles/pops/dropouts because there's garbage going out to your monitors.
THAT IS PERFORMED BY SOFTWARE USING THE CPU! (and that is most definitely NOT in real time).
 

Nick Batzdorf

Moderator
Moderator
Seriously, I agree with the gist of what Richard is saying - that they all work, so who cares.

I used to care in the days when lack of computer resources was a constant PITA, but these days I don't have to think about performance.
 

Ivan M.

New Member
CPU utilization measurements tell us about the efficiency of software, and which DAW is more CPU-efficient than another is a matter of fact. Strictly logically, however, that does not imply anything about quality of service.

It is reasonable to assume that more CPU efficiency yields better performance (in the metrics we care about, i.e. project size). But to know for sure - for it to be a fact - we have to measure it.

Let's take the assumption "more CPU efficiency allows us to have bigger projects" and try to prove it wrong with some thought experiments.

All audio software uses at least double buffering: while one audio buffer is being played by the sound card, the software is preparing the next one, so it's ready immediately when the first finishes playback.

Now imagine a piece of software that didn't do this. It would generate a buffer, send it to the audio driver, and then sit and wait doing nothing. When the buffer has played and it's time for a new one, the software calls the plugin instruments; but the plugins need to go to disk to load their audio data, so again the software idles (waiting on slow I/O operations). I'd guess the CPU usage of such software would be ultra low, but it would have constant dropouts and would require a huge buffer size (= huge latency) to avoid them.
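The difference can be sketched with a toy timing model (all numbers hypothetical). Single-buffered, the card has to wait out the full render each cycle; double-buffered, the next render overlaps the current buffer's playback, so a gap only occurs when rendering takes longer than playback:

```python
def single_buffer_gap_ms(render_ms, playback_ms):
    """Single-buffered: render and playback alternate, so every cycle
    has a silent gap equal to the render time."""
    return render_ms

def double_buffer_gap_ms(render_ms, playback_ms):
    """Double-buffered: rendering overlaps playback; a gap occurs only
    when the render outlasts the buffer currently playing."""
    return max(0.0, render_ms - playback_ms)

# Hypothetical numbers: a 5.8 ms buffer (256 samples at 44.1 kHz) that
# takes 3 ms to render. Single buffering gaps every cycle; double doesn't.
print(single_buffer_gap_ms(3.0, 5.8))  # -> 3.0
print(double_buffer_gap_ms(3.0, 5.8))  # -> 0.0
```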

Another one: we all know that certain DAWs have a higher base CPU usage. But what we don't know is where that usage comes from. Is it only related to project size (more tracks, more CPU usage)? How much usage do new plugins really add? Or is it just a base cost for the correct working of the software, to which additional plugins don't add much? We simply don't know until we measure.

Dismissing the assumption completely is unreasonable; assuming it is always correct is also unreasonable. :)

edit: typos, grammar
 

Dewdman42

Senior Member
Don't overthink it. It's a zero-sum game. Software can be written efficiently or inefficiently; it's as simple as that. If it's inefficient, it will run out of CPU time sooner than a more efficiently coded masterpiece.

Now, all that being said: aside from Cubase 10.0.20 (Mac), I was able to play back a 100-track orchestral project on every major DAW without dropouts. Some used more CPU to do it than others, but they all did it. So does it matter? Maybe not. Choose the one with the workflow you like best. It's not that big of a deal. Well, Cubase 10.0.20 was a big deal and needed to be called out; but then Steinberg fixed it right away, so kudos to them.

But the test is still worth putting out there. It's just data. I'm sorry if your favorite DAW is not the most efficient one; don't take it personally, but it is what it is. Sooner or later the inefficient ones will crap out before the efficient ones. That's a perfectly reasonable assumption, but to your point: unless someone takes the time to do a completely thorough test of all DAWs under the same conditions at all buffer sizes, adding tracks one at a time until they crap out, we won't find the absolute answer to which ones can handle more tracks and more plugins. Most of us aren't running into the limit anyway.
 