Not my results though, a guy on Gearslutz did it.
> The more software instruments and FX I can run without issues, the better the DAW. The fewer, the worse the DAW.

Of course. What I'm saying is that those metrics are not related to the measurements above. Where in the data above do we see the point at which CPU usage hits 100% and you can't add any more effects? We don't; that point is not demonstrated. You're assuming that the higher CPU usage would mean fewer FX and instruments if you did reach that point. But that point is not demonstrated, and, more importantly, that assumption is almost always incorrect for CPUs from the last decade or so. You hit real-time performance issues long before you run out of CPU power. So raw CPU power doesn't really matter any more.
> it is going to run out of breath eventually and will cause dropouts

Yep. That's a meaningful metric: the point at which dropouts start to occur.
> Did your 100-track Cubase project run out of CPU power? If not then it proves the point that you're hitting other bottlenecks first.

As stated multiple times, here and in previous forum threads: Cubase was taking double the CPU % compared to the other DAWs and was not able to play 100 tracks; it lost the ability to play the project somewhere around 50-70 tracks. You say there is no correlation, but I think there is. I personally did not have time to do a more rigorous test where I attempted to get each DAW to crap out and see how many tracks each one could handle before it did. Cubase crapped out around 50-70 tracks, with its CPU cranked to double the others.
> I can certainly make a project that'll cause dropouts. But I can't get CPU usage anywhere near 100% when those dropouts start to occur. The single largest factor for dropouts is audio buffer size, and that drives real-time performance requirements, not CPU performance requirements.

A larger buffer size gives the DAW more breathing room to get the CPU crunching the DSP while multitasking, until it can fill the buffer. If the buffer is small, it shouldn't take the CPU long at all to fill it, yet that is exactly when we get dropouts. That is because of the multitasking nature of the computer. A computer will almost never run at 100% CPU, but that doesn't mean CPU inefficiency is irrelevant. Every time a process gets a slice of time to crunch some numbers, it needs to be efficient. If it gets enough slices of time, it can fill the audio buffer in time to send it; if it doesn't complete the task, the buffer gets sent partially filled, i.e. a dropout. A smaller buffer means less work per buffer, but also less time for the computer to multitask across its various processes, so there is a greater chance the DAW won't be allocated enough time slices to fill it.
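The deadline arithmetic behind this can be sketched quickly. This is a minimal illustration with assumed, made-up numbers (the 1.0 ms DSP cost and 1.0 ms scheduling delay are hypothetical, not measurements from this thread); it only shows that a buffer can be missed while average CPU load sits well below 100%:

```python
# A DAW must deliver each audio buffer before the hardware consumes it.
# The per-buffer deadline is buffer_size / sample_rate, regardless of
# how much average CPU headroom remains.

def buffer_deadline_ms(buffer_frames, sample_rate):
    """Time the DAW has to fill one buffer, in milliseconds."""
    return 1000.0 * buffer_frames / sample_rate

# Deadlines at 44.1 kHz for a few common buffer sizes:
for frames in (64, 256, 1024):
    print(f"{frames:5d} frames -> {buffer_deadline_ms(frames, 44100):6.2f} ms deadline")

# Hypothetical example: the project's DSP needs 1.0 ms of CPU time per
# 64-frame buffer, and the OS occasionally delays the audio thread by 1.0 ms.
dsp_ms = 1.0
scheduling_delay_ms = 1.0
deadline = buffer_deadline_ms(64, 44100)      # about 1.45 ms
average_cpu = dsp_ms / deadline               # well under 100%
misses_deadline = dsp_ms + scheduling_delay_ms > deadline
print(f"average CPU ~{average_cpu:.0%}, deadline missed: {misses_deadline}")
```

So at a 64-frame buffer the dropout happens even though the meter would read roughly 70%; at 1024 frames the same delay is harmless, which is why buffer size, not raw CPU power, dominates.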
> Not my results though, a guy on Gearslutz did it.

And, as other users in his thread were attempting to explain, it's not linear. One DAW may be a hog at this load, but keep loading it with more tools and then test again. Better yet, test at the threshold of dropouts, as suggested above. That, at least to me, is much more meaningful.
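The threshold test suggested above can be mocked up as a toy simulation. Everything here is assumed for illustration (the per-track DSP cost and the jitter range are invented, and real DAW scheduling is far messier); the point it demonstrates is that deadline misses begin while average utilisation is still well short of 100%, which is why "tracks until dropouts" beats a CPU % readout:

```python
# Toy model: ramp up the track count until dropouts start, as suggested
# in the thread. Not real DAW behaviour -- all constants are assumptions.
import random

SAMPLE_RATE = 44100
BUFFER_FRAMES = 128
DEADLINE_MS = 1000.0 * BUFFER_FRAMES / SAMPLE_RATE   # ~2.9 ms per buffer

DSP_MS_PER_TRACK = 0.05   # assumed DSP cost per track per buffer

def dropout_rate(tracks, buffers=2000, seed=0):
    """Fraction of buffers that miss their deadline at a given track count."""
    rng = random.Random(seed)
    misses = 0
    for _ in range(buffers):
        jitter_ms = rng.uniform(0.0, 1.5)   # assumed scheduling delay
        if tracks * DSP_MS_PER_TRACK + jitter_ms > DEADLINE_MS:
            misses += 1
    return misses / buffers

for tracks in range(10, 60, 10):
    avg_cpu = tracks * DSP_MS_PER_TRACK / DEADLINE_MS
    print(f"{tracks} tracks: avg CPU {avg_cpu:.0%}, "
          f"dropout rate {dropout_rate(tracks):.1%}")
```

With these made-up numbers the first dropouts appear around 30 tracks, at roughly 50% average CPU, and the rate climbs steeply after that rather than linearly.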
> I personally did not have time to do a more rigorous test

You're in luck! Because I did.