# 18 Cores for $979! Huge Price Drop on New Intel Cascade Lake-X.



## ridgero

For anyone who is buying a new workstation soon or right now!









Intel's Cascade Lake-X CPU for High-End Desktops: 18 cores for Under $1000 (www.anandtech.com)





Thank you AMD!


----------



## jamwerks

Great news! And coming already in November. These will support 256 GB of RAM, which for 98% of users will eliminate the need for two separate machines (and maybe even VEP). I wouldn't be surprised if Steinberg has foreseen this and offers an "external rack" module (like VEP on just one machine) for Cubase 10.5. We'll see!


----------



## DANIELE

Yeah, great news!

I was waiting for this! Now I can keep the same motherboard and move from 8 physical cores to 14, or maybe even 18.


----------



## BGvanRens

Question is, do I want to wait another month or not? I'm still on a 4820K/32GB rig, have run out of SATA 3 ports, and have been wanting an upgrade for about a year now. I thought I had already decided to go with AMD, but deep inside I want something I'm familiar with, which has been Intel my entire life. Either way, this is great news whether you buy Intel or AMD.

That TDP though, and still only PCIe 3.0. Even though PCIe 4.0 isn't really relevant for our kind of workloads, I like the idea of futureproofing as much as possible.


----------



## heisenberg

Good times. Hearing conflicting info on the release date; some say as early as October 7th. In any event it will be soon.

The Geekbench multicore scores are really impressive across all the CPUs in this new lineup.


----------



## pderbidge

BGvanRens said:


> That TDP though and still only PCIe 3.0


True, and the more powerful the CPU, the more power it tends to need, although CPUs are getting less power hungry than they used to be. Walk into any server room and you'll hear jet-engine-loud fans along with dedicated HVAC systems for those rooms. It makes me glad I moved my PC to a dedicated location outside of my studio, so I don't have to spend money on crazy water cooling rigs running on nitrogen just to keep things quiet, lol.

I'm glad to see Intel finally getting the message. One thing they can capitalize on against AMD is not only the ability to now go beyond 128 GB, but also their more widespread support for memory modules (although AMD has gotten better) and for technologies like Thunderbolt, which is still very widely used in the creative community. To me these matter more than PCIe 4.0 at the moment, because they are already being used by the current market.

It's obvious that the motherboard makers for Ryzen are still hyper-focused on the gamer community, which I think is a big missed opportunity for the extra customers they could pick up from other markets. Not all of us using desktops are living in our mom's basement playing Fortnite all day.


----------



## Andrew Aversa

Power draw is a way over-hyped issue with CPUs in general. Reviewers test them in extreme stress scenarios. In reality, unless you are doing very heavy overclocking and artificial stress tests, you probably *will* stick pretty close to the TDP number.

Case in point: I have a full-tower desktop with a 9900K overclocked (all-core) to 4.9ghz. It's cooled on air with 3 huge case fans. There's a 700w power supply, a GTX1080 powering two monitors, and about 8 USB devices, not to mention 3 M2 SSDs and 5 regular SSDs.

I have my entire computer plugged into a "Kill-a-Watt" power draw meter before it hits the APC, so I know exactly how much it's using. Want to know how much power this uses during playback of a pretty intense music project?

160w. At most. That's the *entire system*.

But wait, wouldn't one of these HEDT 18-core chips use more? Again, you'd be surprised. Here's a power usage chart from Gamers Nexus, showing total-system power usage under an extreme synthetic load (so: way worse than real-world scenarios).



https://www.gamersnexus.net/images/media/2018/cpus/9980xe/i9-9980xe-cinebench-power-nt.png



Here we can see that at stock speeds (which is all you would need for such a chip), the massive i9-9980XE with 18 cores has a total system draw of only ~270W. Remember: this is during a *synthetic test* designed to stress every core to its absolute limit, and even then the power usage is not very high.

At the end of the day, as long as you have a good cooling solution (and air cooling is better than any liquid solution 99.9% of the time) there is just no need to worry about power consumption.


----------



## AllanH

I'm glad Intel is getting aggressive on the high-end again.


----------



## gsilbers

Well, AMD seems to be catching up. That Epyc seems epic. Plus their other CPUs.


----------



## Symfoniq

For those of us who already have Intel X299 systems (myself included), this is a pleasant surprise, as I can go from 8 cores to 18 cores on the same motherboard with only a BIOS update.

But if I were building a new system, I'd probably still go AMD.


----------



## jamwerks

Looks like 32 GB RAM sticks are becoming more and more available. An i9-10980XE on X299 + 256 GB of RAM is gonna be the balls!


----------



## rgames

What kinds of projects are you guys running where you see a benefit to 10+ cores? Can you post a screenshot?

I saw a small benefit when I went from 4 to 6 cores but I haven't seen any benefit since moving from 6 to 10 cores. So I can't imagine what advantage would be provided by 18 cores, especially given the drop in core speed. Sure, the CPU meter is lower but I still hit real-time limits long before I hit CPU limits. I can't run at lower latency and export time differences are insignificant with anything over 6 cores.

So I'm curious what people are doing where these kinds of core counts make a difference for DAW use.

Thanks,

rgames


----------



## DANIELE

rgames said:


> What kinds of projects are you guys running where you see a benefit to 10+ cores? Can you post a screenshot?
> 
> I saw a small benefit when I went from 4 to 6 cores but I haven't seen any benefit since moving from 6 to 10 cores. So I can't imagine what advantage would be provided by 18 cores, especially given the drop in core speed. Sure, the CPU meter is lower but I still hit real-time limits long before I hit CPU limits. I can't run at lower latency and export time differences are insignificant with anything over 6 cores.
> 
> So I'm curious what people are doing where these kinds of core counts make a difference for DAW use.
> 
> Thanks,
> 
> rgames



In my case it will be useful because:

1) Reaper is great at multithreading, so more cores = more parallel processing;
2) At the same frequency per core I may see lower temperatures, so less fan noise and heat in the studio;
3) A refined second-generation CPU is always better than the first;
4) In general, more cores are useful for other multithreaded operations.


----------



## Mihkel Zilmer

rgames said:


> What kinds of projects are you guys running where you see a benefit to 10+ cores? Can you post a screenshot?
> 
> I saw a small benefit when I went from 4 to 6 cores but I haven't seen any benefit since moving from 6 to 10 cores. So I can't imagine what advantage would be provided by 18 cores, especially given the drop in core speed. Sure, the CPU meter is lower but I still hit real-time limits long before I hit CPU limits. I can't run at lower latency and export time differences are insignificant with anything over 6 cores.
> 
> So I'm curious what people are doing where these kinds of core counts make a difference for DAW use.
> 
> Thanks,
> 
> rgames



+1 to this question, would like to hear some real world stories of people running 10+ slower cores compared to fewer and faster. DAWBench numbers are interesting, but not really a very accurate depiction of a real world situation in my opinion.

For my latest build, I upgraded core speed rather than core count: from 8 cores OC @ 4 GHz on the old system to 8 cores OC @ 5 GHz on the new one. Seeing a HUGE boost here, nearly 2x in some situations.


----------



## tabulius

rgames said:


> What kinds of projects are you guys running where you see a benefit to 10+ cores? Can you post a screenshot?
> 
> I saw a small benefit when I went from 4 to 6 cores but I haven't seen any benefit since moving from 6 to 10 cores. So I can't imagine what advantage would be provided by 18 cores, especially given the drop in core speed. Sure, the CPU meter is lower but I still hit real-time limits long before I hit CPU limits. I can't run at lower latency and export time differences are insignificant with anything over 6 cores.
> 
> So I'm curious what people are doing where these kinds of core counts make a difference for DAW use.
> 
> Thanks,
> 
> rgames



More polyphony?






But yeah, I'm still doing OK with a four-core i7 6700K, even pulling off bigger orchestral sessions, though I'm starting to hit its limits. A 10-core or 14-core is now very tempting with the new pricing. Looking forward to seeing what Threadripper 3 has to offer; maybe I'll upgrade to an i9 or TR3 by the end of this year.


----------



## Manaberry

rgames said:


> What kinds of projects are you guys running where you see a benefit to 10+ cores? Can you post a screenshot?
> 
> I saw a small benefit when I went from 4 to 6 cores but I haven't seen any benefit since moving from 6 to 10 cores. So I can't imagine what advantage would be provided by 18 cores, especially given the drop in core speed. Sure, the CPU meter is lower but I still hit real-time limits long before I hit CPU limits. I can't run at lower latency and export time differences are insignificant with anything over 6 cores.
> 
> So I'm curious what people are doing where these kinds of core counts make a difference for DAW use.
> 
> Thanks,
> 
> rgames




- An all-in-one, ready-to-use template without disabling instances (no slaves)
- Oversampling master chain plugins without dropouts
- Going crazy with granular/synth patches that usually take a lot of power
- In my case, lowering my buffer size (it's currently 4096 and my 6-core CPU is dying)
- Less power consumption, so less heat
- More voices (multi mics without limits)
- Higher memory speed (thanks to a more recent CPU)
- More power for sample streaming from disk
- Also a solid workstation for my other tools where multithreading is mandatory (Substance, Marmoset, Resolve, Photoshop)
- Because so many cores is cool


----------



## jamwerks

These core counts should inspire DAW devs more and more to optimize their programs.


----------



## Cat

The new CPUs support up to 256 GB of RAM. No more slaves.


----------



## Mihkel Zilmer

Cat said:


> The new cpus support up to 256 GB Ram. No more slaves.



For me personally, RAM is not the bottleneck. In a densely orchestrated passage with 3-4 mic positions loaded on each instrument, the machine will choke with plenty of free RAM left over. The voice counts, especially with very complicated scripted legato and fast short-note repetitions, are in the thousands. A single machine just can't handle it.
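To put a rough, hypothetical number on that (every figure below is an illustrative assumption, not a measurement from any particular library), the voice count multiplies up quickly:

```python
# Back-of-envelope voice count for a dense tutti passage.
# All numbers are illustrative assumptions.

instruments = 40     # sections and solo instruments sounding at once
mic_positions = 4    # e.g. close / tree / outriggers / ambient
layers = 2           # dynamic crossfade layers active per note
overlap = 3          # legato crossfades and release tails still ringing

voices = instruments * mic_positions * layers * overlap
print(voices)  # 960
```

Add divisi, fast short-note repetitions, or a fifth mic position and you are comfortably into the thousands of simultaneous voices.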


----------



## ProfoundSilence

That's not true for me. 5960X here; RAM is more of an issue for me at 128 GB.

I'll be waiting a bit still, and going to 512 GB eventually when it's cheap enough.


----------



## jamwerks

There are really two questions here concerning RAM: how many articulations ("arts") and libraries we want loaded and available, and secondly how many of those arts can be processed (played back) simultaneously. Even just 32 GB of RAM worth of arts could be difficult to play back simultaneously if it were purely a single legato art on 40 different instruments with 3 mic positions open.

For me, the 256 GB of RAM is what I want (like having everything loaded). Just not sure yet whether that's doable on one machine or better spread out over two.


----------



## DANIELE

I have 128 GB of RAM and it's actually more than enough for me. I'm moving away from sample libraries toward modeled/hybrid ones, so the RAM required is really low.


----------



## ridgero

At what point is the CPU the bottleneck with Kontakt?

Does an 8-core iMac Pro with 256 GB make any sense for orchestral music?


----------



## rgames

ridgero said:


> At what point is the CPU the bottleneck with Kontakt?


I've not seen that in about 10 years.

And I still haven't seen any screenshots that show a project that produces a CPU bottleneck...!

Benchmarks, sure, but I've not seen a correlation between benchmarks and actual projects. If you have a project with a few thousand voices playing a single block chord then sure, the DAWBench test makes sense. Or if you want to run 500 compressors. But I've not seen anyone write music that uses thousands of voices playing a single block chord or 500 compressors. Hence my search for a project, not a DAWBench test.

In my experience, as of about 8-10 years ago it's real-time performance that's the bottleneck for projects, not CPU performance.

What I have been able to demonstrate is a pretty good relationship between practical performance metrics (e.g. latency) and core speed. But not number of cores, once you get above 6-8.

Therefore, sacrificing core speed for more cores seems like a bad idea to me. But again, I haven't seen the evidence that says more cores are better and absence of evidence is not evidence of absence.

rgames


----------



## Damarus

PSA: Not all cores are created equal.

You will see more benefit in higher clock speeds than core count (for most situations).


----------



## jamwerks

There may be a difference in core usage between doing everything directly in Cubase (for example) and doing it with Cubase + VEP. Might it be that using VEP (on just one machine, of course) would spread the processing load out over all of, say, 16 cores?

But again, the 256 GB RAM thing is interesting. We just need to be sure that a full-size orchestral arrangement (all sections, lots of legati, multi mics) + synths, plus reverbs, comps & EQs, would be doable on one machine (let's say the i9-10980XE)?


----------



## ProfoundSilence

I would say the people who think the CPU is the bottleneck simply don't write the same music, or have poor hard drive setups and don't realize that's the issue.

I use separate SSDs per instrument family, and when I try to run something with a lot of samples off a 7200rpm HDD it'll spike the CPU to 100% in Kontakt, but it's not a CPU issue. Each SSD still only reads about 500 MB/s.

Using synths tends to be the culprit for people pushing CPU, but many of us don't use them at all, or use mild ones (or even just sampled synths).

When it comes to orchestral libraries with multiple microphone positions, you have to understand that even if you've got a lot loaded into RAM, we aren't using more than one articulation at once most of the time.

So let's say you load up Berlin Brass and you've got 3 trumpets at ~6 GB or so each worth of articulations + mics loaded in a Kontakt multi. You're not accessing 18 GB worth of samples at once; you're using the 1.3 GB or so of legato patches at most.

So even if your template is 120 GB in RAM, you're only using a fraction of it at a time, as it's mostly on standby.

It doesn't take much CPU to run these Kontakt patches, but it does take a lot of RAM to have enough of a buffer, and a good hard drive setup to stream large sample pools.
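As a rough sketch of the streaming side (the sample format and voice count here are assumptions for illustration, not measurements):

```python
# Estimate the disk bandwidth needed to stream many voices at once.
# Assumes uncompressed 24-bit / 48 kHz stereo voices; real libraries
# often use lossless compression, which lowers this figure.

sample_rate = 48_000   # samples per second per channel
bytes_per_sample = 3   # 24-bit audio
channels = 2           # stereo voice
voices = 500           # simultaneous streaming voices

bytes_per_sec = sample_rate * bytes_per_sample * channels * voices
print(f"{bytes_per_sec / 1e6:.0f} MB/s")  # 144 MB/s
```

By this estimate even 500 voices fits within a single SATA SSD's ~500 MB/s, while a spinning HDD's throughput (and especially its seek latency) falls over much sooner.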

I saw very little impact going from a 4930K to a 5960X in terms of running Kontakt patches (although it got a little dicey at ~60 GB of RAM), but everything got much smoother when I upgraded, despite no large jump in core speed, because of the massive jump in multithreading performance.

This is even more obvious given that I later overclocked to 4.9 GHz with no discernible benefit outside of gaming. The default boost for the 5960X is 3.5 GHz; gaining a whopping 1.4 GHz did nothing for my DAW use.

So stop speaking for others with any pretense of authority on the topic, because we don't all use our DAWs the same way, and we have different hardware requirements for our workflows.

I mention hard drive issues because it's a common mistake to take Kontakt's 100% CPU reading at face value and assume it's a CPU problem if you don't know much about the tech.

Again, I'm willing to bet the people saying the CPU is a bottleneck frequently use synths.

Many people also probably use reverb and plugins to fix problems that multiple microphones would fix. If I want clarity, I turn up the close mics; if I need more tail, I turn up the distant mics, which certainly takes less CPU than trying to EQ in a bunch of presence and run multiple reverb sends. Another example of how different workflows use radically different resources.


----------



## Aaron Sapp

CPU bottlenecks prompted me to get a new PC recently. Between all the playback and processing, I was bringing my old computer to its knees.

I’m guessing a lot of people load up massive templates with minimal processing, but I can’t imagine not processing the crap out of everything.


----------



## ProfoundSilence

Aaron Sapp said:


> CPU bottlenecks prompted me to get a new PC recently. Between all the playback and processing, I was bringing my old computer to it’s knees.
> 
> I’m guessing a lot of people load up massive templates with minimal processing, but I can’t imagine not processing the crap out of everything.



Good sound sources and using a mic blend.

Even just close + 2 different mics gives you clarity as well as control over depth/ambience.

With OT's mic setup, the surround AB tree lets me also decide width from the 4th mic (similar to outriggers for sfa).

But these microphone choices also change the frequency buildup, so I'm never using a ton of EQ to get a great sound. I think this is because we never hear single Decca recordings in good productions these days anyway. If you're going to hire 100 musicians for a score, you can afford a few more mics.


----------



## Blakus

I'm noticing drastic performance increases jumping from a 6800K (6 cores) to an i9 9940X (14 cores) with similar clock speeds. However, *only* with the use of DAW features like ASIO Guard in Cubase. Allowing unarmed tracks to be processed at far less demanding buffer sizes reduces the realtime demand significantly, allowing the extra cores to really come into play. When I turn ASIO Guard off, the performance is *very* similar to my old system.

The main bottleneck I'm noticing with the i9 9940X now is Cubase's core management, which seems to work on a per-track basis. If I overload a single track with too many plugins (and therefore a single core), the entire ASIO/realtime performance is bottlenecked. This is easy to avoid if you're aware of it, though. Keeping this in mind, and using ASIO Guard, I'm able to throw enormous amounts of demanding plugins over large templates that I could never run on my 6-core system, and it handles them with ease. But again, as soon as I deactivate ASIO Guard, the whole project literally comes crashing down.

I'm eager to see improvements to DAW multicore distribution. Reaper, for example, distributes the load on a per-plugin basis as opposed to per-track. How awesome is that!!

In short: clock speed is obviously important, but you are doing yourself a disservice to ignore extra cores with DAW features like ASIO Guard. I chose the 14-core i9 as I found it to be a good balance between clock speed and cores. So far, so good!


----------



## Blakus

rgames said:


> I've not seen that in about 10 years.
> 
> And I still haven't seen any screenshots that show a project that produces a CPU bottleneck...!



Sorry to harp on about ASIO guard, but so many don't understand how game-changing it is.

This is a poor, limited screenshot from a while back, but with ASIO Guard you can easily max out your CPU before ASIO/realtime bottlenecks. This is with a large Cubase template (which still ran beautifully, by the way, until I wanted to add another beefy compressor). Most of the CPU usage comes from mixing plugins. It also helps to make sure intensive plugins are not chained together on one track, which overloads a single core in Cubase and brings a realtime bottleneck into play. Expecting a system to keep up with hundreds of tracks and plugins in a realtime setting at low buffers is indeed unrealistic. But ASIO Guard means only your armed tracks need to perform at low buffers... haha. OK, I'll stop now.


----------



## Cat

I agree with you, Blakus, and generally disagree with what @rgames states about CPUs and computers.

Incidentally that is the exact CPU that I am eyeballing. A few questions for you, if you will:

- do you use VEPro or you have everything inside Cubase?
- what Asio Guard level (low/med/high) do you use?
- what is the audio interface’s asio buffer set to?


Blakus said:


> I'm noticing drastic performance increases jumping from 6800k (6 cores) to i9 9940x (14 cores) with similar clock speeds. However, *only *with the use of DAW features like ASIO Guard in Cubase. Allowing unarmed tracks to be processed at far less demanding buffer sizes reduces the realtime demand significantly, allowing the extra cores to really come into play. When I turn ASIO guard off, the performance is *very *similar to my old system.
> 
> The main bottleneck I'm noticing with the i9 9940x now is Cubase's core management, which seems to be on a per track basis. If I overload a single track with too many plugins (therefore a single core), the entire ASIO/realtime performance is bottlenecked. This is easy to avoid if you're aware of it though. While keeping this in mind, and using ASIO guard, I'm able to throw enormous amounts of demanding plugins over large templates that I could never do on my 6 core system, and it handles with ease. But again, as soon as I deactivate ASIO guard, the whole project literally comes crashing down
> 
> I'm eager to see improvements to DAW multicore distribution. Reaper for example, distributes the load on a "per plugin" basis, as opposed to 'per track' - how awesome is that!!
> 
> In short - Clock speed is obviously important, but you are doing yourself a disservice to ignore extra cores with DAW features like ASIO guard. I chose the 14core i9 as I found it to be a good balance between clock speed and cores. So far, so good!





rgames said:


> And I still haven't seen any screenshots that show a project that produces a CPU bottleneck...!


----------



## Kony

Blakus said:


> Sorry to harp on


I see what you did there


----------



## Olfirf

Blakus said:


> Sorry to harp on about ASIO guard, but so many don't understand how game-changing it is.
> 
> This is a poor limited screenshot from a while back, but with ASIO guard you can easily max out your CPU before ASIO/Realtime bottlenecks - This is with a large Cubase template (which still ran beautifully by the way, until I wanted to add another beefy compressor). Most of the CPU usage comes from mixing plugins. Also, making sure that intensive plugins are not chained together on one track, which causes a single core to overload in Cubase, which would bring a realtime bottleneck into play. Expecting a system to keep up with hundreds of tracks and plugins in a realtime setting at low buffers is indeed unrealistic. But... ASIO guard means only your armed tracks need to perform at low buffers... haha. Ok I'll stop now.



Yeah, ASIO Guard could be a game changer, but it can't be for me at this point, unfortunately. You have to load all your instruments in Cubase, and with huge templates, even with lots of disabled tracks, that means waiting several seconds every time you hit save! That's what keeps me on VE Pro, as the template tracks don't have to be saved with the project. Jason Graves explains this nicely in some of his videos.
Now, if Steinberg could somehow exclude those template files from the project file, that could change my mind!
How do you cope with the long saving time, Blakus? Or do you use a small template and load further instruments as needed?


----------



## rgames

Blakus said:


> Sorry to harp on about ASIO guard, but so many don't understand how game-changing it is.
> 
> This is a poor limited screenshot from a while back, but with ASIO guard you can easily max out your CPU before ASIO/Realtime bottlenecks - This is with a large Cubase template (which still ran beautifully by the way, until I wanted to add another beefy compressor). Most of the CPU usage comes from mixing plugins. Also, making sure that intensive plugins are not chained together on one track, which causes a single core to overload in Cubase, which would bring a realtime bottleneck into play. Expecting a system to keep up with hundreds of tracks and plugins in a realtime setting at low buffers is indeed unrealistic. But... ASIO guard means only your armed tracks need to perform at low buffers... haha. Ok I'll stop now.


That's a screenshot of a CPU meter, not a project. I can make a project that'll drive CPU usage that high but it's not one that anyone is going to listen to.

Again, I've made it clear that I'm not saying those projects don't exist. I'm just saying I haven't seen them. And I still haven't...!

And besides, looks like you still have 4% headroom there 

rgames


----------



## rgames

ProfoundSilence said:


> So stop speaking for others with any attempt of authority on the topic


If you're referring to me then I'd say go back and read my posts on this thread. I'm asking you guys to help me understand, which is me saying I'm not the authority.

I'm just asking questions here and looking to satisfy my curiosity - no need to get defensive.

Here's a quote from me from above:



rgames said:


> But again, I haven't seen the evidence that says more cores are better and absence of evidence is not evidence of absence.



If that's confusing, here's a translation: it might be true even though I've not seen any evidence that it is.

rgames


----------



## Symfoniq

I agree with most of what everyone is saying in this thread. The devil is in the details/implementation. There is also some established knowledge that probably needs to change a bit with the times.

I agree with Richard that adding cores doesn't always result in better real-time performance. I also agree with Blakus that things like ASIO-Guard are game-changers for making better use of all available CPU cores. Both points can be simultaneously true.

However, I would _not_ make a distinction between "real-time performance" and "CPU performance." The moment you have a digital audio buffer, there is no such thing as real-time. Even at the lowest buffer sizes, real-time is an illusion, and the extent to which that illusion can be maintained depends on CPU performance within a definite amount of time.

But if that's true, you might ask, why do you start seeing audio dropouts while CPU usage is at 30 or 40%? It's most likely because a majority of your CPU's cores are _waiting_ on another core.

To some degree, most DAWs these days are optimized for multiple cores. While approaches likely vary, a common one used by DAW developers is to put a track or stack of tracks on a CPU core. So in a (too) perfect scenario where you have six cores and six tracks, it's possible that each track gets its own core. If in our perfect scenario the load on each core remains equal, and what happens on each track/core is not dependent/waiting on what happens on another track/core, then CPU utilization could theoretically approach maximum.

However, this perfect scenario is never realized (and honestly, it's too simple to be accurate). The reality is that we won't have six tracks/cores that aren't dependent/waiting on other tracks/cores. Rather, we might have four virtual instrument tracks that all have a send to a fifth FX track, which has a very complicated chain of inserts. The four instrument tracks and the FX track are all routed to the sixth track, the master output, which has some mastering inserts of its own. While this routing is still pretty simple, we've created some new dependencies between cores/tracks: the FX track/core has to wait on the four instrument tracks/cores to send their audio chunks before it can sum the audio and process it through its effects chain; and the master output has to wait on the four instrument tracks and the FX track to send _their_ audio chunks before it can sum the audio and process it through the final mastering effects chain.

Because this summing has to happen in a synchronized fashion, and because it's extremely unlikely that all CPU cores are doing the same amount of work, the result is that you are going to have some tracks/cores waiting on other tracks/cores to do their work. To the end-user, this waiting by most of the cores looks like low CPU utilization.
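The routing above can be sketched as a tiny dependency model; the per-buffer costs are made-up numbers purely for illustration:

```python
# Four instrument tracks run in parallel (one core each), then an FX bus
# must wait for all of them, and the master must wait for everything.
instrument_costs = [1.0, 1.5, 4.0, 1.2]  # ms of DSP per audio buffer
fx_bus_cost = 2.0                        # ms, waits on all instruments
master_cost = 0.5                        # ms, waits on instruments + FX

# With ideal parallelism, the buffer deadline is governed by the critical
# path (slowest instrument -> FX bus -> master), not by the total work.
critical_path = max(instrument_costs) + fx_bus_cost + master_cost
total_work = sum(instrument_costs) + fx_bus_cost + master_cost

print(f"critical path: {critical_path:.1f} ms")  # 6.5 ms must beat the deadline
print(f"total work:    {total_work:.1f} ms")     # 10.2 ms, spread across cores
```

Adding more cores can't shorten the 6.5 ms critical path here; only a faster core on the slowest track, or a bigger per-buffer time budget, can.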

The reality that some cores are almost always going to be doing more work than other cores leads to the common wisdom that fewer, faster cores are preferable to more, slower cores. After all, if you have a core that isn't working fast enough on your 18-core 3.0 GHz CPU, and you could have bought an 8-core 5.0 GHz CPU instead, it might seem like you should have opted for the latter. What good is 18 cores if 17 of them are waiting all the time?

But here's the thing: unlike ten or even five years ago, high core-count CPUs aren't that much slower than their fastest little brothers. The just-announced i9-10980XE can boost all 18 cores to 3.8 GHz. It can boost the two fastest cores to 4.8 GHz, the next two fastest cores to 4.7 GHz, the next few to 4.6 GHz, etc. With these new high core-count CPUs, you don't have to choose between lots of cores and lots of clock speed. These CPUs will adapt to both lightly and heavily threaded workloads far more effectively than older generations. You are leaving a _little_ bit of clock speed on the table in exchange for a huge increase in cores. But those extra cores still cost a lot of money. So the question is, for DAW workloads that are almost never evenly distributed across cores, are there scenarios where high core-count CPUs might be warranted?

In my experience, absolutely. But you have to be willing to increase the size of your audio buffer. This is the crux of the issue, and the reason why Richard and Blakus are both correct.

I assume everyone here knows this, but here are the ground rules:


- By "workload", we mean "any processing that has to happen in order to fill the audio buffer within a definite amount of time." If we run out of time and the audio buffer isn't full yet, we get pops/clicks.
- At low buffer sizes, you are giving your CPU a smaller amount of time to complete the workload. In exchange you get lower latency, and thereby a more convincing illusion of "real-time" performance.
- Conversely, at high buffer sizes, you are giving your CPU a greater amount of time to complete the workload. The cost is increased latency, or the breakdown of the real-time illusion.
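To make those ground rules concrete, the time budget per buffer is simply `buffer_size / sample_rate` (the interface and driver add further latency on top of this):

```python
# Per-buffer processing deadline at a 48 kHz sample rate.
sample_rate = 48_000

for buffer_size in (64, 256, 512):
    budget_ms = buffer_size / sample_rate * 1000
    print(f"{buffer_size:>4} samples -> {budget_ms:.2f} ms to fill the buffer")
```

At 64 samples the whole track graph has 1.33 ms per buffer; at 512 it has 10.67 ms, eight times the breathing room.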

Now, if you look at charts showing CPU performance in audio workloads (Scan Pro Audio has many of them, such as this one), you'll notice that at low buffer sizes, most modern CPUs don't perform _drastically _better than other modern CPUs. At 64 samples, the 8-core 9900K with a 5.0 GHz turbo outperforms the older, 14-core 7940X with a 4.3 GHz turbo by about 23%. But notice that _all_ of the CPUs are within a range of 280 to 520 voices, meaning the fastest CPU still isn't accomplishing even twice as much work as the slowest CPU.

When the buffer size increases to 256, the situation looks very different. The 14-core 7940X outperforms the 8-core 9900K by 31%, and the fastest CPU is accomplishing over four times as much work as the slowest CPU. Increase the buffer size to 512 (and so on), and you can expect the 7940X to further distance itself from the 9900K.

The reason is straightforward: By increasing the buffer size, you are increasing the likelihood that a CPU core can complete the workload before time runs out to fill the audio buffer. Furthermore, by allowing that CPU core to finish its work before time runs out, you are likewise increasing the likelihood that any "downstream" dependent cores can also accomplish their work before time runs out. Simply put, given a greater amount of time, a greater number of cores can do a greater amount of work.

But it would be unrealistic to expect a high core-count CPU to show its full potential at low buffer sizes. With a shorter amount of time to accomplish a given workload, the odds greatly increase that there is going to be a "blocker" (overworked, stumbling core) in your routing that brings the entire project to its knees. But give the CPU more time, and you start to be able to do things with many cores that you can't do with fewer.
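The "blocker" effect can be made concrete with a toy Monte-Carlo sketch (my own model with invented workload numbers, not data from any benchmark): give each core a randomly varying chunk of work, and the chance that the slowest core misses the deadline falls off dramatically as the deadline grows.

```python
# Toy Monte-Carlo model of the "blocker" effect: each of n_cores gets a
# workload whose processing time varies randomly around a mean, and the
# buffer glitches when the slowest core misses the deadline.
# All numbers here are invented for illustration, not benchmark data.
import random

def glitch_probability(n_cores, mean_ms, jitter_ms, deadline_ms, trials=10_000):
    """Fraction of simulated buffers where the slowest core misses the deadline."""
    random.seed(42)  # fixed seed so the sketch is reproducible
    glitches = 0
    for _ in range(trials):
        slowest = max(random.gauss(mean_ms, jitter_ms) for _ in range(n_cores))
        if slowest > deadline_ms:
            glitches += 1
    return glitches / trials

# Same average per-core load; only the deadline (i.e. buffer size) changes.
print("tight deadline (~64 samples): ", glitch_probability(16, 1.0, 0.3, 1.45))
print("relaxed deadline (~256 samples):", glitch_probability(16, 1.0, 0.3, 5.8))
```

With sixteen cores averaging 1 ms of work each, most buffers glitch under a ~1.45 ms deadline, while essentially none do under ~5.8 ms: more cores only help once the deadline gives every one of them room to finish.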

This is why ASIO-Guard is such a big deal on high core-count systems. It gives the CPU a _lot_ of time (relatively speaking) to process a workload while reducing perceived latency. With ample time to process work, cores that are waiting/dependent on other cores are less likely to be left out in the dark before time runs out to fill the buffer. For the purposes of "real-time" work, time is breath to the CPU: just as being able to breathe better allows you to exercise harder, a CPU can work harder if you let it breathe more (give it more time). Hence Blakus' screenshot of 96% CPU utilization, which in my experience is practically impossible to achieve with a low buffer setting.

There are some variables that this overlong rant doesn't address, such as waiting on I/O if streaming samples from disk, or "hacking" your audio routing so that too much processing isn't concentrated on a single track (and therefore, possibly, a single core). And of course I've still oversimplified the issue. But I do believe that high core-count CPUs have a place in a composer's virtual instrument-intensive workload, provided you are willing to work at higher latencies or can utilize something like ASIO-Guard to make high buffer settings relatively painless.

But if you really need or want to work at 64 samples, buy the 9900K.


----------



## Blakus

Symfoniq said:


> I agree with most of what everyone is saying in this thread. The devil is in the details/implementation. There is also some established knowledge that probably needs to change a bit with the times.
> 
> I agree with Richard that adding cores doesn't always result in better real-time performance. I also agree with Blakus that things like ASIO-Guard are game-changers for making better use of all available CPU cores. Both points can be simultaneously true.
> 
> However, I would _not _make a distinction between "real-time performance" and "CPU performance." The moment you have a digital audio buffer, there is no such thing as real-time. Even at the lowest buffer sizes, real-time is an illusion, and the extent to which that illusion can be maintained depends on CPU performance within a definite amount of time.
> 
> But if that's true, you might ask, why do you start seeing audio dropouts while CPU usage is at 30 or 40%? It's most likely because a majority of your CPU's cores are _waiting _on another core.
> 
> To some degree, most DAWs these days are optimized for multiple cores. While approaches likely vary, a common one used by DAW developers is to put a track or stack of tracks on a CPU core. So in a (too) perfect scenario where you have six cores and six tracks, it's possible that each track gets its own core. If in our perfect scenario the load on each core remains equal, and what happens on each track/core is not dependent/waiting on what happens on another track/core, then CPU utilization could theoretically approach maximum.
> 
> However, this perfect scenario is never realized (and honestly, it's too simple to be accurate). The reality is that we won't have six tracks/cores that aren't dependent/waiting on other tracks/cores. Rather, we might have four virtual instrument tracks that all have a send to a fifth FX track, which has a very complicated chain of inserts. The four instrument tracks and the FX track are all routed to the sixth track, the master output, which has some mastering inserts of its own. While this routing is still pretty simple, we've created some new dependencies between cores/tracks: the FX track/core has to wait on the four instrument tracks/cores to send their audio chunks before it can sum the audio and process it through its effects chain; and the master output has to wait on the four instrument tracks and the FX track to send _their _audio chunks before it can sum the audio and process it through the final mastering effects chain. Because this summing has to happen in a synchronized fashion, and because it's extremely unlikely that all CPU cores are doing the same amount of work, the result is that you are going to have some tracks/cores waiting on other tracks/cores to do their work. To the end-user, this waiting by most of the cores looks like low CPU utilization.
> 
> The reality that some cores are almost always going to be doing more work than other cores leads to the common wisdom that less, faster cores are preferable to more, slower cores. After all, if you have a core that isn't working fast enough on your 18-core 3.0 GHz CPU, and you could have bought an 8-core 5.0 GHz CPU instead, it might seem like you should have opted for the latter. What good is 18 cores if 17 of them are waiting all the time?
> 
> But here's the thing: unlike ten or even five years ago, high core-count CPUs aren't that much slower than their fastest little brothers. The just-announced i9-10980XE can boost all 18 cores to 3.8 GHz. It can boost the two fastest cores to 4.8 GHz, the next two fastest cores to 4.7 GHz, the next few to 4.6 GHz, etc. With these new high core-count CPUs, you don't have to choose between lots of cores and lots of clock speed. These CPUs will adapt to both lightly and heavily threaded workloads far more effectively than older generations. You are leaving a _little_ bit of clock speed on the table in exchange for a huge increase in cores. But those extra cores still cost a lot of money. So the question is, for DAW workloads that are almost never evenly distributed across cores, are there scenarios where high core-count CPUs might be warranted?
> 
> In my experience, absolutely. But you have to be willing to increase the size of your audio buffer. This is the crux of the issue, and the reason why Richard and Blakus are both correct.
> 
> I assume everyone here knows this, but here are the ground rules:
> 
> 
> By "workload", we mean "any processing that has to happen in order to fill the audio buffer within a definite amount of time." If we run out of time and the audio buffer isn't full yet, we get pops/clicks.
> At low buffer sizes, you are giving your CPU a smaller amount of time to complete the workload. In exchange you get lower latency, and thereby a more convincing illusion of "real-time" performance.
> Conversely, at high buffer sizes, you are giving your CPU a greater amount of time to complete the workload. The cost is increased latency, or the breakdown of the real-time illusion.
> 
> Now, if you look at charts showing CPU performance in audio workloads (Scan Pro Audio has many of them, such as this one), you'll notice that at low buffer sizes, most modern CPUs don't perform _drastically _better than other modern CPUs. At 64 samples, the 8-core 9900K with a 5.0 GHz turbo outperforms the older, 14-core 7940X with a 4.3 GHz turbo by about 23%. But notice that _all_ of the CPUs are within a range of 280 to 520 voices, meaning the fastest CPU still isn't accomplishing even twice as much work as the slowest CPU.
> 
> When the buffer size increases to 256, the situation looks very different. The 14-core 7940X outperforms the 8-core 9900K by 31%, and the fastest CPU is accomplishing over four times as much work as the slowest CPU. Increase the buffer size to 512 (and so on), and you can expect the 7940X to further distance itself from the 9900K.
> 
> The reason is straightforward: By increasing the buffer size, you are increasing the likelihood that a CPU core can complete the workload before time runs out to fill the audio buffer. Furthermore, by allowing that CPU core to finish its work before time runs out, you are likewise increasing the likelihood that any "downstream" dependent cores can also accomplish their work before time runs out. Simply put, given a greater amount of time, a greater number of cores can do a greater amount of work.
> 
> But it would be unrealistic to expect a high core-count CPU to show its full potential at low buffer sizes. With a shorter amount of time to accomplish a given workload, the odds greatly increase that there is going to be a "blocker" (overworked, stumbling core) in your routing that brings the entire project to its knees. But give the CPU more time, and you start to be able to do things with many cores that you can't do with fewer.
> 
> This is why ASIO-Guard is such a big deal on high core-count systems. It gives the CPU a _lot_ of time (relatively speaking) to process a workload while reducing perceived latency. With ample time to process work, cores that are waiting/dependent on other cores are less likely to be left out in the dark before time runs out to fill the buffer. For the purposes of "real-time" work, time is breath to the CPU: just as being able to breathe better allows you to exercise harder, a CPU can work harder if you let it breathe more (give it more time). Hence Blakus' screenshot of 96% CPU utilization, which in my experience is practically impossible to achieve with a low buffer setting.
> 
> There are some variables that this overlong rant doesn't address, such as waiting on I/O if streaming samples from disk, or "hacking" your audio routing so that too much processing isn't concentrated on a single track (and therefore, possibly, a single core). And of course I've still oversimplified the issue. But I do believe that high core-count CPUs have a place in a composer's virtual instrument-intensive workload, provided you are willing to work at higher latencies or can utilize something like ASIO-Guard to make high buffer settings relatively painless.
> 
> But if you really need or want to work at 64 samples, buy the 9900K.


This is the best I’ve seen this information explained. Well said.


----------



## Symfoniq

Blakus said:


> This is the best I’ve seen this information explained. Well said.



Thanks!

Sorry about the length, though. Writing is easier than editing!


----------



## Nick Batzdorf

rgames said:


> So I'm curious what people are doing where these kinds of core counts make a difference for DAW use.
> 
> Thanks,
> 
> rgames



Richard, this is Logic running a bunch of tracks on my 12-core Mac Pro (it has hyperthreading, hence the 24 lanes). I have my big-arse full template loaded.


----------



## rgames

Nick Batzdorf said:


> Richard, this is Logic running a bunch of tracks on my 12-core Mac Pro (it has hyperthreading, hence the 24 lanes). I have my big-arse full template loaded.


That's pretty consistent with what I see with a similar kind of project on PC. And it shows that you're not anywhere near full CPU usage.

What I'm curious about is what kind of projects people are running where all those bars are pushing up towards 100%.

rgames


----------



## Nick Batzdorf

rgames said:


> That's pretty consistent with what I see with a similar kind of project on PC. And it shows that you're not anywhere near full CPU usage.
> 
> rgames



That's what I post all the time, in so many words - that people have been trained to keep chasing the next generation of computers.

It is good having some processing overhead in this machine, and the machine is using its cores. I was running into the limit of my previous machine, which was the generation before this one (3,1 8-core Mac Pro). But most people really don't need to keep upgrading - modern machines really do keep up, especially now that we have SSDs.

The one caveat is that I don't run lots of mic positions all the time, and of course each additional mic position doubles the resources you need per program.


----------



## jamwerks

It's true that most composers won't need the latest top-of-the-line CPUs. A couple of advantages, though: once you add (granular) synths to projects, processing needs can go way up; there's also running video, which seems to require some juice; and more power allows lower buffer settings, which is a real plus when playing in lines...


----------



## rgames

jamwerks said:


> It's true that most composers won't need the latest top of the line cpu's. A couple advantages though: once you add synths (granular) to projects, processing needs can go way up, there's also running video which seems to require some juice, and more power allows lower buffer settings, which is a real plus when playing in lines...


That all makes sense. The trouble is that I've tried to demonstrate it a number of times and can't seem to do it...!

For example, I have a large orchestral project that I've used as a benchmark for 6 or 7 years (maybe more - can't recall exactly). It's a pretty typical hybrid orchestral/synth project with some fast-moving lines in the strings/WW and a bunch of effects. I've run it on a 4-core, 6-core and 10-core processor and the minimum latency I can achieve is about 6 ms on all of them. So, from a practical standpoint, they all performed identically. Granted, the 10-core machine ran the project at lower CPU usage, but the point is that they all ran the same project at the same latency with no issues. I don't particularly care what the CPU usage is so long as it's not a bottleneck.

Within this thread and occasionally on others I've seen assertions that CPU could become a bottleneck for the reasons that you describe. Again, that makes sense.

However, if you look through this forum and a bunch of others on the web then you'll see a ton of posts where people look for help with pops/crackles/dropouts at low CPU usage but none (that I can find) where people ask for help with CPU bottlenecks at low ASIO (real-time) usage.

Therefore, in my own experience, and that of the people who post on music-related web forums, it's real-time performance that matters the most, and the situation you describe, while theoretically possible, seems to occur very infrequently (or not at all, as in my experience).

rgames


----------



## Cat

rgames said:


> That all makes sense. The trouble is that I've tried to demonstrate it a number of times and can't seem to do it...!
> 
> For example, I have a large orchestral project that I've used as a benchmark for 6 or 7 years (maybe more - can't recall exactly). It's a pretty typical hybrid orchestral/synth project with some fast-moving lines in the strings/WW and a bunch of effects. I've run it on a 4-core, 6-core and 10-core processor and the minimum latency I can achieve is about 6 ms on all of them. So, from a practical standpoint, they all performed identically. Granted, the 10-core machine ran the project at lower CPU usage, but the point is that they all ran the same project at the same latency with no issues. I don't particularly care what the CPU usage is so long as it's not a bottleneck.
> 
> Within this thread and occasionally on others I've seen assertions that CPU could become a bottleneck for the reasons that you describe. Again, that makes sense.
> 
> However, if you look through this forum and a bunch of others on the web then you'll see a ton of posts where people look for help with pops/crackles/dropouts at low CPU usage but none (that I can find) where people ask for help with CPU bottlenecks at low ASIO (real-time) usage.
> 
> Therefore, in my own experience, and that of the people who post on music-related web forums, it's real-time performance that matters the most, and the situation you describe, while theoretically possible, seems to occur very infrequently (or not at all, as in my experience).
> 
> rgames


Yeah, maybe your test project is not demanding enough. I am running a huge template, probably over 20 orchestral libraries, plus tons of percussion and effects, and lots of high-end reverbs (20 quad stems...). The main computer is a five-year-old 12-core Xeon with 128 GB of RAM, plus two external VEP slaves. I can tell you, the computer is struggling a bit. Sometimes when I add just one single Omnisphere patch (latest-generation add-ons) it maxes out and crackles at a 256 buffer. Not willing to go to any higher latency.
Multi-core CPUs are fantastic and very well handled by VE Pro, but you need high clock speed for stuff that has to run within Cubase directly, like Omnisphere (which doesn't like VE Pro). I used to have a 6-core (at a slightly higher clock) which performed worse. Please note that modern Kontakt libraries, with all the crazy fancy scripting, require way more power than those from 6-7 years ago (which I suspect you are using).
I am using ASIO-Guard (without which it would not run well at all), but at the lowest level (so it does not crackle when I switch tracks using VE Pro).

So there is hope that the new CPUs will bring some relief!


----------



## rgames

Cat said:


> Xeon 12 cores


Have you tried to run it on a regular i7?

I'd be curious to see what you find there because I've found that, in general, Xeons perform worse than the regular i7s when it comes to real-time performance. I'm not sure why but I think it might be due to the generally lower clock speeds on Xeons. I've seen that a number of times over the years: a project that struggles on a Xeon does much better on an i7. However, I'll also say that most of that experience is with dual-Xeon setups which are, admittedly, a different animal. Maybe single Xeons are different - can't say I can recall a direct comparison in that regard.

Cheers,

rgames


----------



## chimuelo

Xeons do run slower, but even the fast Xeon E3-1275 v6 with the C236 chipset gags on the same projects I run on an i7-4790K.
The 200 MHz difference can't account for that gap.

I always thought it was the chipsets, because I have to really fiddle with timings, driver issues, etc.
Then it still shows PCI overflow messages, which I hadn't seen since I used internal sound cards in 32-bit PCI slots.

I tried Supermicro, Asus and ASRock boards with the C200-series chips.
I love quality-built, short-trace server and workstation boards, but they just don't work as well as the i7 chipsets.

This is why I love ASRock Rack motherboards.
They seem to get it, because they have server-quality boards using i7 chipsets; even the lightweight H97 I have is fast and problem-free.


----------



## rgames

chimuelo said:


> Xeons do run slower, but even the fast Xeon E3-1275 v6 with the C236 chipset gags on the same projects I run on an i7-4790K.
> The 200 MHz difference can't account for that gap.


Yeah, that doesn't seem like it would make a difference. Regarding the chipsets - don't the newer chipsets cover both Xeon and i7? I wonder if there's any meaningful comparison there.

Somebody just needs to write a real-time OS for DAWs and then all the devs need to port all their software over. I think I heard a rumor that Steinberg or one of the DAW devs did some tests on a real-time OS a while back.

rgames


----------



## JohnG

rgames said:


> Somebody just needs to write a real-time OS for DAWs and then all the devs need to port all their software over. I think I heard a rumor that Steinberg or one of the DAW devs did some tests on a real-time OS a while back.
> 
> rgames



I think that's the crux, Richard; draining the spaghetti* of the background OS might do it all.

It's mind-boggling, without even touching a music program, to see how many processes and services are running all the time in Windows and, presumably, in Mac OS as well. Even when I've turned off everything recommended, the list of non-music-related stuff that's still running seems gargantuan.

* I don't think this metaphor works.


----------



## ridgero

Sorry guys, but what's a real-time OS?


----------



## waveheavy

I see the advantage in processing power with multiple CPU cores, but not with the real-time RAM capacity needed for a large orchestral template. And when does a super-large template become so complex that it slows your creativity down just as much as not having a large template of instruments at all?


----------



## JohnG

ridgero said:


> Sorry guys, but whats a real-time OS?



It's a fantasy, no doubt, but the concept is an operating system devoted to music, without all the IT needed, say, to fit into a corporate or business environment. The "real time" in the moniker means that all the resources are available to music playback all the time. 

That's by contrast with the many interruptions that our current OS imposes -- interruptions from tasks like scanning the USB ports, scanning for new hardware, maintaining print queues and all the myriad services that a PC routinely needs in a normal work space.

Not sure how closely that corresponds to others' ideas.



waveheavy said:


> when does a super large template become too complex which does the same thing in slowing your creativity down as with not having a large template of instruments?



I guess, "when it does." I use a large template but, like most people with SSDs and all that, the RAM footprint is much lower than the default.

I don't get the "less creativity" idea of a large template. If I want short strings, I often want a heck of a lot of them from which to choose. I don't think music is analogous to the "think of 30 things you can do with a brick, a nail and a piece of string." No doubt some people can come up with 30 or 60, and hats off to them. 

But I don't know that making do with one violin patch is the Path To Musical Greatness.


----------



## Guy Rowland

FWIW...

1. I presume Richard never sees CPU bottlenecks because he only works on sample-based projects. Try throwing some demanding synths and effects into a project, and those bars turn red a whole lot quicker. I’ve had plenty of project meltdowns over the years in an inefficient DAW like Cubase, where ASIO-Guard can’t work with VEP.

2. For those of us who use a lot of demanding effects, synths, etc., choosing a CPU is all about single-core vs. multi-core performance. This can be a significant blind spot in DAWbench testing. Very often it’s your single-core score that is causing a meltdown - the amount a single channel in your session can process. From my experience, 4.3 GHz is the minimum I’d consider at this point, across all cores.

3. Advice I got from Scan is that, in their experience, all core speeds should be fixed. A lot of audio problems can result from different cores speeding up and down.

4. For me, I haven’t been near 64 GB, let alone 256 GB, since going with a VEP disabled template, which has been genuinely game-changing.

With all these provisos, it's nice that the new range can drop right into an X299 mobo (which I have). I’ll be watching the test results carefully, with an eye on the thermals, to see if I can get a solid, pain-free performance boost.


----------



## DANIELE

As I said before, you are perhaps not considering that a new, refined CPU from the same production process can do better in a bunch of activities.

E.g., with my current CPU it is hard to reach 4.5 GHz on all cores, and the PC can be unstable and get very hot under heavy processing.

With the new CPU I could do a small overclock to reach 4.5 GHz while avoiding overheating, and since prices have dropped a lot I could buy a high-core-count CPU. So in the end I would have less heat, more cores, and more GHz.

I think I would see improvements in every area.

This is certainly true in my case, where I have a good CPU, but a very hot one from the first series of X299 CPUs.


----------



## rgames

JohnG said:


> It's a fantasy, no doubt,


There are real-time OSes but none of the standard consumer OSes are real-time. Software that controls an airplane or a rocket needs real-time feedback loops because physics tends not to wait for mouse inputs or hard disk activity... So those development environments are real-time.

The issue with DAWs is that almost nothing requires real-time performance on a consumer PC. The exceptions are sound cards and (to a lesser extent) video cards. Your D/A converter operates in real time, again because the physics of analog sound that goes to your ears requires it: your DAW needs to operate quickly enough to keep the audio stream filled. When it fails to do so you hear pops/crackles/dropouts. Likewise your screen updates at a fixed rate (probably 60 Hz), so the computer needs to keep the video stream filled and when it can't you see tearing (yes, Gsync is an exception... but you get the idea).

How well your computer keeps up with those real-time data streams is called its real-time performance. The meter in your DAW gives you a measure of how close you are to falling behind (it's not a CPU meter, it's a real-time performance meter).

Think of it like a train going by where you have to load 30 people in each car and each car can only hold 30 people. If something happens (e.g. some USB driver distracts you) and you don't get 30 people on one car then you're screwed because you can't just send extra people on the next car and besides, they'd be too late. There's a gap. That gap is the equivalent of a pop or click or screen tear.

Non-real-time activities like spreadsheet calcs or video renders don't care if there's a gap - the next car can just pick up where the previous one left off. There's a bit of a delay but in practice it doesn't make a difference because there's no continuous data stream that's interrupted and the delay is too short to be sensed.

Because the vast majority of consumer activities are more like spreadsheets the standard OSes are optimized for non-real-time. For audio activities they just use huge buffers to protect against problems. But those huge buffers come with huge latency which is a problem if you're playing a virtual instrument. That's why DAW users have specialized sound cards with optimized drivers.
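A minimal sketch of the real-time constraint described above (my own illustration, not a real audio driver): the processing callback must finish before the stream needs the next buffer, or you get a pop.

```python
# Toy audio-callback deadline check; names and numbers are illustrative.
import time

SAMPLE_RATE = 44_100
BUFFER_SIZE = 512
DEADLINE = BUFFER_SIZE / SAMPLE_RATE  # seconds available per callback (~11.6 ms)

def render_buffer() -> list:
    """Stand-in for the DAW's processing graph (here it just renders silence)."""
    return [0.0] * BUFFER_SIZE

def run_callback() -> bool:
    """Return True if the buffer was filled before its deadline."""
    start = time.perf_counter()
    render_buffer()
    elapsed = time.perf_counter() - start
    return elapsed <= DEADLINE  # a miss here is a pop/click/dropout

print(f"deadline per buffer: {DEADLINE * 1e3:.2f} ms")
print("made the deadline:", run_callback())
```

A real driver would invoke something like this at a fixed rate from a high-priority thread; the point is simply that the deadline is hard, no matter how idle the CPU looks on average.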


----------



## rgames

Guy Rowland said:


> 1. I presume Richard never sees CPU bottlenecks because he only works on sample-based projects. Try throwing some demanding synths and effects into a project, and those bars turn red a whole lot quicker. I’ve had plenty of project meltdowns over the years in an inefficient DAW like Cubase, where ASIO-Guard can’t work with VEP.


I've certainly run into those kinds of issues. Diva and iZotope VocalSynth are probably the most common culprits for me.

But they never cause CPU overload; they cause real-time overload. I can export the track faster than it takes to play back, which means there's plenty of processing power available to do all the necessary calculations faster than required. The problem during playback is that the processing isn't happening exactly when necessary, because the CPU is distracted by something else. That's clear evidence of a real-time problem, and it's consistent with what I've seen on a bunch of other setups.
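That diagnostic can be sketched in a few lines (the timings below are hypothetical, not measurements from this thread): if an offline export finishes faster than the song plays back, raw throughput isn't the bottleneck, and dropouts point to a real-time/scheduling problem instead.

```python
# Sketch of the export-vs-playback diagnostic; all numbers are hypothetical.

def realtime_factor(export_seconds, song_seconds):
    """< 1.0 means the machine can compute the audio faster than real time."""
    return export_seconds / song_seconds

song_length = 180.0  # a 3-minute cue (hypothetical)
export_time = 95.0   # how long the offline bounce took (hypothetical)

rtf = realtime_factor(export_time, song_length)
if rtf < 1.0:
    print(f"Real-time factor {rtf:.2f}: throughput is fine; "
          "dropouts are likely a scheduling problem.")
else:
    print(f"Real-time factor {rtf:.2f}: the project genuinely "
          "exceeds the CPU's throughput.")
```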

Does that mean it applies to all? Of course not. But the counterexamples seem to be very hard to come by...

rgames


----------



## chimuelo

Merging Technologies' Pyramix has actually been successful at bypassing Windows on a single core of an i7 to use for its mixer.

It's pretty impressive, but as far as I know it's still just for mixing, not recording. The DSP-assisted app is pretty powerful, though, and has walked away from AES with an award more than once.

I'm sure liking Intel having a fire under its ass, though.
I've got 3 machines in racks I just don't use, because one's a GSIF box and the other two are ancient Bloomfield i7s.

It would cost me peanuts to fill all three with AMD Ryzen 5 3600 CPUs.

I'd rather have an 18-core rig than an 18-core CPU.
But they seem to be approaching the same cost, which is more than welcome.

Love that CEO from AMD.
CEOs are supposed to prepare a company for the future; sheez, this gal foresaw the future, for 2020 as well, while Intel is still trapped in its 14nm skin.


----------



## Guy Rowland

Chimuelo - actually, Pyramix does use its MassCore for recording. It's been a standard fixture in TV studios in the UK - I've never seen PT used for recording, and Reaper is too fiddly. It's never really gotten traction outside specialist environments, though.

[TANGENT] In more recent years, the two devices now replacing Pyramix in studios here, at least, are the JoeCo recorder and, increasingly, VOSgames' Boom Recorder. The latter is excellent IMO, and I own a copy that I use with a MacBook Pro.[/TANGENT]


----------



## Guy Rowland

Just had a proper look at the list - the main headline is just cost, isn't it? I see an increase in cores, but not a vast one from my old 7820X, to be honest, overclocked to 4.3 GHz. Essentially it's two, maybe four, more cores for the same single-core performance I have now - not to be sneezed at, but I wonder how useful it is in the real world. This current setup of mine has performed very well, really, but given the choice I'd go for more single-core rather than more multi-core performance, so it's not all that appealing. If there were an option for, say, 4.8 GHz with 8 cores at a sensible temperature, I'd be thinking of jumping.


----------



## Manaberry

Looks good 👀 
https://www.intel.com/content/www/us/en/products/processors/core/x-series/i9-10980xe.html

The supported memory type is DDR4-2933. I've got 3000 MHz DDR4 stuck at 2400 because of my old CPU. The upgrade will be worth it!


----------



## chimuelo

Guy Rowland said:


> Chimuelo - actually, Pyramix does use its MassCore for recording. It's been a standard fixture in TV studios in the UK - I've never seen PT used for recording, and Reaper is too fiddly. It's never really gotten traction outside specialist environments, though.
> 
> [TANGENT] In more recent years, the two devices now replacing Pyramix in studios here, at least, are the JoeCo recorder and, increasingly, VOSgames' Boom Recorder. The latter is excellent IMO, and I own a copy that I use with a MacBook Pro.[/TANGENT]




That's good news, because at AES years ago I thought their mixing scheme was awesome, and its plug-ins, while seemingly proprietary, were extremely effective. The slightest tweaks had measurable results.

Do you know where I can read up on its mixer's MIDI functionality?
I'm trying to find a live-performance DAW using an RME AIO where I don't need to buy another $3,500 DSP rack.

Thanks


----------



## Symfoniq

rgames said:


> Somebody just needs to write a real-time OS for DAWs and then all the devs need to port all their software over.
> 
> rgames



Too bad BeOS never caught on. The ability of designated "real-time threads" to preempt any other thread made it an attractive platform for audio and video workflows.


----------



## Guy Rowland

chimuelo said:


> That's good news, because at AES years ago I thought their mixing scheme was awesome, and its plug-ins, while seemingly proprietary, were extremely effective. The slightest tweaks had measurable results.
> 
> Do you know where I can read up on its mixer's MIDI functionality?
> I'm trying to find a live-performance DAW using an RME AIO where I don't need to buy another $3,500 DSP rack.
> 
> Thanks



I'll confess I abandoned Pyramix about a decade ago and never looked back. There was no MIDI of any kind back in those days, of course. It had a few amazing things going for it as a platform - the audio editing still beats anything else I've tried hands-down, PT included. MassCore, I can't help thinking, was more important then than it is now, but clearly there might still be a need for it out there in some circumstances.

Sorry I can't be of more help - you'd better get Googling!


----------



## ridgero

The new Xeon family gets a big price cut as well.

Intel Xeon W-2200 Family: Cascade Lake-X with ECC and 1TB Support
www.anandtech.com

Do you think it will have an effect on the next iMac Pros?


----------



## Nick Batzdorf

Guy Rowland said:


> 2. For those of us who use a lot of demanding effects / synths etc, choosing a CPU is all about single core vs multicore performance.



A few months ago, responding to another thread very much like this one, I loaded a bunch of multis into Omnisphere and tried to make my computer gag.

It wouldn't comply.

No doubt there are people who max out current computers, but my hunch is that they're few and far between.



JohnG said:


> It's a fantasy, no doubt, but the concept is an operating system devoted to music, without all the IT needed, say, to fit into a corporate or business environment. The "real time" in the moniker means that all the resources are available to music playback all the time.



The reason we have all these great tools and instruments at - in the grand scheme of things - very low prices is that companies have been able to leverage mass-market computers. Economy of scale.

There have been several "embedded" systems, and they all failed because people didn't want to pay for them.

I personally am fine with macOS the way it is now, with Logic set at a 128 sample buffer and no issues.

What I would like to see is the equivalent of simul-sync on tape recorders - audio interfaces automatically switching to direct monitoring when you put a track into record.

Apparently that exists as a standard somewhere - I forget the details - but it hasn't been picked up.


----------



## Guy Rowland

Nick Batzdorf said:


> A few months ago, responding to another thread very much like this one, I loaded a bunch of multis into Omnisphere and tried to make my computer gag.
> 
> It wouldn't comply.
> 
> No doubt there are people who max out current computers, but my hunch is that they're few and far between.



I've done it plenty of times, and with Omni too. But to be fair, that was mainly a year or so ago, before they fixed the Windows CPU bug. Since then it's been massively better.

Right now my system is better than it's ever been, but I have to work at 256 - that's a limitation I'm OK with. Big synth-laden projects get very high on the meters, but they're not cracking at this buffer size. A few instances of Omni, plus a few u-he synths and Avengers, get things pretty lively, especially with thirsty effects in the DAW on any of those channels.


----------



## rgames

Guy Rowland said:


> but I have to work at 256 - that's a limitation


256 is plenty good.

Acoustic instruments respond more slowly than that, some by quite a lot, and people have done just fine with them.

My primary instrument is clarinet, one of the faster-responding instruments, and it has a latency of 30-50 ms depending on how you're playing and in what register. A buffer of 256 samples works out to roughly 15 ms round trip, so at least 2x faster than one of the "fastest" acoustic instruments.
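For reference, the raw buffer arithmetic behind those figures is simple. This is a sketch assuming a 44.1 kHz sample rate; real round-trip latency adds converter and driver overhead on top, which is how 256 samples ends up nearer 15 ms than 6 ms:

```python
# One-way latency contributed by a single audio buffer (44.1 kHz assumed).
def buffer_latency_ms(buffer_samples: int, sample_rate: int = 44100) -> float:
    """Time it takes to fill (or play out) one buffer, in milliseconds."""
    return buffer_samples / sample_rate * 1000

one_way = buffer_latency_ms(256)   # ~5.8 ms for one 256-sample buffer
round_trip = 2 * one_way           # input + output buffers: ~11.6 ms, before converters/drivers
print(f"{one_way:.1f} ms one-way, {round_trip:.1f} ms round trip")
```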

rgames


----------



## Nick Batzdorf

rgames said:


> 256 is plenty good.
> 
> Acoustic instruments respond more slowly than that, some by quite a lot, and people have done just fine with them.



256 doesn't bother me, but instruments with a hard attack speak immediately (percussion, piano, harp, etc.).


----------



## rgames

Nick Batzdorf said:


> instruments with a hard attack speak immediately (percussion, piano, harp, etc.)


I'm going to have to disagree there - acoustic piano latency is 15-30 ms (or more) depending on dynamic. It gets longer towards pp because the mechanism moves more slowly. Also, even after the hammer strikes the string there's time for the vibrations to develop then travel through the air to your ears (5 ms or so, longer for larger pianos and shorter for smaller pianos).



A. Askenfelt & E. Jansson, "From touch to string vibration: Measuring the timing", in The Acoustics of the Piano.



Even if you can sense the latency you will adapt your playing to account for it. Latencies on bass strings/winds can easily be 100 ms. And they do just fine.

rgames


----------



## Nick Batzdorf

Again, I’m fine with 256 samples, although I can feel the difference between that and 128 if I A/B them on a piano library. And I agree that you’ll adjust your playing.

What’s more, I’m not sure that the MIDI Note-on is triggered at the bottom of the key travel - it may be before.

But for example drums speak right away. If I play a pad, I’m listening/feeling the stick hitting the pad, not the sample. And there are drummers who have trouble with the typical 3ms trip through a digital mixer, or at least who don’t like it.


----------



## Guy Rowland

I'm ok with 256 too, as I said in my previous post.

My current setup is working well, but not infrequently is on the edge. Everyone's mileage will vary - for me the focus on endless cores is an unworthy cause (see what I did there? Cores / Cause? Never mind). But for those who only use occasional synth patches on relatively undemanding synths, they'll likely not need such a high single core speed. Those using Reaper (and I think Logic) will likely be ok too, they are far more efficient DAWs than Cubase. If you're hosting FX in VE Pro the CPU will go further. For those of us in Cubase frequently using CPU-hungry synths and effects, I'd strongly suggest weighing up those single core speeds very carefully.

So guess what - everyone's systems, workflows and needs are different. I do read a lot of posts that seem to decree what works for them should work for everyone else, and I don't find things so simple.


----------



## woodslanding

It's funny that latency on a piano never feels like a problem, but on a piano sample I literally can't play fast tempos if the buffer goes above 128 samples. Jazz solo in 8th notes at 350 bpm? No longer possible.

In the Canterbury Suitcase sample set, it took very little time to find the samples with untrimmed initial silence, even as little as about 2 ms - they just felt wrong. Maybe the latency of a piano is somehow different, I don't know. But it never bothers me.

And even if your clarinet can't create a sound from silence in less than 100 ms, transitions between notes must be much faster. Otherwise a run of 32nd notes at 80 bpm would be silent.

So yes, maybe latency doesn't matter to _you_.....


----------



## Luke Davoll

Hi guys. Two things I've not seen talked about here yet and would love some insight. 

1. Junkiexl's setup 
2. Fabio's response to high core count cpus over at Steinberg. 

1. JXL has an i9 7900X 10-core running at 4 GHz. From 3 server units he has hundreds of audio channels coming in, his template consists of thousands of tracks, and he runs plenty of soft synths too. He has a DSP card for offloading some FX processing. So, if this is enough for him, why are we discussing 18-core CPUs? (i.e. what am I missing?). The 10980XE can boost cores etc., but from what I've read, having all cores running at the same frequency is desirable, and it seems that's what JXL has done too...

2. Fabio has stated that the MMCSS limit doesn't exist with cubase 10 under Windows 10 any more. However, he did say that many more threads require...what did he say...synchronising all these threads...which would not be beneficial for audio use. So this also makes me wonder again if JXL has the right formula by not having more than 10 cores. 

I'm definitely a noob, but I've done heaps of research and am very interested in this. I imagine JXL would have had his assistants try every possible combination until they found the best result for his workflow. I'm not sure what Hans uses. Does anyone know?

I'd love to hear your thoughts...


----------



## Guy Rowland

Luke Davoll said:


> 1. JXL has an i9 7900x 10 core running at 4 GHz. From 3 server units he has hundreds of audio channels coming in, his template consists of thousands of tracks, and he runs plenty of soft synths too. He has a dsp card for offload some fx processing. So, if this is enough for him, why are we discussing 18core CPUs? (i.e. what am I missing?).



Unless I'm missing something, I think the critical words there are "3 server units". I guess people here are looking at replacing a setup like that with just one crate, which has a myriad of issues connected with it.


----------



## rgames

Luke Davoll said:


> Hi guys. Two things I've not seen talked about here yet and would love some insight.
> 
> 1. Junkiexl's setup
> 2. Fabio's response to high core count cpus over at Steinberg.
> 
> 1. JXL has an i9 7900X 10-core running at 4 GHz. From 3 server units he has hundreds of audio channels coming in, his template consists of thousands of tracks, and he runs plenty of soft synths too. He has a DSP card for offloading some FX processing. So, if this is enough for him, why are we discussing 18-core CPUs? (i.e. what am I missing?). The 10980XE can boost cores etc., but from what I've read, having all cores running at the same frequency is desirable, and it seems that's what JXL has done too...
> 
> 2. Fabio has stated that the MMCSS limit doesn't exist with cubase 10 under Windows 10 any more. However, he did say that many more threads require...what did he say...synchronising all these threads...which would not be beneficial for audio use. So this also makes me wonder again if JXL has the right formula by not having more than 10 cores.
> 
> I'm definitely a noob, but have done heaps of research and am very interested in this. I imagine JXL would have had his assistants try every possible combination until they found the best result for his work flow. I'm not sure what Hans uses. Does anyone know?
> 
> I'd love to hear your thoughts...


It depends on what you're writing and how you write it. There's no question that it's absolutely possible to get by with a single machine these days. However, if you want to stream a LOT of voices simultaneously (say 5,000+) then you're probably going to need multiple machines, because you'll hit real-time performance limits before CPU limits (in my experience...). So a single machine isn't going to cut it even if you have an infinitely fast CPU, because the bottlenecks exist elsewhere in the system. It's like adding more horsepower to a car to try to drive through Manhattan more quickly: there's too much other stuff in the way, so that extra horsepower won't be used and you won't get through traffic any faster than you would in a car with much less horsepower.

If you're running near 100% CPU load then more cores might help. My guess is that JunkieXL doesn't hit CPU limits - he hits streaming/real-time limits - so extra cores won't buy him much. My experience, and that of the studios I've been in over the past 10 years or so, is the same.

The truth is that pretty much any i7 or better CPU is just fine these days. Even if you do hit CPU limits (which I haven't seen in a long time) there are easy ways around that problem.

So, bottom line, it's easy to tell if more cores will help you. If you're not CPU limited then they probably won't help. As I said previously, I've moved from 4 to 6 to 10 cores and not seen much benefit. But I was rarely CPU limited at 4 cores, so I didn't expect 6 or 10 to make much difference.

rgames


----------



## rgames

woodslanding said:


> And even if your clarinet can't create a sound from silence in less than 100 ms, transitions between notes must be much faster. Otherwise a run of 32nd notes at 80 bpm would be silent.


I stated that 100 ms is for bass instruments, not clarinet. And sure enough, if you put 32nd note runs in the low end of bass wind/strings, they're not going to speak very well at 80 BPM. 1/64 notes at 80 BPM are around 50 ms, which is on the long end of clarinet latency. And again, sure enough, 1/64 notes at 80 BPM on clarinet won't get you much in the way of tonal sound.

It's not silence - there's some sound, including the attack transient. But it takes time for the full tonal sound to develop.
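For anyone checking those figures, the note-length arithmetic is straightforward. A quick sketch, assuming tempo is given in quarter-note BPM:

```python
# Duration of a note at a given tempo (quarter-note BPM assumed).
def note_ms(bpm: float, division: int) -> float:
    """Duration in ms of one 1/division note, e.g. division=32 for 32nd notes."""
    quarter_ms = 60000 / bpm          # one quarter note at this tempo
    return quarter_ms * 4 / division  # scale to the requested subdivision

print(note_ms(80, 32))   # 32nd notes at 80 BPM: 93.75 ms each
print(note_ms(80, 64))   # 64th notes at 80 BPM: ~46.9 ms, the long end of clarinet latency
```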

rgames


----------



## Luke Davoll

rgames said:


> But I was rarely CPU limited at 4 cores, so I didn't expect 6 or 10 to make much difference.


Thanks for your reply. Really interesting stuff. Do you have a slave pc? And how many audio returns do you have? And total track count in your template? Just trying to work out why JXL has 10 cores, and you're not noticing a big improvement from 4, 6 or 10. 

Cheers!


----------



## rgames

Luke Davoll said:


> Thanks for your reply. Really interesting stuff. Do you have a slave pc? And how many audio returns do you have? And total track count in your template? Just trying to work out why JXL has 10 cores, and you're not noticing a big improvement from 4, 6 or 10.
> 
> Cheers!


Yes - I have two slaves. Each returns about 30-50 stereo pairs via Ethernet (VE Pro). My orchestral template is around 300 MIDI tracks and 150 audio tracks. Note that I use Expression Maps so I only have one track per section with all articulations loaded on that track.

I did notice a small improvement in latency going from 4 to 6 cores but the latency on 4 cores was still fine (around 12 ms). So even though 6 was better, 4 was good enough. But I didn't see any difference in latency going from 6 to 10 cores - I run my standard template at about 6 ms. Same on 6 cores and 10 cores.

But again, I'm not running into CPU limitations so I never expected to see much benefit to higher core counts. I, too, am curious to see what kinds of projects people are running where they're hitting CPU limits with 10+ core machines. I guess ASIO guard might make that happen - I don't use it.

rgames


----------



## Luke Davoll

rgames said:


> I guess ASIO guard might make that happen - I don't use it.



That's because it doesn't play nice with VEPro, right? 

OK, so if you had say 2,000 MIDI tracks and 300 or 400 stereo returns, do you think you'd run into CPU limits that more cores may help with? And what do you make of the thread synchronisation issue with higher core counts? Does 18 cores simply not make sense?

I guess I'm asking because I, like perhaps others, have been waiting to upgrade my system. But with AMD not yet the best option, it's either an i7 or an i9. And wanting to future-proof a little as I build my template, adding more returns and tracks may eat up more CPU...

Actually, do you think you'd be able to somehow add a bunch more returns and MIDI tracks to your template and see how that would affect your CPU usage? Or can anyone contribute to this? Would more cores aid a template with many audio returns and a high MIDI track count? What is the limit of, say, an i9 9900K in a template like this?


----------



## jamwerks

rgames said:


> Yes - I have two slaves. Each returns about 30-50 stereo pairs via Ethernet (VE Pro).


Hey, what are your buffer settings if I may ask? 256 in Cubase and setting "1" in VEP ?


----------



## rgames

jamwerks said:


> Hey, what are your buffer settings if I may ask? 256 in Cubase and setting "1" in VEP ?


I use 128 samples for the audio buffer and 1 buffer (128 samples) for the VEP network buffer. The total latency is around 6-7 ms though simple projects can run lower. I also have a VEP instance loaded on the master machine. That one has something like 20 stereo returns. Maybe 30... I use that one as a "coupled" instance because that's where I put all my synths with tweaks saved for each project. The slave machines are just sample streamers - I never change anything on those so they're uncoupled.

Otherwise save times become unbearable with many hundreds of tracks and VSTis loaded...
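A simplified sketch of how those buffer settings stack into the quoted 6-7 ms. This is an assumed model (DAW buffer plus N network buffers of the same size), not VE Pro's documented internals, and it ignores converter and driver latency, which is why the real figure comes out slightly higher:

```python
# Assumed model: total latency ≈ (audio buffer + n_vep_buffers × buffer size) / sample rate.
def vep_latency_ms(buffer_samples: int, vep_buffers: int, sample_rate: int = 44100) -> float:
    """Estimated playback latency in ms for a DAW buffer plus VEP network buffers."""
    total_samples = buffer_samples * (1 + vep_buffers)
    return total_samples / sample_rate * 1000

print(round(vep_latency_ms(128, 1), 1))   # 128-sample buffer + 1 network buffer: ~5.8 ms
print(round(vep_latency_ms(256, 2), 1))   # more conservative settings: ~17.4 ms
```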

rgames


----------



## rgames

Luke Davoll said:


> That's because it doesn't play nice with VEPro, right?
> 
> OK, so if you had say 2,000 MIDI tracks and 300 or 400 stereo returns, do you think you'd run into CPU limits that more cores may help with? And what do you make of the thread synchronisation issue with higher core counts? Does 18 cores simply not make sense?
> 
> I guess I'm asking as I, like perhaps others, have been waiting to upgrade their systems. But with AMD not yet the best choice, either an i7 or i9 is the choice. And wanting to future proof a little as I build my template, adding more returns and tracks may eat up more cpu...


I'm not sure, but I doubt that CPU would help in that kind of situation. You're talking about stereo returns from slaves, so that's all network traffic (I'm assuming you don't have hundreds of audio returns). The bandwidth is not the issue; it's the network latency (i.e. real-time) performance that'll be the limit. That's not really a function of CPU power, assuming you have a decent network card that doesn't pass a bunch of processing off to the CPU.

EDIT: yes, ASIO guard doesn't play nicely with VEPro. But my setup runs at pretty low latency without ASIO guard. So I've never needed it.

rgames


----------



## jamwerks

rgames said:


> I use 128 samples for the audio buffer and 1 buffer (128 samples) for the VEP network buffer.


That's low! I could definitely live with that. And now another question pops up: why host the synths on a coupled VEP instance instead of directly in Cubase? Better core distribution performance?


----------



## rgames

jamwerks said:


> why host the synths on a coupled VEP instance instead of directly in Cubase? Better core distribution performance?


Yes, I found that running VIs within VEP provides better overall performance. Though, to be honest, that habit is based on some testing from a number of years ago - maybe there's not much difference any longer. And maybe it depends on DAW (I use Cubase).

One thing is definitely still true, though, and that's project save times. Running everything within Cubase (or fully coupled) using a large template can result in save times of 20 seconds or more. 20s can be a real pain if you're using autosave.

I've been able to get 6-7 ms on a lot of systems and I've seen it in a lot of other studios, so I don't think it's that low. (I've also seen a lot of other studios running much higher latencies, highlighting the fact that it doesn't really matter these days.) In fact, I'm just finishing up a collection of EDM-type tracks that only use synths/audio, a few Kontakt instances and a bunch of FX directly within Cubase (i.e. no VEP), and those kinds of projects will easily run lower than that. The limiting factor for me is real-time performance with a large orchestral template.

rgames


----------



## colony nofi

I think it's been fairly well covered by others here, but perhaps it's worth mentioning again: there are just so many ways that people write music and use DAWs that sweeping generalisations don't apply.
We are dealing with a large number of variables, and it is simply too difficult to solve for all of them to come up with generalist statements.

For some use cases, RAM is definitely a sticking point. Perhaps this is becoming less of an issue now that machines are able to have 256GB+ of RAM. From a personal point of view, I have 128 and have never once had to worry about it. Indeed, I sometimes go back to my older 64GB RAM sticks due to other issues with RAM timing that DO make a difference on some projects. (I'm on a trashcan Mac Pro.)
I am sure there are many composers and sound designers who need more than 64 or 128GB of RAM for the workflow they want to use. That more mainstream boards/builds/computers are heading towards 256GB+ is good news for them. I can even see workflows for me where I'd like more than 128GB, but OTHER bottlenecks stop me from attempting them on my current machine.

How people choose to run machines - master/slave setups or single machines - is always worth considering as well. For those of us who work on a large variety of projects, VEP can become quite problematic. It's great software. In 2011 I had 4 Mac minis (each running 16GB of RAM) hooked up to a MacBook Pro in order to create immersive hybrid orchestral / sound mixes in the spaces we were installing. This worked better than my Mac Pro at the time, and didn't cost $800 to put on the plane in excess luggage! However, maintaining those slaves when different projects require vastly different setups is massively time consuming, and ultimately it became too much in the workflow we use here. (I had a slave even with my current Mac Pro, and ended up going back to a single-machine workflow as it was better for productivity in the long term!)

CPU. There's some great discussion in here regarding realtime performance vs utilisation of cores (or a single core) etc etc. 

@rgames I hear your point regarding realtime CPU vs maxing out a CPU. I would argue that CPU design and implementation, plus the programming of DAWs, mean the two are related. Sometimes the design of multi-core CPUs also includes improvements that help the realtime performance of a DAW; sometimes a new chip comes out and performance drops due to design changes. Single-core performance is still very important because core zero needs to wait for all the other cores in order to play back a buffer of audio in time. While the Reaper audio engine provides better core management for plugins in some cases, there are others where it isn't as efficient.

I'm happy to send you any number of projects that max out my CPU's realtime performance due to synths, spatialisation and external tools - some where Cubase/Nuendo runs better, and others where Reaper works better. It's all very complex and probably beyond what we can easily talk about here. Other than to say that in many cases more cores WILL help, and in others they won't be as important. I will definitely be testing a massively multi-core machine in the near future with some of the audio tech that I run.

Oh - and to go down another direction of thought... this is all stated without talking about another elephant in the DAW design room: tempo ramps. Tempo changes (ramps) and a number of Kontakt scripts really do place massive strain on the realtime CPU performance of a machine. And not just Kontakt - those issues show up with some synths, the Spitfire sampler, and HALion. It can sometimes max out a project that otherwise is sitting with tonnes of headroom, most unexpectedly.

There are many things that place stress on the CPU that wouldn't show up as a maxed CPU in system monitors - due to things that have been discussed in detail. Interestingly, one scenario I've come across is music cues that run into one another / do a lot of changing along the way. I recently did a 2.5-min cue which had 7 or 8 different sections where it was necessary to freeze a bunch of tracks. Each of the individual parts would run by themselves just fine, but with all the processing / grouping / mixing turned on, the session would grind to a standstill. Perhaps 180 tracks all up.

Another thing to keep in mind is how DAW use changes once you include live recordings / tracks / mixing in the game. While sample libs now provide great opportunities to mix without too much processing if that is what you want, recording instruments often requires loads of processing. And many folk also LIKE to process a recorded orchestra to make it sound different from an orchestra in a concert hall / recording hall / wherever it was recorded. I have a 4-min project I'm working on that has 80 tracks of solo vocal (5 different sections, each with around 16 tracks), and each of those uses a number of different vocal chains for sound design.

I like to experiment with audio processing using an audio version of ray tracing. THAT kills CPU. One instance takes up an entire core and can grind a machine to a halt. It's unlikely tools like this are ever going to use multiple cores without being able to be loaded outside a DAW.

Now, something like SPAT Revolution DOES use multiple cores outside of a DAW. But this also interacts with the DAW's use of the CPU in a way that shows the limits of the CPU very quickly... be those total limits or single-core limits.

I have written (with others) audio software (including MaxMSP control and bespoke sample playback in C++) that creates immersive music in real time in massively multi-channel environments, using real-time data sources and GBs of pre-recorded music stems / ideas. It's all pretty simple from a conceptual point of view... but it really does place a lot of strain on CPUs due to Max's design, my design (and our bespoke playback mechanism) and the XYZ spatialisation across 24+ speakers. Here careful CPU choice was necessary, as was careful testing - and the final chip chosen wasn't the one we expected to need!

More CPU-based considerations: working on immersive projects often has me mixing in a modelled environment. CPU is extremely important here. Is it something loads of people do? Not at all - but that doesn't mean it won't become more relevant in the future. I say CPU and mean both realtime performance and total core performance - they are inter-related at times, and not at others.

What does this all mean?

Well - while single-core performance is definitely a big thing to keep an eye on when spec'ing a CPU for a DAW, multi-core performance is becoming more important. DAW developers are making headway in making their audio engines more efficient for multi-core workflows. Real-time performance is still usually the first bottleneck composers come up against - but it isn't the only one. RAM is very quickly becoming less of a bottleneck with the massive amounts possible in a machine (looking at you, new Mac Pro, as well as new Xeon and AMD Rome based systems, which look like they'll support at least 2TB!).

I'm rambling - but it's a fun ramble....


----------



## ridgero

Intel has cut the price of the previous-generation Skylake-X Refresh (9th gen) as well.


The i9-9980XE now costs $950 (half of yesterday's price).


----------



## Olfirf

@rgames: What I don't get is this chart below. OK, I understand there is a difference between real-life application and artificial tests. But this Scan Pro Audio test is basically about running lots of Kontakt instances and seeing at how many voices they start to choke. I also agree that the missing part - running a mix of Kontakt instances and FX or synth plugins - might change that by a whole lot. But still: according to that chart, you get at least twice the voice count on an i9 9960X (overclocked) compared to the i9 9900K, depending on your buffer settings. I do believe real-world performance could be pretty far from that number depending on a lot of factors! But if you used the 9960X as a VE Pro machine exclusively with Kontakt instances, I cannot believe that it would suddenly perform only as well as the i9 9900K. Whatever the limiting factor is - be it real-time performance or CPU (I agree the first is probably more relevant) - shouldn't this show up in the Scan Pro test as well, if all they do is play Kontakt voices in Cubase/Reaper? I get there is a difference between artificial tests and real-life application, but it couldn't explain that much of a difference, could it?
I don't want to argue about it, but I just don't get why you still think it doesn't matter. Unless these Scan Pro Audio tests are not what I think they are, and Intel is using a shut-off device, like VW, to cheat under test conditions.


----------



## rgames

Olfirf said:


> I just don't get why you still think it doesn't matter.


I'd suggest reading back through my posts. I'm curious to see actual musical projects that show where it matters. Not benchmarks.

I don't know what the current DAWBench polyphony test is but it used to be block chords in Kontakt. Why anybody would need 2000+ voices of block chords is beyond me.

Hence my search for a project. As I've said repeatedly, I can certainly max out a CPU. But it's with a project that nobody would actually write and/or set up in a stupid way (e.g. with 500 reverb inserts).

rgames


----------



## Olfirf

rgames said:


> I'd suggest reading back through my posts. I'm curious to see actual musical projects that show where it matters. Not benchmarks.
> 
> I don't know what the current DAWBench polyphony test is but it used to be block chords in Kontakt. Why anybody would need 2000+ voices of block chords is beyond me.
> 
> Hence my search for a project. As I've said repeatedly, I can certainly max out a CPU. But it's with a project that nobody would actually write and/or set up in a stupid way (e.g. with 500 reverb inserts).
> 
> rgames


I honestly don't know what exactly is done in the Scan Pro Audio tests, but even if it was playing sustains only, that still uses up Kontakt voices, and I don't get (even after reading through many of your posts) how that should not be relevant. There probably are differences, like a legato script (or just any script) using up additional resources. Or what is the other difference from a real piece? Your PC should not mind whether the music played during a test is unimaginative, should it?
When I play a fast sequence with short notes, Kontakt uses up quite a lot more voices than the line actually has simultaneous notes, sure! That is due to release samples, mic positions, stereo samples, scripts triggering additional voices... I get that!
But why should Kontakt care whether these voices are created in that way or just by playing lots of mono piano voices at the same time? It should not matter to Kontakt that much.

What I really want to know: do you think the i9 9960X from the Scan Pro Audio test would reach just about the same number of voices as the i9 9900K when you send it MIDI from a real project?


----------



## Luke Davoll

Olfirf said:


> I honestly don't know what exactly is done in the Scan Pro Audio tests, but even if it was playing sustains only, that still uses up Kontakt voices, and I don't get (even after reading through many of your posts) how that should not be relevant. There probably are differences, like a legato script (or just any script) using up additional resources. Or what is the other difference from a real piece? Your PC should not mind whether the music played during a test is unimaginative, should it?
> When I play a fast sequence with short notes, Kontakt uses up quite a lot more voices than the line actually has simultaneous notes, sure! That is due to release samples, mic positions, stereo samples, scripts triggering additional voices... I get that!
> But why should Kontakt care whether these voices are created in that way or just by playing lots of mono piano voices at the same time? It should not matter to Kontakt that much.
> 
> What I really want to know: do you think the i9 9960X from the Scan Pro Audio test would reach just about the same number of voices as the i9 9900K when you send it MIDI from a real project?



I'm still confused too, if it's any consolation. There was a video on YouTube - Marco di Stefano, I think - where he had an 18-core beast and was running VE Pro on the same machine. It all looked great, but after reading what @rgames has said, I looked carefully at his CPU AND at his real-time performance meter, and it looked to me like he would run out of real-time juice before CPU juice. CPUs were around 30%. AND he was on a buffer of 512, I think.

I also wondered earlier about why JXL would *only* run a 10-core CPU. Maybe @rgames is right and CPU core count really isn't a factor. And no one has really proven that CPU runs out before real-time performance in an actual project, as @rgames has requested many times. So yeah, I dunno. I need to update my system, and with the 10900X coming out next week, and the 10900K next year sometime, I'm wondering what to do...


----------



## Manaberry

Luke Davoll said:


> I'm still confused too, if it's any consolation. There was a video on YouTube - Marco di Stefano, I think - where he had an 18-core beast and was running VE Pro on the same machine. It all looked great, but after reading what @rgames has said, I looked carefully at his CPU AND at his real-time performance meter, and it looked to me like he would run out of real-time juice before CPU juice. CPUs were around 30%. AND he was on a buffer of 512, I think.
> 
> I also wondered earlier about why JXL would *only* run a 10-core CPU. Maybe @rgames is right and CPU core count really isn't a factor. And no one has really proven that CPU runs out before real-time performance in an actual project, as @rgames has requested many times. So yeah, I dunno. I need to update my system, and with the 10900X coming out next week, and the 10900K next year sometime, I'm wondering what to do...




JXL may have 10 cores in his main machine, but how many in the slaves?
I'm running 6 cores at 512/1024 buffer. No choice, because my CPU loads up to 70% with both VEP and Cubase open. Cubase hits the red zone quite often.

Looking forward to getting the 18-core monster next week.


----------



## Luke Davoll

Manaberry said:


> JXL may have 10 cores in his main machine, but how many in the slaves?
> I'm running 6 cores at 512/1024 buffer. No choice, because my CPU loads up to 70% with both VEP and Cubase open. Cubase hits the red zone quite often.
> 
> Looking forward to getting the 18-core monster next week.


He has 3 slaves but I'm not sure what the specs are. But I can't see how slaves just delivering samples across a network would be the bottleneck. I know, many mic positions etc., but still, from what I've read, a slave with SSDs isn't sending a lot of data over the network. Has anyone using slaves on a massive project ever maxed out their gigabit network connection? Would be interesting to know too...

@Manaberry you're not using slaves?


----------



## Manaberry

Luke Davoll said:


> @Manaberry you're not using slaves?



No slaves here, because I'm working in a tiny room hehe. It's already too hot in there. 
I might just get another license for VEP to run my synths on my laptop (which is already used for running my network MIDI controller).


----------



## colony nofi

Luke Davoll said:


> He has 3 slaves but I'm not sure what the specs are. But I can't see how slaves just delivering samples across a network would be the bottleneck. I know, many mic positions etc., but still, from what I've read, a slave with SSDs isn't sending a lot of data over the network. Has anyone using slaves on a massive project ever maxed out their gigabit network connection? Would be interesting to know too...
> 
> @Manaberry you're not using slaves?


You can fairly easily work out when you'll reach a theoretical bottleneck with gigabit Ethernet.
24-bit/48k audio is 1,152 kbps, or 1.152 Mbps, or 0.001152 Gbps.
Assuming a network with no overheads, 1 Gbps gives you around 868 mono streams of 48/24 audio.
Now, no network has zero overhead. But I'd say you could safely assume 500 mono streams, or 250 stereo, or 125 at 96/24, etc.
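The arithmetic above can be sketched in a few lines of Python. The 60% "usable efficiency" figure is my own conservative assumption (protocol plus VEP overhead), chosen to land near the 500-mono-stream rule of thumb, not a measured number:

```python
# Rough gigabit-Ethernet headroom estimate for streaming raw PCM audio
# from a slave machine.

def stream_bitrate_bps(sample_rate_hz: int, bit_depth: int, channels: int = 1) -> int:
    """Raw PCM bitrate of one stream, no container/protocol overhead."""
    return sample_rate_hz * bit_depth * channels

def max_streams(link_bps: int, sample_rate_hz: int, bit_depth: int,
                channels: int = 1, efficiency: float = 1.0) -> int:
    """How many such streams fit on a link at a given usable efficiency."""
    return int(link_bps * efficiency // stream_bitrate_bps(sample_rate_hz, bit_depth, channels))

GIGABIT = 1_000_000_000  # 1 Gbps

print(max_streams(GIGABIT, 48_000, 24))                              # 868 mono, zero overhead
print(max_streams(GIGABIT, 48_000, 24, efficiency=0.6))              # ~520 mono with headroom
print(max_streams(GIGABIT, 48_000, 24, channels=2, efficiency=0.6))  # ~260 stereo
print(max_streams(GIGABIT, 96_000, 24, channels=2, efficiency=0.6))  # ~130 stereo at 96k
```

Plugging in other sample rates or link speeds (e.g. 10 GbE) is just a matter of changing the arguments.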

And you certainly DON'T need to put every stream from VEP across the network. You can premix inside VEP easily enough, down to usable stems. VEP definitely has overheads. I seem to recall running over 100 mono tracks back from slaves when I was using them, but it's been a while since I moved to a single-computer workflow, so I could be wrong on that. I was also a few versions of VEP back, which could change some things.


----------



## Luke Davoll

colony nofi said:


> 868 mono streams of 48/24 audio


That's so many! That's kind of what I was thinking about gigabit networking not really being a bottleneck: even with a massive template and many returns, who is realistically going to be returning that many streams at any one point in time?


----------






## Anthony

Luke Davoll said:


> I'm still confused too, if it's any consolation. There was a video on YouTube, Marco Di Stefano I think; he had an 18-core beast and was running VE Pro on the same machine. It all looked great, but after reading what @rgames has said, I looked carefully at his CPU AND at his real-time performance meter, and it looked to me that he would run out of real-time juice before CPU juice. CPUs were around 30%. AND he was on a buffer of 512 I think.
> 
> I also wondered earlier why JXL would *only* run a 10-core CPU. Maybe @rgames is right and CPU core count really isn't a factor. And no one has really proven that CPU runs out before real-time performance in an actual project, as @rgames has requested many times. So yeah, I dunno. I need to update my system, and with the 10900X coming out next week, and the 10900K next year sometime, I'm wondering what to do...


I recently ran a test in Cubase to help me understand the relationship between CPU utilization and realtime processing. It may not apply to your case specifically, but might provide some insight into how to set up your template.

Re: Average Load to CPU Utilization Ratio (test results)

Average Load to CPU Utilization Ratio - Page 2 - www.steinberg.net


----------



## Carl W

Hi,

I like to make sounds with Straylight from NI, but it is CPU intensive. If I load more than 4 copies of Straylight into Ableton I get nothing but crackling, and the CPU meter in Ableton goes way above 150%. With Live 11 coming up it will be possible to have 16 macros in an instrument rack and 100 snapshots of these macros. With 10 copies of Straylight in one rack you could make really weird moving sounds, controllable with 16 knobs/snapshots, but what would a solution look like?

To buy a 10980XE or a 10900K, or wait for the 11th generation?

Do you really need 128GB of RAM to use the 10980XE, like somebody mentioned in a thread here?

Sincerely,

Carl


----------



## Manaberry

Hi @Carl W 

The 10980XE supports quad-channel memory, up to 256GB of total RAM (that's my current setup, actually).
You don't need to go for that crazy amount of RAM. The ideal is to get four sticks of RAM for optimal bandwidth (quad-channel); just pick the total amount you need 

NOTE: The 10900K supports dual-channel only and up to 128GB.

It can be 64GB with 4x16 for instance.
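To see why channel count matters and not just capacity, here's a rough peak-bandwidth sketch. DDR4-3200 is my assumption for illustration; the officially supported speed depends on the CPU and the board's QVL:

```python
# Theoretical peak DDR4 bandwidth: transfers/s x 8-byte bus x channels.
# Assumption (mine): DDR4-3200 modules; real-world throughput is lower.

def peak_bandwidth_gbs(mt_per_s: int, channels: int, bus_bytes: int = 8) -> float:
    """Peak memory bandwidth in GB/s for a given transfer rate and channel count."""
    return mt_per_s * 1_000_000 * bus_bytes * channels / 1e9

print(peak_bandwidth_gbs(3200, 2))  # dual channel (10900K-style): 51.2 GB/s
print(peak_bandwidth_gbs(3200, 4))  # quad channel (10980XE-style): 102.4 GB/s
```

So four sticks in quad channel doubles the theoretical headroom over two sticks in dual channel, regardless of total capacity.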

On the CPU, we cannot choose for you, but we can help. If you feel you can't wait any longer, go for a more powerful CPU now. If you can wait another year, just wait (in the meantime, more benchmarks of AMD chips will become available; it's a good idea to check their CPU lineup)


----------



## Carl W

Manaberry said:


> Hi @Carl W
> 
> The 10980XE supports quad-channel memory, up to 256GB of total RAM (that's my current setup, actually).
> You don't need to go for that crazy amount of RAM. The ideal is to get four sticks of RAM for optimal bandwidth (quad-channel); just pick the total amount you need
> 
> NOTE: The 10900K supports dual-channel only and up to 128GB.
> 
> It can be 64GB with 4x16 for instance.
> 
> On the CPU, we cannot choose for you, but we can help. If you feel you can't wait any longer, go for a more powerful CPU now. If you can wait another year, just wait (in the meantime, more benchmarks of AMD chips will become available; it's a good idea to check their CPU lineup)




Hi Manaberry,

my current PC is an Asus B85 Plus motherboard (socket 1150), an i7-4770 CPU, a GeForce GTX 1070 GPU and 32GB of DDR3 RAM; it's from 2014.

So any new processor means a new motherboard and DDR4 RAM as well.

I did some benchmark comparisons:

https://cpu.userbenchmark.com/Compare/AMD-Ryzen-9-3950X-vs-Intel-Core-i9-10980XE/4057vsm935899
https://cpu.userbenchmark.com/Compare/AMD-Ryzen-TR-3990X-vs-Intel-Core-i9-10980XE/m1035665vsm935899
https://cpu.userbenchmark.com/Compare/AMD-Ryzen-TR-3960X-vs-Intel-Core-i9-10980XE/m969111vsm935899
So I'm confused, because in the first comparison the Ryzen gives slightly better results at half the price of the 10980XE. The other two don't show much difference.


I have SSDs and a fine GPU, but like I said it will be a new CPU, new RAM and a new motherboard. I have 1500€ available, so minus RAM (64GB) and motherboard that leaves 850-1000€ for a processor. Or I buy the Ryzen 9 3950X and put the difference into more RAM.

Any idea/help with the decision? A build that keeps me going for the next five years would be fine.


----------



## Manaberry

Beware of regular benchmarks. AMD can have more raw power but can be somewhat overwhelmed under audio work. I know for a fact that my 10980XE can handle serious polyphony even while being behind in the raw power competition.

Check that thread; we have done some tests with another forum member: https://vi-control.net/community/threads/threadripper-3970x-build-notes-and-cubase-benchmarks.94892/

However, although Intel led AMD in audio workstations for years, AMD has just released the 5950X and it's bringing absolutely amazing performance for a very competitive price. If you can hold onto the money until then, maybe wait for spring 2021 and get yourself a Threadripper 5xxx series.

So final word: if you go for a CPU now, go for the brand new 5950X. It's a better version of the 3950X for the "same" price.


----------



## Carl W

Manaberry said:


> Beware of regular benchmarks. AMD can have more raw power but can be somewhat overwhelmed under audio work. I know for a fact that my 10980XE can handle serious polyphony even while being behind in the raw power competition.
> 
> Check that thread; we have done some tests with another forum member: https://vi-control.net/community/threads/threadripper-3970x-build-notes-and-cubase-benchmarks.94892/
> 
> However, although Intel led AMD in audio workstations for years, AMD has just released the 5950X and it's bringing absolutely amazing performance for a very competitive price. If you can hold onto the money until then, maybe wait for spring 2021 and get yourself a Threadripper 5xxx series.
> 
> So final word: if you go for a CPU now, go for the brand new 5950X. It's a better version of the 3950X for the "same" price.



Hi Manaberry,

Thanks for all your time and explanation!

I will wait a few more months and buy the 5950X

Greetings,

Guido


----------



## chimuelo

Still waiting to see how the compatibility issues w/ UAD shake out.
AMD is bad ass but expensive; a couple of guys, one w/ Gigabyte, another w/ Asus, both on high-end X570s, are livid on tech forums.

My guess is it will get sorted before long.
Odd that Intel has no issues but AMD does.

I use DSP racks too and experienced similar messages with new Win 10 drivers, but using option #7 driver signing for start up in 10 fixed that.

18 cores of Intel for 1 large is very tempting though. It’s not like AMD trounces Intel. But gamers are as feverish as we are because everyone wants the fastest.


----------



## Carl W

Manaberry said:


> Beware of regular benchmarks. AMD can have more raw power but can be somewhat overwhelmed under audio work. I know for a fact that my 10980XE can handle serious polyphony even while being behind in the raw power competition.
> 
> Check that thread; we have done some tests with another forum member: https://vi-control.net/community/threads/threadripper-3970x-build-notes-and-cubase-benchmarks.94892/
> 
> However, although Intel led AMD in audio workstations for years, AMD has just released the 5950X and it's bringing absolutely amazing performance for a very competitive price. If you can hold onto the money until then, maybe wait for spring 2021 and get yourself a Threadripper 5xxx series.
> 
> So final word: if you go for a CPU now, go for the brand new 5950X. It's a better version of the 3950X for the "same" price.




I have a new PC with a Ryzen 5950X and 64GB of RAM. Just did a quick tryout and I can build 4 instrument racks in Ableton with 8 instances of Straylight in every rack, with no crackling or other problems. Duplicating a rack with 8 instances of Straylight takes 1 second. My problem is solved  Once more, thanks for your advice!


----------



## Manaberry

Carl W said:


> I have a new PC with a Ryzen 5950X and 64GB of RAM. Just did a quick tryout and I can build 4 instrument racks in Ableton with 8 instances of Straylight in every rack, with no crackling or other problems. Duplicating a rack with 8 instances of Straylight takes 1 second. My problem is solved  Once more, thanks for your advice!



Hello Carl

Great news! I'm happy this CPU fulfilled all your needs and matched your budget. It seems you have plenty of power for the next few years.
Happy composing


----------



## Fitz

What are people's experiences with Cubase and this chip, the 10980XE?


----------

