# DAWbench 2021 Suite - Intel 12th Gen Results



## Pictus

DAWbench Suite - AMD 7000 and 13th Gen Intel results


https://gearspace.com/board/showpost.php?p=16229111&postcount=934






The new CPUs are effectively factory overclocked; by tweaking the power limits we can reduce the max wattage and use an air cooler.

Intel Core i9-13900K vs. AMD Ryzen 9 7950X at 125W and 65W | Club386

Prefer to keep temperature and power consumption down to lower levels this winter? Here's what happens when the best CPUs are scaled back.




www.club386.com


DAWbench 2021 Suite - Intel 12th Gen Results

Gearspace.com - View Single Post - DAWBench DSP / VI Universal - Cross Platform DAW Benchmarks

Post 15749620 - Forum for professional and amateur recording engineers to share techniques and advice.

gearspace.com


----------



## Laddy

Conclusion, DDR5 is a big deal, or am I reading this wrong?


----------



## Anthony

So, judging from the top two charts, to get maximum performance I need an AMD system to run my FX plugins and an Intel system to run my VIs (i.e., neither system is better at _both_)?


----------



## Pictus

Laddy said:


> Conclusion, DDR5 is a big deal, or am I reading this wrong?


It is a big deal.


----------



## Pictus

Anthony said:


> So, judging from the top two charts, to get maximum performance I need an AMD system to run my FX plugins and an Intel system to run my VIs (i.e., neither system is better at _both_)?


The new AMD chips with 3D cache will change this soon or diminish the difference.

AMD announces EPYC CPUs based on 3D V-Cache and teases Zen 4 as well as 128-core Bergamo

AMD announced several new EPYC processors during a virtual event. The 'Milan-X' processor is a variation of the currently available Zen 3-based CPUs with 3D V-Cache. The proc will become available i...

www.guru3d.com


----------



## thevisi0nary

Good showing, though I wish bussing was more involved in DAWbench


----------



## PaulieDC

The Passmark results are interesting for CPUs in the $600-$700 range... at first it appears that the Ryzen 9 5950X smokes the Intel i9-12900K, with overall scores of 46K for AMD vs. 37K for Intel, and that's probably true for gamers. But we audio maniacs need single-core and FPU performance, and the new Intel smokes the Ryzen significantly in single-core on this chart; then I saw the FPU results, and the Intel was about 20% faster there as well. For MIDI work the Intel is cheaper and the better choice for under $750:



AMD Ryzen 9 5950X vs Intel Core i9-12900K [cpubenchmark.net] by PassMark Software



This chart shows a higher "operating cost" but if your purpose is to save money in that realm, not sure if this is the world for you, lol!

As far as the new AMDs with all the cache improvements, let's see the specs once they are out there. And the price... only the 16/32 processor will probably be under $1K. But WOW imagine having 128 cores?? Nutty. And we'll need the DAW companies to step up to leverage that of course. Bottom line, it just keeps getting better. After 11 years of owning almost every iPad model, the new M1 with StaffPad is just plain amazing.


----------



## ogrim1

A non-K 12700 with a B660 board might be the value king (and for people with no GPU, a no-brainer)


----------



## lokotus

That's cool! Is there any indication of what hardware was used during these performance tests?


----------



## Pictus

lokotus said:


> That's cool! Is there any indication of what hardware was used during these performance tests?


Try https://www.scan.co.uk/info/proaudio/presszone/intel-12th-gen-roundup


----------



## rgames

Pictus said:


> DAWbench 2021 Suite - Intel 12th Gen Results.
> 
> 
> 
> 
> 
> 
> 
> 
> Gearspace.com - View Single Post - DAWBench DSP / VI Universal - Cross Platform DAW Benchmarks :
> 
> 
> Post 15749620 -Forum for professional and amateur recording engineers to share techniques and advice.
> 
> 
> 
> gearspace.com


Thank goodness I can finally add that 400th compressor I've been needing


----------



## SyMTiK

Damn, I was hoping DDR5 didn't show such a big difference, so that I could feel comfortable opting for a DDR4 motherboard and cheaper DDR4 RAM, haha. Sadly it is pretty much impossible to get DDR5 right now; I'm really hoping we'll see more availability in early 2022. I've been wanting to make the switch back to PC for a while now.


----------



## TAFKAT

thevisi0nary said:


> Good showing, though I wish bussing was more involved in DAWbench


I am in the process of developing a bussing extended version of the DSP test.

I will be adding multiple group busses with assigned resource heavy pre-loaded tracks, and then having a secondary load metric for vertical ( serial) processing.

Still fleshing out the finer details and logistics, coming in 2022.


rgames said:


> Thank goodness I can finally add that 400th compressor I've been needing


Right, you continue to miss the forest for the trees.

Disclaimer for those not as well versed as Richard on all things DAW benchmarking.

DAWbench is a parallel (horizontal) multiprocessing benchmark: it measures x-scaling (x being the number of cores) in a DAW environment at respective latency settings; the plugins or the voices are simply the metric used to apply the incremental load.

Your mileage may vary how that information translates to respective working environments, session logistics and configuration.


SyMTiK said:


> Damn, I was hoping DDR5 didn't show such a big difference, so that I could feel comfortable opting for a DDR4 motherboard and cheaper DDR4 RAM, haha. Sadly it is pretty much impossible to get DDR5 right now; I'm really hoping we'll see more availability in early 2022. I've been wanting to make the switch back to PC for a while now.


The DDR5, combined with the new dual memory controller array on the Z690, has definitely shone a light on memory bandwidth's importance for the heavier Kontakt-based sessions, and explains why the X299 has remained so strong in those session environments.

DDR5 Deep Dive - Exclusive interview with Kingston about the new memory standard and many examples from practice | Page 2 | igor'sLAB

The basic changes to the new DDR5 memory standard have already been discussed around the launch of the Intel Alder Lake CPUs, even if sometimes not completely correct. So today we're going to revisit…

www.igorslab.de

This is a good article explaining the Z690's memory subsystem; it details the advantages of even the DDR4 array over the previous single memory controller architecture, and why the DDR5 configuration is another level above.

It's a shame Intel have essentially done another paper launch, not that AMD are any better, btw. DDR5 is near non-existent, and what is available is very limited and very expensive. I have yet to see many memory options for 64GB, let alone for 128GB, and the cost has to take a deep dive before it seriously becomes viable. ATM it's a rort!

Peace


----------



## rgames

TAFKAT said:


> Right, you continue to miss the forest for the trees.


I don't think so. The problem is the trees are compressors, 400+ of them.

And a forest full of compressors is an interesting idea but, as far as I can tell, it has zero practical value.

rgames


----------



## thevisi0nary

TAFKAT said:


> I am in the process of developing a bussing extended version of the DSP test.
> 
> I will be adding multiple group busses with assigned resource heavy pre-loaded tracks, and then having a secondary load metric for vertical ( serial) processing.
> 
> Still fleshing out the finer details and logistics , coming in 2022.


Sweet


----------



## Jeffrey Peterson

Well, this is depressing, because I just bought an i9-12900K with a DDR4 motherboard, since there aren't any 32GB DDR5 RAM modules out there yet. And now if I upgrade in a few months I will have to reinstall Windows, software, plugins... the works.


----------



## widescreen

Jeffrey Peterson said:


> Well, this is depressing, because I just bought an i9-12900K with a DDR4 motherboard, since there aren't any 32GB DDR5 RAM modules out there yet. And now if I upgrade in a few months I will have to reinstall Windows, software, plugins... the works.


Why do you have to do that?

I have more than once successfully ported whole Win10 instances from 10-year-old legacy BIOS-based PCs to the most recent stuff with UEFI. Much faster than a reinstall, even counting the 2 hours of research a few years ago!
My DAW was initially installed on an i7 8700 and successfully works to this day on an 11700 after a mainboard crash.

So a simple board change on a recently installed system, to a relatively similar platform, surely does NOT need a reinstall. At first start there is a short hardware scan; after the restart you log in and install the newest drivers for the platform. That's it.


----------



## Anthony

widescreen said:


> I have more than once successfully ported whole Win10 instances from 10 year old legacy BIOS-based PCs to the most recent stuff with UEFI.


What method did you use?


----------



## Jeffrey Peterson

widescreen said:


> Why do you have to do that?
> 
> I have more than once successfully ported whole Win10 instances from 10-year-old legacy BIOS-based PCs to the most recent stuff with UEFI. Much faster than a reinstall, even counting the 2 hours of research a few years ago!
> My DAW was initially installed on an i7 8700 and successfully works to this day on an 11700 after a mainboard crash.
> 
> So a simple board change on a recently installed system, to a relatively similar platform, surely does NOT need a reinstall. At first start there is a short hardware scan; after the restart you log in and install the newest drivers for the platform. That's it.


Mhmm, that's encouraging. I think I just tried it once and it didn't work well. But I may try it!


----------



## Trash Panda

rgames said:


> I don't think so. The problem is the trees are compressors, 400+ of them.
> 
> And a forest full of compressors is an interesting idea but, as far as I can tell, it has zero practical value.
> 
> rgames


If a compressor applies gain reduction in a forest, and no one is around to hear it, does it really matter if it has auto makeup gain?


----------



## davidanthony

rgames said:


> And a forest full of compressors is an interesting idea but, as far as I can tell, it has zero practical value.


There's tons of practical value, just not for actual musicians... But the people who do benefit from these kinds of metrics get very touchy when this is pointed out!


----------



## widescreen

Anthony said:


> What method did you use?


This step is only needed if you move from BIOS to UEFI:

KB3156: MBR disk restore to UEFI system fails with "OS disk in backup uses MBR disk" warning

When using Veeam Agent for Microsoft Windows recovery media the following error message occurs: OS disk in backup uses MBR disk. This may cause boot issues on UEFI systems

www.veeam.com

I used the free Veeam Agent for Windows (because of my good experience with the full Veeam versions in my day job).

Otherwise it's sufficient if you just change the mainboard and boot from your existing drive. Windows will recognize the new hardware, it may need to be re-activated due to the hardware changes, and then you can resume your work where you were before the upgrade.


----------



## thevisi0nary

rgames said:


> Thank goodness I can finally add that 400th compressor I've been needing


I think it’s mostly useful for learning how plug-in/voice counts scale with cores/buffer size on different architectures (i.e., 16 cores of Intel and 16 cores of Ryzen are not always the same; likewise a 10-core Comet Lake vs. a 10-core Cascade Lake are not the same despite being very similar).

Honestly, though, the DAWbench tests really need to involve bussing and mixing up the signal pipeline for them to be genuinely informative. Max plug-in/voice count is not helpful if two or three busses cut that number in half (theoretically). Also tracking at low buffer in a large session. Those two factors would greatly influence what I buy over max instances.


----------



## rgames

thevisi0nary said:


> Honestly, though, the DAWbench tests really need to involve bussing and mixing up the signal pipeline for them to be genuinely informative. Max plug-in/voice count is not helpful if two or three busses cut that number in half (theoretically). Also tracking at low buffer in a large session. Those two factors would greatly influence what I buy over max instances.


Yeah that's part of the issue I have with dawbench and similar benchmarks: I've never been able to draw a link between what they measure and any practical application. I've created benchmark projects across different use cases and measured things like min latency and there's no correlation between those results and the dawbench 400+ compressor test or the block-chord voice streaming test.

A benchmark is useful as a comparison tool if it's a proxy for something you actually care about. Nobody cares about using 400 compressors in a project so the question is how does that measurement relate to something we do care about? I've never seen anyone (other than myself) make that comparison. And when I've done it the dawbench results come up basically meaningless.

It's like a dyno for a racecar: nobody actually cares what the horsepower is, they care about lap times. But there's a link between horsepower and lap times so using a dyno makes sense because it's a more efficient and precise way to evaluate one element that relates to what you actually care about: lap times.

Dawbench aspires to be like the dyno but, as far as I can tell, nobody can demonstrate the link between what you measure and what you actually care about. The assumption is "more is better" but nobody can actually back that assumption with any kind of meaningful metric (e.g. latency for a given project).

A practical metric would be something like a variety of stressing projects across different use cases. The best possible scenario is to show how some project will *not* run on computer A but *will* run on computer B. A second-best metric is to show that some project will run at, say, a 512-sample buffer on computer A but a 128-sample buffer on computer B.

Those are things that people actually care about. Not 400 compressors.

The trouble is that when you do those kinds of measurements you realize that they have basically no relation to the dawbench results. So, again, the dawbench results can't be tied to anything meaningful, at least not as far as I've been able to discern.

rgames


----------



## TAFKAT

> I think it’s mostly useful for learning how plug-in/voice counts scale with cores/buffer size on different architectures (i.e., 16 cores of Intel and 16 cores of Ryzen are not always the same; likewise a 10-core Comet Lake vs. a 10-core Cascade Lake are not the same despite being very similar).


Pretty much in a nutshell, as I explained in my earlier response: it measures multiprocessor (horizontal) scaling of respective architectures, and has been a good guide to how successive architectures have delivered as regards IPC, cache, inter-core latency, memory performance, etc.

As I noted the plugins/voices are simply the incremental load metric.


> Honestly, though, the DAWbench tests really need to involve bussing and mixing up the signal pipeline for them to be genuinely informative. Max plug-in/voice count is not helpful if two or three busses cut that number in half (theoretically). Also tracking at low buffer in a large session. Those two factors would greatly influence what I buy over max instances.


Of course, there has never been any argument that session logistics where busses/groups spike individual cores are the Achilles heel of DAW performance, and the dynamic will also be different in respective DAWs depending on audio engine performance, multiprocessing efficiency and capabilities, as well as MMCSS dependencies that in some instances will limit the max threads assignable.

It's a deep rabbit warren, not made any easier by the fact that respective DAWs behave differently as regards threading dynamics.

Steinberg, for example, have hobbled their realtime engine in Cubase 10+: if you disable ASIO Guard, it will limit max core threading to (1/2 logical) - 1 on anything above a 6-core/12-thread system. For example:

- 6-core/12-thread CPU: 12 total logical cores available with ASIO Guard on or off.
- 8-core/16-thread CPU: ASIO Guard on = 16 logical; ASIO Guard off = (1/2 logical) - 1 = 7 total logical cores available.
- 12-core/24-thread CPU: ASIO Guard on = 24 logical; ASIO Guard off = (1/2 logical) - 1 = 11 total available.
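That cap can be sketched numerically. A minimal sketch, based purely on the rule described above; the helper name and the exact threshold handling are my own choices, not anything from Steinberg:

```python
def cubase_realtime_thread_cap(logical_cores: int, asio_guard_on: bool) -> int:
    """Max audio threads Cubase 10+ will use, per the rule above:
    with ASIO Guard off, anything above a 6C/12T system is capped
    at (logical cores / 2) - 1."""
    if asio_guard_on or logical_cores <= 12:
        return logical_cores
    return logical_cores // 2 - 1

# 8C/16T: 16 threads with ASIO Guard on, 7 with it off.
print(cubase_realtime_thread_cap(16, False))  # -> 7
# 12C/24T: 24 threads with ASIO Guard on, 11 with it off.
print(cubase_realtime_thread_cap(24, False))  # -> 11
```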

Not a problem if you never disable ASIO Guard, right? Well, kind of: any time you track-arm, you are forcing the instrument onto the realtime engine and its threading limitation. So if you have a Kontakt instance with multiple parts running its internal multiprocessing and you track-arm one instrument, it will force all instruments onto the realtime engine, not just the instrument that was track-armed, resulting in a spike and potential overloading of the session simply by track-arming a VI track. Any MP-capable VI is susceptible to a similar dynamic, e.g. u-he.

Cubase 9 was the last version where the realtime engine with ASIO Guard off was not hobbled; though it was limited to 16 cores/32 threads max, it was still far more predictable and consistent. But only in Windows 7, or in Windows 10 with the unofficial MMCSS fix which restored 128 MMCSS threads; without the fix it was hobbled back to 14 logical cores max in Windows 10. The latter was the Steinberg workaround patch; without it, even very small sessions could collapse completely. Anyone who experienced that will know exactly what I am referring to. It was brutal.

Reaper has hurdles to navigate in sessions with large numbers of tracks (orchestration templates), even with instruments offline, where higher-core/high-memory systems can potentially reserve a large percentage of the available overhead before any level of processing is applied. This dynamic changes with its respective threading priority and behavior settings.

Just a few snippets of what I navigate with high-level, mission-critical professional clients, well away from benchmarking.

I have enough repeat clients who run monster sessions in various DAWs, natively or running extended hosting environments in conjunction, and at each successive system deployment with higher cores/memory, etc., they have achieved a satisfactory level of benefit within their respective session environment and loading dynamic.

I digress.

There are so many variables in DAWs and session dynamics that it's very difficult to present comparative benchmark data relevant to the wide umbrella of real-world session scenarios, as that means something different at so many personal levels.

However, as I mentioned in my previous response, following up on some feedback and ongoing discussions, I am attempting to develop a bussing extension test as another testing variable. I'm still working through some ideas, as it's not easy to devise an empirical test that can present a comparative metric of vertical performance overhead in the context of, say, a session that already has a measurable horizontal load, but I am working through it and I'll see if I can get something worth sharing.


----------



## Peter Satera

Intel still winning big in the composer race. Availability is currently an issue, of course, but hopefully it will smooth over in time.


----------



## Pictus




----------



## Anthony

Pictus said:


>


Is this a concern to us (the music community)?


----------



## Pictus

Some plug-ins use AVX-512, like Melda and 2CAudio; I guess Acustica Audio Acqua and probably others do too...


----------



## rgames

TAFKAT said:


> I have enough repeat clients who run monster sessions in various DAW's


Why not make proxies for those as benchmarks? Then measure the min latency you can achieve with different processors/systems/etc.

That's a metric that people can relate to.

That's what I've done in the past. And I've found that as of about 10 years ago there's really not much difference as long as you have ~8+ cores and ~4+ GHz.

I have a stressing benchmark project that I've kept around for a long time. It ran at ~6 ms latency on 4C/8T at ~3.5 GHz. It'll run at ~5.5 ms latency on 14C/28T at ~4.5 GHz. Basically zero difference despite huge increases in speed and # cores.

Now the 4C/8T machine probably could only run 150 compressors and the new one can probably run 350 or more. But, again, who cares? That difference is clearly not reflected in the performance we care about: min latency achievable in a stressing project.
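For reference, here is a minimal sketch of how buffer size maps to one-way latency; the 44.1 kHz sample rate is my assumption, not stated in the post:

```python
def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int = 44100) -> float:
    """One-way latency (ms) contributed by an audio buffer of the given size."""
    return buffer_samples / sample_rate_hz * 1000.0

# A 256-sample buffer at 44.1 kHz is roughly the ~6 ms figure above;
# halving the buffer to 128 samples halves the latency.
print(round(buffer_latency_ms(256), 1))  # -> 5.8
print(round(buffer_latency_ms(128), 1))  # -> 2.9
```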

rgames


----------



## 3CPU

I did plan to build another PC by May 2022, but I might pass on this! I already have an Intel 10th gen that runs great and will be giving that to my wife for photography and video editing. Now is not quite the right time to build, due to the costs of DDR5 and a dedicated GPU. 

*Fiery motherboard:* I read about the Z690 Hero motherboard, with a capacitor put in upside down, lol; how the heck did the cameras miss that? Asus have since issued a recall and the problem should be resolved. 

*Why DDR5?* If I plan to upgrade in 3 years, that memory will be useful. In fact I decided to get myself the M1 Max for DAW, Blender and video editing, and upgrade my wife's 10th gen by 2023. Hopefully by then Meteor Lake should be available, though I suppose a new motherboard will be required. And then I will upgrade the M1 Max by 2025. That's right, I like to upgrade every 3 years.


----------



## Anthony

Pictus said:


> Some plug-ins use AVX-512, like Melda and 2CAudio; I guess Acustica Audio Acqua and probably others do too...


So, does that mean these^ will no longer function at all, or will they merely perform more slowly?

Acustica plugins are already famously CPU-intensive; any reduction in performance is _really_ bad news.


----------



## Pictus

They will work, but not at the same speed as when AVX-512 is available.


----------



## Peter Satera

3CPU said:


> I did plan to build another PC by May 2022, but I might pass on this! I already have an Intel 10th gen that runs great and will be giving that to my wife for photography and video editing. Now is not quite the right time to build due to costs of DDR5 and dedicated GPU.
> 
> *Fiery MotherBoard:* I read about the Z690 Hero motherboard, capacitor put in upside down lol, how the heck did the cameras miss that. Asus have since released a recall and the problem should be resolved.
> 
> *Why DDR5?* If I plan to upgrade in 3 years that memory will be useful. And in fact I decided to get myself the M1 Max for DAW, Blender and video editing. And upgrade my wife's 10th gen by 2023. Hopefully by then the MeteorLake should be available. But I suppose a new motherboard will be required. And then I will upgrade the M1 Max by 2025. That's right, I like to upgrade every 3 years.


It's probably a wise decision to hold off. I was thinking about another build too, but I'm really backing out of it due to the craziness of prices and stability; as an early adopter it's really not worth the problems. I'll wait until I can go 256GB DDR5 and actually be able to afford a 3000-series Nvidia card, probably 2023 for me.


----------



## Anthony

rgames said:


> Now the 4C/8T machine probably could only run 150 compressors and the new one can probably run 350 or more. But, again, who cares? That difference is clearly not reflected in the performance we care about: min latency achievable in a stressing project.
> 
> rgames


I agree that more realistic benchmarks would be more useful; however, I disagree that the two versions of DAWbench are _completely_ irrelevant (as you seem to suggest). Why? Because although few of us are going to run 400 of the same compressor in one project, the aggregate CPU load they cause _is_ useful to know, because it provides one way to compare the ability of different systems to run a large number of plugins. Here I'm assuming that the numeric value of a parameter like the FLOPS needed to run 400 compressors is in some way proportional to the total number of FLOPS necessary to run a realistic combination of FX plugins (e.g., EQ, compressors, saturation, etc.). And ditto for VI plugins.

Indeed, your own assertion that "_Now the 4C/8T machine probably could only run 150 compressors and the new one can probably run 350 or more_" supports what I'm saying.

Currently I'm considering building a desktop to replace my poor laptop, which now routinely runs at between 60 and 90% CPU load on most projects (resulting in really annoying fan noise and frightening CPU temps!). So here's a good example of where I could benchmark my current system (the laptop) and then compare it to a potential future system (a desktop) to see which one will avoid the problems I'm currently experiencing.

I should also note that for me -- someone who spends most of his time 'programming' (MIDI) rather than playing/recording performances -- low latency is not that important.

Cheers...


----------



## rgames

Anthony said:


> Indeed, you're own assertion that "_Now the 4C/8T machine probably could only run 150 compressors and the new one can probably run 350 or more_" supports what I'm saying.


Right, but what's the *realized* practical impact of that fact?

EDIT: and to be clear, I'm not saying there isn't one. I'm saying I've never seen one.


----------



## rgames

Here's an example of a meaningful benchmark:

PugetBench for Premiere Pro

Updated 12/11/2019. Want to see how your system performs in Adobe Premiere Pro? Download and run our free public Premiere Pro benchmark to see how your system compares to the latest hardware.

www.pugetsystems.com

This is for Premiere Pro. It doesn't load up an unrealistic project with 400 plugins. It loads up the kind of things that people actually do and provides a metric that is meaningful in terms of productivity and workflow. People can relate this benchmark to their daily activities. Nobody can do that with dawbench. At least not as far as I can tell.

rgames


----------



## Anthony

rgames said:


> Right, but what's the *realized* practical impact of that fact?


Here's an example of how I would use DAWbench (_in the absence of something better_).

1. Run DAWbench on my current (laptop) system and record the result
2. Run DAWbench on future potential systems and record those results
3. Plot DAWbench results of future potential systems against system cost
4. Use the above plot to select a system with the highest DAWbench/System_Cost ratio subject to the constraint that the increase in 'computing power' (relative to my laptop) will be sufficient to run current and future projects.

In this example "computing power" would be inferred via DAWbench as explained below.

For simplicity's sake, let's say that my laptop's DAWbench DSP result is 100 tracks and my CPU usage is 75% (which is unacceptably high). I would then infer that I'd need a new system with a DAWbench DSP of ~300 tracks, because scaling my current load by the old/new ratio of 100/300 would allow me to run the new system at ~25% CPU usage (one third of 75%).
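That back-of-envelope scaling can be written down as a tiny sketch. The function name and the linear-scaling assumption (CPU load inversely proportional to the DAWbench track count) are mine, not part of DAWbench:

```python
def required_dawbench_tracks(current_tracks: float,
                             current_load_pct: float,
                             target_load_pct: float) -> float:
    """Estimate the DAWbench DSP track count a new system needs,
    assuming CPU load scales inversely with the track-count score
    (a simplifying linear assumption)."""
    return current_tracks * current_load_pct / target_load_pct

# Laptop: 100 tracks at 75% load; to run the same work at ~25% load:
print(required_dawbench_tracks(100, 75, 25))  # -> 300.0
```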

Is this an optimal approach? No.
Do I want to do all this work? No.
Am I certain that I can use DAWbench as described above? No.
Are better (real-world) benchmarks available? (apparently) No.

As a scientist, I realized fairly early in my career that you _never_ have completely optimal tools to do your research, and so you learn to use _what you have_ until something better comes along. I'm a pragmatist in this regard.

Cheers...


----------



## 3CPU

I guess Intel's Meteor Lake will hit by 2023 and make quite an impact! 

{ Don't Panic }

Yeah! But I'm not waiting to find out; I'm going to try something else this year. I've gone over the specs of so many different SKUs and SoCs, and then researched all the other components required; whoa, this takes a lot of time! Which is why I try to lighten the mood.


----------



## Pictus

AMD Teases 5nm Ryzen 7000 ‘Raphael’ Zen 4 CPUs, Unveils Ryzen 7 5800X3D with 96MB of L3 Cache

Firing back at Alder Lake

www.tomshardware.com


----------



## 3CPU

Pictus said:


> AMD Teases 5nm Ryzen 7000 ‘Raphael’ Zen 4 CPUs, Unveils Ryzen 7 5800X3D with 96MB of L3 Cache
> 
> 
> Firing back at Alder Lake
> 
> 
> 
> 
> www.tomshardware.com


Thanks, *Pictus*, for sharing that article. 

A lot of power layered on one single die; the silicon can only shed so much heat, so I guess there may not be much headroom for this multi-layered beast. We should find out more soon after March 2022, once several tests and reviews are completed.


----------



## parapentep70

rgames said:


> Right, but what's the *realized* practical impact of that fact?
> 
> EDIT: and to be clear, I'm not saying there isn't one. I'm saying I've never seen one.


True: I've never run a project with 300 compressors and nothing else. That is not realistic. But I have run multiple projects with 20-50 tracks, each with a Kontakt instance or a synth VST, plus several (normally few) effects, plus the master FX chain, plus bus FX... all without disconnecting from the network to check for the latest plug-in deal on the Internet.

My experience says that such projects are comparable to xxx compressors. Dawbench was useful to anticipate that my older computer could not run such projects without freezing track groups, and it also told me how large my headroom is on the new machine. So I made the decision to upgrade at a certain time. I ran Dawbench on the new machine and also on the old machine. My results with a good PCI card never listed (EMU1820) were surprisingly close. The CPU and memory headroom as predicted by Dawbench was about 3x better. And indeed my critical project went down to 30% from some 90%+ (with occasional drops).

Nobody (but me) is going to run my specific projects, so Dawbench takes a free compressor (and also a typical VSTi, I think it is Kontakt!) as an example of thread loading. The "calibration factor" is of course different for each project, but for me it is a ton more than nothing... and it has been surprisingly accurate.

Of course nobody argues against the idea that running MY projects on multiple machines/DAWs would be ideal for understanding my headroom... but Dawbench is good for estimating the cost/benefit of going to the next machine without actually building it. Or for making an educated decision to go with an 8th-generation Intel over a deal on a 1st-generation Threadripper (I'd do the opposite for video editing). Or to decide whether I'll go Intel or AMD in 2023 or 2024.

Dawbench can be improved for sure; testing low-latency audio is more complex than benchmarking CPUs for video editing... but... is there anything better for this kind of audio benchmarking? (I am genuinely interested.)


----------



## Anthony

Pictus said:


> AMD Teases 5nm Ryzen 7000 ‘Raphael’ Zen 4 CPUs, Unveils Ryzen 7 5800X3D with 96MB of L3 Cache
> 
> 
> Firing back at Alder Lake
> 
> 
> 
> 
> www.tomshardware.com


Glad to see that AMD is not resting on its laurels.


----------



## Anthony

parapentep70 said:


> The CPU and memory headroom as predicted by Dawbench was about 3x better. And indeed my critical project went down to 30% from some +90% (with ocassional drops).


So here's a _real-world_ application where Dawbench was clearly useful. And in this case the "calibration factor" that _parapentep70_ mentions was close to 1, in that Dawbench predicted a 3-fold _increase_ in computing power and this translated into a 3-fold _decrease_ in CPU load.

I plan to do something similar before I build my new system.

Thanks for providing this information!


----------



## Anthony

Pictus said:


> AMD Teases 5nm Ryzen 7000 ‘Raphael’ Zen 4 CPUs, Unveils Ryzen 7 5800X3D with 96MB of L3 Cache
> 
> 
> Firing back at Alder Lake
> 
> 
> 
> 
> www.tomshardware.com


Just read the article...

The upcoming 5nm Zen 4 ‘Raphael’ (Ryzen 7000 family) chips could be perfect for DAW use b/c:
1. the new AM5 socket that Zen 4 chips will use supports both PCIe 5.0 and DDR5
2. the 5nm N5 process (which AMD may use for Zen 4 chips) reduces power by 30%
3. all cores (judging from preliminary _Halo Infinite results_) can be boosted >5.0 GHz

I think I'll wait until the second half of 2022 (as mentioned in the article posted by Pictus) to see if these are available and are as good as I hope. Fingers crossed...


----------



## jamieboo

Pictus said:


> AMD Teases 5nm Ryzen 7000 ‘Raphael’ Zen 4 CPUs, Unveils Ryzen 7 5800X3D with 96MB of L3 Cache
> 
> 
> Firing back at Alder Lake
> 
> 
> 
> 
> www.tomshardware.com


Very curious about this.
On the brink of going ahead and building a new machine but I'm prepared to wait for these 3d chips if they're worth it. (Although originally I had been hoping the 3d chips would be appearing earlier than spring).
Anyway, the benefits of the extra cache are so far mainly declared in gaming terms, but is this a technology that would benefit us composer/producers?
I do quite big dense Williamsy orchestral stuff (EW HO, largish template, not a vast amount of processing plugins), would the 3d cache tech be a boon for me?

Thanks


----------



## Pictus

jamieboo said:


> Very curious about this.
> On the brink of going ahead and building a new machine but I'm prepared to wait for these 3d chips if they're worth it. (Although originally I had been hoping the 3d chips would be appearing earlier than spring).
> Anyway, the benefits of the extra cache are so far mainly declared in gaming terms, but is this a technology that would benefit us composer/producers?
> I do quite big dense Williamsy orchestral stuff (EW HO, largish template, not a vast amount of processing plugins), would the 3d cache tech be a boon for me?
> 
> Thanks


I have no crystal ball or privileged information, but I highly suspect it will benefit us...
How much, I do not know...
I also do not know if there will be more models than the Ryzen 7 5800X3D.


----------



## jamieboo

Pictus said:


> I have no crystal ball or privileged information, but I highly suspect it will benefit us...
> How much, I do not know...
> I also do not know if there will be more models than the Ryzen 7 5800X3D.


Thanks Pictus
Yes, there are of course lots of unknowns. But I don't even know how on-chip CPU cache benefits audio work in the first place, so I'm incapable of even a little reasoned speculation!
Yeah, I'm all over the place in my tentative new build speccing - first I felt pretty certain I was going to base things around the 5900x, then I thought maybe the i7 12700KF, but I was always curious about how the 3D cache chips might perform. But then, if I'm prepared to wait until spring for them, I may as well wait for Zen 4 towards the end of the year.

(In reality, to avoid eternal resolution absence, I'll probably just get a 5900x in the next month or so.)

(Or maybe a 12700kf)

(Dammit)


----------



## Anthony

jamieboo said:


> Yeah, I'm all over the place in my tentative new build speccing ...


You're not alone; I too am going back-and-forth between Intel and AMD. A lot of changes are currently taking place in the CPU/computer industry and I think it will pay in the long run to wait a few months to see what platform is best. I use my systems for nearly a decade before building a new one.


----------



## jamieboo

Anthony said:


> You're not alone; I too am going back-and-forth between Intel and AMD. A lot of changes are currently taking place in the CPU/computer industry and I think it will pay in the long run to wait a few months to see what platform is best. I use my systems for nearly a decade before building a new one.


A decade - wow! You're probably more adept at future-proofing than me!
I'm on a 6 and a half year old i7 5820, 32GB. Still a good machine, but I'm hitting some limits.
I imagine ANY of these options will seem a night and day difference over my 5820.


----------



## easyrider

Anthony said:


> Glad to see that AMD is not resting on its laurels.


It’s Intel who have been stagnant.


----------



## rgames

Anthony said:


> Here's an example of how I would use DAWbench (_in the absence of something better_).
> 
> 1. Run DAWbench on my current (laptop) system and record the result
> 2. Run DAWbench on future potential systems and record those results
> 3. Plot DAWbench results of future potential systems against system cost
> 4. Use the above plot to select a system with the highest DAWbench/System_Cost ratio subject to the constraint that the increase in 'computing power' (relative to my laptop) will be sufficient to run current and future projects.
> 
> In this example "computing power" would be inferred via DAWbench as explained below.
> 
> For simplicity's sake, let's say that my laptop's DAWbench DSP result is 100 tracks and my CPU usage is 75% (which is unacceptably high). I would then make the inference that I'd need a new system with a DAWbench DSP of ~300 tracks because the old/new ratio of 100/300 would allow me to run my new system with ~25% CPU usage (which is 1/3 [100/300] of 75%).
> 
> Is this an optimal approach? No.
> Do I want to do all this work? No.
> Am I certain that I can use DAWbench as described above? No.
> Are better (real-world) benchmarks available? (apparently) No.
> 
> As a scientist I've realized fairly early in my career that you _never_ have completely optimal tools to do your research and, so, you learn to use _what you have_ until something better comes along. I'm a pragmatist in this regard.
> 
> Cheers...



If the project runs at 75% CPU on one computer and 25% CPU on another computer then it runs on both computers. So from the standpoint of writing/producing music there is no difference between those two computers.

That's what I'm getting at. With dawbench there's nothing being measured that has any relationship to productivity or workflow. It's a benchmarker's playground without a tie to reality.

The response to this fact is the "but one day" argument: if I lower my CPU usage then that's good because one day I'll need it. But we never get to that day. I haven't been able to create a practical project that is CPU limited for about the last 10 years. Sure, some run at 50% and some run at 15%. But they all run, and at latency equal to or better than acoustic instruments. The bottlenecks are elsewhere and don't really matter any more. The only projects I can create that are CPU limited have no relation to anything that people actually do (like adding 400 compressors).

Here's another example: let's say one desk can hold 500 pounds and one can hold 10,000 pounds. Should you buy the one that holds 10,000 pounds because maybe one day you'll need to put 10,000 pounds of gear on it? You could, but who has ever done that? Dawbench measurements are like comparing a desk that can support 500 lb against one that can support 10,000 pounds: the comparison is pointless. (Again, as far as I can tell - maybe I'm wrong - but these discussions come up a few times a year and they always wind up just like this thread: nobody can provide examples of how dawbench results relate to workflow/productivity).

A meaningful comparison is not 75% CPU vs. 25% CPU. A meaningful comparison is a project that won't run vs. one that will. If latency is an issue then min achievable latency for some reference project is also a valid basis of comparison.

rgames


----------



## Anthony

easyrider said:


> It’s Intel who have been stagnant.


Right, that was implicit in my post: "Glad to see that AMD is not resting on its laurels (despite having pulled so far ahead of Intel who, by contrast, switched from technology development to corporate stock buybacks, which caused their product line to stagnate)."

Nonetheless I still worry about buying an AMD CPU b/c the only one I've ever owned (K6) actually failed. Yes it's anecdotal (N=1), but none of the Intel processors I've owned ever failed; most of those systems are still in service living re-purposed lives.


----------



## Anthony

rgames said:


> If the project runs at 75% CPU on one computer and 25% CPU on another computer then it runs on both computers. So from the standpoint of writing/producing music *there is no difference between those two computers. [emphasis added]*


Respectfully that's a bit specious. There _are_ clear, measurable, important differences between 25% and 75%. But before I list them, let me say that I chose those numbers for simplicity's sake. The CPU loads incurred by my latest projects are now routinely close to 100% such that some no longer run without audio drop-outs, clicks/pops, etc.

Running at 25% CPU usage is preferable because:

1. CPU temperatures are _much lower_. On my system a lean project (25-30% CPU usage) produces CPU temps of 45-50 deg-C compared to heavy ones (85-100% CPU usage) which produce CPU temps of 80-90 deg-C.

2. Distracting, annoying fan noise does _not_ occur. On my system the fan comes on at ~50-60% CPU usage and runs at full speed above ~75-80%, forcing me to go into the next room to hear what I'm working on. That's a real PITA.

3. I don't have to worry about being frugal and keeping projects lean to avoid the issues described above. I have a lot of cool plugins that I cannot use b/c I lack sufficient CPU resources, and this _kills productivity and inspiration_ as I search for CPU-friendlier (but inferior-sounding) plugins.

Cheers...


----------



## rgames

Anthony said:


> Respectfully that's a bit specious. There _are_ clear, measurable, important differences between 25% and 75%. But before I list them, let me say that I chose those numbers for simplicity's sake. The CPU loads incurred by my latest projects are now routinely close to 100% such that some no longer run without audio drop-outs, clicks/pops, etc.
> 
> Running at 25% CPU usage is preferable because:
> 
> 1. CPU temperatures are _much lower_. On my system a lean project (25-30% CPU usage) produces CPU temps of 45-50 deg-C compared to heavy ones (85-100% CPU usage) which produce CPU temps of 80-90 deg-C.
> 
> 2. Distracting, annoying fan noise does _not_ occur. On my system the fan comes on at ~50-60% CPU usage and runs at full >75-80% forcing me to go into the next room to hear what I'm working on. That's a real PITA.
> 
> 3. I don't have to worry about being frugal and keeping projects lean to avoid the issues described above. I have a lot of cool plugins that I cannot use b/c I lack sufficient CPU resources, and this _kills productivity and inspiration_ as I search for CPU-friendlier (but inferior-sounding) plugins.
> 
> Cheers...


OK that's fine. Then that's your basis of comparison: noise, or heat. So do noise/temp measurements, not # of compressors. Measure what you care about. You're still assuming a relationship, not demonstrating it.

Computer A runs at 40 db and 70 C and Computer B runs at 35 dB and 65 C or whatever. If that's what you care about then go measure that.

I don't care about those numbers (and I strongly suspect most people don't) but maybe it matters for someone.

I've never heard anyone say "I could write a lot faster if only my CPU temps were lower."

And I'm still waiting to see my first "burnt up" CPU after many decades of tweaking computers in all kinds of ridiculous ways. It doesn't happen in consumer products unless it gets struck by lightning or engulfed in flames or manually juiced with insane voltages.

So I'm afraid I'm still going to have to vote "irrelevant" on those metrics.

But hey, have fun 

Cheers,

rgames


----------



## thevisi0nary

Anthony said:


> Respectfully that's a bit specious. There _are_ clear, measurable, important differences between 25% and 75%. But before I list them, let me say that I chose those numbers for simplicity's sake. The CPU loads incurred by my latest projects are now routinely close to 100% such that some no longer run without audio drop-outs, clicks/pops, etc.
> 
> Running at 25% CPU usage is preferable because:
> 
> 1. CPU temperatures are _much lower_. On my system a lean project (25-30% CPU usage) produces CPU temps of 45-50 deg-C compared to heavy ones (85-100% CPU usage) which produce CPU temps of 80-90 deg-C.
> 
> 2. Distracting, annoying fan noise does _not_ occur. On my system the fan comes on at ~50-60% CPU usage and runs at full >75-80% forcing me to go into the next room to hear what I'm working on. That's a real PITA.
> 
> 3. I don't have to worry about being frugal and keeping projects lean to avoid the issues described above. I have a lot of cool plugins that I cannot use b/c I lack sufficient CPU resources, and this _kills productivity and inspiration_ as I search for CPU-friendlier (but inferior-sounding) plugins.
> 
> Cheers...


You do you, but that’s like buying a gallon container to hold a pint of water instead of just buying a pint container.

Also, you probably have inadequate cooling. I've regularly pushed my aging 4790K with a Noctua fan to 70-100% and hear nearly nothing.


----------



## Anthony

rgames said:


> OK that's fine. Then that's your basis of comparison: noise, or heat. So do noise/temp measurements, not # of compressors. Measure what you care about. You're still assuming a relationship, not demonstrating it.
> 
> Computer A runs at 40 db and 70 C and Computer B runs at 35 dB and 65 C or whatever. If that's what you care about then go measure that.
> 
> I don't care about those numbers (and I strongly suspect most people don't) but maybe it matters for someone.
> 
> I've never heard anyone say "I could write a lot faster if only my CPU temps were lower."
> 
> And I'm still waiting to see my first "burnt up" CPU after many decades of tweaking computers in all kinds of ridiculous ways. It doesn't happen in consumer products unless it gets struck by lightning or engulfed in flames or manually juiced with insane voltages.
> 
> So I'm afraid I'm still going to have to vote "irrelevant" on those metrics.
> 
> But hey, have fun
> 
> Cheers,
> 
> rgames


You cleverly avoided responding to "3. I don't have to worry about being frugal and keeping projects lean...". That is an _explicitly music-related _issue.

Indeed, a system with more CPU power will enable me to write the kind of music _I want to write_ rather than be limited to only _what's possible_ given my CPU's limitations. And DAWbench is useful in that it'll enable me to assess CPU resources of future machines.

I should thank you for this exchange; prior to starting it I really never thought much about DAWbench, but everything you've said has, ironically, convinced me just how valuable a tool it really is!

Cheers...


----------



## jamieboo

So, broadly speaking, as someone who composes orchestral stuff (EW HO, largish template, dense 'Williamsy' orchestration, not a vast amount of processing plugins), do these DAWbench results suggest that Intel would suit me more than AMD?
I'm considering either the 5900x or the 12700kf. My early instinct had been inclining towards the 5900x but maybe the 12700kf would be better for the kind of stuff I do.
I know there are imminent new developments but, as things stand, what do you think?


----------



## Pictus

The Intel 12700kf is only better with DDR5.


----------



## jamieboo

Pictus said:


> The Intel 12700kf is only better with DDR5.


Ah, silly me - I missed that rather crucial detail! Thanks Pictus.
So it seems, of the two I'm considering, my initial instinct was right: for the kind of stuff I do the 5900x seems the winner. Right?


----------



## Pictus

Right


----------



## Peter Satera

rgames said:


> If the project runs at 75% CPU on one computer and 25% CPU on another computer then it runs on both computers. So from the standpoint of writing/producing music there is no difference between those two computers.
> 
> That's what I'm getting at. With dawbench there's nothing being measured that has any relationship to productivity or workflow. It's a benchmarker's playground without a tie to reality.
> 
> The response to this fact is the "but one day" argument: if I lower my CPU usage then that's good because one day I'll need it. But we never get to that day.


I don't think 'we' is accurate. Creators do hit hardware (e.g. CPU) limitations; _the day does come._

Some creatives are proactive in hardware stepping; after all, this is what we call the '_Enthusiast_' range. It's understandable: performing existing tasks with plenty of room below the ceiling. This has its benefits, whether it's spreading core calculations to manage productivity or reducing heat and noise; the other main factors are robustness and compatibility (..._I hate my Threadripper!_). If your system shows no signs of labour, ever, then I agree there is no need to consider a new machine. But if one user is hitting 75% while another is hitting 25%, data can be taken from this, and one machine could cost a lot less to cool and run on a daily basis. Pushing hardware at high temperatures and load also takes its toll and reduces the life span of the components.

I agree that DAWbench needs to be taken with a pinch of salt. It's not a project identical to our own use, and therefore its numbers may not translate directly to 'real-world' production.



thevisi0nary said:


> You do you, but that’s like buying a gallon container to hold a pint of water instead of just buying a pint container.


I think this is the same discussion: buying a system with a bigger spec than what is required at the current level is sensible for any form of media creation. I imagine most creators buy a bigger spec than what is currently required, to future-proof for creative requirements. In the GPU market, anyone who invested in an RTX card gained big benefits, and I don't mean things like real-time game ray tracing in Unreal. *Adobe* and *Nvidia* have AI tools for RTX cards that significantly boost productivity in 3D. But nobody bought these RTX cards for those exact tools. The hardware enables the production of software.

The day does come. We have threads about user hardware issues, whether it's setting up VEPro or simply overclocking a machine for better performance. More to the point, there are posts from users whose machines can't handle new libraries, or who raise concerns about the CPU and gigabytes of RAM eaten by the Spitfire and Sine players. I wouldn't suggest being an early adopter of hardware, but if you have the headroom you are less likely to face these problems, keeping your attention where it should be: on the audio.

The developer isn't always going to trim it down; if we want bigger and better, it comes at a cost... (BBC Pro is like 600GB?): more samples, more code, more RAM, more CPU usage. We have the data to see where the demand is; our DAWs show it. To come back to both your comments, future-proofing beyond existing requirements is warranted, as it's informed by our own DAW benchmarks. When you buy, you want it to last as long as possible.


----------



## Hendrixon

I also think Dawbench is useless... it has nothing to do with real-life experience.
It lacks basic metrics like Talent and Vitamin D Deficiency; that's what matters for a composer.


----------



## Peter Satera

Hendrixon said:


> I also think Dawbench is useless... it has nothing to do with real life experience.
> It lacks basic metrics like Talent and Vitamin D Deficiency, that's what makes a composer.


 Don't forget large quantities of snorted caffeine!


----------



## thevisi0nary

Peter Satera said:


> I think this is the same discussion: buying a system with a bigger spec than what is required at the current level is sensible for any form of media creation. I imagine most creators buy a bigger spec than what is currently required, to future-proof for creative requirements.


I agree with this I just don’t think that’s what they were implying


----------



## Pictus




----------



## Hendrixon

Pictus said:


>



CAS 38 to CAS 40 - that's all it takes to wipe out the 12900K gains?!
My feeling is that something is not right in the initial benchmarks...
The VI scores of the Intel 12th-gen chips seemed out of control as buffer size grew; the numbers don't follow any gradual progression.

I don't know...


----------



## 3CPU

Pictus said:


>



G.Skill and Corsair are what I usually get, the latter being my preferred choice. DDR5 is here, and in the near future it will be the only choice for a new "high-end" build. If you buy DDR4 now but upgrade the processor and motherboard later, that DDR4 may not be compatible. AM5, with the socket changing to "Land Grid Array" and due out by June 2022 (see - Link -), will be jumping on the DDR5 wagon. What next? Perhaps by 2025 there might be integrated blazing-fast memory and graphics, similar to ARM architecture... Jeff Wilcox, the Apple director/engineer responsible for Arm and the M1, will be joining Intel. Source: Link1 and Link2

.


----------



## jamieboo

3CPU said:


> ...AM5, with socket changing to "Land Grid Array" due out by June 2022 see - Link - will be jumping on the DDR5 wagon.
> 
> .


Where can I find out about a June release for AM5?
As far as I can find they've just said second half of 2022 - nothing more specific than that (and so more likely nearer the end of the year).
If it's a June release then I might just wait until then to start my new build!


----------



## 3CPU

jamieboo said:


> Where can I find out about a June release for AM5?
> As far as I can find they've just said second half of 2022 - nothing more specific than that (and so more likely nearer the end of the year).
> If it's a June release then I might just wait until then to start my new build!


Unfortunately AMD could delay; your guess is as good as mine, for reasons out of my control. Perhaps tough times due to Covid or other reasons, and stocks of DDR5 are limited.

AM5 is coming with a new socket design (Land Grid Array), PCIe 5.0 and DDR5 support.

.


----------



## jamieboo

3CPU said:


> Unfortunately AMD could delay, your guess is as good as mine for reasons out of my control. Perhaps tough times due to Covid or other reasons and stocks for DDR5 are limited.
> 
> AM5 is coming with a new socket design (Land Grid Array), PCIe 5.0 and DDR5 support.
> 
> .


Of course. But I just wondered where you got June from at all. I couldn't find rumours of a June release anywhere. Everywhere seems to say simply second half of 2022.


----------



## Pictus

Hendrixon said:


> CAS 38 to CAS 40 that's all it takes to wipe off the 12900k gains?!
> My feeling is that something is not right in the initial benchmarks...
> The VI scores of the Intel 12 chips seemed too out of control as buffer size grew, the numbers don't follow any gradual progress.
> 
> I don't know...


I don't know either.
The DDR5 memory controller/BIOS is not mature right now.
The DDR4 versions are very picky, but it looks like the new
BIOS makes things OK when using modern memory modules.

*--------------------------------------------------*



3CPU said:


> G-Skill and Corsair is what I usually get, the latter being my preferred choice.



I like Crucial more than G.Skill or Corsair.


----------



## 3CPU

jamieboo said:


> Of course. But I just wondered where you got June from at all. I couldn't find rumours of a June release anywhere. Everywhere seems to say simply second half of 2022.


Sorry, correct me if I am wrong: second half of 2022 is July 1st? Close enough I guess.


----------



## 3CPU

Pictus said:


> I like Crucial more than G.Skill or Corsair.


In reference to the video you posted, wasn't it Corsair that was featured, or did I confuse it with Crucial?


----------



## jamieboo

3CPU said:


> Sorry, correct me if I am wrong: second half of 2022 is July 1st? Close enough I guess.


No, forgive me. I misinterpreted the intent of your words.
Yeah, we'll probably be waiting a while!
I think I'll just go ahead and start plotting my February 5900x build.


----------



## TAFKAT

> CAS 38 to CAS 40 that's all it takes to wipe off the 12900k gains?!


?

He clearly stated there was no change in the results going from the CAS 38 to CAS 40.



> My feeling is that something is not right in the initial benchmarks...
> The VI scores of the Intel 12 chips seemed too out of control as buffer size grew, the numbers don't follow any gradual progress.


A similar level of scaling can be seen in the quad-channel X299 results, indicating the Kontakt benchmark is more responsive to memory bandwidth than to memory speed/latency. Z690's new dual memory controller layout with DDR5 allows cross-communication that looks to increase the bandwidth in a very beneficial way.

More Info : Here

Thanks to Pictus for the original heads up and link on another forum


----------



## Hendrixon

TAFKAT said:


> ?
> 
> He clearly stated there was no change in the results going from the CAS 38 to CAS 40.


True!
How did I get it wrong?  
Well, in my defense, I saw the vid around 4am... so I probably fell asleep when he said the new DIMMs are CAS 40, and I woke up to see the chart of 5200 vs 5600.


----------



## Vladinemir

Pictus said:


>


Stumbled across this article








Linus Torvalds Wishes Intel's AVX-512 A Painful Death


Death to AVX-512!




www.tomshardware.com




From what I've read so far on the net, overall frequency might decrease when using AVX-512. Alternatively, the heat will increase significantly.
Could 11th gen become a viable option if this instruction set comes into wider use? Those chips haven't received much love so far.


----------



## cedricm

rgames said:


> Thank goodness I can finally add that 400th compressor I've been needing


I care deeply about 400 compressors. They add an enchanting _je ne sais quoi_, especially between 40 kHz and 96 kHz. That's why I only work at a 192 kHz sample rate with 16x oversampling


----------



## Pictus

Vladinemir said:


> Could 11th gen become a viable option if this instruction set comes into wider use? Those chips haven't received much love so far.


No, they are old tech: they produce too much heat, and the AVX instructions make things even worse.








AMD vs Intel: Which CPUs Are Better in 2022?


We put AMD vs Intel in a battle of processor prowess.




www.tomshardware.com


----------



## Jrides

Vladinemir said:


> Stumbled across this article
> 
> 
> 
> 
> 
> 
> 
> 
> Linus Torvalds Wishes Intel's AVX-512 A Painful Death
> 
> 
> Death to AVX-512!
> 
> 
> 
> 
> www.tomshardware.com
> 
> 
> 
> 
> From what I read so far on the net, overall frequency might decrease when using avx 512. Alternatively, the heat will increase significantly.
> Could 11th gen become possible option then if this instruction will be used more widely? Those chips haven't received much love so far.


This is one of the reasons contributing to my choice to go AMD.


----------



## Vladinemir

I'm a little bit confused about AVX-512. It looks like both AMD's and Intel's new CPUs will support it. I'm curious whether older processors with this instruction set have issues similar to the 11th gen's.


----------



## Technostica

Vladinemir said:


> I'm little bit confused about AVX 512. It looks like both AMD's and Intel's new CPUs will support it. I'm curious if older processors with this instruction set have similar issues like the 11th gen.


It's mainly been available on server and possibly high-end workstation chips.
So 11th-gen support on desktop chips might have been a desperate attempt to keep closer to AMD.
It's physically present on some 12th-gen desktop chips, but it's been disabled.
One reason is that they use a hybrid architecture and the efficiency cores don't support it.
Not sure if it will be supported on the next-gen chips from both camps due later this year.
With Intel it seems as if hybrid chips are the focus now, at least for desktop.
It will probably still be there in the enterprise chips, for a while anyway.
One reason being that it should have some traction there, as they pushed it as a way to differentiate from AMD and it gave them good performance for certain workloads.


----------



## Hendrixon

Intel plans on an HEDT all P core monolithic chip to challenge the TR Pro.


----------



## Pictus

5800X3D test for audio in the German magazine.
Long story short: excluding Pro Tools and Cubase, the 5800X3D kicks ass.








Konzert der Prozessoren (Concert of the Processors)


Everyone wants fast and quiet computers, especially in music production: an overloaded processor must neither crackle during a guitar solo nor have its fan roar like a vacuum cleaner. The Apple Mac Studio is fast, quiet, and compact on top of that. But our performance comparison...




www.heise.de


----------



## Hendrixon

ZEN4 on AM4?!
Yes plz


----------



## Vladinemir

An AM4 Zen 4 APU with PCIe 4 and RDNA could be a nice option if they could pull that off on this socket. Don't know if newer AMD graphics drivers have latency issues though. A potential problem is that AMD is only considering this; it might be too late by the time they release it, unless existing AM4 mobo owners can swap the chip in easily.


----------



## Pictus

No GPU latency problems from AMD, and it is also easier to install only the driver.





NVidia has a more bloated and aggressive driver.





Nvidia Driver, no latency anymore?


Hi all! We all know that AMD drivers have from far, less latency than Nvidia drivers, and for that reason we all recommand an AMD graphic card for audio working. But recently i have dealt with a new install on a PC with an Nvidia graphic card. And when i updated to the latest driver i saw an...




vi-control.net





Windows 11's messy operating system may also create some latency problems...


----------



## Pictus




----------



## Jrides

Pictus said:


>




Saw this the other day and was not sure if I should post it here.


----------



## Pictus

AMD to Reveal Ryzen 5 5600X3D and Ryzen 9 5900X3D with up to 200MB of Cache (128MB 3D Stacked) Next Month? 








AMD to Reveal Ryzen 5 5600X3D and Ryzen 9 5900X3D with up to 200MB of Cache (128MB 3D Stacked) Next Month? | Hardware Times


A while back, it was reported that AMD might launch additional Zen 3 SKUs leveraging the 3D V-Cache technology. It would seem that that rumor is indeed true. Well-reputed tipster @Greymon55 has stated that there will be “several new products” headed to the Zen 3D family next month. There’s no...




www.hardwaretimes.com


----------



## vitocorleone123

AMD has continued pushing ahead. Intel is at least awake now.

The downside, from what I've seen since AMD's comeback started, is that there tend to be more "growing pains" - glitches, incompatibilities, gremlins, etc. - when using AMD... at least for some period of time until they can figure out how to address them. In other words, they can be cutting edge, but if you get cutting edge, expect to probably get cut. The hard part, of course, is that as a consumer, if you wait for those to be addressed you're then faced with buying "old" tech that's stable vs. the next new hotness.

(I'm not saying AMD is bad or that people shouldn't buy it, just that there seems to be more initial risk involved _for a time_ than with Intel - not that Intel is some paragon of perfection!).


----------



## Pictus

vitocorleone123 said:


> AMD has continued pushing ahead. Intel is at least awake now.
> 
> The downside, from what I've seen since AMD started, is that there tends to be more "growing pains" - glitches, incompatibilities, gremlins, etc. when using AMD...


True, but excluding UAD stuff, it seems all is fine now.
BTW, gremlins are brand agnostic.


----------



## Pictus

How to "undervolt" AMD RYZEN 5800X3D Guide with PBO2 tuner.








GitHub - PrimeO7/How-to-undervolt-AMD-RYZEN-5800X3D-Guide-with-PBO2-Tuner: Get the Most out of your 5800X3D using PBO Curve Optimizer!


github.com


----------



## Pictus

AMD Ryzen 7 5800X3D Continues Showing Much Potential For 3D V-Cache In Technical Computing


----------



## Pictus

Scan 5800X3D DAWbench, different from the German magazine, not good results.








DAWBench DSP / VI Universal - Cross Platform DAW Benchmarks : - Page 30 - Gearspace.com


Hey All, Quick heads up for the latest DAWbench Radio Show Episode. Music Tech Pioneers III : Sequential Circuits : Rise, Fall, Return ! 'UMII , You Me ' Uploaded Now across all of the major pod casting platforms. A few links below , but easily found on most others with a search. Podcast Home...



gearspace.com


----------



## Pictus

Intel Core i5-13600K 14 Core Raptor Lake ES CPU Tested, 40% Faster Than Core i5-12600K & Beats The Ryzen 9 5950X In Cinebench


The latest benchmarks of Intel's mainstream Core i5-13600K 14-Core Raptor Lake Desktop CPU have leaked out and it's a beast.




wccftech.com

Intel Core i9-13900K Raptor Lake CPU Gaming & Synthetic Performance Benchmarks Leaked, 5% Faster Than Core i9-12900K On Average


The first gaming and synthetic performance benchmarks of Intel's Core i9-13900K Raptor Lake 5.5 GHz CPU have been leaked.




wccftech.com


----------



## Pictus

[embedded media]

----------



## Nico5

Yikes:









Intel Confirms CPU & Component Price Hike, Taking Effect In Q4 2022


After posting disastrous Q2 2022 earnings, Intel has confirmed that it will be raising prices on all major components including CPUs.




wccftech.com


----------



## Pictus

Starts at 04:27


----------



## Pictus

Arctic Announces Service Kit For Defective Liquid Freezer 2 AIOs


Arctic has released a free service kit for all owners of defective Liquid Freezer II AIOs, featuring a bad gasket.


www.tomshardware.com


----------



## Hendrixon

Thanks for the heads up buddy.
I have a II 360; apparently the issue started in May 2021, and since I bought mine in Jan 2021 my unit is not affected.

Hopefully


----------



## Pictus

DAWbench Suite - AMD 7000 results


https://gearspace.com/board/showpost.php?p=16188106&postcount=908


----------



## Hendrixon

Yup, for VI voice streams the takeaway from this is that the 7950X only matches/edges the 12900.
Once the 13900 arrives there will be no contest.

For DSP processing I assume the 13900 will be on par or close.

Btw, at a recent Intel conference they had a "Raptor Lake" silicon wafer on display, but some sharp eye noticed it wasn't anything like previous Raptor Lake wafers... that one had 34 cores!
And not just that, they were all P cores  Flipping the wafer revealed a sticker saying "Raptor Lake-S 34-core"... which means it's a desktop chip.


----------



## Pictus

It is a "cat & mouse" game between AMD and Intel.


----------



## Pictus

[embedded media]

----------



## simfoe

Pictus said:


> DAWbench Suite - AMD 7000 results
> 
> 
> https://gearspace.com/board/showpost.php?p=16188106&postcount=908



128 / 3800 for the 7950X with DDR5; to put that into perspective, my 7700K from 2017 does 960 at the same buffer.

5 years is a long time in CPU land 😂
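A quick sanity check on that comparison, using only the voice counts quoted in this thread:

```python
# Voice counts at the same 128-sample buffer, as reported in this thread.
ryzen_7950x_voices = 3800  # 7950X with DDR5
i7_7700k_voices = 960      # 2017-era 7700K

print(f"{ryzen_7950x_voices / i7_7700k_voices:.1f}x the polyphony")  # ~4x in five years
```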


----------



## AR

I mean, it's all good and so with these 12th/13th gen CPUs, but they are limited to 128GB RAM. To me, totally unusable. My template (Orchestral Tools) takes 144GB while it's purged. The Spitfire template even more. I can't just rely on some data sheet built from a pure 1:1 comparison of stacking some staccato patches or Diva synths. I need some "real situation" comparison, like a full orchestral track of 5+ minutes in minimum 5.0 surround. Everything else is just fishing in the dark.


----------



## Pictus

AR said:


> I mean, it's all good and so with these 12th 13th gen CPUs but, they are limited to 128gb RAM. To me, totally unuseable. My template (Orchestral Tools) takes 144gb while it's purged. The Spitfire template even more. I can't just rely on some data sheet provided by pure 1:1 comparison of stacking some staccato patches or Diva synths. I need some "real situ" comparison like a full orchestral track of 5 minutes+ in minimum 5.0 surround. Everything else is just fishing in the dark.


Wait for the Intel Sapphire Rapids








Intel's Sapphire Rapids Had 500 Bugs, Launch Window Moves Further


Someone call pest control.




www.tomshardware.com


----------



## AR

I'm fine by now. Have a 10980xe as my VI machine, and 1x9900k plus 3x 3770k & 1x4770k just for the audio processing. There is still a 10920x lying around waiting to get used for my assistant.


----------



## Pictus

AMD Ryzen 7950X: Impact of Precision Boost Overdrive (PBO) on Thermals and Content Creation Performance


The new AMD Ryzen 7000 Series of processors bring terrific performance across the board, but have been criticized in many reviews due to the fact that they often hit CPU temperatures of 95 Celsius under heavy loads. However, we have found that they only operate at these high temperatures when...




www.pugetsystems.com


----------



## Pictus

AMD Ryzen 7000 is looking better than Intel, more performance with less power and
possibility to upgrade to the next generation and keep the same motherboard.
https://wccftech.com/intel-core-i9-13900k-raptor-lake-cpu-same-performance-as-core-i9-12900k-at-80w/


----------



## Anthony

Pictus said:


> AMD Ryzen 7000 is looking better than Intel, more performance with less power and
> possibility to upgrade to the next generation and keep the same motherboard.
> https://wccftech.com/intel-core-i9-13900k-raptor-lake-cpu-same-performance-as-core-i9-12900k-at-80w/


Is this what you mean by "Unlimited Power"? That could be a problem in a music studio.


----------



## Pictus

Anthony said:


> Is this what you mean by "Unlimited Power"? That could be a problem in a music studio.



I don't mean anything, the guy who wrote there...
But as Intel is the dark side, I guess yes...


----------



## MarcusD

Launch day is tomorrow (I think), so hopefully loads of bench videos will drop.


----------



## MarcusD

[embedded media]


----------



## Technostica

The good thing is you can restrict both of the new platforms to 120W or less max power and still get decent performance. 
Both of them are well outside the efficiency curve at stock and especially Intel. 
But they both offer significant efficiency gains over previous platforms when reined in.
Good news for silent PC enthusiasts.


----------



## thevisi0nary

13900k looks so bad. The 13600k is amazing for the price and can be used with DDR4 if you want to save money.


----------



## Manaberry

Pictus said:


> AMD Ryzen 7000 is looking better than Intel, more performance with less power and
> possibility to upgrade to the next generation and keep the same motherboard.
> https://wccftech.com/intel-core-i9-13900k-raptor-lake-cpu-same-performance-as-core-i9-12900k-at-80w/


I've seen some R20 benchmarks with the Ryzen at 90W, and it does not benefit that much. In fact, Intel 13900K has a better performance per watt ratio. So it really depends on where you put that power limit.


----------



## Manaberry

AR said:


> I'm fine by now. Have a 10980xe as my VI machine, and 1x9900k plus 3x 3770k & 1x4770k just for the audio processing. There is still a 10920x lying around waiting to get used for my assistant.


Looks like you are having Elon Musk's children on Saturday afternoon at the park.


----------



## TonalDynamics

So I've personally built and tested about 5 of my own 'DAW' PCs throughout my lifetime, and I feel the need to make this post in order to help other musicians here make an objective decision about whether they should upgrade their CPU or not.

FWIW, I have _never_ found DAWBench to be a practical indication of how much performance you will gain with a CPU upgrade, mainly for the reason that they don't even test what is arguably the most important metric by far, which is single-core performance.
They get their polyphony and DSP results by spreading the entire load of the various plugins and VI across every core; but this is a _highly _impractical metric for many reasons:


- Our biggest concern is audio dropouts (clicks and pops) while working.
- DAWs (much like video games) are still riddled with serial-processing tasks that are difficult or impossible to parallelize across cores, such as dedicated monitoring signal-chains (or virtually _any_ kind of monitoring chain), channel inserts, and busses; *thus single-core performance is still the most critical metric for DAW performance, particularly during the monitoring/experimenting/tracking phases of a project.*
- You don't _ever_ get to max out all of your cores in a real-world DSP/VI scenario, as in some theoretical throughput heaven, before you run into dropouts (which is what the testing method measures); sadly, all it takes is _one_ core, or perhaps a small handful of cores handling the most intensive work, overloading to cause dropouts.
- You will hit that channel/buss/parallel-monitoring-chain single-core bottleneck _far, far_ more often than you will ever have to worry about maxing out all of your cores in a playback/mixing situation at max buffer settings; the vast majority of your DAW woes will be CPU-intensive plugins (or sample DISK overloads for us VIC folk) that you are unable to monitor properly within a given signal-chain, forcing you to strip off certain FX, kill off voices or layers of synths, etc., to cull the overloads.
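A tiny sketch of the per-core bottleneck argument above, with hypothetical load numbers: a real-time audio engine drops out when any single core misses the buffer deadline, regardless of how idle the rest of the CPU is.

```python
# Illustrative sketch with hypothetical per-core DSP loads, expressed as a
# fraction of one core's real-time budget for the current buffer size.
core_loads = [0.95, 0.40, 0.35, 0.30, 0.25, 0.20, 0.20, 0.15]

average_load = sum(core_loads) / len(core_loads)  # what a CPU meter tends to show
worst_core = max(core_loads)                      # what actually causes dropouts

print(f"average load: {average_load:.0%}")  # 35%: looks comfortable
print(f"worst core:   {worst_core:.0%}")    # 95%: one serial chain near the edge
```

A heavy serial monitoring chain pinned to one core can click and pop even though the machine as a whole looks mostly idle.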

_If you want to gain a more realistic expectation of performance gains from a new chip, literally just go to this page:_ https://www.cpubenchmark.net/singleThread.html

*TLDR 1
Purported performance gains from i9-11900k / i7 11700k to i9 12900k for DSP and VI loads = ~45% and ~55%, respectively
Actual non-OC benchmark results between i9-11900k / i7 11700k, vs. the i9 12900k for SC performance = approx. 16% *

^I have found this to be an _incredibly_ accurate predictor of CPU overhead gains for every new board/chip I've installed.
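As a toy illustration of using that single-thread chart (the scores below are hypothetical placeholders, not actual chart entries; look the real values up before deciding):

```python
# Hypothetical single-thread scores for an old and a new CPU; substitute the
# real numbers from cpubenchmark.net's single-thread chart.
old_st_score = 3200
new_st_score = 3700

# Realistic headroom gain for serial signal chains is the single-thread ratio,
# not the multi-core throughput figure quoted in marketing.
gain = new_st_score / old_st_score - 1
print(f"expected serial-chain headroom gain: {gain:.0%}")
```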
RAM makes virtually no difference with CPU intensive plugins (assuming DDR4).
I can't speak to the full implications of *DDR5* since I haven't used it yet, but be advised that DDR5 typically does _not_ improve latency for single-core/serial tasks, and thus should not be boosting SC speeds, as described in this reddit thread a few posts down: 

Honestly I've always halfway suspected DAWBench was somehow being pushed as promotional material for the big chipmakers...
either that, or the developer just has 0 interest in performing actual relevant (in terms of bottlenecks) SC performance tests for serial processes / monitoring scenarios.

*TLDR 2*
DawBench is a largely synthetic benchmark, and you _might_ not need that CPU/RAM upgrade as much as you think; it's a massive, time-consuming PITA to go through and you will _not_ get the performance overhead boost teased by DawBench... unless you need to somehow go from 300 active tracks in your DAW to 500? 🤔

If you've upgraded within the last 2-3 years or so, it really probably isn't that big of a deal at this time... just go spend all that money on BF sample library sales instead!

Hopefully this helps add some perspective to the frenzied marketing.


----------



## Technostica

Manaberry said:


> I've seen some R20 benchmarks with the Ryzen at 90W, and it does not benefit that much. In fact, Intel 13900K has a better performance per watt ratio. So it really depends on where you put that power limit.


It's handy if you give a link to the review.
At the same wattage, the AMD chip is more efficient.
It's common for chips to be more efficient at lower power limits, so you'd need to compare them both at 70W to get a more meaningful comparison.
I'd rather see them compared at 125W as 70W seems rather low.

In ecological terms, Intel, AMD and Nvidia are all irresponsible in releasing products that are arguably overclocked at the factory to way beyond their efficient operating range.
I'd like to see CPUs limited to 125W at stock, so that the average person never sees power go beyond that.
Leave the power hungry overclocking to the enthusiasts as they are a relatively small number.
The average consumer shouldn't be exposed to this nonsense.


----------



## thevisi0nary

Manaberry said:


> I've seen some R20 benchmarks with the Ryzen at 90W, and it does not benefit that much. In fact, Intel 13900K has a better performance per watt ratio. So it really depends on where you put that power limit.


Unless you are building a small form factor PC the idea of buying a 13900k to cap it at 90w just makes no sense to me.

The 13900k capped at 90w gets 4000 points less in r23 multi thread compared to the 13600k stock. 20k vs 24k at half the price. It would take you years to recoup the difference in electricity costs unless you were rendering 24/7.

(@ 19:00, can’t time stamp on mobile.)


I can’t speak for other tests but in this one the 7950x does much better at lower wattages. At the same time the 7600x and 7700x make zero sense while the 13600k exists, and the 7900x is only 20% faster in MT and costs $200 more.



https://www.cpu-monkey.com/en/compare_cpu-amd_ryzen_9_7900x-vs-intel_core_i5_13600k
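A hedged back-of-envelope version of that break-even claim; every input below is an assumption (rough launch prices and load wattages), so adjust for your own market and usage:

```python
# All figures are assumptions for illustration, not measured values.
price_diff_usd = 589 - 319   # assumed MSRPs: 13900K minus 13600K
watts_saved = 181 - 90       # assumed load draw: 13600K stock vs 13900K capped at 90W
kwh_price_usd = 0.15         # assumed electricity price per kWh
hours_loaded_per_day = 8     # machine under heavy load during work hours

kwh_saved_per_year = watts_saved / 1000 * hours_loaded_per_day * 365
years_to_break_even = price_diff_usd / (kwh_saved_per_year * kwh_price_usd)
print(f"break-even after ~{years_to_break_even:.1f} years")
```

Under these assumptions the premium takes well over five years to pay back, which is the point being made above.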


----------



## tabulius

Ryzen 7000 series seem very tunable and efficient when undervolting and/or limiting power:



And the same seems to apply for Intel 13th gen as well:



So for building a quiet PC, I think some tuning is required with this latest gen of CPUs. I'm happy to lower temps by around 20 degrees without affecting performance that much.


----------



## Pictus

Technostica said:


> It's handy if you give a link to the review.
> At the same wattage, the AMD chip is more efficient.
> It's common for chips to be more efficient at lower power limits, so you'd need to compare them both at 70W to get a more meaningful comparison.
> I'd rather see them compared at 125W as 70W seems rather low.


In all the other tests I saw, the AMD is more efficient.


----------



## Technostica

Pictus said:


> In all other tests I saw the AMD is more efficient.


As was the case here also.
My initial rough impression is that it seems close enough that it wouldn't be the major factor when choosing a platform. 
You have probably looked at it more closely than I have, so how much of a difference have you seen at 125W and lower? 
Whilst AM5 has longevity on its side, it's quite expensive at this point overall.


----------



## Pictus

Sorry, I only remember that AMD was overall more efficient, but not the numbers...
Something interesting: the reviews mention that with AMD they do not perceive any
thermal throttling; the system always runs smoothly even at high temperatures.


----------



## thevisi0nary

thevisi0nary said:


> Unless you are building a small form factor PC the idea of buying a 13900k to cap it at 90w just makes no sense to me.
> 
> The 13900k capped at 90w gets 4000 points less in r23 multi thread compared to the 13600k stock. 20k vs 24k at half the price. It would take you years to recoop the difference in electricity costs unless you were rendering 24/7.
> 
> (@ 19:00, can’t time stamp on mobile.)
> 
> 
> I can’t speak for other tests but in this one the 7950x does much better at lower wattages. At the same time the 7600x and 7700x make zero sense while the 13600k exists, and the 7900x is only 20% faster in MT and costs $200 more.
> 
> 
> 
> https://www.cpu-monkey.com/en/compare_cpu-amd_ryzen_9_7900x-vs-intel_core_i5_13600k



Apparently the results in that power scaling benchmark were caused by a clock speed error in Intel's overclocking software, so if this is correct it's certainly a lot better than the original numbers. Just updating it here for transparency.


----------



## Manaberry

Technostica said:


> It's handy if you give a link to the review.
> At the same wattage, the AMD chip is more efficient.
> It's common for chips to be more efficient at lower power limits, so you'd need to compare them both at 70W to get a more meaningful comparison.
> I'd rather see them compared at 125W as 70W seems rather low.
> 
> In ecological terms, Intel, AMD and Nvidia are all irresponsible in releasing products that are arguably overclocked at the factory to way beyond their efficient operating range.
> I'd like to see CPUs limited to 125W at stock, so that the average person never sees power go beyond that.
> Leave the power hungry overclocking to the enthusiasts as they are a relatively small number.
> The average consumer shouldn't be exposed to this nonsense.


Sure, forgot to link it.


I totally agree with you. The power consumption (the RTX 4090's crazy stupid consumption) is getting out of hand. I may be one of the few with a machine running an undervolted 2080 Ti @ 700 mV, running games smoothly for 150W less.


----------



## AR

Manaberry said:


> Looks like you are having Elon Musk's children on Saturday afternoon at the park.


Dunno if that's good or bad. I think - bad.


----------



## Pictus

Intel Core i9-13900K vs. AMD Ryzen 9 7950X at 125W and 65W | Club386


Prefer to keep temperature and power consumption down to lower levels this winter? Here's what happens when the best CPUs are scaled back.


www.club386.com


----------



## Manaberry

Just ordered the 13900K. Gotta see what it's capable of with macOS.


----------



## TonalDynamics

Manaberry said:


> Looks like you are having Elon Musk's children on Saturday afternoon at the park.





AR said:


> Dunno if that's good or bad. I think - bad.



"at least 1 upper case, numeric, and special character must be included"


^Error message received by Elon when naming his children


----------



## Pictus

[embedded media]


----------



## Hendrixon

So that's the direction? 360mm AIO struggles to keep up?
Lord have mercy


----------



## Technostica

Pictus said:


> Intel Core i9-13900K vs. AMD Ryzen 9 7950X at 125W and 65W | Club386
>
> Prefer to keep temperature and power consumption down to lower levels this winter? Here's what happens when the best CPUs are scaled back.
>
> www.club386.com


Based on that wonderful article, purely looking in terms of CPU performance and efficiency, I'd take the Intel chip and limit it to 90W or so.
I'm basing this partly on the gaming performance as I think that's the closest to DAW usage.

There are other metrics to look at, including platform features and longevity and overall cost, but overall, for performance and efficiency with non continuous 100% load workloads, Intel look better to me.
I'm amazed that this is the case considering the process node differences.

For non techies, be aware that both Intel and AMD are in effect over-clocking their chips at the factory, compared to how things were done in the past.
So ignore the headline figures and rely on articles such as above to get info on how to run these in DAWs.


----------



## Anthony

To be safe, I'm going with _this_ cooler ... it even comes with cool RGB!


----------



## Pictus

Hendrixon said:


> So that's the direction? 360mm AIO struggles to keep up?
> Lord have mercy



It is what Technostica wrote: they are factory overclocked.
Audio workloads are not like 3D rendering; tweak the CPU max wattage and
go with the biggest air cooler you can get.


There will be a new copper version of IceGiant


Next year there will be a new NH-D15 version with improved 140mm fan.





Roadmap of upcoming products


This roadmap allows you to stay up to date regarding the estimated time of arrival (ETA) of future Noctua products.




noctua.at


----------



## Hendrixon

Pictus said:


> Next year there will be a new NH-D15 version with improved 140mm fan.


Reading this and posting here from a PC running an NH-D14 since 2009 without a single hiccup!
Still works the same way, same original fans... what an amazing bit of engineering.

p.s. Yes, this machine is still on Win 7 and works flawlessly. I think it's even the same original install (system files still show creation dates from 2009). It sure is more stable than my 1.5 y/o Win 10 DAW box.


----------



## Pictus

Hendrixon said:


> Reading this and posting here from a pc running an NH-D14 since 2009 without a single hickup!
> Still works the same way, same original fans... what an amazing bit of engineering


RuLeZ!



Hendrixon said:


> p.s. Yes this machine is still on win 7 and works flawlessly, I think its even the same original install (I have system files still reading creation date from 2009). its sure is more stable then my 1.5 y/o win 10 DAW box


I would blame the bloatware and the anti-piracy stuff like PACE/iLok.


----------



## Pictus

DAWbench Suite - AMD 7000 and 13th Gen Intel results


https://gearspace.com/board/showpost.php?p=16229111&postcount=934


----------



## Pictus

Sauron won the Brazilian elections.
Now everything will rot as before.
What you see in the media is not true, they are Morgoth's servants.
I won't have free time for forums.
Bye bye.


----------

