Xeon CPUs vs 7700k

Phryq

So I'm planning to build a travel computer.

This board is ideal, as it supports 64GB of RAM and is tiny:

https://asrockrack.com/general/productdetail.asp?Model=EPC612D4I#CPU

but it only supports E5 CPUs.

So I'm wondering how E5s or E3s (I'm thinking the E3-1275 v6) compare to a 7700k. In all the benchmarks online, the 7700k seems to win by far... so how do they compare for audio? Even close?

In general, I'm seeing that Xeons have lower clocks but more cache (and sometimes more cores).

So running a big orchestral template (imagine Berlin Strings, Woods, and Brass) with a few verbs / EQ / compressors, how do they compare?


One thing I don't understand... everyone says single-core performance is key. But my laptop can run any single synth by itself - the only problem I have is with many tracks together. So since Reaper spreads the load across tracks, shouldn't more cores allow me more tracks? In that case a low-watt, high-core Xeon like a Xeon-D should be ideal, no?

Is the high-clock thing only for ultra-low-latency / live performance? I don't mind running a 1024 or 2048 buffer (right now I'm often at 8192; waiting a second before playback isn't an issue).
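
For reference, latency is just buffer size over sample rate, so even big buffers cost well under a second (a quick Python check, assuming a 44.1 kHz project):

SAMPLE_RATE = 44100  # samples per second; assuming a 44.1 kHz project

for buffer_size in (256, 1024, 2048, 8192):
    latency_ms = buffer_size / SAMPLE_RATE * 1000
    print(buffer_size, "samples ->", round(latency_ms, 1), "ms")

# 256 samples -> 5.8 ms
# 1024 samples -> 23.2 ms
# 2048 samples -> 46.4 ms
# 8192 samples -> 185.8 ms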
 
I was looking at that, but it only has 2 RAM slots, so a maximum of 32GB. I'd ideally like 64. I could go for their Z270G, which allows 4 slots, but then it's slightly bigger.

Will the Z270i support a 7700k?

Maybe I should go with the registry tweak and try my 2x Samsung 960 Pros (is RAID 0 a good idea? I was thinking not).
 
I'm using a Xeon for audio. It's the most stable system I've ever owned, Mac or PC. The E3 series Xeons in particular are capable of clock speeds up to 4.0 GHz, comparable to the 7700k.

If you plan to overclock, the 7700k has a definite advantage. On the other hand, the ability of Xeons to take ECC RAM is an asset, not a liability (and it is just an ability with the E3 Xeons--you don't have to use ECC). Bit flips in RAM are more common than you might think.
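
To make the bit-flip point concrete, here's a toy Python sketch (nothing audio-specific, values made up) of what one flipped bit can do to a 32-bit float, e.g. a sample or a gain value:

import struct

value = 0.5
raw = struct.unpack("<I", struct.pack("<f", value))[0]   # the float's bits as they sit in RAM
flipped = raw ^ (1 << 23)                                # flip one bit (the lowest exponent bit)
corrupted = struct.unpack("<f", struct.pack("<I", flipped))[0]
print(value, "->", corrupted)                            # 0.5 -> 1.0: one bit, and the value doubles (+6 dB)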
 
On the other hand, the ability of Xeons to take ECC RAM is an asset, not a liability

Depends on what the system is going to be used for. Servers and enterprise, for sure; ECC RAM is great in those cases. Audio production, not really that much... https://blog.codinghorror.com/to-ecc-or-not-to-ecc/

Scroll down a bit to the "But who gives a damn what I think. What does the data say?" part. :)


ECC is not really necessary for our line of work.
 
It is if you have a Mac Pro, since that's what they take. :)

Seriously, though, what kind of work is it necessary for? That's a real question, not an argument! I always just assumed that the error correction is to prevent instability and crashes, and that was why Apple uses it.
 
Depends on what the system is going to be used for. Servers and enterprise, for sure; ECC RAM is great in those cases. Audio production, not really that much... https://blog.codinghorror.com/to-ecc-or-not-to-ecc/

Scroll down a bit to the "But who gives a damn what I think. What does the data say?" part. :)


ECC is not really necessary for our line of work.

What constitutes necessary? Caring about the integrity of your data? Ideally, all computer systems that create and store important data should have ECC RAM.

Data corruption increases with the density of RAM, and RAM densities are getting higher all the time.

The fact that Intel only offers ECC memory support on high-end platforms is more about profitable segmentation of its products than anything else. AMD has long supported ECC RAM on their consumer CPUs, as they should.
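
For the curious, the idea behind ECC is storing extra parity bits that let the memory controller locate and fix a flipped bit. A minimal sketch using a toy Hamming(7,4) code (real ECC DIMMs apply the same principle as SECDED over 64-bit words):

# Toy Hamming(7,4) code: 4 data bits + 3 parity bits.

def encode(d1, d2, d3, d4):
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]       # codeword positions 1..7

def correct(c):
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]            # parity check over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]            # parity check over positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]            # parity check over positions 4,5,6,7
    pos = s1 + 2 * s2 + 4 * s3                # the syndrome spells out the error position
    if pos:
        c[pos - 1] ^= 1                       # flip the bad bit back
    return c

word = encode(1, 0, 1, 1)
word[4] ^= 1                                  # simulate a cosmic-ray bit flip
assert correct(word) == encode(1, 0, 1, 1)    # error located and fixed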
 
It is if you have a Mac Pro, since that's what they take. :)

Seriously, though, what kind of work is it necessary for? That's a real question, not an argument! I always just assumed that the error correction is to prevent instability and crashes, and that was why Apple uses it.

Data servers, stuff like that. Anything that's on and doing something 24/7 and needs to be error-free.
 
Data corruption increases with the density of RAM, and RAM densities are getting higher all the time.

Looks like you didn't read the parts of that article I pointed out.

Soft issues with RAM are not a real issue anymore; hard issues (as in hardware failure) occur a LOT more often than those.
 
Looks like you didn't read the parts of that article I pointed out.

Soft issues with RAM are not a real issue anymore; hard issues (as in hardware failure) occur a LOT more often than those.

I enjoy Jeff Atwood's posts as much as anyone, but:

  • He admits that the cost difference between ECC and non-ECC RAM is minimal these days. So why not use it?
  • He uses Google's early, non-ECC servers as evidence that you don't really need ECC. What he doesn't mention (and might not know) is that an early Google engineer believed that not using ECC RAM was one of the biggest mistakes they ever made, and one they spent a lot of time fixing.
  • There is nothing to back up Jeff's assertion that "soft issues with RAM are not a real issue anymore." Yes, RAM is more reliable these days, but densities are higher, and the chance that an energetic proton or neutron causes a soft error in your RAM hasn't gone down.
  • Modern file systems (ZFS, btrfs) are getting better at detecting bit rot, but they can't do this as reliably without ECC RAM (see the checksum sketch below).
  • An interesting post discussing Jeff's ECC article from a Google employee is here.
If you don't care whether you can still open a file or use a particular sample more than five years from now, then you might not need ECC RAM. But these days, it's relatively cheap insurance.
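
On the ZFS point: you can get the detection half yourself by checksumming your archive. A minimal sketch (hypothetical paths; note the catch that motivates ECC: if RAM corrupts a file's bytes before the hash is computed, you faithfully checksum garbage):

import hashlib, json, pathlib

def snapshot(folder, manifest="checksums.json"):
    # Record a SHA-256 per file so later corruption is detectable.
    sums = {str(p): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in pathlib.Path(folder).rglob("*") if p.is_file()}
    pathlib.Path(manifest).write_text(json.dumps(sums, indent=2))

def verify(manifest="checksums.json"):
    for name, digest in json.loads(pathlib.Path(manifest).read_text()).items():
        if hashlib.sha256(pathlib.Path(name).read_bytes()).hexdigest() != digest:
            print("bit rot (or modification):", name)

snapshot("samples/")   # run once, right after archiving the library
verify()               # run again years later, before you need the files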
 
Error correction starts with a large CPU cache.
When data is retrieved from cache, it saves a trip to RAM.
Extra speed from high efficiency.
Same goes for ECC.
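
To illustrate the cache point (a rough demo, assuming numpy is installed): moving the same bytes is far faster when the reads follow memory order and hit cache.

import time
import numpy as np

a = np.random.rand(6000, 6000)       # ~288 MB, far bigger than any CPU cache

t = time.perf_counter()
a.copy()                             # reads memory in order: cache-friendly
print("in order:  ", time.perf_counter() - t)

t = time.perf_counter()
a.T.copy()                           # strided reads: constant cache misses
print("transposed:", time.perf_counter() - t)
# Same bytes moved, but the transposed copy is typically several times slower.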

However, whether our software is highly optimized or not is the question.
We have plugins that require 4.4 GHz CPUs and a one-size-fits-all OS.
I run equally impressive plugins on 4 cores of a 400 MHz DSP chip that uses a few lines of code from Windows for compatibility.

Nothing wrong with high speeds to cope with inefficient code.
It's not the developers' fault.

Imagine if they banded together to support and create an OS just for us.
No background processes, no permissions from the CPU for every little instruction, regardless of whether it's repeated frequently.

Before long we'll see no more speed increases, as heat is the problem, but more cores to compensate...
I'm already seeing that with AMD.
If my number-crunching DSPs can operate at such low speeds, there's no reason our audio apps can't run 10 times more efficiently with a dedicated OS.

What a future that would be.
 
If you don't care whether you can still open a file or use a particular sample more than five years from now, then you might not need ECC RAM.

Yes, I don't care. Why? Because in my past two computers and my current one, over 10 years, I've never had a file access issue caused by soft or hard RAM faults. Honestly.

There's a much greater chance of a hard drive breaking down and not cooperating.
 
Yes, I always thought it would be amazing if Reaper made an OS, "ReaOS", that could only run Reaper and plugins, but it would need Windows VST compatibility.

chimuelo, what plugins are you talking about?

Back to CPUs; I really don't care about ECC ram, or losing random files in 5 years.

However, I still don't know about Xeon vs Core, or Cores vs Clocks... I'm seeing arguments from all sides.

Logically it seems to me that more cores would be better. E.g., my crappy 47W Haswell can run any synth, or any single track, at only 2.7 GHz. Therefore, an 8- or 16-core CPU at 2.7 GHz should also be able to run any synth/track, and 2 or 4 times as many of them, right? If my DAW (Reaper) is able to split the load among cores, there should be no problem.

Most likely I'm not understanding something; this is just my layman logic.
 
It's like this: a 2.7 GHz core will fall over much sooner than a 4.4 GHz core (especially with non-multicore-compatible plugins like Reaktor or Falcon), meaning a faster core lets you load more stuff per track before it gets overloaded. When you load multiple plugins in series, they are always processed on the same core in the DAW (serial processing like that cannot be parallelized, since the input of the second plugin depends on the output of the first, which is logical).

So, for a great DAW, core frequency is just as important as number of cores, and in both cases the same adage works: the more, the merrier!
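
To illustrate (toy Python, with stand-in plugin functions, not any real plugin API):

def synth(buf):      return [0.1] * len(buf)        # stand-in for Kontakt etc.
def eq(buf):         return [s * 0.9 for s in buf]
def compressor(buf): return [min(s, 0.5) for s in buf]

CHAIN = [synth, eq, compressor]

def process_block(buf):
    for plugin in CHAIN:     # strictly in order: each needs the previous output
        buf = plugin(buf)
    return buf

out = process_block([0.0] * 512)
# At 44.1 kHz the deadline is 512 / 44100 ~ 11.6 ms for the WHOLE chain, on
# one core. A higher clock raises how much chain fits inside that window.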
 
When you load multiple plugins in series, they are always processed on the same core in the DAW (serial processing like that cannot be parallelized, since the input of the second plugin depends on the output of the first, which is logical).
I'm surprised to hear you say this. Isn't this exactly what Reaper's anticipative FX does? Anything not record-armed (and which therefore has no RT requirement) can be slightly pre-rendered. This allows for parallelism on the same FX chain, with each FX working on a different block in time. (At least for any non-master-bus FX chain.)
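
To illustrate what I mean (a toy sketch of my understanding of anticipative FX, not Reaper's actual code): the chain stays serial per block, but different stages can work on different blocks at once.

import queue, threading

def gain(buf): return [s * 0.5 for s in buf]
def clip(buf): return [max(-1.0, min(1.0, s)) for s in buf]

def stage(fx, inbox, outbox):
    while True:
        buf = inbox.get()
        if buf is None:            # shutdown marker
            outbox.put(None)
            return
        outbox.put(fx(buf))

q0, q1, q2 = queue.Queue(), queue.Queue(), queue.Queue()
threading.Thread(target=stage, args=(gain, q0, q1)).start()
threading.Thread(target=stage, args=(clip, q1, q2)).start()

for k in range(4):                 # pre-feed blocks ahead of the play cursor
    q0.put([float(k)] * 512)
q0.put(None)

while (buf := q2.get()) is not None:
    print("rendered block, first sample:", buf[0])
# Each block still goes gain -> clip in order, but gain can chew on block k
# while clip works on block k-1. (CPython's GIL makes this a structural
# sketch rather than a benchmark; a DAW does this with native threads.)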
 
I didn't know that ^^. I've always kept it off, because I heard somewhere that it was somehow buggy with samples...

But anyhow, high clock is still only important if your longest 'serial' is too much for a single core, right?

E.g., I have 100 tracks. Each track has Kontakt and an EQ. They all go to a single reverb/compressor send. On the master track is a multi-band compressor, EQ and limiter.

Therefore the 'serial' length would be Kontakt-EQ-Verb-Comp-MultiComp-EQ-Limiter. And one core has to be able to handle that.

However, for the 100 instances of Kontakt+EQ, more cores would be of more help, right? As long as my single core can handle that 'serial length', then more cores will allow more tracks?

Disclaimer. I wouldn't have a chain like that. I would master separately.
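
To put toy numbers on my logic (all per-plugin costs made up for illustration):

CORES        = 8
CORE_BUDGET  = 1.0     # one core's DSP capacity per buffer (normalized)
TRACK_CHAIN  = 0.30    # Kontakt + EQ per track
MASTER_CHAIN = 0.55    # verb + comp + multiband + EQ + limiter, serialized

# Constraint 1: the heaviest serial chain must fit on ONE core.
assert MASTER_CHAIN <= CORE_BUDGET       # this is what clock speed buys you

# Constraint 2: the total load must fit across ALL cores.
spare = CORES * CORE_BUDGET - MASTER_CHAIN
print("max tracks:", int(spare / TRACK_CHAIN))   # 24 -> more cores, more tracks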
 
I didn't know that ^^. I've always kept it off, because I heard somewhere that it was somehow buggy with samples...
I don't think I've run into any real issues with it. I do leave it on myself. But yes, when I find I'm troubleshooting something wonky it's the first thing I try disabling. :)

Therefore the 'serial' length would be Kontakt-EQ-Verb-Comp-MultiComp-EQ-Limiter. And one core has to be able to handle that.
Well, again, not necessarily if you have anticipative FX enabled. If none of the tracks are record-armed, then Kontakt, Track EQ, Verb, and compressor can run in parallel across multiple cores (where they are each processing slightly different moments in time). The master bus FX chain is exempt from this parallelism regardless of whether anything is record-armed (for implementation reasons I can't claim to understand), so those would be serialized.

The moment you arm a track for recording, then that track's entire routing chain enters the realtime path and is all serialized. Your other (non-armed) tracks can be processed in parallel, but once they enter the routing path of the realtime chain they become serialized. For example, another track that sends to a reverb bus: the track FX can be processed in parallel, but once it sends to the reverb bus it can no longer be parallelized.

So, in the end, with a DAW like Reaper that can make pretty effective use of multiple processors, your main constraint in the cores-vs-clocks decision will be your own realtime processing requirements. This can include recording on tracks with crazy FX chains, or weighty single-threaded FX like, say, Zebra.
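
To make those rules concrete, here's a toy classifier (my reading of the behavior, not Reaper's actual scheduler): a track is pulled into the serialized realtime path if it's record-armed, downstream of an armed track, or the master bus.

def realtime_set(sends, armed):
    rt = set(armed) | {"master"}      # master bus is always serialized
    changed = True
    while changed:                    # an armed track drags its downstream path along
        changed = False
        for src, dst in sends:
            if src in rt and dst not in rt:
                rt.add(dst)
                changed = True
    return rt

sends = [("strings", "verb_bus"), ("brass", "verb_bus"),
         ("verb_bus", "master"), ("strings", "master"), ("brass", "master")]
print(sorted(realtime_set(sends, armed=["brass"])))
# ['brass', 'master', 'verb_bus'] -- strings can still render ahead in parallel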
 
I'm surprised to hear you say this. Isn't this exactly what Reaper's anticipative FX does? Anything not record-armed (and which therefore has no RT requirement) can be slightly pre-rendered. This allows for parallelism on the same FX chain, with each FX working on a different block in time. (At least for any non-master-bus FX chain.)

Anticipative processing processes the whole track in advance, not each individual plugin on the track. At least AFAIK.

You cannot process the last plugin in the chain separately from the first plugin in the chain; the input to the last plugin depends on everything coming before it. Basic causality.
 