
Would M.2 sample-library read speeds be bottlenecked even if the PC supports the drive?

Maker9662

Member
I was reading through a few threads on here and I'm confused.
My theory is that faster read speeds = faster sample loading speeds.
Therefore, if I buy the fastest internal M.2 drive that my 2022 13th-gen i7 PC supports, then sample libraries will load as fast as that drive.
Is this true?
Are there downsides to doing this?
 
There are many places where bottlenecks can form between the application, the OS and the device drivers, so no, it isn't guaranteed that the sampler will read at full hardware speed. Even without those software bottlenecks, the effective speed will be closer to the drive's random-read speed than its sequential-read speed.

M.2 (NVMe) will probably be a lot faster than SATA, but you may not perceive a big difference between drive models; those differences are more likely to show up in areas like video editing.
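If you want to see that gap on your own machine, here is a rough, stdlib-only Python sketch (the file path and sizes are placeholders, and the OS file cache will flatter the numbers unless the test file is much larger than RAM):

```python
# Rough, illustrative benchmark: sequential vs. random small-block reads.
# Assumption: 'testfile.bin' is a large existing file on the drive under test.
import os
import random
import time

PATH = "testfile.bin"          # placeholder path on the drive you want to test
CHUNK = 64 * 1024              # 64 KB, roughly the size of a sampler preload read
TOTAL = 512 * 1024 * 1024      # read 512 MB in each test

def sequential_read():
    read = 0
    start = time.perf_counter()
    with open(PATH, "rb", buffering=0) as f:
        while read < TOTAL:
            data = f.read(CHUNK)
            if not data:
                break
            read += len(data)
    return read / (time.perf_counter() - start)

def random_read():
    size = os.path.getsize(PATH)
    read = 0
    start = time.perf_counter()
    with open(PATH, "rb", buffering=0) as f:
        while read < TOTAL:
            f.seek(random.randrange(0, max(1, size - CHUNK)))
            read += len(f.read(CHUNK))
    return read / (time.perf_counter() - start)

print(f"sequential: {sequential_read() / 1e6:.0f} MB/s")
print(f"random 64K: {random_read() / 1e6:.0f} MB/s")
```

On most drives the second number comes out well below the headline sequential figure, which is the one printed on the box.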
 
Thanks, but how can I find out what the bottleneck speed is (to avoid overkill), and whether to go with a Samsung SATA SSD or a Samsung M.2 drive?
 
For a while, those were selling for basically the same price. If that's still the case, just go with the faster one. There really is no consensus; as already stated, it depends on a number of factors, for example the sample player being used and the streaming tech on that platform. Someone did a white paper on Kontakt 6, I believe, and the conclusion was that M.2 wasn't worth it because the sampler couldn't take advantage of the extra speed. Some of us who read the paper were immediately able to poke holes in that conclusion based on the testing methodology. If you want to research this further, search for "M.2 Kontakt" and you will probably find more info on the topic.

I personally went with an 8 TB NVMe drive and I am happy with that choice. If the price difference is significant, I'm sure a SATA SSD will work perfectly fine; many here are very happy with their SATA setups.
 
I plan to use Kontakt 6 onwards a lot.
 
I'd stick with NVMe not just for samples but for basically everything. It isn't just raw speed; latency and overhead are advantages too. SATA was designed for magnetic hard drives. SSDs were made to work with it, of course, but it wasn't designed with them and how they work in mind; NVMe was.

Basically, the way to look at it is that SATA is bottlenecking the flash and the controller on the SSD. The drive could likely perform faster and with better latency, but the interface just wasn't designed for that. So when feasible, get an NVMe drive.

These days I'd only get a SATA drive if there's a special reason, like upgrading an old computer, or needing more storage when you're out of M.2 slots. Otherwise I'd stick with NVMe, since it lets the fast flash on your SSD better flex its abilities.
 
Thank You!
 
I am all NVMe here - and I still don't get the "breakneck" speeds that are promised.

Certainly faster than an old rust spinner, but not "nanosecond" loading times.

Small files (which samples usually are) are notorious for speed hassles - reading, copying, etc. - whether NVMe or SATA SSD.

One NOTICEABLE speed-up tip I can offer is - if possible - avoid the native M.2 slots on your motherboard and install your sample NVMe on a PCIe adapter card. There the drive gets an uncluttered PCIe 3.0 x4 link that is essentially a pipeline directly to the CPU, rather than getting caught up in the mechanics of the native M.2 motherboard slots, which - depending on how many you have and where they are located - can share bandwidth with other devices and bring other technical hassles.

S
 
One NOTICEABLE speed-up tip I can offer is - if possible - avoid the native M.2 slots on your motherboard and install your sample NVMe on a PCIe adapter card.
Depends on the board and CPU. Older CPUs like the Intel i9-10900 only have 16 PCIe lanes on the CPU, and those lanes almost always go to the GPU slot. All other PCIe slots are wired to the PCH, just like the M.2 slots. The only exception is a board with a second GPU slot, in which case the lanes are split 8 and 8 between both slots when both are in use (which can slow down the GPU), or a VERY expensive board with a PCIe switch on it.

Newer Intel and AMD CPUs usually have 20 lanes on the CPU: 16 for the GPU and 4 for the primary NVMe drive. Those four are usually wired to one of the M.2 slots, not to a PCIe slot. All the PCIe slots and the rest of the M.2 slots are wired to the PCH.

You have to look at the manual for your board to find out what is wired to what.

None of this applies to workstation/server-class CPUs like Xeon and Threadripper; they have more PCIe lanes, and more of the system is wired directly to the CPU.
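For what it's worth, on a Linux box you can sanity-check the topology without opening the manual by resolving the drive's sysfs path; a minimal sketch (the drive name nvme0n1 is an assumption, and on Windows the board manual is still the surest route):

```python
# Illustrative sketch (Linux only): the /sys/block entry for an NVMe namespace
# is a symlink whose real path includes the PCIe root port the drive hangs off.
# The drive name 'nvme0n1' is an assumption; adjust for your system.
import os

dev = "nvme0n1"
print(os.path.realpath(f"/sys/block/{dev}"))
# Compare the root-port address in the printed path against the CPU vs. chipset
# port listing in your board manual (or the tree from 'lspci -tv') to see
# whether the drive is CPU-attached or sits behind the PCH.
```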
 
Would any of this affect a board bought in late 2022 (Asus Z790F Rog Strix board)?
 
Depends on the board and CPU. Older CPUs like the Intel i9-10900 only have 16 PCIe lanes on the CPU, and those lanes almost always go to the GPU slot.
True dat. But guess what happens if a user (like me) does not use a GPU (instead favoring the onboard GPU) - but still decides to stick a PCIe 3.0 x4 card in there?

Tasty speed boost across the board. And this was with my i5-10600K.

On my (prior) ASUS ProArt Z490-Creator 10G, both of the PCIe GPU slots bypass the Z490 chipset completely and go directly to the CPU - so my experience was both immediate and very noticeable once I got it going.

There was a LOT of trial and error getting the PCIe bifurcation set up for this board - but it was worth the effort.

And yes - 100% agree that this is motherboard dependent, but then again, all of today's modern NVMe drives need a matching motherboard anyway to get even the base advertised speed. As in, I would not try to jam a 2024 NVMe into the PCIe slot on a 2016 motherboard and expect miracles.

S
 
Small files (which samples usually are) are notorious for speed hassles - reading, copying, etc. - whether NVMe or SATA SSD.
This. It doesn't matter what the highest read speed is; you're never going to see that speed when loading 6 KB or 60 KB at a time as a Kontakt preload buffer.

David Kudell has an M2 Ultra, as fast a computer as is out there, and his Kontakt load times for a large project are the same between his internal Mac SSD (5 GB/s max read) and a 400 MB/s external USB SSD.
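To put rough numbers on that (purely back-of-envelope; the sample count and preload size below are hypothetical, not measured):

```python
# Back-of-envelope only; the sample count and preload size are hypothetical.
preload_kb = 60                      # per-sample preload buffer (Kontakt-style)
num_samples = 20_000                 # a large orchestral template, hypothetically
total_bytes = preload_kb * 1024 * num_samples

for name, mb_per_s in [("SATA SSD (~550 MB/s)", 550), ("NVMe (~5000 MB/s)", 5000)]:
    seconds = total_bytes / (mb_per_s * 1e6)
    print(f"{name}: {total_bytes / 1e9:.1f} GB of preload in ~{seconds:.1f} s")
# Both land at a few seconds or less of pure transfer time, yet real template
# loads take far longer -- so the drive's top speed isn't what you're waiting on.
```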
 
Kontakt load times for a large project are the same between his internal Mac SSD (5 GB/s max read) and a 400 MB/s external USB SSD.
Yep. Kontakt doesn't really seem to care whether it's a SATA SSD or a PCIe NVMe, as far as "how quickly can it load things" goes.

I've seen the EastWest OPUS player report up to 1.1 GB/s speeds, about double that of SATA. So it really depends on the player, I guess.
 
Would any of this affect a board bought in late 2022 (Asus Z790F Rog Strix board)?
Yes, Z790s have one M.2 slot connected to the CPU, the rest connected to the PCH. You can see it in the storage section of the spec page you linked where one is noted to be CPU and the rest to be chipset. As to which is which you need to check the manual to find M2_1 and that's the one that is connected to the CPU.

If you only have one drive, that should be the slot you use, as it does have dedicated bandwidth and thus the best performance.

That said, I don't find the performance of the slots on the PCH bad either. I have 3 M.2 drives in my system: The boot/apps/work drive is in the one connected to the CPU, two others that store samples are on the PCH. They both perform well.

True dat. But guess what happens if a user (like me) does not use a GPU (instead favoring the onboard GPU) - but still decides to stick an PCIe 3.0 x4 in there?
Yes, in the case you use the GPU slot, it'll connect it to the CPU directly. However in that case I'd actually consider getting a GPU for performance, as the onboard graphics shares memory bandwidth with the CPU, whereas dedicated GPUs have their own. It's not a deal breaker, lots of people use integrated graphics, but if you are getting in to optimizing performance, particularly with something that is RAM intensive like samples, it is a consideration.

David Kudell has an M2 Ultra, as fast a computer as is out there, and his Kontakt load times for a large project are the same between his internal Mac SSD (5GB/s max read) and a 400MB/s external USB SSD.
It's not so much load times that I think you'd see improvements on, but dropouts due to a buffer running out. nVME has better latency (and slightly lower overhead). It isn't something you really notice in normal use, but something like sample streaming is one of the few situations I could see it mattering. Particularly if you use a sampler that let's you turn down the amount cached in RAM (as Opus does) or under heavy load.


Either way I wouldn't worry about it particularly. Like if I had a system with a SATA SSD that held samples I wouldn't replace it with nVME just for, unless there was a problem. However if you are getting a new drive I would get nVME because it is the current standard and specifically designed for low latency solid state storage.
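For a sense of scale on the streaming side, a back-of-envelope sketch (the voice count and format below are hypothetical):

```python
# Hypothetical streaming load: numbers are illustrative, not measured.
voices = 500          # simultaneously sounding stereo voices
channels = 2
sample_rate = 48_000  # Hz
bytes_per_sample = 3  # 24-bit

sustained = voices * channels * sample_rate * bytes_per_sample
print(f"~{sustained / 1e6:.0f} MB/s of sustained streaming")
# ~144 MB/s -- well under even SATA's sequential ceiling. The catch is that it
# arrives as thousands of small, scattered reads per second, so per-request
# latency (where NVMe beats SATA/AHCI) matters more than headline throughput.
```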
 
Thank you for breaking this down for me. I didn't realise the chipset is not the CPU, but yes, I see how clear the allocation is now.
 
No problem. For a little more detail: the CPU is, of course, the center of the system and where all the processing is done; it's physically the thing you put in the socket. These days the CPU contains a lot of the system, including the integrated graphics, the memory controller and the PCIe controller.

The chipset, also called the PCH (Platform Controller Hub), is a chip located on the motherboard. In the case of gamer boards like the ASUS, it sits under a bunch of the gamer-y heat spreaders. It contains (almost) everything else in the system that the CPU doesn't: the NIC, SATA controller, onboard sound, USB controllers, etc. It also has a lot of PCIe lanes, which attach to PCIe and M.2 slots.

The chipset and CPU are then connected by another link, which on Intel platforms is called DMI. The DMI link from CPU to chipset on a Z790 is equivalent in bandwidth to 8 PCIe 4.0 lanes.

The upshot is that there is not enough bandwidth for everything on the chipset to go full blast all at once. Even just the NVMe drives could overload it, since there are three slots on the chipset at x4 each; that would be 12 lanes' worth. In actual usage it usually isn't a big deal, since it is rare for a lot of devices to be going full blast at the same time. But it is why people recommend putting your disk on the CPU-attached slot: that bandwidth is dedicated, so there is never any contention.
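Rough numbers on that, if it helps (approximate; protocol overhead beyond line encoding is ignored):

```python
# Approximate figures: PCIe 4.0 runs at 16 GT/s per lane with 128b/130b
# encoding, giving a bit under 2 GB/s of usable bandwidth per lane.
gbps_per_lane = 16 * (128 / 130) / 8   # ~1.97 GB/s per PCIe 4.0 lane

dmi_bw = 8 * gbps_per_lane             # Z790 DMI ~ x8 Gen4 equivalent
three_drives = 3 * 4 * gbps_per_lane   # three x4 Gen4 M.2 slots running flat out

print(f"DMI budget:           ~{dmi_bw:.1f} GB/s")
print(f"3 drives, worst case: ~{three_drives:.1f} GB/s")
# ~15.8 GB/s of uplink vs. ~23.6 GB/s of potential demand -- oversubscribed on
# paper, but in practice it's rare for everything to run flat out at once.
```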
 
Yes, if you use the GPU slot that way, it will connect directly to the CPU. In that case, though, I'd actually consider getting a GPU for performance, as the onboard graphics shares memory bandwidth with the CPU, whereas dedicated GPUs have their own.
When I weigh cost vs noise vs performance on a machine that only ever runs Studio One and Wavelab, the use of a standalone GPU makes little sense. I haven't needed one for 8 years now - and to be honest, the Presonus forum and many others continue to be populated with ongoing graphics issues, from both NVIDIA and AMD, due to the excessively bloated driver packages these cards insist on using.

Performance-wise - especially with my new ASUS Prime Z790 and 13th-gen i5-13600K - I'm not sure my on-screen graphics could go any faster, to be honest. This combo with Windows 10 is like lightning, and I have yet to see a single issue using the iGPU.

S
 
Ah okay, the chipset lives on the motherboard, and that's where all the disk controllers and misc items in Device Manager are likely hanging off - got it!

I really wish manufacturers wouldn't cheap out on us by designing hardware in a way that assumes we won't max out everything simultaneously.
 
I mean, they do it that way to save money. You can get chips and boards without those limits; you just don't choose to pay for them, because they cost a lot. For example, even if we go with AMD, the more economical option in the HEDT/workstation space:

The base processor you'd be talking about is a Threadripper 7960X. That's a nice CPU: 24 cores, boosting as high as 5.3 GHz. But it is $1,500 just for the CPU.

You then need a board that can support it, and the board is totally different, with different wiring. ASUS makes a nice one, the TRX50-SAGE. It supports a lot more PCIe straight off the CPU, though even then not everything. However, it's $900.

So just for the core components you are up to $2,400, and a dedicated GPU is required; there's no onboard graphics at that level. It makes for a much more expensive computer.

For all that, I'd bet you'd never notice the difference. The chance that the chipset becomes a bottleneck, even with multiple NVMe drives, really isn't all that high. Hence most of us just get regular desktop-class hardware. I spent entirely too much on my 13900K and motherboard, and the two of them together still came out to less than the TRX50-SAGE alone costs.
 