
M2 NVMe drives vs standard SSDs for heavy Sample Library use?

Looks like the SanDisk is the most affordable 4TB I have seen in a while at under $500. Performance-wise, how does that compare to the QVO and the WD Blue 3D? Overall throughput is less interesting to me since they all exceed what SATA III can do anyway, but latencies, seek times, IOPS, and so on; those could make a difference.
 
The SanDisk and the WD are basically the same drive with different badges, as the latter owns the former; there could be tiny differences.
Make sure you get the 3D versions; no glasses required! ;)

The QVO drives struggle with random reads, which I think is the key metric for libraries:

Added: look how slow they get when they're full, and you know you want to fill it. :)

[Attached chart: sustained-rr.png, sustained random-read performance]
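If you want to gauge this on your own drives, here is a rough Python sketch of a 4 KiB random-read latency probe. It is only a sanity check, not a proper benchmark (fio is the right tool for that), and the sample-file path is hypothetical:

import os, random, statistics, time

PATH = "/samples/test_library_file.ncw"  # hypothetical large sample file
BLOCK = 4096                             # 4 KiB, the classic random-read size
READS = 2000

fd = os.open(PATH, os.O_RDONLY)
size = os.fstat(fd).st_size
lat_us = []
for _ in range(READS):
    # Pick a random 4 KiB-aligned offset within the file.
    offset = random.randrange(0, size - BLOCK) & ~(BLOCK - 1)
    os.lseek(fd, offset, os.SEEK_SET)
    t0 = time.perf_counter()
    os.read(fd, BLOCK)
    lat_us.append((time.perf_counter() - t0) * 1e6)
os.close(fd)

lat_us.sort()
print(f"median: {statistics.median(lat_us):.0f} us, "
      f"p99: {lat_us[int(len(lat_us) * 0.99)]:.0f} us")

Caveat: the OS page cache will flatter the numbers unless the file is far larger than RAM, and a drive's worst case only shows up once it is nearly full, as the chart above illustrates.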
 
I have one of the Intel Optane drives. [snip]

Conclusion: I agree with @rgames and others who argue it doesn't make a practical difference.

Thanks John, useful info. May I ask which of the Optane drives it is? I was wondering whether the low-latency claims might be useful not only for streaming samples, but also for some other work I'm involved with.
 
It has zero relevance for an SSD used to host samples, which is very rarely written to.

I was also thinking this. Do test figures exist for the number of times one can READ from an SSD, or is that considered a non-issue? Which takes me back to my earlier question: does merely reading from a drive generate less heat than writing to it, and in what proportion?
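Endurance ratings (TBW, DWPD) are all about writes; vendors don't specify a read limit. If you want to see how little a sample drive actually gets written (or read, where the vendor exposes it), the SMART counters give hard numbers. A minimal Python sketch, assuming Linux with smartmontools installed; the device path is illustrative and attribute names vary by vendor:

import subprocess

# smartctl -A lists the drive's vendor attribute table.
out = subprocess.run(["smartctl", "-A", "/dev/sda"],  # device path illustrative
                     capture_output=True, text=True).stdout

for line in out.splitlines():
    # Many SATA SSDs report lifetime host writes (and sometimes reads)
    # via attributes like these; names differ between vendors.
    if "Total_LBAs_Written" in line or "Total_LBAs_Read" in line:
        print(line)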
 
So some people still say they are getting benefits from NVMe, and others not.

I wonder if anyone is using one of these, and what their experience of it is?

 
Trying to follow along with this thread, but I'm newer to sample libraries, coming from a video background on my iMac Pro. I've recently invested quite a bit in them and am now looking to house everything.

I was looking to get an OWC ThunderBlade, which is rated at:
  • Up to 2800MB/s read & 2450MB/s write
Based on what I seem to be reading here, is this overkill?


Also, I was thinking of spreading the libraries out by type (strings/winds/brass/other) and getting four different 2-4TB drives. This seems to be common practice.

Would others doing this professionally recommend this as the way to go?

Thanks so much for any advice!
 
We have 18TB of SSDs configured in RAID0. All are 2TB Samsung 860 SSDs, run through a dedicated RAID controller in our studio Dell R720 file server (Windows Server 2012R2). They are reliable and robust (never failed) and provide around 6500MB/sec read and 4000MB/sec write. This is where our sample libraries (ours and third parties' such as 8Dio, Spitfire, CineSamples, etc.) are stored. It all appears as a single drive.

On my own Windows workstation PC I have a mix of NVMe drives (Samsung 970 and AddLink) and a couple of Samsung 860 PRO drives (2TB each). Of these, the AddLink NVMe drives give the best performance, around 4500MB/sec read and 3800MB/sec write. The SATA SSDs max out around 550MB/sec due to the SATA III interface limit. I use these for video editing and for storing my own sample libraries that I use offline (i.e., when the studio servers are offline or down for maintenance).
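That SATA ceiling falls straight out of the interface math, for anyone curious; a quick back-of-envelope in Python using the nominal SATA III figures:

# SATA III signals at 6 Gb/s, but 8b/10b encoding means only 8 of every
# 10 bits carry payload.
line_rate_gb_s = 6.0
payload_gb_s = line_rate_gb_s * 8 / 10   # 4.8 Gb/s of actual data
ceiling_mb_s = payload_gb_s * 1000 / 8   # -> 600 MB/s
print(f"theoretical ceiling: {ceiling_mb_s:.0f} MB/s")
# Command and framing overhead trims roughly another 10%, hence the
# ~550 MB/s real-world limit SATA SSDs top out at.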
In my experience OWC drives are reliable but underperform, so they're not a good investment compared to Samsung and AddLink. I've personally tried others including OWC, WD, Intel, etc., but AddLink and Samsung seem to be the best value and the most reliable for the money.

Hope that helps.
 
Mushkin Helix NVMe PCIe 3.0 x4 drives are 80 bucks each, no sale. They were $50 for 1TB on Black Friday. I picked up six of them.

As far as performance goes, there's no difference between them and my other NVMe drives like the Samsung 960 or the BPXs.

I'm going to be using an ASRock B550 for audio, which is only PCIe 3.0.
No need for speeds that audio apps will never take advantage of anyway.

PCIe 3.0 x4 devices will be peanuts as soon as Intel goes to PCIe 4.0.
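The lane math backs that up; per-direction bandwidth for an x4 NVMe link works out roughly as follows:

# PCIe 3.0 runs 8 GT/s per lane with 128b/130b encoding; PCIe 4.0
# doubles the transfer rate with the same encoding.
def pcie_gb_s(gt_s, lanes=4):
    return gt_s * lanes * (128 / 130) / 8   # GT/s -> GB/s per direction

print(f"PCIe 3.0 x4: {pcie_gb_s(8):.2f} GB/s")    # ~3.94 GB/s
print(f"PCIe 4.0 x4: {pcie_gb_s(16):.2f} GB/s")   # ~7.88 GB/s

Roughly 3.9 GB/s on gen 3, which is already far beyond what sample streaming demands.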

Fine by me.
 

How are you accessing the sample libraries from the server? How does that work licensing-wise for Kontakt and the rest? I would *LOVE* to set something like that up here, but I'm not sure how licensing for each machine would be done.
Regarding accessing the server: you mention read performance of 4500MB/sec, well above what 10GbE can carry.

What interconnects are you using? (Or is it just 10GbE, and the drives are capable of those figures internally on the server?)
 
So the fileserver is set up as a network drive and mapped as such by any of the PCs and VEP servers, which run Windows Server 2012R2 or Windows 10. As far as any VEP server is concerned, the fileserver drive is just another drive. Nothing special is needed for licensing.
The fileserver drive is comprised of a bunch of Samsung 860 2TB SSDs in RAID0, managed by a Dell RAID controller; it provides circa 6500MB/sec read and 5600MB/sec write and is networked via 10Gb/sec SFP+. DAW machines are networked with 1Gb Ethernet, which is plenty for audio transmission and VEP. Each server needs an eLicenser, the VEP software, and Windows Server 2008 or 2012 (it probably works with 2016 or 2019, but those are overkill and expensive). Each of our Dell servers has dual CPUs (24 threads total), at least 192GB RAM, and a small 120GB SSD or HDD to boot Windows and run any other software that server needs. VEP doesn't use the local server's drive for storage, as all samples are pulled from the main fileserver.
A point to note: although the servers are networked with 10GbE, the bottleneck becomes Kontakt etc., which often restricts sample loading speeds, probably due to the time needed for sample decompression. But the network definitely helps to speed up initial load times; we've seen spikes of up to 6Gb/sec when VEP first loads. A typical VEP server loading its 192GB RAM to 95% takes about 45 minutes on 10GbE, or around 2 to 2.5 hours on 1GbE. Around 5GB of RAM is needed to run Windows.
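Those load times are themselves consistent with a decompression bottleneck; a quick sanity check on the numbers quoted above:

# Filling 95% of 192 GB of RAM in ~45 minutes over 10GbE:
loaded_gb = 192 * 0.95            # ~182 GB actually loaded
seconds = 45 * 60
rate_mb_s = loaded_gb * 1000 / seconds
print(f"effective load rate: {rate_mb_s:.0f} MB/s")   # ~68 MB/s
# 10GbE tops out around 1250 MB/s, so at ~68 MB/s sustained the wire is
# mostly idle; brief multi-Gb/sec spikes aside, Kontakt's sample
# decompression is what paces the load.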
Also, we built our server farm over three years, initially starting with a QNAP unit (easy setup but slow network speeds), then gradually moving to Dell servers, with trial and error, as budget allowed.

Hope that helps
 
For sure it does.
Thank you.
VEP is the part I was missing. :)

I might get a little more time later today for a decent reply. Interesting to hear your experience with Kontakt loading; we have seen similar.
 