# SSD with hardware RAID



## peksi (Aug 30, 2013)

Anyone have experience running multiple SSDs in (hardware) RAID? I need to build 2 terabytes of storage and I want to make it perform well.

I've spent a few hours studying whether there is any point in SSD RAID and I am slightly leaning towards a big yes. In tests it seems to give the best advantage when large amounts of data are being read or written. On the other hand, RAID arrays are at their worst when many small transactions are done rapidly. At least this is how I understood it. I am pretty sure that loading gigabytes of samples into memory is very much a raw bulk-transfer kind of operation, which would justify the RAID setup.
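One way to sanity-check that assumption is to time both access patterns on the actual drive. A rough Python sketch (the scratch file, chunk size and file size are placeholders I chose for the demo; note the OS page cache will inflate the numbers, which is why real benchmark tools use unbuffered I/O):

```python
import os
import random
import tempfile
import time

CHUNK = 256 * 1024             # 256 kB per read, a sampler-ish chunk size
FILE_SIZE = 64 * 1024 * 1024   # 64 MB scratch file, kept small for a demo

def make_scratch_file() -> str:
    """Create a scratch file filled with incompressible data."""
    fd, path = tempfile.mkstemp()
    with os.fdopen(fd, "wb") as f:
        f.write(os.urandom(FILE_SIZE))
    return path

def sequential_read(path: str) -> float:
    """Read the whole file front to back; returns MB/s."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(CHUNK):
            pass
    return FILE_SIZE / (time.perf_counter() - start) / 1e6

def random_read(path: str, reads: int = 128) -> float:
    """Read `reads` chunks from random offsets; returns MB/s."""
    offsets = [random.randrange(0, FILE_SIZE - CHUNK) for _ in range(reads)]
    start = time.perf_counter()
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(CHUNK)
    return reads * CHUNK / (time.perf_counter() - start) / 1e6

if __name__ == "__main__":
    path = make_scratch_file()
    try:
        print(f"sequential: {sequential_read(path):.1f} MB/s")
        print(f"random:     {random_read(path):.1f} MB/s")
    finally:
        os.remove(path)
```

If sequential throughput dwarfs random throughput on your drive, bulk loading benefits from RAID bandwidth; if they are close (as on SSDs), the case for RAID is weaker.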

My choices would be:
1. A 2 TB SSD, but they are pretty expensive and I only get a 500MB/s transfer rate due to the SATA3 bottleneck.
2. A 2 TB PCIe SSD drive from OCZ with a 2000MB/s transfer rate, but that would cost even more.
3. A PCIe hardware RAID card with 3 x 750GB SSDs, giving up to 1500MB/s read speed in RAID0. RAID0 is a little risky because a single disk failure breaks the whole array, but since SSDs have no moving parts, isn't it quite safe? Maybe a 2TB conventional HDD for nightly backups would patch it up?

There are controllers from Adaptec, LSI and Highpoint, the latter being the most cost-effective (around 300€) with pretty much equal performance.

What do you think? Which would make the most sense?


----------



## JohnG (Sep 2, 2013)

On a PC I'm using a PCIe SSD card from OCZ (not the same one you mentioned). Are you Mac or PC?


----------



## rgames (Sep 2, 2013)

I don't have an answer, but I am suspicious of whether RAID offers much for sample streaming. A huge part of the advantage of SSDs is the extremely low seek time, and that latency is not a function of bandwidth.

So before you spend a ton of money trying to get 2000 MB/s I'd check to see if anyone has been able to show that it really matters for the sample streaming application. I've been watching out for that for a while and haven't seen anything.

In my own testing, a 400 MB/s SSD and a 550 MB/s SSD gave the same performance in terms of number of streaming voices as would be needed in an extreme orchestral setup (huge number of modulating 16th note runs across many instruments at 120 BPM). However, the difference between 550 and 400 might not be enough to really show the difference - 2000 vs. 400 might show a difference.

One thing to pay attention to is the SATA controller - I saw a significant increase in read speeds using the native Intel controller on the newer chipsets as compared to add-on boards via PCI or PCIe. Again, though, in terms of actual use in an orchestral setup I didn't see a difference.

rgames


----------



## Nick Batzdorf (Sep 2, 2013)

I would use JBOD - just a bunch of drives - for sample streaming. You might be able to use the additional bandwidth RAID gives you, but you get more storage space without using space for redundancy. And you'll still get the bandwidth if your libraries are spread across multiple drives.


----------



## peksi (Sep 3, 2013)

thanks for your input.

johng: i am using PC. i would love to get the bigger OCZ PCIe as well but my need is 2T, which makes the price very high. i also have a small 120G OCZ PCIe in the machine and it is crazy fast, love it.

rgames: that is just what i am worried about too. although i hardly do audio tracks, my biggest pain is the loading times of the instruments. my biggest project instrument set took over 24G of ram and i could go have lunch while waiting for it to load. i was assuming that is more of a bulk transfer with fewer seeks. what do you think?

nick: jbod would be simple and may go for that. i've got 2 x sata 3 interfaces and they would just take 2 x 1T drives.


----------



## JohnG (Sep 3, 2013)

peksi @ 3rd September 2013 said:


> johng: i am using PC. i would love to get the bigger OCZ PCIe as well but my need is 2T, which makes the price very high. i also have a small 120G OCZ PCIe in the machine and it is crazy fast, love it.



I don't think this is necessary for all samples, just very heavy stuff like HW strings with multiple mic positions. I do use it for LASS as well and of course it works very well, but only the most demanding libraries require this level of speed.

I do have SSDs for (almost) all samples nowadays, but I don't use PCIe for everything.


----------



## chimuelo (Sep 3, 2013)

I have already tested mSATA SSDs using a single PCI 2.0 lane, whose transfer rate is 250MB/s in a single direction.

It works really well due to its .01 msec PCI-e latency.
But will be buying the ADATA M.2 NGFF 512GB PCI-e SSD for transfers of 1.8GBps and 200k reads as soon as the new PCI 3.0 Z87 boards come out in the next month.
The latency and top end speed will bypass any SATA III bottleneck.
PCI-e brings us to where we want to be with sample streaming regardless of which developer we use. PLAY and VSL are improved immensely; Kontakt already works well at low latency, as its compressed audio files are pushed through fast enough to avoid a slow top-end speed.
Don't fall for the Asus mPCI-e Combo II cards on their Maximus boards; the PCI 2.0 protocol and single-lane connection is no better than SATA III, with the exception of its fast .01 msec latency. If you do use it, it is fine for Kontakt only.
But I will be building with ADATA SX 2000 NGFF and no RAID as 1.8GBps is 3.5 times faster than SATA III SSDs.
Due to my 1U ATX chassis I will use the ribbon extender, since the other horizontal PCI slot has my XITE-1 connector card. The Supermicro chassis holds 2 x half-length or full-length cards when using a riser card, or 2 x ribbon extenders.

Plus a little 3 mil thermal conductor tape.

BTW, Bidule, even as a VST in a DAW, will load my 23GB 4x faster than a DAW does. In standalone I order a tipper at the bar, come back and it's ready.


----------



## peksi (Sep 3, 2013)

- 3 x Samsung 840 EVO SSD 750 GB 2.5" SATA3 = 1500€
- HW RAID controller = 300€

That totals 2.25T, speed possibly 1500MB/s, 1800€


- OCZ Z-Drive Performance R2 p88 PCI Express SSD 2 TB

This is 2T, speed guaranteed at 1400MB/s, 1800€. The price is not as high as I remembered; I think this is better.


----------



## Udo (Sep 6, 2013)

Nick Batzdorf @ Tue Sep 03 said:


> I would use JBOD - just a bunch of drives - for sample streaming. You might be able to use the additional bandwidth RAID gives you, but you get more storage space without using space for redundancy. And you'll still get the bandwidth if your libraries are spread across multiple drives.


RAID 0 has no redundancy and is faster than JBOD.


----------



## peksi (Sep 8, 2013)

SSD drives have no moving parts, so I assume they are more reliable than conventional hard drives. Does someone have any numbers on that?


----------



## chimuelo (Sep 8, 2013)

I have a dead Kingston from 4 years ago. I also have 6 x Raptors and 3 x Seagate Momentus Hybrids, all in working order.
Newer SSDs, from 2011 forward, are much better now.

Not to be an annoying guy, but RAID arrays with 2.5GBps transfer rates on the new KingSpec sound super fast; in all honesty, though, the specs we want to see are huge improvements in random reads and IOPS. 100k and above is pretty good, but under that is not worth the time.
My Vertex 4s and 840 Pros work just fine right now at 85-90k IOPS, but I don't consider an upgrade valid unless the numbers scale in both sequential reads and random reads.

We are seeing old tech repackaged, like the Asus RAID PCI-e card, this one with impressive read scores. But looking at the devices used we see stacked 120GB SSDs, and their configuration actually lowers random IOPS.
Expect to see this cycle of dumping off old tech until the new, faster NGFF M.2 protocol gets going.

http://www.thessdreview.com/our-reviews ... irst-look/

Griffin based drives are coming out where a single storage device outperforms hardware RAID configurations.
Read some articles and see if they are your cup of Tea.

http://research.microsoft.com/apps/pubs/?id=115352


----------



## Cat (Sep 9, 2013)

So for streaming samples (from PLAY/Hollywood Series, Kontakt/LASS, Albion, Sable, etc.), what is recommended: a RAID-0 of 2 x SSDs (like 2 x Samsung 840) or 2 separate, independent SSDs loaded with different libraries?

On my Intel x79 board (Asus Sabertooth x79) the CrystalDiskMark benchmark shows double the sequential read/write speed for the RAID-0. But it seems that there are other factors that are more important for sample streaming (like seek time, etc.) that I don't know how to interpret.

Fail-proofing is not an issue as I have all the SSD disks' data duplicated on a standard hard disk (backup). 

Also I have read on this forum about people using PCI-Express Raid Controllers that can maybe run a Raid-0 of 4, or even 6 SSDs. Can anyone recommend a good one that would not break the bank?


----------



## chimuelo (Sep 9, 2013)

See if these specs are worth the price.

http://www.thessdreview.com/our-reviews ... tb-review/

As far as what matters most to sample streaming, it is similar to RAM:
low latency, which comes naturally from the PCI-e protocol bypassing SATA III, and sequential read speed once targets are acquired. But random reads are what happen as we press down controllers, which go to RAM, then to the storage subsystem.
With random reads that are 4x faster than my speedy SATA III Vertex 4s or 840 Pros, the Micron above seems to raise the bar, and the competition is starting to pick up.


----------



## JohnG (Sep 9, 2013)

I wouldn't use RAID but would instead just use a bunch of drives -- except that I sort of already do with a PCIe card. This is already RAID-ed for you by the manufacturer. Once you have SSDs I think it's the SATA bus that becomes the bottleneck for streaming, not the drives themselves.

Thus the preference for PCIe because it bypasses the SATA bus and goes straight for the PCIe bus.


----------



## rgames (Sep 9, 2013)

JohnG @ Mon Sep 09 said:


> Thus the preference for PCIe because it bypasses the SATA bus and goes straight for the PCIe bus.


Do you think that's true for the more recent chipsets with the native SATA III controllers?

I know a couple years ago when native SATA III came about the chipset SATA III ports were faster (in terms of MB/s) than PCIe connections. I'm not sure if it's still true (does PCIe 2.0 or 3.0 make a difference?) but I'm curious because I'm likely putting together a new DAW this fall and the native SATA III vs. PCIe SATA III is a consideration I'm struggling with.

rgames


----------



## JohnG (Sep 9, 2013)

Hi Richard,

Possibly there is different architecture available from what I think you're describing. My particular PCIe card is not just a conduit to which one can attach SSDs -- they come already attached / incorporated into the card itself, so that there is no SATA connection. That's why you can get the super-fast data rates.

But anyway things are always improving so perhaps now SATA III is fast enough for practical purposes. The reason I bought the card was because at the time that's what East West was using to test its libraries, so I figured that would work well with Hollywood Strings, and it does.


----------



## peksi (Sep 10, 2013)

rgames @ Tue Sep 10 said:


> JohnG @ Mon Sep 09 said:
> 
> 
> > Thus the preference for PCIe because it bypasses the SATA bus and goes straight for the PCIe bus.
> ...



In case you decide to ditch the SATA and go for the direct PCIe here are some specs I just checked.

SATA III bandwidth is 6Gb/s (roughly 0.6GB/s usable)
PCIe 1.1 is 2.5GT/s per lane
PCIe 2.0 is 5.0GT/s per lane
PCIe 3.0 is 8.0GT/s per lane

As seen from the link above, the PCIe Micron can push 3.3GB/s and the OCZ can do 1.4GB/s.
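For reference, those raw link rates convert to usable bytes per second once you account for line encoding (8b/10b for SATA and PCIe 1.x/2.0, 128b/130b for PCIe 3.0). A small sketch of the arithmetic (my own numbers, not from the linked review):

```python
def usable_mb_per_s(gt_per_s: float, payload_bits: int, coded_bits: int) -> float:
    """Usable MB/s per lane/port: transfers/s * encoding ratio / 8 bits per byte."""
    return gt_per_s * 1e3 * payload_bits / coded_bits / 8

print(usable_mb_per_s(6.0, 8, 10))     # SATA III       -> 600.0 MB/s
print(usable_mb_per_s(2.5, 8, 10))     # PCIe 1.1 lane  -> 250.0 MB/s
print(usable_mb_per_s(5.0, 8, 10))     # PCIe 2.0 lane  -> 500.0 MB/s
print(usable_mb_per_s(8.0, 128, 130))  # PCIe 3.0 lane  -> ~984.6 MB/s
```

Multiply the per-lane figure by the number of lanes a PCIe card uses (x4, x8) to see why PCIe SSDs leave the SATA bottleneck behind.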

I think it would be better to wait for the Micron if it turns out to be decently priced. Or wait for next SATA version to open up the bottleneck. I've decided to wait it out for now.


----------



## peksi (Oct 3, 2013)

A hardware specialist enlightened me today that OCZ PCIe SSD drives are internally multiple SSDs in a RAID0 configuration. He suspected that is the case for the others as well.

His opinion leans very strongly towards 4 x SATA SSDs + hardware RAID 4. That allows 1 SSD failure and should provide more speed. It just needs a solid controller, which I haven't chosen yet.


----------



## gaz (Nov 22, 2013)

I just ordered a Pegasus J4 and four 512GB SSD drives (Samsung SSD840). I will configure them as JBOD, as I believe this will work better for sample libraries. The reason is that each drive seeks and transfers data independently, so you have four of them streaming data at once. A RAID0 setup would stream large data faster, BUT that is not the use case for sample libraries, which require fast access to lots of different data.

-Gari


----------



## Scrianinoff (Nov 23, 2013)

I'm using a Highpoint RocketRaid 2720SGL in my 2600k-based 32GB slave, overclocked to a conservative 4.5GHz. I'm using 8x256GB SSDs. Look at the review here: http://thessdreview.com/our-reviews/hig ... c400-ssds/ If you build this setup with Samsung 840 EVOs instead of the M4s of the review, you will even exceed the reviewed performance.

My master system is a Dell Alienware M17x laptop with the i7 3820qm processor overclocked to 4.1GHz and 32GB of dram. Two Samsung 840 EVO 1TB drives host sample libs exclusively. I have discovered some time ago that setting up RAID-0 in Windows 7 (and 8 ) outperforms the Intel iRST (hardware) Raid-0 that you can setup in BIOS. I will post some AS-SSD screenshots shortly.


----------



## Scrianinoff (Nov 23, 2013)

This is the performance I get from a *single* Samsung EVO 840 1TB *drive* connected to one of the two Intel Sata 6Gbps channels, already not too shabby for a Sata SSD. Even the 4K 64 threaded performance fills more than half the Sata bus.


----------



## Scrianinoff (Nov 23, 2013)

This is the performance I get setting up both Samsung EVO 840 1TB drives as a 2TB *RAID-0* volume using *Intel iRST* in the BIOS. All settings, such as strip size, are the recommended and *default* values.


----------



## Scrianinoff (Nov 23, 2013)

This is the performance I get setting up both Samsung EVO 840 1TB drives as a 2TB *RAID-0* volume using *Intel iRST* in the BIOS. All settings, except the strip size, are the recommended and default values. *Strip size: 128 kB*.


----------



## Scrianinoff (Nov 23, 2013)

This is the performance I get setting up both Samsung EVO 840 1TB drives as a 2TB *RAID-0* volume using *Windows* Disk Management. All settings are the recommended and default values.


----------



## Scrianinoff (Nov 23, 2013)

As you can see, the 4K 64 threaded performance scales very well for 2 drives in Windows, much, much better than using Intel's 'hardware' Raid. Of course if you dig deeper into Intel's chipset raid implementation, you will find that most of it is done by the BIOS in software, which is apparently less efficient than Microsoft's implementation.

Windows reaches a scaling of 1.89x, while Intel reaches 1.15x. That was convincing enough for me. From what I have seen, sample libs mostly stream in 32 kB to 256 kB chunks at a time. So the sample streaming performance lies between the 4K-64Thrd and Seq performance values, let's say somewhere between 800 and 900 MB/s.

Even in my busiest musical contraptions I am far from needing this bandwidth to playback in real time. The bottleneck at the moment is the cpu again. Where it does help is the initial load of a template and 'bouncing offline'. Especially Kontakt's background loading reaches these throughput values and fills up the queues that you can view in the Windows Resource Monitor in the Disk tab. The template load scaling is 1.5x to 1.6x. So this is in my experience the benefit in running Raid-0 over JBOD, or over manually dividing the sample libs over *j*ust a *b*unch *o*f *d*isks (which if you didn't know is what the acronym jbod stands for).
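For the curious, what AS-SSD's 4K-64Thrd number measures can be sketched roughly like this: 64 threads each issuing 4 kB reads at random offsets, which is close to how a sampler pulls many voices from disk at once. (The file path is a placeholder of mine; Python's overhead and the OS page cache make this only illustrative, since real tools like AS-SSD use native unbuffered I/O.)

```python
import os
import random
import time
from concurrent.futures import ThreadPoolExecutor

SCRATCH = "testfile.bin"   # placeholder: point this at a big file on the test volume
BLOCK = 4096               # 4 kB reads, as in AS-SSD's 4K tests
READS_PER_THREAD = 256
THREADS = 64

def worker(path: str, size: int) -> int:
    """One thread's share: random 4 kB reads; returns bytes read."""
    done = 0
    with open(path, "rb") as f:
        for _ in range(READS_PER_THREAD):
            f.seek(random.randrange(0, size - BLOCK))
            done += len(f.read(BLOCK))
    return done

def four_k_threaded(path: str) -> float:
    """Aggregate random-read throughput across THREADS threads, in MB/s."""
    size = os.path.getsize(path)
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=THREADS) as pool:
        total = sum(pool.map(lambda _: worker(path, size), range(THREADS)))
    return total / (time.perf_counter() - start) / 1e6

if __name__ == "__main__":
    if not os.path.exists(SCRATCH):          # make a small demo file if needed
        with open(SCRATCH, "wb") as f:
            f.write(os.urandom(16 * 1024 * 1024))
    print(f"4K-{THREADS}Thrd: {four_k_threaded(SCRATCH):.1f} MB/s")
```

Comparing this figure on a single drive against the same run on a Raid-0 volume shows the scaling directly.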

An extra benefit of using Windows Raid is that it supports TRIM. However, in my configuration that does not matter, since I hardly ever write to this sample lib volume, except to add a new sample lib to it. It's mostly used for reading instead of writing.


----------



## rgames (Nov 23, 2013)

The problem with all those benchmarks is that you're not measuring what you care about (unless you care about benchmarks).

You need to set up a test to measure number of streaming voices.

I've done that and have not seen a difference in performance for drives that vary between 400 and 550 MB/s. Maybe 900+ MB/s will make a difference but until you measure it, it's impossible to say.
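One way to set up such a test, sketched very roughly: simulate N voices, each needing one chunk of audio per buffer period, and find the largest N for which every buffer deadline is met. (All names, chunk sizes and periods below are my own illustration, not an actual test harness; a real test would use a sampler, real libraries and unbuffered reads.)

```python
import os
import random
import tempfile
import time

CHUNK = 64 * 1024    # bytes each simulated voice streams per buffer period
PERIOD = 0.010       # a 10 ms audio buffer
BUFFERS = 20         # buffer periods to simulate per voice count

def voices_sustained(path: str, max_voices: int = 256) -> int:
    """Largest simulated voice count whose reads always finish within PERIOD."""
    size = os.path.getsize(path)
    best = 0
    with open(path, "rb") as f:
        for n in range(8, max_voices + 1, 8):
            met_deadlines = True
            for _ in range(BUFFERS):
                start = time.perf_counter()
                for _ in range(n):    # serve every voice one chunk
                    f.seek(random.randrange(0, size - CHUNK))
                    f.read(CHUNK)
                if time.perf_counter() - start > PERIOD:
                    met_deadlines = False
                    break
            if met_deadlines:
                best = n
            else:
                break
    return best

if __name__ == "__main__":
    fd, path = tempfile.mkstemp()
    with os.fdopen(fd, "wb") as f:         # small demo file; use a real
        f.write(os.urandom(8 * 1024 * 1024))  # sample file for honest numbers
    try:
        print(f"voices sustained: {voices_sustained(path)}")
    finally:
        os.remove(path)
```

Run the same script against the 400 MB/s drive, the 550 MB/s drive and the RAID volume; only a difference in the sustained voice count, not in the benchmark MB/s, answers the question.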

You need to measure what you care about.

You don't just show up at the racetrack with your dyno readings and predict a lap time. You still need to measure the lap time.

rgames


----------



## Scrianinoff (Nov 23, 2013)

rgames @ Sat 23 Nov said:


> The problem with all those benchmarks is that you're not measuring what you care about (unless you care about benchmarks).
> 
> You need to set up a test to measure number of streaming voices.
> 
> ...


Some observations:
My experiences are that streaming voices under Raid-0 are higher than on separate drives, and higher with Windows Raid compared to Intel Raid. How much higher, well not the theoretical 2x on my laptop or the theoretical 8x on my desktop, somewhere 1.3x-1.6x on my laptop, and roughly 2x - 3x on my desktop. It depends a lot on other factors, such as the size of the audio buffers, audio hardware, audio drivers, choice of DAW software, sample host software (VE Pro / Cubase), Sample play software (Play, Kontakt, VSL), cpu speed, background processes, efficiency of the template, memory available after a full template is loaded that is used to 'cache' notes that will most likely be played again in the music playing, etc. etc. 

I like analogies because they're often amusing, as this one is, because if you put me behind the wheel it will say nothing about the maximum performance of the engine or the car. Even in the case of a top driver it says hardly anything, as anybody following car racing knows, they often have bad days, then you have the weather, engine tuning 'experiments', the influence of the other drivers and cars, etc. The inaccuracy and false sense of authority and false common-senseness that analogies often instill in casual listeners and readers is more than mildly irritating to lots of people I care about.

We (all) care about the maximum number of streaming voices. We at least agree about that. When actually streaming voices in my setup, the number of streaming voices is cpu bound, as I already wrote in the post you replied to. This might change when the software becomes more efficient or when processing speed grows faster than storage sub-system throughput.

The figures you quote are sequential read speeds. This is indeed not a performance value we care about, especially not in the case of a Sandforce-based drive that reaches falsely inflated performance by compressing data, as sample data in almost all cases doesn't compress. As I told you years ago in relation to the performance of your Agility drives.

However, the 4K-64Thrd value of AS-SSD is an (incompressible data) performance value that is a direct and pure indicator of the maximum storage sub-system performance you can expect for sample streaming. The fact that my setup became cpu bound compared to my previous setup, in which I had a maximum 4K-64Thrd performance of around 250 MB/s, proves this. If, however, I had relied on the number of streaming voices reached by Play 3, then I would not have seen a difference. Kontakt and VSL do show a considerable difference, until the voice streaming becomes cpu bound. Different versions of VE Pro and Cubase also yield quite different results in performance, which again points me to the software from which I have to expect further efficiency improvements in my setup.

Don't get me wrong, I really do think your maximum-number-of-streaming-voices tests are interesting and valuable. They clearly show what one can expect from a modern DAW with slaves in a setup built by a composer who clearly knows what he's doing.


----------



## Scrianinoff (Nov 23, 2013)

What I tried to focus on with my performance values is the clear and considerable increase in performance of the storage sub-system, when applying Raid-0 compared to separate drives, and the further increase by choosing the Windows based Raid implementation over Intel's. 

I think the actions of the storage sub-system shown in the screenshots speak louder than both of our posts combined.


----------



## Scrianinoff (Nov 23, 2013)

peksi @ Fri 30 Aug said:


> [...]There are controllers from Adaptec, LSI and Highpoint. Latter being the most cost effective (around 300€) and pretty much equal performance. [...]


If you consider Highpoint, make sure you buy the SGL version, as in 2720SGL. It's the cheap 120 euro version, which does not include cables you would not need in the first place. Instead you need 2 cheap 15 euro cables that each connect 4 Sata 6Gbps drives.

Also, make sure you insert the card in a cpu-connected PCIe port, in any case not a chipset-connected PCIe port, because then you have to go through the chipset over the DMI channel (look that up), increasing latency and capping throughput. In short, put it in a port in which you would put your main high-performance graphics card. These Raid cards are capable of maximum throughputs in the neighbourhood of 4GBytes/s, but only in cpu-connected ports.


----------



## Scrianinoff (Nov 23, 2013)

Here is the post in which I explain why the drives that rgames tested, which are according to him in the range of 400-550 MB/s, are actually much closer in benchmark performance if you take into account the relevant performance values, that is, the 4K-64Thrd values:

http://www.vi-control.net/forum/viewtop ... 42#3587342

To put this into perspective with the much newer Samsung EVO 1TB drives: their 4K-64Thrd performance is more than double that of the Crucial M4 that I (and rgames) used before, whereas the seq performance is in the same 500+ range (the M4 is also above 500MB/s after a firmware update from 1.5 years ago). So again, the benchmark was a perfect indication of the sample streaming performance one can expect from the drive compared to the older M4 (and Agility 3) drives, whether used as separate drives, in an Intel Raid or a Windows Raid.

Here's a guide with screenshots on how to setup a Raid-0 volume (striped volume) using Windows Disk Management:
http://www.howtogeek.com/133433/geek-sc ... ing-disks/


If I can give all of you one piece of advice regarding technical issues, it's this: don't be misinformed, stick to the facts (the relevant facts), think for yourself, and don't blindly rely on the 'wisdom' of any of the 'authorities' here (including me, if anyone would be so foolish as to see me as an authority). Unless of course the learning curve is too steep for you at the moment; then take it with a grain, no, a truck-load of salt.


----------

