# Impact of stripe size on SSD RAID 0 performance



## Udo (Aug 17, 2014)

IIRC, none of the comments/opinions re RAID 0 mentioned stripe size when talking about performance. I'd like to know the best stripe size for sample libs. Is there a recommended size for sample libs on SSD?

Keep in mind that, because of the very fast seek times of SSDs, the relative impact of some performance aspects is different between HDDs and SSDs.


----------



## dannthr (Aug 17, 2014)

That's tough. Technically, you want your stripes to be precisely the size of the individual sample--which is not only variable, but larger than most blocks.

RAID is a terrible idea for sample performance because you never want more than one drive working on the same sample--you're better off logically distributing samples across drives to improve performance. 

You should put samples you are likely to use together on separate drives, whole and not chopped up into pieces.
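dannthr's point about stripe size vs. sample size can be made concrete with a quick calculation: whether a single sample read is served by one drive or several depends on how the read length compares to the stripe unit size. A rough sketch, not from the thread; the function name and sizes are illustrative:

```python
# Illustrative sketch (function name and sizes are made up): how many
# distinct drives a single contiguous read touches in a RAID 0 array,
# given a stripe unit size. RAID 0 places consecutive stripe units on
# consecutive drives, round-robin.
def drives_touched(offset_bytes, length_bytes, stripe_bytes, num_drives):
    first_stripe = offset_bytes // stripe_bytes
    last_stripe = (offset_bytes + length_bytes - 1) // stripe_bytes
    stripes_spanned = last_stripe - first_stripe + 1
    return min(stripes_spanned, num_drives)

# A 1 MB sample read with a 128 KB stripe on 2 drives hits both drives:
print(drives_touched(0, 1 << 20, 128 * 1024, 2))  # -> 2
# With a 1 MB stripe, an aligned 1 MB read can stay on a single drive:
print(drives_touched(0, 1 << 20, 1 << 20, 2))     # -> 1
```

Whether spanning drives helps or hurts depends on the workload: it splits one read across two drives, but it also occupies both drives with a single request.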


----------



## holzlag0r (Aug 18, 2014)

dannthr @ Mon Aug 18 said:


> That's tough, technically, you want your stripes to be precisely the size of the individual sample--which is not only variable, but larger than most blocks.
> 
> RAID is a terrible idea for sample performance because you never want more than one drive working on the same sample--you're better off logically distributing samples across drives to improve performance.
> 
> You should put samples you are likely to use together on separate drives, whole and not chopped up into pieces.


I don't want to hijack the thread, but are you sure that this works?

Loading multiple libraries from different drives does not work, so why would it work with individual samples?

See here: http://www.vi-control.net/forum/viewtopic.php?t=40257

I don't think that Kontakt can load more than one sample at a given time.


----------



## AR (Aug 18, 2014)

SSD is so fast, 400-500mb/s. Why raid roll it?


----------



## holzlag0r (Aug 18, 2014)

AR @ Mon Aug 18 said:


> SSD is so fast, 400-500mb/s. Why raid roll it?


Because RAID 0 increases the speed to about 150%.
So then you have 750 MB/s.

And you don't lose any capacity. :D


----------



## Udo (Aug 18, 2014)

dannthr @ Mon Aug 18 said:


> That's tough, technically, you want your stripes to be precisely the size of the individual sample--which is not only variable, but larger than most blocks.
> 
> RAID is a terrible idea for sample performance because you never want more than one drive working on the same sample--you're better off logically distributing samples across drives to improve performance.
> 
> You should put samples you are likely to use together on separate drives, whole and not chopped up into pieces.


From practical experience, there appears to be a definite advantage in using RAID 0 for SSDs. The very fast SSD seek times changed the situation. So far I've just used the default 128KB stripe size. Sample sizes vary greatly, even within libs, so it's a matter of finding the happy medium.

When dealing with a large collection of libs it's impractical, if not impossible, to logically distribute samples across SSDs to improve performance.


----------



## AR (Aug 18, 2014)

holzlag0r @ Mon Aug 18 said:


> AR @ Mon Aug 18 said:
> 
> 
> > SSD is so fast, 400-500mb/s. Why raid roll it?
> ...




Hmmmm, you probably have a good point. Though I'm very cautious: I don't fill up my SSDs over 80%. I can't remember the exact figure, but using an SSD to its full extent decreases its lifetime. So how do you manage that in RAID? Is there a way to tell your RAID controller I just want to use 220GB of each of my 256GB drives in RAID 0?


----------



## Scrianinoff (Aug 18, 2014)

I am also enjoying a considerable performance increase by using Raid-0. In my laptop I have two 1TB Samsung SSDs. Initially I used the Intel chipset based Raid-0, the one that you set up in the BIOS. A bit later I learned that the Raid-0 implementation in Windows 7 yielded higher throughput of the variety that is most important for sample streaming: random reads of small blocks. Template load times and total samples played before clicks and crackles improved greatly. In my desktop system with 8 SSDs I have comparable improvements. I posted my results at the end of last year: http://www.vi-control.net/forum/viewtop ... 44#3744544

Look at the "4K-64Thrd" results in the 3rd row of each screenshot, not the too often quoted "Seq" results, which are almost meaningless for our purpose. Seq measures the speed of reading very large chunks of data consecutively. Sample streaming is all about reading small blocks of data in random order, and that is exactly what 4K-64Thrd measures: reads of 4 kilobyte blocks, 64 requests at a time, in random order.
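For the curious, the kind of workload 4K-64Thrd models can be sketched in a few lines. This is a rough sketch, not how AS-SSD is implemented; the function name and parameters are made up, and without a cache bypass a warm OS cache will inflate the result well beyond what the drive can actually do:

```python
# Illustrative sketch of a "4K-64Thrd" style workload: random 4 KB reads
# with many requests in flight at once. Note: an O_DIRECT-style cache
# bypass is platform specific and omitted here, so this is no substitute
# for a real benchmark tool.
import os
import random
import time
from concurrent.futures import ThreadPoolExecutor

def random_4k_read_bench(path, reads_per_thread=256, threads=64):
    size = os.path.getsize(path)
    block = 4096

    def worker(_):
        # Each worker opens its own handle and issues random 4 KB reads.
        with open(path, "rb", buffering=0) as f:
            for _ in range(reads_per_thread):
                f.seek(random.randrange(0, max(1, size - block)))
                f.read(block)

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=threads) as pool:
        list(pool.map(worker, range(threads)))
    elapsed = time.perf_counter() - start

    total_mb = threads * reads_per_thread * block / 1e6
    return total_mb / elapsed  # MB/s
```

The point of the 64 concurrent requests is that SSDs (and RAID arrays) only reach their headline IOPS numbers when many operations are queued at once, which is also what a busy sampler does.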


----------



## Ozymandias (Aug 18, 2014)

AR @ Mon Aug 18 said:


> Hmmmm, you probably have a good point. Though I'm very cautious: I don't fill up my SSDs over 80%. I can't remember the exact figure, but using an SSD to its full extent decreases its lifetime. So how do you manage that in RAID? Is there a way to tell your RAID controller I just want to use 220GB of each of my 256GB drives in RAID 0?



Where sample drives are concerned, I question whether you need to leave space.

My samples are on a dynamic volume spanned across 3 SSDs. Two of those drives are completely full, but loading/streaming performance seems no different to when I used them as discrete drives. Write performance has likely suffered, but not to the extent that it's an issue.


----------



## Udo (Aug 18, 2014)

People, I already knew from experience that SSD RAID 0 is noticeably faster than individual SSDs, but this thread is about finding the optimum stripe size for use with sample libs.

So far I've only used the default 128KB stripe size, but I think that 64KB may be better. Has anyone experimented with stripe size?


----------



## Scrianinoff (Aug 18, 2014)

Yes, see the results in the link I posted above. Strip(e) size didn't make a big difference when using the Intel chipset (driver) based Raid-0.

The best thing is to test it yourself, on your system. Reconfiguring the RAID settings and formatting the RAID volume only takes a few minutes for Raid-0 volumes. Based on my own experience, I expect you won't see a significant difference. Give Windows Raid a try too (in Disk Management, right-click an empty (part of a) drive, choose "New striped volume", and the wizard will lead you through choosing the other drives and the rest of the process).

Until this day I have not regretted trying Windows Raid out, as it gives me the highest real-life sample streaming performance compared to the Intel chipset Raid.

As I have mentioned before through the years, Intel chipset Raid is NOT hardware Raid. It is a driver-implemented software Raid. That's also why it performs unfavourably compared to real hardware-based Raid solutions. Windows Raid performs better because it takes advantage of information in the file system, information to which a driver-based Raid solution (such as Intel's) does not have access.


----------



## Scrianinoff (Aug 18, 2014)

Intel Raid-0 Strip Size 128KB:


----------



## Scrianinoff (Aug 18, 2014)

Intel Raid Strip Size 16KB:


----------



## Scrianinoff (Aug 18, 2014)

Windows 7 Raid-0 Striped Volume (Strip size unknown and afaik non-configurable):


----------



## Scrianinoff (Aug 18, 2014)

As you can see above in the measurements that are relevant for sample streaming, that is, the 4K-64Thrd performance, 16KB striping gives 10% higher read throughput compared to 128KB striping. Windows striping gives 81% higher performance compared to Intel 16KB striping; in other words, Windows Raid is 1.81 times as fast, or almost twice as fast. I still consider that a nice surprise.

In real-life sample streaming with Cubase 7 and VE Pro 5, using Windows Raid I can get away with arrangements that are 50% busier compared to using no Raid at all. With Intel Raid I was only getting about 10% more performance. This is consistent with the 4K-64Thrd measurements. Never again look at the Seq measurements; they are only relevant when reading very big files.

Mostly the CPU is the bottleneck in my laptop, an Intel 3820QM clocked at 4.1 GHz. Before choosing Raid, the storage system (SSDs, drivers, and file system) was the bottleneck.


----------



## Pietro (Aug 18, 2014)

If you are after faster load times, then RAID 0 will help a bit. If you are using Kontakt and PLAY in your projects, better have those drives separate: put your Kontakt libraries on one drive and your PLAY libraries on the other. They do load at the same time, causing an overall drop in loading speed, unless they load from separate drives.

However, if your main goal is good streaming performance (which is what most of us are after), even with just one sampler it's definitely better to keep the drives separate and distribute sample libraries over the two.

It's still doable, even with just 2 SSDs in mind: you can keep less taxing or less used libraries (percussion, Sample Logic, Heavyocity and the like) on HDDs instead.

- Piotr


----------



## Scrianinoff (Aug 18, 2014)

A single Samsung 1TB SSD on its own already gives 86% of the best Intel Raid-0 (16KB strip) performance using two Samsung 1TB drives:


----------



## Scrianinoff (Aug 18, 2014)

Pietro @ Mon 18 Aug said:


> If you are after faster load times, then RAID-0 will help a bit. If you are using Kontakt and PLAY in your projects, better have those drives separated and put your Kontakt libraries on one drive and PLAY libraries on the other. These do load at the same time, causing overall drop in loading speed, unless they are from separated drives.
> 
> However if your main goal is good streaming performance (which is what most of us are after), even with just one sampler, it's definitely better to have the drives separately and distributing sample libraries over the two.
> 
> ...


I completely disagree. Did you test this yourself? I have, and I experienced the opposite: load times and number of voices have BOTH increased by a factor of 1.5. As have some others who were brave enough to try it out themselves.

Anyway, let's limit this discussion to the topic of strip(e) sizes, as per Udo's request.


----------



## AR (Aug 18, 2014)

I agree with Piotr


----------



## chimuelo (Aug 18, 2014)

Transfer rates themselves, like seq. read and write, cannot move data any quicker than the SATA III bus allows; what RAID does is boost the IOPS numbers, and that's the performance increase we see.

I enjoy 1.2GBps reads, as the PCI-e 4x bus I use does decrease template load times, but with 130-50k IOPS (library dependent) it's definitely polyphony nirvana.

I can also confirm that a friend using PCI-e 2x speeds and a Plextor M6e gets the same polyphony counts, since his IOPS is 110-130k.

Good news is you can still use the SATA III bus and avoid an upgrade.
Just figure out a way to run redundancy.

The IOPS shown in this review are what all of us want. You can achieve this on SATA III using RAID, or go straight, no chaser, to the new PCI-e 4x transfer rates using M.2s.

http://www.thessdreview.com/our-reviews ... sata-ssds/

Please let us know what stripe you end up using.


----------



## Scrianinoff (Aug 18, 2014)

chimuelo @ Mon 18 Aug said:


> Transfer rates themselves like seq. read and write cannot transfer data any quicker than the SATA III bus allows, but what it does is boosts the number of IOps rates up and that's the performance increase we see.


Indeed! So with the two SSDs in my laptop I am running just shy of 1000MB/s, because I only have two SATA III ports doing 500MB/s each. *The IOPS scale with a factor of 1.81 for two drives*. You can get the IOPS numbers from the 4K-64Thrd measurement by dividing the MB/s by 4, which gives thousands of IOPS (each operation is a 4 KB block).

That, or you can switch AS-SSD, for example, to its IOPS view. I just reran the test to show you the IOPS of my *2 x Samsung EVO 1TB drives under Windows Raid-0*: it's *182 K IOPS*. Not shabby at all for a non-PCIe-x4 solution.

The stripe size in Windows Server 2008 (the same major release as Windows 7) is 64KB: "the *Windows* built-in volume manager for dynamic disks (Dmio.sys) *has a fixed stripe unit size of 64 KB*" http://blogs.technet.com/b/dominikheinz ... -ssds.aspx
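The MB/s-to-IOPS conversion described above is simple arithmetic; a minimal sketch, with illustrative function names:

```python
# Minimal sketch of the 4K-64Thrd MB/s <-> IOPS conversion. Each
# operation reads a 4 KB block, so 1 MB/s of 4K random-read throughput
# corresponds to 250 IOPS.
BLOCK_KB = 4

def iops_from_mb_per_s(mb_per_s, block_kb=BLOCK_KB):
    return mb_per_s * 1000 / block_kb

def mb_per_s_from_iops(iops, block_kb=BLOCK_KB):
    return iops * block_kb / 1000

# The 182 K IOPS figure quoted above corresponds to roughly 728 MB/s:
print(mb_per_s_from_iops(182000))  # -> 728.0
```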


----------



## chimuelo (Aug 18, 2014)

I wanted this last spring, as I knew the access times and rates of an SSD would really scale up, but redundancy for live work was more expensive if I went with RAID 5/10/1+0, so I took a chance on M.2s and it paid off.
I will RAID 1 a pair of M.2s once developers embed a pair of 4x connectors on a motherboard.
Right now ASRock is in the lead by a wide margin, and their crappy boards from the Conroe days are long gone. They still have budget solutions, but we are getting 12-phase power and enterprise quality boards from them.
Their latest Z97 has dual M.2s, but a 2x with a 4x; from my testing the 4x would lower its rates in a RAID 1 config, but even so the IOPS would be 150k-180k, and 775MBps seq. reads, plenty fast for what we are doing.
Diminishing returns seem to be right around the top of the 20GBps range, or PCI-e 2x's max throughput.

As they say in Vegas, you can't win if you don't go.....
Fuckin A.... /\~O


----------



## Pietro (Aug 18, 2014)

Scrianinoff @ Mon Aug 18 said:


> I have and I experienced the opposite. Load times and number of voices have BOTH increased by a factor 1.5.



Are you comparing no RAID (and well-distributed sample libraries) with RAID 0 of the same drives? Or a single drive vs RAID 0? Of course you will get better results with RAID 0 over a single drive, but theoretically, and last time I checked, RAID 0 doesn't improve IOPS that much.

Last time I checked means a couple of years ago... Perhaps something changed with the newest SSDs? It might also depend on the RAID controller quality (and whether it's hardware or software).

- Piotr


----------



## gbar (Aug 18, 2014)

holzlag0r @ Mon Aug 18 said:


> AR @ Mon Aug 18 said:
> 
> 
> > SSD is so fast, 400-500mb/s. Why raid roll it?
> ...



So a hypothetical 50% boost in performance best case, and a reduction of at least 50% in probable reliability (you've doubled your chances of a failure within the lower realm of the mean time before failure)?
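The reliability point can be made concrete with a back-of-envelope calculation, assuming independent drive failures; the 3% annual failure rate below is purely illustrative:

```python
# Back-of-envelope model: a RAID 0 array is lost if ANY member drive
# fails, so for n drives with independent per-drive failure probability
# p over some period, the array failure probability is 1 - (1 - p)^n.
def raid0_failure_prob(p_single, n_drives):
    return 1 - (1 - p_single) ** n_drives

# With an illustrative 3% per-drive failure rate over the period:
print(round(raid0_failure_prob(0.03, 1), 4))  # -> 0.03
print(round(raid0_failure_prob(0.03, 2), 4))  # -> 0.0591 (nearly doubled)
```

For small per-drive failure probabilities the array failure probability is close to n times the single-drive figure, which is the "doubled your chances" rule of thumb for two drives.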

That's the thing I don't like about RAID 0. RAID 10 would improve your ability to recover and rebuild, but in my experience a lot of drives will begin to fail within a relatively short window after one goes bad, so unless you've got deep pockets.....

I, personally, don't see the point in even considering RAID 0 once you've got a large amount of RAM (so you aren't necessarily forced to stream sample libraries) and once you've got nice SSD disks. I'd rather spend money on more storage/RAM/backup storage if possible.

I would like to think there isn't much of a future for RAID when it comes to DAWs, because the storage capacity of single drives keeps increasing, and interfaces and access times keep improving. It already seems like a lot of bother for little gain to me, anyway.

But if that's what floats your boat, I expect that's all that matters. Some folks are always going to build and restore hotrods, so they've got no use for a Lamborghini even (where's the fun if they can't modify and tweak things themselves?). 

One thing is certain: you don't need a hotrod or a Lamborghini to drive to work every day, and those choices may not even be anywhere near the best choice for your work vehicle. So if you are buying a car to commute to work, you might want to think about what that entails.


----------



## Scrianinoff (Aug 19, 2014)

gbar @ Mon 18 Aug said:


> So a hypothetical 50% boost in performance best case, and a reduction of at least 50% in terms of probable reliability (you doubled your chances of failure within the lower realm of mean time before failure)?
> 
> That's the thing i don't like about raid 0. RAID 10 would improve your ability to recover and rebuild, but in my experience a lot of drives will begin to fail within a relatively short window after one goes bad, so unless you've got deep pockets.....


Surely nobody in their right mind is going to rely on their SSDs to keep their data safe. Of course we all have our valuable (sample) data backed up at least twice on offline external hard drives; well, at least I do. So Raiding my SSDs, 2 in my laptop or 8 in my desktop, is no issue at all to me. The redundancy of my DAW work is exactly the availability of at least one of those two systems, laptop and desktop.



gbar @ Mon 18 Aug said:


> I, personally, don't see the point in even considering RAID 0 once you've got a large amount of RAM (so you aren't necessarily forced to stream sample libraries) and once you've got nice SSD disks. I'd rather spend money on more storage/RAM/backup storage if possible.
> 
> I would like to think there isn't much of a future for RAID when it comes to DAWs because storage capacity of single drives keeps increasing, and interface and access times keep improving. It already seems like a lot of bother for little gain to me, anyway.


Yes, indeed, Mr. Bill Gates, you're right: 'nobody will ever need more than 640KB'.
The direction does indeed appear to be towards more RAM. Colin was investigating a few months ago whether 128GB or more would be feasible in a Mac, because he liked the results Troels was getting with a 128GB PC. That is approaching the specs of a 'hotrod' system.



gbar @ Mon 18 Aug said:


> But if that's what floats your boat, I expect that's all that matters. Some folks are always going to build and restore hotrods, so they've got no use for a Lamborghini even (where's the fun if they can't modify and tweak things themselves?).
> 
> One thing is certain, you don't need a hotrod or a Lamborghini to drive to work every day, and those choices may not even be anything near the best choice for your work vehicle, so if you are buying a car to commute to work, you might want to think about what that entails


Analogies are sometimes entertaining, but unfortunately most often highly inaccurate and confusing. I think yours belongs in the latter category.
I consider a hotrod system a Supermicro or Asus server board with 4 Xeon processors, 512GB of RAM, and a hardware RAID card with 16 SSDs configured as Raid-0, and then doubled up for redundancy. The same system off the shelf from HP would be a Lamborghini, from Dell a Ferrari, etc.

The people who posted their results here don't have hotrods at all; they have family cars, 7-seaters (single CPU, 32 or 64GB). I have a cheap 7-seater (i7-2600K, 32GB). The advice I am giving people is this: if they forgot they have two extra seats in the back that have been folded down since they bought the car (one regular volume per SSD), they can fold them up with a simple procedure from the owner's manual (create a striped volume across multiple SSDs with a few mouse clicks, as outlined in the Windows documentation). Then they can seat more people and might not need two cars to bring 7 people to the party (they can stream more samples and might no longer need a slave machine to bring all the samples they need).


----------



## ChoPraTs (Sep 9, 2020)

I would like to know if there is any news about this discussion today, 6 years later, hehe.

Has any of you changed your mind? Is everyone still thinking the same? Do you have any more tests with different block sizes and RAID configurations?

I've noticed that many external enclosures for SSDs sold today, and the fastest external drives available (like the OWC Thunderblade), are all built around RAID configurations of SSDs. It's the only way to get the full performance of SATA or NVMe SSDs in external Thunderbolt devices. So I would like to know whether today, in 2020, it is common practice in medium/big music studios to use RAID 0 to store and load orchestral libraries.


----------



## milesito (Oct 6, 2020)

Any updates on all this? I'm curious to know what stripe size people have settled on for Spitfire libraries specifically...


----------



## Brobdingnagian (Jun 3, 2021)

Reviving this thread, as it was derailed and we never got a clear answer all those years ago. Thank you for your patience.

Currently using SoftRaid on a new Mac Pro, with:
• 8TB Accelsior PCIe NVMe RAID 0, w/ a block size/stripe unit size of 128KB (softsynth drive)
• 16TB Accelsior PCIe NVMe RAID 0, w/ a block size/stripe unit size of 128KB (Kontakt/SINE/Opus drive)

Wondering if there is a benefit to setting the block/chunk/stripe unit size to 64KB on the Kontakt/SINE/Opus drive, in the spirit of the initial poster's question...

Yes, I do realize that the formidable power of my rig now vs. where things were back in the long ago of the original poster's query is perhaps both fun and trivial; however, I'm just curious to learn what others here are doing with regard to this setting. Extra efficiency and performance is always appreciated...

@ChoPraTs @milesito did either of you ever discover anything?

@colony nofi , as an (if not THE) oracle, perhaps you have considered this issue in a professional setting?

-B


----------

