# RAID 0, no benefit on my end...



## dlpro (Dec 19, 2018)

I have a 2 Mac setup: an iMac Pro (10 cores, 64GB RAM) as the master, and a 2013 Mac Pro (8 cores, 64GB RAM) as the slave.

Both these computers have external SSDs which will read around 400 MB/s.

Between both computers, I have 130 instruments loading up when I launch my Pro Tools template. My template loads in 1:20.

I just purchased (3) OWC 4M2 enclosures, 2 for the iMP and 1 for the MP. In these enclosures are four WD Black 1TB High-Performance NVMe PCIe M.2 2280 SSDs (Gen3, 8 Gb/s, WDS100T2X0C).

These are capable of reaching 2800 MB/s. Using Blackmagic Disk Speed, I get around 2500 MB/s.

When I load the same PT template using this new setup, I don't gain a single second. It still takes 1:20 to load. I thought I would benefit from RAID 0 and these speedy PCIe SSDs.

Anyone have a clue as to why it's still taking 1:20 to load a session?

Now mind you, I know 1:20 isn't very long, but I'm assuming this new setup should cut the load time in half.

Thanks!


----------



## JohnG (Dec 19, 2018)

I've never seen a test that demonstrates significant audio improvement from RAID, especially with SSDs. I've seen maybe 10% with hard disks.


----------



## tack (Dec 19, 2018)

dlpro said:


> Anyone have a clue as to why it's still taking 1:20 to load a session?


What made you think your session load times were bottlenecked on I/O? Had you taken measurements and found you were hitting a wall with your disk performance?

If we're talking about a lot of Kontakt instances: in my own experience (from benchmarking), overall load times (to the point where the UI becomes usable, not necessarily with all background sample loading completed) are constrained by CPU, mostly single-core performance.


----------



## Dewdman42 (Dec 19, 2018)

This is one reason a lot of people waste their money on RAID. I am trying to decide myself whether to get some RAID gear (more on that in a minute).

If you read around the net you will see many articles and posts by people describing the same frustration you are having: you have this rocket-ship RAID system, and the benchmarking tool shows it's fast enough to cure cancer, but in real-world situations it doesn't seem to gain any noticeable advantage over what was there before, such as loading a game. You can find many tests on YouTube where people compare different setups and game load times, and the load times do NOT correlate inversely with the benchmarked max speeds. Not even close. At the lower levels of improvement it can make a big difference, but as your storage system gets up into rocket-ship speeds, it simply makes very little difference to real-world work with apps.

So why is that?

For one thing, the benchmarking app itself highly optimizes the way it writes or reads one long stream of data, enabling the RAID storage system to reach full transfer speed for a sustained period of time so it can be measured.

In real-world situations, such as loading a DAW project including samples, the application is not optimizing anything; in fact it's very un-optimized. It reads a series of files, one at a time, then the next plugin reads different files, then the DAW reads some stuff, then another plugin, and so on. In between all those small reads, the CPU has to do some work to prepare for what it's going to read next. So basically the CPU is getting in the way of the RAID storage ever really showing how much faster it is. In truth, at some low level the actual SSD reads are faster, but the overall time to load the project still includes a chunk of time that is hard to improve upon, no matter how fast your RAID.
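That blocking effect is easy to see with some toy arithmetic. Here's a minimal sketch (the file counts and per-read CPU costs are made-up assumptions, not measurements from any real DAW) of how a 6x faster disk can show up as a 6x faster benchmark but only a marginally faster project load:

```python
# Toy model: load time = pure transfer time + CPU work between reads.
# All numbers below are illustrative assumptions, not measurements.

def speedup(total_mb, n_reads, cpu_sec_per_read, slow_bw=400, fast_bw=2500):
    """How much faster a 2500 MB/s disk finishes than a 400 MB/s disk."""
    def load_time(bw):
        return total_mb / bw + n_reads * cpu_sec_per_read
    return load_time(slow_bw) / load_time(fast_bw)

# Benchmark: one huge streaming read, almost no CPU work in between.
bench = speedup(total_mb=50_000, n_reads=1, cpu_sec_per_read=0.001)

# Project load: say 16 GB of samples in 40,000 small reads,
# with ~2 ms of plugin/DAW bookkeeping per read.
project = speedup(total_mb=16_000, n_reads=40_000, cpu_sec_per_read=0.002)

print(f"benchmark speedup: {bench:.2f}x")      # ~6.25x, like the benchmark shows
print(f"project load speedup: {project:.2f}x") # ~1.39x, barely noticeable
```

The disk really is faster; it just spends most of the load idling while the CPU does per-file work, which is roughly the effect being described here.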

I just bought two SSDs for my 5,1 Mac Pro, which only has slow SATA 2 inside. I'm trying to decide whether to buy a Sonnet Tempo SSD card in order to RAID them and get around 1000MB/sec. The question is, will it make any difference?

I think in my case I should see an improvement over SATA 2 across the board. But SATA 2 is very slow, only 300MB/sec on a good day; in that case, SATA 2 is probably the thing blocking the CPU and causing long load times. SATA 3 will probably show not only a doubling of speed in benchmarks but real, observable reduced load times. Will they be half the time, though? Searching around the net, people have done many kinds of real-world tests loading games and such, and the answer is no, it will not be half the time. I think in my case it will be a substantial enough improvement over SATA 2 to warrant buying some kind of PCIe card for my SSDs in order to get at least SATA 3 speed.

The next question becomes: should I RAID them? Hypothetically, the Sonnet Tempo SSD Pro Plus gets 1000MB/sec in a RAID configuration; people have reported that to be the case. But will my DAW projects load in 1/4 the time of my currently slow SATA 2? I seriously doubt it. I doubt it will even be half. There is still an advantage to RAIDing them, mainly that I get one large virtual volume to store all my stuff. But I do not expect my DAW project load times to improve much beyond what a simple SATA 3 interface can do, at least on this Mac. In that case I might be better off saving the money and not getting the Pro Plus version of the card. The lesser version, for 1/3 the price, still gets 500-600MB/sec in a RAID configuration, and that range is probably where most of the load-time improvement lives. Trying to stretch beyond that will show faster benchmarks, but will not show drastic improvements to DAW project load times, nor much increased snappiness while using the OS X desktop, loading apps, etc. The CPU and other things get in the way of utilizing the full capacity of the RAID.


----------



## Dewdman42 (Dec 19, 2018)

Incidentally, that's another reason why a lot of people are losing sleep over how to get an M.2 SSD card into their Mac Pros and PCs for no good reason. They will see super fast benchmarks, but in real-world usage, most people will not notice any difference over SATA 3. They might notice a difference compared to SATA 2, and definitely compared to spinning hard disks.

All that being said, anything you can do to remove bottlenecks provides potential for system improvement at some level, even if minor. It's just a question of how much you want to spend, and whether a small improvement is worth a large investment.


----------



## Dewdman42 (Dec 19, 2018)

Regarding audio recording, there is no benefit at all to SSDs, let alone RAID; a spinning HD on SATA 2 is perfectly fast enough to record and play many audio tracks, so using RAID for that is obviously way overkill. The main thing people are seeking here is a way to load projects and sample libraries faster. Faster storage can definitely help with that, but just remember that the rest of your system can get in the way, and beyond some point it's quickly diminishing returns: how fast you make your RAID system stops translating into any real benefit.


----------



## Nick Batzdorf (Dec 19, 2018)

Dewdman42 said:


> I just bought two SSDs for my 5,1 Mac Pro, which only has slow SATA 2 inside. I'm trying to decide whether to buy a Sonnet Tempo SSD card in order to RAID them and get around 1000MB/sec. The question is, will it make any difference?



As I've posted before, I have an OWC SATA 3 card (my curiosity got the better of $40). Moving the same SSD between it and the internal bus on my 5,1 makes exactly zero difference in the real world.


----------



## Dewdman42 (Dec 19, 2018)

Including loading DAW projects? This is good to know... maybe I will forget about the Sonnet. Other people have said they noticed a difference with it, though, so I am torn. It's a lot more than $40.


----------



## chimuelo (Dec 19, 2018)

There’s another argument, about redundancy, that doesn’t apply to sample streaming at all.
With audio, when there’s a failure and another device is called upon to replace a failed sample drive, I can assure you that during the rebuild you won’t be able to stream jack shit.
Redundancy for an OS is no problem, since the bulk of the OS footprint is already loaded in RAM. It’s unlikely you would even notice the failure unless you get remote messages via text or email, as with the quality array cards offered by Microsemi.

RAID 1 for the OS is wise in mission-critical scenarios, but for samples the only card I ever saw that could still play a few instruments during a rebuild was a NetCell RAID 6 video-streaming array that required 5 devices and was quite expensive; it’s no longer even made, IIRC.

I was gung ho on a Supermicro RAID Zero 5 card using several 10k Cheetahs back in the Gigastudio days. Thousands of dollars. I was so excited, and then I simulated a crash and was so depressed. It became my video archive PC, which did save me time.
But ’twas a lesson I never forgot.

My attitude about technology since then has been if something’s working don’t fix it.


----------



## ChristianM (Dec 20, 2018)

On a Mac 5,1, the PCIe slots share the maximum rate, I think, at least the 4x slots, which cap at 1500 MB/s... so it will depend on what other cards are mounted elsewhere.
If the video card is mounted in a PCIe 4x slot, there is not much left for the disk if you run a large screen resolution. It's worthwhile, then, to mount the video card in the 16x slot.
Also: with software RAID, you gain on throughput but lose on random access time…


----------



## JohnG (Dec 20, 2018)

@Dewdman42 

I can't say I am able to get through everything you wrote, but just recently I bought a Thunderbay 4 from OWC, connected via Thunderbolt cable. Disks inside it appear to operate at about the same speed as internal drives.


----------



## Nick Batzdorf (Dec 20, 2018)

chimuelo said:


> There’s another argument, about redundancy, that doesn’t apply to sample streaming at all



This story sounds more like an argument for JBOD plus back-ups. You don’t need extra RAID hardware for that.


----------



## Dewdman42 (Dec 20, 2018)

Well, I broke down and ordered the Sonnet Tempo SSD (the lesser one, not the Pro Plus). It was either that or two OWC slide trays, which add up to more than 1/3 the price of the PCIe card, so why not. The faster Sonnet card is 2.5x the price, and I suspect it would make no real-world difference.

When I get it all in, I will run a bunch of tests with sample loading and Logic and VEP project loading, with the samples on various different HDD, SSD, SATA 2, and PCIe interfaces, and will report the results back here for anyone interested.


----------



## Nick Batzdorf (Dec 20, 2018)

ChristianM said:


> On a Mac 5,1, the PCIe slots share the maximum rate, I think, at least the 4x slots, which cap at 1500 MB/s... so it will depend on what other cards are mounted elsewhere.
> If the video card is mounted in a PCIe 4x slot, there is not much left for the disk if you run a large screen resolution. It's worthwhile, then, to mount the video card in the 16x slot.
> Also: with software RAID, you gain on throughput but lose on random access time…



The 5,1 has two 16x and two 4x PCIe slots. The total is probably cumulative, but a 4x slot is 2,000 MB/s.

I doubt you're going to saturate that bus with a 600 MB/s drive. For that matter, I doubt you're going to saturate the internal SATA 2 (300 MB/s) bus.

But maybe a nerd can straighten me out?


----------



## Dewdman42 (Dec 20, 2018)

Oh, for sure you are correct; even the faster Sonnet card will not saturate either bus on the 2010 5,1 Mac Pro. The faster PCIe card would get about 1000MB/sec in benchmarks. And another option, the Samsung SM951 M.2 AHCI card, can get to about 1500MB/sec in this computer, even in the 4x slot. So the 700MB/sec Sonnet card is not going to make my Mac's PCIe bus break a sweat at all, even while providing theoretically more than 2x the data rate of the built-in SATA 2 interface.

But nonetheless, in real-world performance, most of what we do with DAWs can't take advantage of all that high-end speed, just as the OP is complaining about. It's quite possible that even the lesser Sonnet I'm getting, in RAID mode, will not be any faster at loading sample libraries and Logic projects than the basic built-in SATA 2 interface, due to this phenomenon, but I will do disciplined testing when I get it and let you all know. It would probably load a very large sample library a little bit faster, but might not even do that, and not by much. Or maybe it will be at least some improvement and worth the $50 PCIe card, but probably not worth the $250 higher-end card.

Basically, in simplistic terms, our Macs will only go as fast as their slowest component. In the past, HDDs were some of the slowest components. An SSD, perhaps even on SATA 2, is no longer the slowest thing, so you can make it as fast as you want, but whatever is now the slowest will limit the results in real-world situations where the disk I/O is not isolated for benchmarking purposes. Like as if Steve Austin only had one bionic leg.


----------



## Nick Batzdorf (Dec 20, 2018)

The only test I did was load an EastWest Bosendorfer off the drive. It takes :10 on the internal bus or the card.

But of course a piano won't get to the point where it makes a difference, because it's not large enough to saturate either bus.


----------



## Dewdman42 (Dec 20, 2018)

Well, that’s not exactly how it works. Saturation of the bus would mean the bus itself is the limiting factor. SATA 2 is 300MB/sec, SATA 3 is double that. Neither one saturates the bus, but one is clearly faster than the other.

Yet your piano isn’t loading in less time, even though one path has double the disk speed and never runs into the bus saturation wall.

This has more to do with the fact that you have other components in your system that ARE maxing out and preventing the disk I/O from getting anywhere. Your CPU mostly, but memory buses and other things could also be factors. The disk can only go as fast as the other components will allow it.

And also, the piano is likely not in a single monolithic file that EW can slurp into memory in one long, fast SATA 3 gulp. Rather, it is reading a series of files, and in between reading each file the CPU has to do some work to figure out the next file to read, among other things, and that is the bottleneck.

Benchmarking programs are optimized to stay out of the way and isolate disk i/o so that it can go full speed without any other components blocking it. But real world apps do not do such optimizations and in many cases can’t, so disk i/o can only go as fast as the other components and application programming allow it to.


----------



## Nick Batzdorf (Dec 21, 2018)

Dewdman42 said:


> Well, that’s not exactly how it works. Saturation of the bus would mean the bus itself is the limiting factor. SATA 2 is 300MB/sec, SATA 3 is double that. Neither one saturates the bus, but one is clearly faster than the other.



Are you sure? I've always understood the bus to be wider, not actually faster.

That's any bus, whether it's FireWire, SCSI, whatever. The front side, I think it's called.


----------



## Dewdman42 (Dec 21, 2018)

Am I sure about what?


----------



## Nick Batzdorf (Dec 21, 2018)

That SATA 3 is twice as *fast* as opposed to wide as SATA 2.

In other words, how fast the traffic moves vs. how many cars can drive side-by-side on the highway.


----------



## Dewdman42 (Dec 21, 2018)

I dunno, Nick, you may be mincing words. 300MB/sec vs 600MB/sec. Sorry if the word "fast" is confusing you.

If you can theoretically move 600MB/sec from disk to memory over SATA 3, then a 600MB sample should take one second. Over SATA 2 at 300MB/sec, it should take 2 seconds. Which do you think is "faster"?

The width of the bus can also be an influence, which ultimately shows up as perceived data-transfer speed.

I'm just trying to clarify that PCI bus saturation (or not) has nothing at all to do with why your EW piano seems to take just as long to load regardless of which interface you are using.

It has to do with other components intermittently blocking the disk I/O and limiting throughput. It has to do with the apps you're using, or the specific job, which brings other components into the mix. You may find, for example, that other operations on your computer actually do feel faster with the SATA 3 card, because of the way they are optimized. Benchmark tools can certainly move a lot more data, faster, because they are optimized that way. If you had an app that reads one very large continuous file, you would also notice a difference. However, loading the EW piano is not optimized that way, so the blocking keeps it slow enough that you don't notice any difference in load time over SATA 2 vs SATA 3.

If we had 10GHz CPUs, then SATA 3 would suddenly show itself to be tremendously faster than SATA 2 in terms of the volume of data moved per second.


----------



## Gerhard Westphalen (Dec 21, 2018)

Nick Batzdorf said:


> Are you sure? I've always understood the bus to be wider, not actually faster.
> 
> That's any bus, whether it's FireWire, SCSI, whatever. The front side, I think it's called.


I could be wrong, but I believe it's actually "faster." It only transfers one thing at a time: essentially one lane of traffic. The only way to get more cars through is to have them drive faster. Having said that, I think the latency will be the same, so the time from when you ask for a file to when it has actually moved through will be the same length. If the bus is twice as fast, transferring something that doesn't saturate it won't transfer faster.
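The latency point can be put into rough numbers. A sketch, with assumed figures (the ~100 µs of fixed access overhead per request is a ballpark for a SATA SSD, not a measured value), of why doubling the line rate barely helps small reads:

```python
# Per-request time ~= fixed access latency + payload size / line rate.
# The 100 us latency figure is an assumption for illustration.

def request_time_us(size_kb, rate_mb_s, latency_us=100):
    transfer_us = size_kb / 1024 / rate_mb_s * 1_000_000
    return latency_us + transfer_us

small = 4          # one 4 KB random read
big = 64 * 1024    # one 64 MB streaming read

# Compare SATA 2 (300 MB/s) against SATA 3 (600 MB/s) for each case.
small_gain = request_time_us(small, 300) / request_time_us(small, 600)
big_gain = request_time_us(big, 300) / request_time_us(big, 600)

print(f"4 KB read:  SATA 3 is {small_gain:.2f}x faster")  # ~1.06x
print(f"64 MB read: SATA 3 is {big_gain:.2f}x faster")    # ~2.00x
```

Under these assumptions the big streaming read gets nearly the full 2x, while the tiny latency-dominated read barely notices the faster bus.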


----------



## Nick Batzdorf (Dec 21, 2018)

Dewdman42 said:


> Sorry if the word "fast" is confusing you.



I'm actually quite comfortable with what the word "fast" means. Are you familiar with the word "patronizing?" 

Gerald gets what I'm saying, but here's the obligatory car analogy.

Say there's a traffic light at an onramp to a highway:

1. Four cars enter the 4-lane highway, and they all go 60MPH (that's the speed limit). It's going to take them an hour to travel 60 miles.

2. Four cars enter the 6-lane highway, and they all go 60MPH (that's the speed limit). It's going to take them an hour to travel 60 miles.

The two extra lanes don't matter.

3. Six cars enter the 6-lane highway, and they all go 60MPH (that's the speed limit). It's going to take them an hour to travel 60 miles.

4. Six cars want to enter the 4-lane highway, but only four of them can enter at a time. Two of them are delayed by the traffic lights.

The two extra lanes do matter.

So while the 6-lane highway can accommodate more traffic before it gets jammed, the journey takes the same amount of time as the 4-lane one if the cars are carrying sample data. And - here's where the analogy gets silly - my argument is that it *has* to take the same amount of time in order to merge with the other cars onto Highway 3.46 heading to the Hamlet of Processor-on-Computer.


----------



## tack (Dec 21, 2018)

You're describing bandwidth and latency. We pretty commonly equivocate on the word "fast" to mean either or both, depending on context. The interaction of both those things is called the bandwidth-delay product, and this is what Dewdman was getting at: if you spend a lot of time roundtripping between disk and processing the data (during which time you leave I/O idle), your effective throughput is heavily constrained, even though neither CPU nor disk look like they're breaking a sweat.

(They are breaking a sweat though, for _very_ brief periods, but at the macro level of our performance monitor's 1 second (or whatever) sample interval, it all gets averaged away and ends up looking like a very bored system with some inscrutable bottleneck.)

Extending the car analogy, there's another aspect to bandwidth apart from the number of lanes: it's the size of the car's trunk. As renowned nerd Andrew Tanenbaum famously said, "Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway."

Speed limit (latency) being equal, the amount of luggage you can transfer (throughput) is a function both of the number of cars and lanes (parallelism) and the amount you can cram into a single car (bandwidth, or line transfer rate in this case). So while the number of lanes on the highway is an indicator of the amount of parallelism you can achieve, indeed, if you just have one car, the number of lanes doesn't matter. But if you have a station wagon instead of a cube car, suddenly you can move a lot more dat--err, luggage.

In the case of SATA, the _S_ gives us the answer: serial. Like Geraldhard said, the evolution of speed from SATA 2 vs SATA 3 isn't achieved through increased parallelism (lanes) but through an increase in line transfer rate. Because we're running the lines at higher frequencies to achieve greater bandwidth, at some level our latency is improved too. But at the protocol level, we're dealing with so many other layers that I'm not sure there's an appreciable improvement in latency with SATA 2 vs SATA 3. Anyway, at this point our car analogy starts to fall apart. Especially once you factor in other layers in the overall system (like the PCH, and HSIO lanes in the case of Intel).

So SATA 3 does let us move data faster without increasing parallelism, but as everyone on this forum has experienced, faster I/O provides rapidly diminishing returns because of the amount of roundtripping involved by our VSTs.
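To put a rough number on that roundtripping cost, here's a sketch of effective throughput when reads and processing strictly alternate. The chunk size and per-chunk processing time are invented for illustration, not taken from any real plugin:

```python
# Effective throughput when each roundtrip is: read a chunk, then
# process it while the disk sits idle. All figures are illustrative.

def effective_mb_s(chunk_kb, disk_mb_s, process_us_per_chunk):
    chunk_mb = chunk_kb / 1024
    read_s = chunk_mb / disk_mb_s                  # time the disk is busy
    process_s = process_us_per_chunk / 1_000_000   # time the disk is idle
    return chunk_mb / (read_s + process_s)

# 64 KB chunks with 500 us of decode/bookkeeping per chunk:
sata = effective_mb_s(64, 550, 500)    # a fast SATA 3 SSD
nvme = effective_mb_s(64, 2800, 500)   # an NVMe RAID 0

print(f"SATA 3: {sata:.0f} MB/s effective")  # ~102 MB/s
print(f"NVMe:   {nvme:.0f} MB/s effective")  # ~120 MB/s
```

Five times the raw disk speed buys under 20% effective throughput here; only cutting the per-chunk overhead, or overlapping I/O with processing, changes the picture much.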

Sidebar: in the case of Kontakt, after the patch is finished its initial load (past the point where the GUI is blocked), NVMe vs SATA does improve the amount of time it takes to pull those samples into memory. NVMe is barely driving enough throughput to saturate SATA 3, but the bandwidth-delay product strikes again: it's faster because access times (latency) are appreciably better with NVMe, so we can roundtrip faster, increasing our effective throughput.


----------



## Dewdman42 (Dec 21, 2018)

Interestingly, my understanding is also that SSDs in general are largely perceived as faster (in terms of responsiveness) than HDDs because of lower latency more than anything. I see what you mean about how the faster responsiveness also minimizes the context-switching time and thus improves throughput and the overall perception of speed.

It's possible that an SM951 as a boot drive, with apps and such on it, might gain some very small responsiveness improvements over a SATA 2/3 SSD. Using SM951s for sample libs is not practical, for me anyway, so I guess SATA SSD speeds are what we get for now for loading projects and sample libs on 5,1 Mac Pros.


----------



## Nick Batzdorf (Dec 21, 2018)

tack said:


> You're describing bandwidth and latency



I'm describing bandwidth. Latency would require more extensive details to my riveting analogy, because it's a measure of how long it takes to fill the trunk.

The trunk size... actually I don't know which of several things determine how much data gets picked up with each read - the disk block size, the number of processing bits (i.e. 32 vs. 64), and so on. But data get read from a buffer, of course, and that buffer is ultimately the trunk size.

The question comes down to one thing: whether Gerhard is right that the clock frequency of the bus is doubled in SATA 3 vs. SATA 2. If so, there's your answer.


----------



## Gerhard Westphalen (Dec 22, 2018)

Nick Batzdorf said:


> I'm describing bandwidth. Latency would require more extensive details to my riveting analogy, because it's a measure of how long it takes to fill the trunk.
> 
> The trunk size... actually I don't know which of several things determine how much data gets picked up with each read - the disk block size, the number of processing bits (i.e. 32 vs. 64), and so on. But data get read from a buffer, of course, and that buffer is ultimately the trunk size.
> 
> The question comes down to one thing: whether Gerhard is right that the clock frequency of the bus is doubled in SATA 3 vs. SATA 2. If so, there's your answer.



I just looked up some details of SATA (which were surprisingly hard to find), and it is completely serial ("Serial ATA"), as opposed to the parallel interface it was designed to replace. The speed is synonymous with its clock rate: you can say SATA II has an effective rate of 300MB/s, so SATA III is a doubling of the clock rate.

What doesn't double is the latency side of things, which is where SSDs excel.
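For what it's worth, the usable figures fall straight out of the line rates: SATA 2 signals at 3 Gbit/s and SATA 3 at 6 Gbit/s, and 8b/10b encoding spends 10 line bits for every data byte:

```python
# SATA line rate -> usable bandwidth: 8b/10b encoding means each
# data byte costs 10 bits on the wire.
line_gbit = {"SATA 2": 3.0, "SATA 3": 6.0}
usable_mb_s = {gen: gbit * 1e9 / 10 / 1e6 for gen, gbit in line_gbit.items()}
print(usable_mb_s)  # {'SATA 2': 300.0, 'SATA 3': 600.0}
```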


----------



## Nick Batzdorf (Dec 22, 2018)

^ Then there you have it.

And yeah, their ability to find and get data in the blink of an eye is definitely what makes SSDs so revolutionary.

I've posted it before and I'll post it again: for years we replaced computers every 2-3 years, and for a couple of days the new machine would feel "snappier" - until that wore off.

Just replacing the system drive with an SSD is *way* more of an upgrade than a new machine ever was. Night and day.


----------



## dlpro (Dec 23, 2018)

After a few days of testing, there is absolutely no difference between the two setups. The 400 MB/s SSDs and the 2800 MB/s PCIe RAID are identical when loading VEP instruments/libraries. Certainly not worth the pricey investment. If you're moving files from one drive to another, there is definitely an improvement, but that's where it ends. I was expecting to see a 30% difference when launching my sessions. I'm sending everything back after the holidays.


----------



## Fermile (Dec 23, 2018)

dlpro said:


> After a few days of testing, there is absolutely no difference between the two setups. The 400 MB/s SSDs and the 2800 MB/s PCIe RAID are identical when loading VEP instruments/libraries. Certainly not worth the pricey investment. If you're moving files from one drive to another, there is definitely an improvement, but that's where it ends. I was expecting to see a 30% difference when launching my sessions. I'm sending everything back after the holidays.



It is a Kontakt issue. I believe they could better utilize the RAM to be faster.
I recently sent them a suggestion to at least support loading 2 samples concurrently if they are on 2 separate disks.
Hopefully someday there will be fewer bottlenecks in it.

BTW, you all should chip in; maybe they'll change their feature priorities with a critical mass of requests.


----------



## chimuelo (Dec 23, 2018)

The speed of SSDs for loading is fantastic these days.
Akin to getting a dual processor back when things were slow.

I’m really very happy with Windows 8.1 and a Samsung 950 SSD for OS + apps.
I’m told Windows takes a RAM snapshot when closing that makes reopening faster.
I’m working on a Windows 10 rig atm and it’s also pretty nice now that the dust is starting to settle on W10 tweaks and “fixes.”
Seems 10 also works on some sort of snapshot too.
It started at 80-90 seconds, and after a month it’s 10 seconds or so; the audio interface I use loads up DSP-based mixers automated via TouchOSC or, in my case, MIDI.
No native or DSP-based FX now, just DSP mixers and Strymon pedals for each aux channel.
I was in heaven this weekend playing.
A storm caused a power failure so I had to reboot once, and it was as fast as lightning...

Mixers and DSP settings take 20 seconds, VST Host with 9 GBs of RAM used takes about 30 seconds.

I keep trying to find decent Mixers in UAD, CueMix, TotalMIX, etc.
Automation is incomplete and doesn’t route to external hardware without some phasing or sound-quality loss, so I’m stuck with Scope until it’s no longer supported.


----------



## dlpro (Dec 23, 2018)

So now that we know 2800 MB/s has absolutely no advantage over 400 MB/s when loading libraries, are there any other valid reasons why I should keep the 4M2/PCIe setup?


----------



## Dewdman42 (Dec 23, 2018)

Personally, I would return it if that’s still an option. High-performance RAID like that simply won’t have much advantage for DAW use. Or you can keep it and enjoy how fast you can copy and unzip files, but that’s hardly worth the cost.


----------



## dlpro (Dec 23, 2018)

That's my intention. I have a couple of weeks to return everything. I just thought there might be some advantages.


----------



## Mishabou (Dec 23, 2018)

dlpro said:


> After a few days of testing, there is absolutely no difference between the two setups. The 400 MB/s SSDs and the 2800 MB/s PCIe RAID are identical when loading VEP instruments/libraries. Certainly not worth the pricey investment. If you're moving files from one drive to another, there is definitely an improvement, but that's where it ends. I was expecting to see a 30% difference when launching my sessions. I'm sending everything back after the holidays.





I compose in PT for my own stuff and Cubase 10 when collaborating with others.

My template in PT uses track presets and in Cubase disable tracks.

With a Samsung SSD (400 MB/s read/write), I can load 20 instrument tracks (one articulation per instrument) of EWHO Diamond first violins in 12s; the same procedure takes 9s with a 2-disk RAID 0 and 6s with a 4-disk RAID 0. The disks reside in a Blackmagic dock.

In my case, raid makes a huge difference.


----------



## dlpro (Dec 23, 2018)

One of my smaller PT templates is 130 instruments. Both the 2800 and 400 MB/s setups end up at a 1:20 load time. I can probably load 20 instruments in a matter of seconds too. Try creating a bigger template, say 100-plus instruments.


----------



## Mishabou (Dec 23, 2018)

Please read my post... I use Track Presets in Pro Tools. My template is over 4500 instruments; I load/unload what I need. The PT session loads in less than 5 sec. I LOVE this workflow!


----------



## dlpro (Dec 23, 2018)

This thread isn't a workflow comparison. I use Track Presets, Freeze, Commit, unload/load, etc. Fun, all of it, but this is a comparison between 2 different loading speeds (400 vs 2800 MB/s) using the same session, same two computers, same everything, and there's no difference.


----------



## Mishabou (Dec 23, 2018)

I thought my post was pretty clear, as I've shown exactly that... disk speed via RAID 0 makes a big difference in my case.

Comparison between 3 different loading speeds, 1 SSD @ 400 MB/s vs 2 x SSD (RAID 0) @ 750 MB/s vs 4 x SSD (RAID 0) @ 1200 MB/s, using the same computer, same session, loading exactly the same sets of instruments: I get 12 sec (1 SSD) vs 9 sec (2 SSD RAID 0) vs 6 sec (4 SSD RAID 0).

I have a couple of M.2 PCIe drives; I will try them and report back, but so far 4 x SSD RAID 0 has shaved my loading time in half... huge difference.


----------



## dlpro (Dec 23, 2018)

Yes, I completely understand your results. But this thread clearly states that I'm not seeing that advantage. And by the way, if you had posted that a one-minute session loaded in 6 seconds, I'd be pretty impressed.

My session loads in 1:20. Spending $3,600 on the latest and greatest for a 6-second load reduction wouldn't be a great investment. Hell, I'm not even seeing that. If I were able to load my session in 20-30 seconds, I'd reconsider the new setup.


----------



## Mishabou (Dec 23, 2018)

Alright just did a quick test with two Samsung 970 EVO 2 TB M.2 NVMe via OWC 4 slot enclosure (Raid 0).

Loading the exact same 5-min session in surround with about 25 VIs and 45 to 60 audio tracks, with 1080p24 video in ProRes via a Kona 3 card:

- Session and audio files reside on one SSD (400 MB/s), VI samples on 4 x SSD RAID 0 (1200 MB/s): loading time is a tad under 25 seconds.

- Session and audio files reside on one SSD (400 MB/s), VI samples on 2 x M.2 NVMe RAID 0 (2400 MB/s): loading time is 16 sec and change.


----------



## dlpro (Dec 23, 2018)

Sounds like you got yours working. I, and others I've spoken to, are not seeing the benefits. Without going back through your other posts, are you on Mac or PC? I have an iMac Pro (10 cores) and a 2013 Mac Pro (8 cores).

I'm loading 130 instruments by way of VEP. Nothing in PT. At both 400 and 2800 MB/s, the load time is 1:20.


----------



## Mishabou (Dec 23, 2018)

Just loaded another 5 min session in surround that has 108 instruments and 59 audio tracks, video is on another machine running Video Slave.

Session and audio files reside on one SSD (400 MB/s), VI samples on 4 x SSD Raid 0 (1200 MB/s), loading time is 55 sec.

Session and audio files reside on one SSD (400 MB/s), VI samples on 2 x M.2 NVMe Raid 0 (2400 MB/s), loading time is 36 sec.


----------



## Mishabou (Dec 23, 2018)

I'm using a 12-core nMP with 128 GB RAM. Thunderbolt port 1 has a Blackmagic Dock with 4 x SSD, port 2 has the M.2 enclosure, and port 0 has a LaCie SSD for my sessions.


----------



## dlpro (Dec 23, 2018)

I'm using (2) OWC 4M2/PCIe enclosures via TB3 on the iMP and (1) 4M2/PCIe on the 2013 MP. When I run the Blackmagic Speed Test, the 4M2 connected to the iMP reads 2500-2600 MB/s. On the MP, 1200, due to the limitations of TB2.

So you're not using VEPro?


----------



## Mishabou (Dec 23, 2018)

I do have VEP Pro on two other nMPs, but since switching to Track Presets I haven't needed it.

By the way, Thunderbolt 2 is 20 Gbps = 2500 MB/s, and I'm getting 2200-2400 MB/s on my 2013 MP.


----------



## dlpro (Dec 23, 2018)

Looks like we have two different setups here. Maybe VEP is the culprit. Then again, I couldn't load my sessions in PT alone without having to reconfigure everything. Still interesting, nonetheless.


----------



## Dewdman42 (Dec 23, 2018)

I believe VEP is a major factor. I would be interested to see if the OP can test some project load times with and without RAID using just the DAW, no VEP.


----------



## dlpro (Dec 23, 2018)

I'll see what I can do after the 25th.


----------



## Mishabou (Dec 23, 2018)

Just fired up VEP Pro for a simple test.

I created a new VEP Pro session with one instance holding 50 PLAY VIs (1 patch per instrument):
- When the samples are on my 4 x SSD RAID (1200 MB/s), it loads in 38 sec.
- When the samples are on the OWC 4M2 PCIe (2400 MB/s), it loads in 23 sec.


----------



## Dewdman42 (Dec 23, 2018)

The 4M2 is probably outperforming the SATA SSD RAID due to lower latency more than overall throughput. A better comparison would be a single SSD vs. a RAID of the same SSDs, and likewise a single M.2 against RAIDed M.2 drives.
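The latency-vs-throughput distinction can be seen with a crude micro-benchmark: one large sequential read (what Blackmagic-style benchmarks measure) versus many small scattered reads (closer to how sample patches load). This is only an illustrative sketch; OS file caching makes it unfair as a true disk test unless caches are purged between runs:

```python
# Crude illustration: sequential throughput vs. small random reads.
# OS caching skews this as a real disk test; it only shows that the
# two access patterns are measured very differently.
import os
import random
import tempfile
import time

def make_test_file(size_mb=16):
    """Write a temp file of random bytes and return its path."""
    f = tempfile.NamedTemporaryFile(delete=False)
    f.write(os.urandom(size_mb * 1024 * 1024))
    f.close()
    return f.name

def time_sequential(path):
    """One big read, like a benchmark's sequential pass."""
    start = time.perf_counter()
    with open(path, "rb") as fh:
        data = fh.read()
    return time.perf_counter() - start, len(data)

def time_random(path, n=1000, chunk=4096):
    """Many small reads at random offsets, latency-bound."""
    size = os.path.getsize(path)
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as fh:
        for _ in range(n):
            fh.seek(random.randrange(0, size - chunk))
            total += len(fh.read(chunk))
    return time.perf_counter() - start, total

path = make_test_file()
seq_t, seq_bytes = time_sequential(path)
rnd_t, rnd_bytes = time_random(path)
print(f"sequential: {seq_bytes / seq_t / 1e6:.0f} MB/s")
print(f"random 4K : {rnd_bytes / rnd_t / 1e6:.0f} MB/s")
os.unlink(path)
```

On real hardware with cold caches, the random-read figure comes out far below the sequential one, and it is the random figure that small-patch loading resembles.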


----------



## Dewdman42 (Dec 23, 2018)

For a real-world test, it's also important that if you set up 50 channels, they should not be loading the same samples, to keep caching from affecting the results.


----------



## dlpro (Dec 23, 2018)

But the idea is to have a true comparison using the same session, files, samples, etc...


----------



## Dewdman42 (Dec 23, 2018)

What I mean is: don't set up a DAW project with 50 channels loading the same sampled instrument. Set up a test project with 50 unique channels using different samples on each channel, then reuse that same project with the different hardware configurations. It's not out of the question to reboot between tests to ensure caching plays no part in the comparison.
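That protocol can be wrapped in a small timing harness. Here `load` is a hypothetical placeholder for whatever actually opens the project; dropping the file cache between trials (e.g. `sudo purge` on macOS, or a reboot) is left to the operator:

```python
# Minimal repeatable timing harness for load-time tests. `load` is a
# hypothetical placeholder for the real project-loading step; purging the
# file cache between trials (e.g. `sudo purge` on macOS) or rebooting is
# what makes the disk, rather than RAM, the thing being measured.
import statistics
import time

def timed_trials(load, trials=3):
    """Run load() `trials` times and return per-trial wall-clock seconds."""
    times = []
    for _ in range(trials):
        start = time.perf_counter()
        load()
        times.append(time.perf_counter() - start)
    return times

# Demo with a dummy workload standing in for a session load:
results = timed_trials(lambda: sum(range(200_000)))
print(f"median of {len(results)} trials: {statistics.median(results):.4f} s")
```

Reporting the median of several trials also smooths out one-off hiccups that a single stopwatch reading would miss.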


----------



## Nick Batzdorf (Dec 23, 2018)

You know what I'm getting from this?

That we've already spent more time discussing the load times than we would have spent waiting for templates to load in several lifetimes.


----------



## Nick Batzdorf (Dec 23, 2018)

Seriously, my big template is everything I could possibly want to load and then some. It fills between 32-40GB of RAM (depending on macOS's mood), and just now it took 2:03 to be ready to record. I run Logic with a 128-sample buffer.

Does it really matter whether you can knock :30 off the loading time by spending a few hundred dollars on whetstones dry megaflop 23000 multicore frame rate gaming edition?

I'm way too busy measuring my navel with scientific test equipment to give a flying hoot!


----------



## Mishabou (Dec 23, 2018)

Dewdman42 said:


> For a real-world test, it's also important that if you set up 50 channels, they should not be loading the same samples, to keep caching from affecting the results.



I'm very aware of that, and no, they are not loading the same samples...


----------



## dlpro (Dec 23, 2018)

Dewdman42 said:


> What I mean is: don't set up a DAW project with 50 channels loading the same sampled instrument. Set up a test project with 50 unique channels using different samples on each channel, then reuse that same project with the different hardware configurations. It's not out of the question to reboot between tests to ensure caching plays no part in the comparison.



Makes sense.


----------



## dlpro (Dec 23, 2018)

Nick Batzdorf said:


> Seriously, my big template is everything I could possibly want to load and then some. It fills between 32-40GB of RAM (depending on macOS's mood), and just now it took 2:03 to be ready to record. I run Logic with a 128-sample buffer.
> 
> Does it really matter whether you can knock :30 off the loading time by spending a few hundred dollars on whetstones dry megaflop 23000 multicore frame rate gaming edition?
> 
> I'm way too busy measuring my navel with scientific test equipment to give a flying hoot!



I agree, but like I said in my original post, 1:20 isn't that long in the grand scheme of things; the issue is that I go between many sessions in a given day. I'm trying to find a quick-load solution. Maybe there isn't one, and I'll have to find an alternative way of loading these sessions.


----------



## Mishabou (Dec 23, 2018)

Nick Batzdorf said:


> Seriously, my big template is everything I could possibly want to load and then some. It fills between 32-40GB of RAM (depending on macOS's mood), and just now it took 2:03 to be ready to record. I run Logic with a 128-sample buffer.
> 
> Does it really matter whether you can knock :30 off the loading time by spending a few hundred dollars on whetstones dry megaflop 23000 multicore frame rate gaming edition?
> 
> I'm way too busy measuring my navel with scientific test equipment to give a flying hoot!



Your needs/workflows are obviously different. For professionals who have to deal with tight deadlines on a daily basis, time is everything.

I'm collaborating with a composer who runs a Cubase template with over 3,500 tracks, all disabled. He enables/loads the instruments as needed. In this case, loading time is everything. Being able to shave a second or two each time he enables an instrument makes a huge difference at the end of the day.

Of course, he could preload everything and be done with it, and in that case I agree with you: a few extra seconds or minutes, who cares, as he'd only have to do it once. The downside is that he would need at least 5 slaves, each with 128 GB of RAM, use VEP Pro, and deal with a multi-computer setup.


----------



## dlpro (Dec 23, 2018)

I get what Nick is saying. I compose music for many TV shows, so I like to pre-load my template instruments so that I can go from one sound to another without having to load disabled sounds. Everyone's got their way of working. I think we all get that.

I just can't figure out why my new setup isn't giving me those extra seconds...


----------



## Mishabou (Dec 23, 2018)

dlpro said:


> I get what Nick is saying. I compose music for many TV shows, so I like to pre-load my template instruments so that I can go from one sound to another without having to load disabled sounds. Everyone's got their way of working. I think we all get that.
> 
> I just can't figure out why my new setup isn't giving me those extra seconds...



Well, Nick's needs are obviously different from those of guys who use/need access to lots of sounds. There's just no way I could preload even one third of my libraries with 40 GB of RAM... man, I wish!!!

When I was using my two VEP Pro rigs, I managed to preload most of my sounds with 192 GB of RAM, but as I added more libraries, this workflow proved too expensive.


----------



## dlpro (Dec 23, 2018)

Don't feel so bad. I have 64GB times 2 and I still can't pre-load all my libraries.


----------



## Nick Batzdorf (Dec 23, 2018)

Mishabou said:


> Your needs/workflows are obviously different. For professionals who have to deal with tight deadlines on a daily basis, time is everything.



Ah, that explains it. Professionals.


----------



## Dewdman42 (Dec 27, 2018)

So I got my Tempo SSD RAID card and ran a few tests with a RAIDed set of MX500s, using a 16K block size in the RAID configuration. As expected from this thread, no real-world improvement. I will probably return it even though it only costs $50. Might as well stick to the simple SATA2 bus since there's no improvement; the drives can still be RAIDed into one larger virtual volume that way too if I want. The Tempo SSD card is just not providing significant real-world benefit.

Benchmarks were very positive: 750 MB/s sustained read speed compared to 250 MB/s over normal non-RAID SATA2.
VEP loaded a 93-channel VSL template only 2 seconds faster than the single SATA2 drive: 2 seconds off a 3-minute load time. Hardly a win.
File copying across drives is definitely way faster with the RAID.
Loading single patches into Vienna Instruments Pro does not seem any faster; it's hard to get exact numbers, and with small preload settings they load pretty fast anyway.
Watching my CPU core meter while loading the 93-channel template in VEP, it only uses one or two cores at a time, which tells the whole story: VEP is not parallelizing the process of reading from disk, so the disk can't reach its full potential.
Perhaps other applications might show some improvement, but I haven't really noticed any difference other than zipping files, unzipping files, and copying large files around.
I am probably going to send this Tempo SSD card back. Even though it's only $50 more than the sleds I'd need to buy to put the same two SSDs into my Mac Pro, I am seeing no real gain, mostly due to the way VEP is coded, I reckon. Maybe if I tested more stuff I would eventually find something that loads faster, but generally I'm just not seeing any gain whatsoever in terms of load times.

It is remotely possible that the RAID would let me lower the preload settings in VEP to very small numbers, so that loading a template takes less time, and then hope the RAIDed SSDs can stream a little better. But that is theoretical, and I don't have time to test that comparison.
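The one-or-two-cores observation is the crux. As a sketch of why that caps disk utilization (my own illustration, not how VEP is actually implemented), compare a loader that reads files one at a time from a single thread with one that issues the same reads from a thread pool:

```python
# Serial vs. parallel file reads. A single-threaded loader (read_serial) has
# at most one outstanding I/O request at a time; a thread pool (read_parallel)
# keeps several in flight, which is what fast NVMe storage needs to approach
# its rated speed. Illustration only; real sample loaders also burn CPU
# decoding and initializing each patch.
import pathlib
from concurrent.futures import ThreadPoolExecutor

def read_serial(paths):
    """Read each file in turn; returns total bytes read."""
    return sum(len(pathlib.Path(p).read_bytes()) for p in paths)

def read_parallel(paths, workers=8):
    """Read files concurrently from a thread pool; same total bytes."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(lambda p: len(pathlib.Path(p).read_bytes()), paths))
```

With cold caches and dozens of real sample files, the pooled version keeps multiple requests in flight and gets much closer to an NVMe drive's rated throughput, while the serial version is bound by per-request latency, which is roughly what the single-core load behavior predicts.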


----------



## JohnG (Dec 27, 2018)

Nick Batzdorf said:


> Ah, that explains it. Professionals.



If only.


----------



## dlpro (Dec 27, 2018)

I came to that conclusion again after running a few tests yesterday. I was going to run a PT-only test too, but that's not how my system is configured, and I wasn't about to do extra testing with no benefit. If a 2800 MB/s setup shows absolutely no benefit over 400 MB/s, then what's the point in keeping a $3,600 setup? Obviously, Avid, Vienna, etc. have a lot of work to do to optimize their apps to fully take advantage of the latest speeds. I expected the new setup to cut load time in half. I didn't even see a 0.1% improvement.

I packed everything up and am returning it for a full refund.

On the other hand, this looks interesting and the price is very reasonable. I don't think these will help cut load time by any percentage, but for $380, that's a lot of SSD space.

https://eshop.macsales.com/item/Mic...37298715&_bta_c=expmsozx642pzji5wx1jn3c9lyab2


----------



## Dewdman42 (Dec 27, 2018)

That is not a SATA drive, FYI

I got 2 x 2TB SATA SSDs in the Black Friday sales for about that much money. I will RAID them together with JBOD to make one large volume.

It's obvious to me that VEP and DAWs could do more to do things in parallel, get more threads going across cores, etc., and perhaps get better load times, but I guess it's not a big priority.

I'm not sure if I am missing something in my testing. I have a 93-channel VEP/VSL template that takes 3 minutes to load no matter what I do, and that's also how long it took from HDD. Huh? I used Directory Manager to change the location of the VSL libraries, but somehow it still took just as long from HDD, SATA2, or SATA3 RAID.


----------



## dlpro (Dec 27, 2018)

Yes, I know they're not SATA. Still wondering how they'd perform under heavy load.


----------



## Mishabou (Dec 28, 2018)

Dewdman42 said:


> That is not a SATA drive, FYI
> 
> I got 2 x 2TB SATA SSDs in the Black Friday sales for about that much money. I will RAID them together with JBOD to make one large volume.
> 
> ...



Is there any way you can perform a loading test directly in PT and/or CB? Try loading a preset of 20 different instruments (Kontakt and/or Play) and see if you notice a big difference between RAID and a single SSD.

I know the OP uses VEP Pro; personally I don't anymore, and via Track Presets I'm getting a big improvement with RAID vs. a single SSD.

Thx


----------



## Dewdman42 (Dec 28, 2018)

I've already pulled the RAID out, boxed it up, and reformatted the drives as non-RAID. Sorry, I should have done a direct DAW test to see if there was any difference, but I was tired of working on it, and my situation will always use VEP, so for me it's moot.


----------



## JohnG (Dec 28, 2018)

Mishabou said:


> I'm getting a big improvement with RAID vs. a single SSD



Pardon me -- I don't mean to contradict, but I have never seen a methodologically sound (get it? "sound") example of RAID improving audio performance by more than about 10%. It would be interesting to hear your method and results, including the kind of setup you have.

A couple of my PCs have four or more SSDs, each used as an individual drive, not RAID. Leaving aside the risk that some RAID configurations create, I would be curious to learn whether there's a substantial benefit from a RAID setup. It could, depending on one's situation, make updating more straightforward.


----------



## Mishabou (Dec 28, 2018)

JohnG said:


> Pardon me -- I don't mean to contradict, but I have never seen a methodologically sound (get it? "sound") example of RAID improving audio performance by more than about 10%. It would be interesting to hear your method and results, including the kind of setup you have.



We're talking about the loading times of different patches in Kontakt and/or Play, not audio performance.

For audio recording, editing, playback, etc., any non-RAID SSD is more than enough.


----------



## Mishabou (Dec 28, 2018)

Dewdman42 said:


> I’ve already pulled the raid out and boxed it up and reformatted the drives as non raid. Sorry I should have done a direct daw test to see if any difference but I was tired of working on it and my situation will always use vep so for me it’s moot.



No problem, thanks for getting back.


----------



## Nick Batzdorf (Dec 29, 2018)

JohnG said:


> RAID improving audio performance more than about 10%.



10% could be meaningful if you're hitting any bottlenecks with your drive system, but I haven't heard of it improving audio performance that much.

The load times argument... it all just seems like such a waste of money to me.


----------



## JohnG (Dec 29, 2018)

Nick Batzdorf said:


> 10% could be meaningful if you're hitting any bottlenecks with your drive system, but I haven't heard of it improving audio performance that much.



You're probably right. I may have misremembered, and it's less than 5%.

The point is, many of these "solutions" are very expensive and deliver marginal improvement. For the same amount of money, one could instead buy an extra PC slave, offload percussion or strings to the new PC, and dramatically increase speed.


----------



## Dewdman42 (Dec 29, 2018)

Generally, finding ways to use VEP decoupled is probably the best way to avoid sitting around waiting for projects and sample libraries to load. I also haven't been able to see any improvement in load times. I actually reinstalled my RAID last night and tried some tests directly in LPX with 20 channels of Kontakt and PLAY instruments: absolutely no difference in project load times between a single SATA2 SSD and the same samples on the SATA3 RAID (Tempo SSD).

I am probably going to keep the Tempo SSD card at this point because it's not that expensive. The two drive sleds I would need for my two new SSDs cost half as much as the Tempo SSD card, so for essentially about $40 extra I can have it installed as a SATA3 RAID for whatever micro-improvements or breathing room it might provide. Otherwise, I probably wouldn't spend even $90 on that card for what we do; there is not much benefit. Maybe none.

The one place it MIGHT make a difference is when playing back projects. In that case parallelism is used, and playback might be able to take more advantage of faster storage than project loading does. That includes not only audio tracks but also sampler disk streaming and the like. But in the few tests I did, I couldn't see any appreciable difference. At some level there is probably more breathing room in the system, but I really didn't notice any difference at all.

To the person who says they are loading projects faster: I would like to hear more specifics. I'm just not seeing it, and if you google around you can find countless people talking about benchmarking improvements and countless people also pointing out that RAID simply does not benefit most desktop users, including DAW users. It might have made a difference back in the HDD days, but now with SSDs I don't even see it making a 10% difference, and even if it does, it's hardly worth the high cost.


----------



## Mishabou (Dec 29, 2018)

Dewdman42 said:


> To the person who says they are loading projects faster: I would like to hear more specifics. I'm just not seeing it, and if you google around you can find countless people talking about benchmarking improvements and countless people also pointing out that RAID simply does not benefit most desktop users, including DAW users. It might have made a difference back in the HDD days, but now with SSDs I don't even see it making a 10% difference, and even if it does, it's hardly worth the high cost.



I'm the person who said RAID SSDs made a difference for my workflow.

For an upcoming gig, I need access to a huge number of sounds (close to 10K): anything from sound FX libraries and custom-made sounds to VI patches.

After weeks of testing different templates and workflows in Logic, CB, and PT, I ended up choosing Pro Tools for its superior database/browser capabilities.

My PT session has all the routing and FX preconfigured and opens in less than 10 sec, and saving is instantaneous. All my sounds and VI patches are stored within PT's Workspace (similar to CB's MediaBay), and I simply drag and drop onto the timeline what I need.

As I am constantly trying out different sounds/patches, loading time is crucial, and I found that RAID 0 SSDs have reduced it by at least 30-40% compared to a non-RAID SSD.

It's important that we take workflow into consideration when talking about loading time. If your session has hundreds of patches (with or without VEP Pro) and only needs to be loaded once, then whether it takes 30 sec or 2 minutes, who cares. In my case, via Track Presets, I constantly load/delete patches throughout my session.

Anyway, my test rig is a 12-core nMP (cylinder), 128 GB RAM, 2 x Blackmagic Thunderbolt Docks, each with 4 x 2TB Samsung SSDs in RAID 0, 2 x BenQ 4K monitors, Focusrite RedNet PCIe, DAD AX32, Avid S6, AJA Kona 4; backup is on LTO-8. DAWs tested were CB 10, LPX 10.4.3, and PT Ultimate 2018.12.

For the upcoming gig, I will consolidate everything onto a single custom HP Z workstation.


----------



## JohnG (Dec 29, 2018)

Mishabou said:


> As I am constantly trying out different sounds/patches, loading time is crucial, and I found that RAID 0 SSDs have reduced it by at least 30-40% compared to a non-RAID SSD



Honestly, what you are describing is, first off, a pretty unusual situation for most composers. Second, that's a really very expensive setup. Third, you're reporting far more improvement than I've ever seen reported by anyone using RAID 0 for audio.

So hats off to you. Maybe we will all have a coffee or a beer some time and you can tell us more about it.


----------



## dlpro (Dec 30, 2018)

Mishabou said:


> Is there any way you can perform a loading test directly in PT and/or CB? Try loading a preset of 20 different instruments (Kontakt and/or Play) and see if you notice a big difference between RAID and a single SSD.
> 
> I know the OP uses VEP Pro; personally I don't anymore, and via Track Presets I'm getting a big improvement with RAID vs. a single SSD.
> 
> Thx



I did the test yesterday using nothing but PT and 20 CFX Pianos. Both the 400 and 2800 MB/s load in 13 seconds.

I'm going to try various instruments today.


----------



## Mishabou (Dec 30, 2018)

dlpro said:


> I did the test yesterday using nothing but PT and 20 CFX Pianos. Both the 400 and 2800 MB/s load in 13 seconds.
> 
> I'm going to try various instruments today.



Just tried the same test... PT loading 20 CFX pianos via a Track Preset: a single SSD is 12 sec and 4 x SSD in RAID 0 is 7 sec.

Are you using a RAID controller or strictly software RAID? What block size are you using?


----------



## chocobitz825 (Dec 30, 2018)

Mishabou said:


> I'm the person who said Raid SSD made a difference for my workflow.
> 
> For an upcoming gig, i need to access a huge amount of sounds (close to 10K), anything from sound fx libraries, custom made sounds to VI patches.
> 
> ...



I have a similar RAID drive situation, and I also find that load times are improved a lot compared to what I was used to back in the HDD days, as well as better than loading off any single SSD I currently have. For large projects, this has made a world of difference.


----------



## dlpro (Dec 30, 2018)

Last test.

20 CFX Grand Piano
10 Large drum and percussion libraries
10 Large Orchestral libraries

No difference. Both the 400 and 2800 MB/s SSDs loaded in 28 seconds.


----------



## dlpro (Dec 30, 2018)

Mishabou said:


> Are you using a RAID controller or strictly software RAID? What block size are you using?



I'm using SoftRaid XT and block size is 64.


----------



## simsung (Jan 9, 2019)

I just received my 4M2 and I'm a bit shocked at how loud the fan is. Did you find any solution to make it at least less noisy?


----------



## dlpro (Jan 9, 2019)

I spoke to OWC about the loud fan and wanted to change it. They said not to change it, as it could cause issues. I had (3) 4M2s. I returned them.


----------



## simsung (Jan 9, 2019)

dlpro said:


> I spoke to OWC about the loud fan and wanted to change it. They said not to change it, as it could cause issues. I had (3) 4M2s. I returned them.


Is there another enclosure I could put the M.2s into that's quieter but also portable?


----------



## dlpro (Jan 9, 2019)

I don't know of anything like that.


----------



## ChoPraTs (Sep 7, 2020)

simsung said:


> Is there another enclosure I could put the M.2s into that's quieter but also portable?



I'm also looking for NVMe enclosures for my iMac, as we have been discussing here: https://vi-control.net/community/th...-8tb-or-owc-express-4m2-with-4-2tb-ssd.86571/

Maybe you can take a look at the Netstor website. I've seen that they have a lot of interesting products, but I've never read a review of them.

It seems they have enclosures for NVMe drives with 3 slots (NA611TB3) and 4 slots (NA622TB3). On their website they claim a "High Efficiency Ultra Quiet Temp Dissipation Framework", which I think is something very important if it's true. Take a look here if you want: https://www.netstor.com.tw/product_...e=Thunderbolt&ArID=92&PID=PID_190328202742977

Another option I'm considering (because I would like to reach 8 TB of capacity) is the OWC ThunderBlade. It's maybe the fastest portable drive at the moment: NVMe, preconfigured in RAID 0, and without noisy fans. But it's very expensive, and you can't get just the enclosure; you have to buy it with OWC's own NVMe drives already installed.


----------

