While we are at the disk debate, how crucial is it really to keep the OS drive separated from samples if you use that particular machine exclusively as a sample slave?
Does it make that much of a difference?
RAID 0 is just another way to spread your libraries across multiple drives.
It should perform pretty similarly to placing them directly on separate drives. Personally, I would avoid RAID 0, if only to avoid the headache of losing the entire volume when a single drive fails (or having to deal with specialized recovery tools).
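To put a rough number on that risk: with RAID 0 there is no redundancy, so the per-drive failure odds compound. A quick back-of-envelope sketch (the 3% annual failure rate per drive is an assumed figure for illustration, not a measured one):

```python
# RAID 0 has no redundancy: the volume is lost if ANY member drive fails,
# so per-drive failure odds compound across the array.
def raid0_loss_probability(per_drive_afr: float, drives: int) -> float:
    """per_drive_afr: assumed annual failure rate of one drive, as a fraction."""
    return 1 - (1 - per_drive_afr) ** drives

# Illustrative only: assuming a 3% annual failure rate per drive.
single = raid0_loss_probability(0.03, 1)
striped = raid0_loss_probability(0.03, 2)
print(f"single drive:   {single:.1%}")   # 3.0%
print(f"2-drive RAID 0: {striped:.1%}")  # 5.9%
```

Nearly double the odds of losing everything, versus losing only half your libraries when one of two plain drives dies.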
(I still centralize all my libraries in one place by way of junction points.)
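For anyone curious what that centralizing looks like in practice: on Windows a junction is created with `mklink /J <link> <target>`. Here's a minimal sketch of the same idea using a symlink so it runs anywhere; all paths are made up for illustration:

```python
import os
import tempfile

# Sketch: expose a library stored on another drive under one central folder.
# On Windows this is what a junction does, e.g.:
#   mklink /J "C:\Samples\LibX" "D:\Libs\LibX"
# A symlink stands in for the junction here.
root = tempfile.mkdtemp()
actual = os.path.join(root, "drive_d", "Libs", "LibX")   # where the data really lives
central = os.path.join(root, "Samples")                  # the single central folder
os.makedirs(actual)
os.makedirs(central)
with open(os.path.join(actual, "sample.wav"), "wb") as f:
    f.write(b"\x00" * 16)

# The "junction": a link inside the central folder pointing at the real location.
os.symlink(actual, os.path.join(central, "LibX"))

# The sample is now reachable through the central path.
reachable = os.path.exists(os.path.join(central, "LibX", "sample.wav"))
print(reachable)  # True
```

The sampler only ever sees one folder of libraries, while the data can live on whichever drive has space.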
If the machine is just a sample slave with lots of RAM, I'd assume there isn't much the OS would even need to read from or write to the system disk on an ongoing basis.
I would expect RAID 0 to be slower than two separate drives in certain high-usage scenarios, at least on HDDs, where you're capped on seek time instead of bandwidth, because IMHO RAID 0 should roughly double the number of reads needed for the same amount of data.

That depends on your stripe size, but even with a relatively small stripe size those read operations are distributed over multiple drives in parallel, which after all is kinda the point of RAID 0's performance benefit.
There is a huge sale on Samsung T3s like I've never seen at an Aussie store.. they have already sold over 600..
The 2TB is 825 AUD, minus 20%, so minus $165.. therefore $660. I can't even see them that cheap in the US.. that's about 450 USD for the 2TB.
I think I will get two, simply connect one each to the USB 3 ports on my CalDigit Thunderbolt 3 hub, and RAID 0 them...
The T3 sustains 450 MB/s reads even over USB 3.. that should definitely take it to 700. Believe it or not, no one else has done this on video, anywhere. All people have done is RAID some flash drives for fun, like a bunch of 8GB ones.
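For what it's worth, the ceiling math behind a figure like 700 is simple: two drives at ~450 MB/s each gives a theoretical 900 MB/s, but the combined stream is capped by whatever the shared hub link and protocol overhead actually allow. A toy sketch (the ~700 MB/s effective cap is my assumption for illustration, not a measured spec):

```python
def raid0_throughput_estimate(per_drive_mb_s: float, drives: int,
                              link_cap_mb_s: float) -> float:
    """Aggregate sequential throughput: sum of member drives,
    capped by the shared upstream link."""
    return min(per_drive_mb_s * drives, link_cap_mb_s)

# Assumed numbers: 450 MB/s per T3, ~700 MB/s usable through the shared hub path.
print(raid0_throughput_estimate(450, 2, 700))  # capped at 700, not 900
```

Whether the real bottleneck is the hub uplink or USB overhead would need measuring, but the pattern holds: you get the smaller of "sum of drives" and "shared link".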
Well, I was talking about 4 1TB 860 EVOs over Thunderbolt 3. Still a little confused: should I RAID 0 them, or leave them as separate drives? Any guidance would be appreciated as I am setting up a new storage/streaming solution.
I think it might be slower for different reasons. For example if you had lopsided access times between the drives in the array, where you would end up with a lowest common denominator effect. Another scenario where raid 0 with spinning rust would suck pretty bad is two parallel sets of sequential reads (e.g. preloading two different libraries in parallel): with the libraries on separate drives the heads would advance in one direction while with raid 0 they'd be all over the place.
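The seek-doubling argument is easy to see if you map a read request onto stripes. A minimal sketch (the 128 KiB stripe size and 1 MiB read are arbitrary illustration values): on a 2-drive stripe, one sequential read turns into chunks that alternate round-robin between the drives.

```python
def stripe_map(offset: int, length: int, stripe: int, drives: int):
    """Map a [offset, offset+length) read on a RAID 0 volume to per-drive chunks.

    Returns a list of (drive_index, drive_offset, chunk_length) tuples.
    """
    chunks = []
    pos = offset
    end = offset + length
    while pos < end:
        stripe_idx = pos // stripe          # which stripe unit we're in
        within = pos % stripe               # offset inside that stripe unit
        take = min(stripe - within, end - pos)
        drive = stripe_idx % drives         # stripe units rotate round-robin over drives
        row = stripe_idx // drives          # how far down each member disk we are
        chunks.append((drive, row * stripe + within, take))
        pos += take
    return chunks

KIB = 1024
# Assumed illustration: 1 MiB sequential read, 128 KiB stripes, 2 drives.
chunks = stripe_map(0, 1024 * KIB, 128 * KIB, 2)
per_drive = {d: sum(1 for c in chunks if c[0] == d) for d in (0, 1)}
print(len(chunks), per_drive)  # 8 chunks total, 4 per drive
```

With the libraries on separate plain drives, that same 1 MiB would be one contiguous run on one disk; with RAID 0 each disk services several smaller requests, which is where the extra head movement on spinning drives comes from when two streams compete.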
Thanks for this heads up TNM! I just ordered one T3 for 660 AUD to ship here to Adelaide, coming in a day or two. I'll let you know how it works. Did you get them?

I can't believe I didn't..
But what about latency (for real-time MIDI playing)? You know NVMe latency is lower, so with purged instruments wouldn't you get an advantage with real-time MIDI playing?

Yes, streaming is going to be better with NVMe. However, I didn't specifically test and measure DFD, so I can't quantify by how much (and therefore whether the cost differential is warranted).
Well, my friend, your test document will never be complete without the DFD measurements.

It was complete enough for me. I had a very specific question: would NVMe help with the part of loading projects I found most annoying.
If you can do a quick observation on this matter, it would be super helpful.

That's kind of the rub. To do it properly is definitely not a quick observation. Or at least, the quick observation is this: NVMe will help DFD streaming. I just can't tell you by how much.
There are just too many people on the internet saying that the difference in performance is not that great in real-life practical terms due to other bottlenecks.

The outcome from my testing was this: while initial load times (where Kontakt blocks the UI) were equal, unpurged patches preloaded all samples into memory 2x faster with NVMe in the documented configuration. Objectively, NVMe will handle more voices for DFD before dropouts. With the caveat that I haven't measured it, my WAG is about 2-3x more voices in the tested configuration, but possibly up to 4-5x more, since DFD is sensitive to latency and the drives I benchmarked had about a 4.5x disparity in latency at the relevant block size for Kontakt.
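To make the "DFD is sensitive to latency" point concrete, here's a toy model (every number below is an assumption for illustration, not a measurement): if each streaming voice needs one block read per audio buffer and reads are serviced one after another, the voice ceiling is roughly the buffer period divided by the per-read service time, so a ~4.5x latency gap maps directly onto a ~4.5x voice ceiling.

```python
def max_dfd_voices(buffer_ms: float, read_latency_ms: float) -> int:
    """Toy model: voices sustainable if the drive services one block read
    per voice per audio buffer, strictly serially."""
    return int(buffer_ms / read_latency_ms)

# Assumed setup: 256-sample buffer at 44.1 kHz ~= 5.8 ms per buffer.
buffer_ms = 256 / 44100 * 1000
sata_voices = max_dfd_voices(buffer_ms, 0.09)  # assumed ~90 us per block read (SATA SSD)
nvme_voices = max_dfd_voices(buffer_ms, 0.02)  # assumed ~20 us per read (NVMe), ~4.5x lower
print(sata_voices, nvme_voices)  # 64 290
```

Real drives overlap requests and Kontakt buffers ahead, so absolute numbers will differ, but the ratio tracks the latency disparity, which is the point of the WAG above.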