# Sample libraries on 10gbe NAS?



## strojo (Jan 3, 2021)

Curious if anyone is running their libraries off a NAS connected via 10gbe?


----------



## jcrosby (Jan 3, 2021)

Main issue I could see with this is if your power goes down and your router is out. If you have a UPS or a laptop you could actually work through part of a power outage, provided your drives didn't rely on a router. (Even if your router has backup power, the NAS itself would also need to be on a UPS.)

I've also personally found NAS storage quirky in terms of reliability - or at least it's never been reliable when I've actually needed it to be. I can see the appeal, but it also seems to have the potential for adding one more layer of headaches to solve if/when something goes wrong vs a wired internal/external drive. Simplest is best IME.


----------



## gamma-ut (Jan 3, 2021)

I've got raw samples on a NAS RAID5 that's also used for backups, and tried it as an overspill for Kontakt, but I found Kontakt to be really slow when it comes to browsing patches etc. Some of it may be due to NAS performance - 1Gb/s Ethernet was good enough for the libraries concerned, but it's only an Atom processor in this one - and I suspect the disk format is another issue: it's a Linux filesystem rather than something like HFS or NTFS, and so may lead to speed issues in the same way ExFAT does with Kontakt.


----------



## colony nofi (Jan 4, 2021)

It's an interesting proposition.
Much comes down to the setup of the NAS / file system / drives / RAM etc.
As it happens, I am getting a new file server for our post rooms in the next 8 weeks or so.
Still to decide on a few things (we were going TrueNAS, but not sure it's the best route yet - testing will be done!)
We are going ALL SSD (NVMe U.2 drives) over SFP+.
Once it's all up and running, I'm going to test it with Kontakt libraries as well.

A few things to consider though.

If your network is set up well, 10GbE (over SFP+ or Cat6a/7) should do for bandwidth.
However, the file system of the drive, the RAID type (if any), and random IOPS performance (as well as read performance on lots of small files) will all play a part in how well it performs.

Have a play with ATTO, looking at the small-file-size speeds of drives that work OK for you. This will be a much better benchmark than just throwing out numbers from Blackmagic Disk Speed Test, which doesn't really show speeds relevant to sample instruments.

(ATTO is your friend!)

And Sensei (on OSX) can help show you the ACTUAL speeds being used in real time. 

So even with some ultra fast (3000MB/s) drives I have here, when looking at Kontakt loading speeds, it rarely goes above 150-200MB/s. Spitfire's player (interestingly) seems to be another 50% faster. I have not looked at SINE recently.


----------



## strojo (Jan 5, 2021)

Here are the results from my NAS (Synology RS1619xs+ w/ RAID5 7200rpm spinning drives via 10GbE):










These are the results from a locally attached Samsung 870 EVO SSD (non-NVME):


----------



## colony nofi (Jan 5, 2021)

So the next step is to figure out some sort of Kontakt loading benchmark - as well as an "in use" benchmark.
I wonder if it will require different testing of pre-load buffers etc due to the different drive performances.

ATTO is great for these figures so far. While the spinning rust saturates your network, it is showing issues in the very small reads/writes, which I suspect will impact Kontakt performance.

And your random I/O seems an order of magnitude faster on your internal SSD compared to the networked spinning-rust array. I'm guessing this has much more to do with the drives being platters vs SSDs than with the network, though. Without looking at some previous tests, I'd say this will impact Kontakt usage.

Is there any way you can throw the same SSD inside the NAS by itself (so no RAID - just a single SSD) and run some further benchmarks?

We have done some preliminary tests which indicate a single 15.4TB U.2 NVMe drive by Micron will (by some measures) be better than a RAID6 of 6 x 4TB SATA SSDs, and much, much simpler to set up. (But there will also be some downsides... as with all these things.)

Unfortunately that also comes at a cost - although perhaps not as much of a cost premium as we initially thought.

Can I ask what NAS you are running? I suspect there could be performance issues depending on the RAM and CPU that's in there. We are going with fairly beefy machines (2000-series Xeons and 64GB RAM) after seeing another company we know run an Atom-based TrueNAS server (off the shelf) which couldn't keep up with the demands of 3 or 4 workstations. Do note, though, that the requirements for a NAS serving multiple workstations (and not samples, but DAW sessions with video) are very different to those for running sample libraries...

Anyway - no matter what happens, after we get the servers up and running, I'll do some benchmarking of Kontakt for shits and giggles.

Does anyone here know of a suite of Kontakt drive-usage benchmarks at all? Or any ideas about what a test session for loading AND real-time usage might look like? Why do I feel like it will be quite a challenge to create such a test... I think at the right time that will need its own thread!


----------



## David Kudell (Jan 5, 2021)

I have a 96TB QNAP that I use for editing, connected via 10GbE to my iMac Pro. I'm almost positive I tried using it for samples and it didn't work well. The reason is samples are lots of tiny files, which is slow over a network. It works way better for large files like video.


----------



## strojo (Jan 6, 2021)

Further details on my setup:

Synology RS1619xs+ with an Intel Xeon D-1527 4-core processor @ 2.2GHz base / 2.7GHz turbo
56GB of RAM installed
Two 500GB SSDs set up as read/write cache
10GbE from the Synology to the Ubiquiti switch, then 10GbE from the switch to my DAW

Here are the benchmarks with a Samsung SSD setup in the NAS as a single drive:











I am waiting on a replacement SSD from Seagate and will try the test with an SSD RAID5 setup once the volume is created (will take a bit of time to validate the drives on the Synology).


----------



## Ryan (Jan 6, 2021)

Take a look at the Linus Tech Tips YouTube video on file transfers over Ethernet in Windows. With that type of speed, I see no problem with samples on a dedicated server/NAS.


----------



## colony nofi (Jan 6, 2021)

Ryan said:


> Take a look at the Linus Tech Tips YouTube video on file transfers over Ethernet in Windows. With that type of speed, I see no problem with samples on a dedicated server/NAS.



It's a good video that I've seen before.

Sample instruments have a very different requirement from editing. Editing is raw speed on large files. Samples are kinda not. Speed is important, but random I/O is as well (thus spinning rust being much worse than SSDs, but also no massive improvement going further to NVMe drives in terms of usability).
But there seems to be a bit of a black hole in the knowledge around this stuff - which is why I've started another thread on this board with the idea of a project that attempts to work some of it out. Some sort of monitoring / benchmarking to figure out (a) what is needed to evaluate a storage subsystem (what parameters we really need to look at other than just raw average speed - is there a particular file size we need to look at / a range / ???) and (b) what actually works well (so some proper empirical tests).

So - work out what we are testing for, and then test it.



David Kudell said:


> The reason is samples are lots of tiny files which is slow over a network.


So - that's not entirely true. There are many different ways of setting up a NAS - the file system is important. There are also different approaches one can take to setting up the network in order to make things work better for specific types / sizes of files.

I have yet to actually see what "lots of tiny files" actually means. This really needs to be qualified before we go further. We need to identify exactly what the disk behaviour is for Kontakt (and other samplers). There are small files, and then there are tiny files - and they act quite differently. How much does random I/O actually have to do with it? What are the requirements for preload vs real-time playback? Is there a happy medium between the two? Etc.


----------



## Ben (Jan 6, 2021)

And don't forget streaming latency - it doesn't matter how fast your drive is if the required data chunk arrives too late to be processed in the current audio buffer. That results in pops and clicks.

My guess is that you would need 10G Ethernet + direct memory access enabled, like Linus did to decrease latency (as he mentioned, on Windows it only works with a Server or Windows for Workstations license; I have no idea if Mac supports this).
Also, an NVMe SSD RAID, or even a SATA SSD RAID, is quite expensive - HDDs are simply too slow (especially regarding read latency).

TL;DR: It's probably doable with a crazy expensive setup like Linus has, but not cost-efficient for almost everyone.
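To put numbers on that buffer deadline: the time budget is just buffer length over sample rate, and every streaming voice's next chunk has to land inside it. A quick sketch (figures are illustrative only):

```go
package main

import "fmt"

// budgetMS returns the time, in milliseconds, available to deliver all
// streamed data for one audio buffer before a dropout occurs.
func budgetMS(bufferSamples, sampleRate int) float64 {
	return float64(bufferSamples) / float64(sampleRate) * 1000
}

func main() {
	// The budget shrinks with the buffer size, regardless of how high
	// the drive's sequential throughput is - this is why latency, not
	// bandwidth, is the limiting factor for streaming.
	for _, n := range []int{128, 256, 512} {
		fmt.Printf("%4d samples @ 48kHz: %.2f ms\n", n, budgetMS(n, 48000))
	}
}
```

At a 256-sample buffer and 48kHz that works out to roughly a 5.3ms window, which a round trip over a congested network can easily blow through.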


----------



## David Kudell (Jan 6, 2021)

Here's a quick test. Note the nearly identical Blackmagic Disk Speed Tests but very different loading times for a sample library.

10GbE QNAP RAID, 96TB with 8 hard drives
Blackmagic Disk Speed Test: 805MB/sec
Time to load Berlin 2nd Violins Longs multi: 1min 8sec

Glyph Atom SSD 4TB RAID
Blackmagic Disk Speed Test: 861MB/sec
Time to load Berlin 2nd Violins Longs multi: 7 seconds

Like I mentioned, the RAID is perfect for what I use it for: video editing of huge 4K files. But it sucks as a sample drive.

I suppose the 10GbE RAID could be set up to function better as a sample drive. But you're still looking at closing the gap with something 10 times faster. I'm not a tech expert, but I feel like Ethernet is not really going to match direct connect when it comes to latency.


----------



## strojo (Jan 7, 2021)

SSD Raid5 Results:


----------



## colony nofi (Jan 7, 2021)

Agree with a lot here.

All my previous testing has indicated the transfer size Kontakt uses is chunks of audio approx 96KB in size (it opens and closes files in between, for some reason!). @tack did a tonne of testing once and seemed to come up with a figure closer to 106KB [EDIT: 118KB - see tack's message below] from memory - and I trust his figures over mine. But it's close enough to look at ATTO benchmarks for 128KB transfers.

Here's his findings - which are already incredibly useful








Kontakt Patch Load Performance (docs.google.com)

> Kontakt Patch Loads: NVMe vs SATA SSD (Windows) tl;dr I compared a 1TB Samsung 960 PRO NVMe with a 2TB Samsung 850 EVO SATA SSD and measured patch load times of a multi consisting of all section patches from Cinematic Studio Strings. The goal was to answer the question: will the added performanc...





On the NAS, the file system also has tonnes of impact on performance. In the past I've managed to set things up to support LARGER transfers very, very well. That was using XFS, which I'm not sure is great for smaller files.

Now - I'm not in ANY way convinced that using a NAS for sample libs is a great idea. But I do get the feeling that it's possible to get good enough performance out of an SSD system on a NAS if that happened to be something useful for someone's workflow. I'm thinking it could be quite amazing for larger composition houses, for instance.

At our studios, we are NOT using the NAS for sample libs or composition suites. We are just using it for projects for sound post - and that is very successful.


----------



## gamma-ut (Jan 8, 2021)

colony nofi said:


> it opens and closes files in between for some reason!


This might be a legacy design thing: too many open file handles is (or at least was) bad.

However - and I'm past my limit of knowledge of the OS at this point - I believe reopening a previously accessed file on OS X, for example, isn't that big an overhead, as the app stores a shortcut to it (the inode) rather than having to locate the file from scratch again.

That said, the inode reference may only work for native OS X filesystems, so it might be a slowdown factor on external filesystems, such as those used by the typical NAS.

Windows, I expect, has a similar approach.


----------



## tack (Jan 9, 2021)

gamma-ut said:


> However, and I'm past my limit of knowledge of the OS at this point but I believe reopening a previously accessed file on OS X, for example, isn't that big an overhead as the app stores a shortcut to it (inode) rather than having to relocate the file from scratch.


You're right that it won't generate any physical I/O because file metadata is cached, but I was worried about the unnecessary syscall overhead, particularly from kernel/userspace context switching. Given what I had reasoned about Kontakt's design, any increase in data latency would directly affect overall throughput because there's no I/O queuing (whether through queued asynchronous reads or (less ideal) by using multiple threads).

It turns out that the context switching overhead isn't that serious, but the syscalls aren't free, and there's a measurable performance difference.

I wrote this simple little Go program to simulate the condition. It just generates scattered reads across a large file, and opens and closes the file between each read.


```
package main
import (
    "fmt"
    "math/rand"
    "os"
    "time"
)
func main() {
    fi, err := os.Stat(os.Args[1])
    if err != nil {
        fmt.Printf("failed to open: %v\n", err)
        return
    }
    sz := fi.Size()
    rand.Seed(time.Now().UnixNano())
    // 118KB buffer because that's what I observed as an average block size in testing
    // Kontakt.
    buf := make([]byte, 118000)
    for {
        file, _ := os.Open(os.Args[1])
        file.ReadAt(buf, rand.Int63n(sz))
        file.Close()
    }
}
```

I ran it against a 3GB file on a Samsung 970 EVO (NVMe) on Windows 10. Then I moved the Open() and Close() calls outside the for loop and ran it again. At each invocation, I flushed the FS cache to ensure we generated physical reads.

The graph below is from PerfMon, showing bytes/sec in green and read operations/sec in blue. The y-axis units are a bit useless, but the first run peaked at about 286MB/s and 2800 read ops/sec. It tapers off like this as the filesystem cache warms up and successive reads stop generating physical I/Os.

The second run is with the open/close moved outside the loop. There we peak at 338MB/s and 3634 reads/sec. The faster reads mean we populate the cache faster, hence the steeper slope.






Findings were similar with a SATA SSD, just with slightly lower numbers: 247MB/s and 2500 reads/sec for run 1, and 280MB/s and 2750 reads/sec for run 2.

So it's not _significant_, but it's also not zero-cost. And, like I mentioned above, Kontakt's apparent sensitivity to latency due to the lack of I/O queuing would likely exacerbate this difference in practice. That's perhaps corroborated by the fact that with Kontakt, NVMe provided around a 2x speed improvement when background-loading samples into memory, while in the above targeted test the delta was smaller.

(Edit: I realized Go's GC could be a factor with the first test because of the amount of garbage generated in the inner loop (file object), so I rewrote it in C and got the same outcome.)


----------



## gamma-ut (Jan 10, 2021)

tack said:


> It turns out that the context switching overhead isn't that serious, but the syscalls aren't free, and there's a measurable performance difference.
> 
> I wrote this simple little Go program to simulate the condition.


Nice work.



> Kontakt's apparent sensitivity to latency due to the lack of IO queuing would likely exacerbate this difference in practice.


I suspect this lack of queuing also plays into the poor NAS performance I've seen.


----------



## colony nofi (Jan 10, 2021)

tack said:


> You're right that it won't generate any physical I/O because file metadata is cached, but I was worried about the unnecessary syscall overhead, particularly from kernel/userspace context switching. Given what I had reasoned about Kontakt's design, any increase in data latency would directly affect overall throughput because there's no I/O queuing (whether through queued asynchronous reads or (less ideal) by using multiple threads).
> 
> It turns out that the context switching overhead isn't that serious, but the syscalls aren't free, and there's a measurable performance difference.
> 
> ...


This is such useful information. Thank you for putting the time you have into this.


----------



## Hendrixon (Jan 11, 2021)

@tack

First of all, thank you for doing all this and documenting it so nicely.
I read your paper in the past and didn't know who did it in order to thank them - nice to see it's a member of VI-C.
So... thanks!

Some thoughts:
1. Aren't NVMe and SATA SSDs the same in their basic hardware form?
Could it be we're (mostly) seeing the difference in interface (SATA III vs M.2) controller stack overhead/efficiency?
2. Your motherboard (at least the one you did all the tests on) has its M.2 sockets hanging off the chipset. I wonder how all this will play out with M.2 slots wired directly to the CPU...
Maybe some of the limiting factors are bound to the communication protocol between CPU and chipset over DMI, which would not be there when the NVMe drive is connected directly to the CPU.


----------



## FireGS (Jan 11, 2021)

I have a QNAP TS-932X with 5 drives in RAID6 (seriously, y'all, never use RAID5), and 4 x 500GB SSDs in RAID10 used as cache, going over 10GbE copper for samples, and it works like a charm. Hundreds of tracks in my template; it loads fine and plays back in real time just peachy.


----------



## gamma-ut (Jan 11, 2021)

Hendrixon said:


> @tack
> 
> First of all thank you for doing all this and documenting it so nicely.
> I read your paper in the past and didn't know who did it to thank him/her, nice to see that it's a member of vi-c.
> ...


M.2/NVMe is a PCIe interface. There will be implementation differences that lead to variations in performance, but bandwidth will generally be higher. Latency should be better too, though I can't swear to it: SATA was originally designed on the assumption of peripherals with millisecond response times, whereas it's more like microseconds for PCIe.


----------



## FireGS (Jan 11, 2021)

PCIe Gen 4 NVMe is pretty sick though..


----------



## ReleaseCandidate (Jan 11, 2021)

Two or so years ago I used a NAS (a 'real' small Linux PC) with 32GB of RAM and 4TB of PCIe SSDs, connected using some Solarflare 10GbE cards (using SFP+ direct attach, not 'normal' copper cables) from Windows and Linux - I _never_ noticed that the SSDs weren't local ones. Just don't route 'normal' traffic through that connection. And put as much RAM as you can afford in the NAS, to cache stuff.


----------



## Shubus (Jan 11, 2021)

This is a very interesting idea, running off a NAS, but as my Mac only has 1Gb/s Ethernet it doesn't work well for streaming samples on my network. I have a QNAP TS-870 Pro and I use it mostly for backups in RAID10. Then the NAS itself is backed up to external HDs. So I'd need some more hardware upgrades to try this. Meanwhile I stream all my samples off SSDs. The problem with this is I have to keep getting bigger and bigger SSDs.


----------



## David Kudell (Jan 12, 2021)

Just got an email announcement from Qnap about this. Now this would do it!









TS-h3088XU-RP | Modernize your IT using the ZFS-based, 25GbE-ready all-flash storage with high performance, high capacity, and low latency (www.qnap.com)

> The 2U rackmount TS-h3088XU-RP provides a cost-efficient all-flash storage solution for tackling I/O-intensive and latency-sensitive enterprise applications. With 30 drive bays for 2.5-inch SATA 6Gb/s SSDs, built-in dual-port 25GbE SFP28 SmartNIC, four 2.5GbE LAN ports, PCIe expandability, and...


----------



## colony nofi (Jan 13, 2021)

ReleaseCandidate said:


> Two or so years ago I used a NAS (a 'real' small Linux PC) with 32GB of RAM and 4TB of PCIe SSDs, connected using some Solarflare 10GbE cards (using SFP+ direct attach, not 'normal' copper cables) from Windows and Linux - I _never_ noticed that the SSDs weren't local ones. Just don't route 'normal' traffic through that connection. And put as much RAM as you can afford in the NAS, to cache stuff.


Do you know what file system was being used?
(There should be no difference between using cat6e/cat7 vs SFP+ - but of course we all LOVE using SFP+ when we can!)


----------



## colony nofi (Jan 13, 2021)

David Kudell said:


> Just got an email announcement from Qnap about this. Now this would do it!
> 
> 
> 
> ...


This unit is bloody interesting at its $5k price point without drives.
Their software is really interesting (especially being able to sync two servers for complete redundancy) - and it's interesting that it's using ZFS.

There are NASes and then there are NASes! Right?!


----------



## ReleaseCandidate (Jan 14, 2021)

colony nofi said:


> Do you know what file system was being used?
> (There should be no difference between using cat6e/cat7 vs SFP+ - but of course we all LOVE using SFP+ when we can!)


I used XFS and ext4, IIRC. ZFS had been too experimental at that time.

SFP+ because the cards and the switch were cheaper without optical transceivers, and you can't use CAT cables with them because the transceivers would draw too much power, so most people didn't want them.


----------



## jblongz (Jul 15, 2022)

Any new workflow tips? I've been eyeing the QNAP Thunderbolt NAS models. They allow IP over Thunderbolt for two computers plus another 10GbE connection. So for 2 Thunderbolt-equipped computers, no switch or adapters are needed.

I'm considering a solution for Mac and PC to share the same DAW projects hosted on the QNAP, to reduce manual transfers. 1 gigabit has proven not to be enough, but Thunderbolt 3 should provide somewhere between 10GbE and 40GbE equivalent. Synology has no product in this category.


----------



## David Kudell (Jul 15, 2022)

jblongz said:


> Any new workflow tips? I've been eyeing the QNAP Thunderbolt NAS models. They allow IP over Thunderbolt for two computers plus another 10GbE connection. So for 2 Thunderbolt-equipped computers, no switch or adapters are needed.
> 
> I'm considering a solution for Mac and PC to share the same DAW projects hosted on the QNAP, to reduce manual transfers. 1 gigabit has proven not to be enough, but Thunderbolt 3 should provide somewhere between 10GbE and 40GbE equivalent. Synology has no product in this category.


I haven't tried hosting my DAW projects on my QNAP, but as I mentioned earlier it doesn't work for samples even at 10GbE. I use mine for video editing - I have my Premiere project files and all my media on the QNAP, and we have 2 Macs connected to it via 10GbE. I have the 10GbE QNAP switch.

As for Thunderbolt, I tried that first, but I had problems with it, so I switched to 10GbE. It's been a few years so I don't remember the exact issues, but I think it would disconnect randomly. Some folks online said don't use Thunderbolt, but I thought I'd try it anyway - turned out they were right.

I could try having a Cubase session on the QNAP and see how it works.


----------



## vangakuz (Oct 31, 2022)

I am using a QNAP with a Mellanox 40GbE PCIe card installed, which can be bought on eBay even cheaper than a new 10GbE card. The same 40GbE PCIe card is installed inside my computer.

*Everything runs just like an internal SSD attached inside the computer.*

Please note that you MUST have an NVMe SSD cache in the NAS to cache the HDD storage. If you use SSDs only in the NAS then you don't need the cache, but that is extremely unlikely.

Assuming your computer has 2 Ethernet ports: one is for internet and is plugged into the internet router. The other one is 10GbE and is connected to a 10GbE switch that is connected to the NAS, or you just connect the 10GbE port directly to the NAS without a switch.

You must set a static IP for the 10GbE interfaces on both the NAS and the computer. Suppose they are 10.0.0.50 and 10.0.0.51 respectively. Set a subnet for them that is different from your normal internet one. I recommend 255.255.255.128 or 255.255.0.0.

Because I only use 1 computer to access the sample library, I don't need a switch yet. It seems a lot of people mistake the network and the internet for the same thing, but they are not. You don't need an internet router to make this work.

Regardless of 10GbE or 40GbE, these settings must be followed and adjusted accordingly in order to work. You CAN'T simply plug everything in and expect it to work at optimal speed; these settings must be adjusted manually, mainly from Windows 10/11 (I assume macOS should be similar):

- Under Network adapters, go into the 10GbE Ethernet card, choose Properties -> Configure -> Advanced -> Flow Control, and disable it. Flow control enables the receiving end to ask the sending end to suspend sending packets when congestion occurs. In my case there is no need to care about congestion because only 1 PC accesses the library. If you have more than 3 PCs connecting to the NAS to access the library at the same time, you need to enable flow control.

- In the same settings, choose the max value for Jumbo Frames, which is around 9000 bytes. Do the same thing in the NAS settings.

- Disable "Large Send Offload v2 (IPv4)", "Large Send Offload v2 (IPv6)", and "Power Saving Mode". Basically, turn off any kind of power-saving setting. What "Large Send Offload" does is offload network processing from the CPU; CPUs nowadays are all strong enough to take care of those things themselves.

- Adjust Transmit Buffers to 2048.

- Turn off "Allow the computer to turn off this device to save power" under the Power Management tab.

- Likewise, go into Power Plan and always choose "High Performance".

From Windows, mount the network drive - in this example, \\10.0.0.50\<folder name>

There is another way to get even faster I/O speed: iSCSI, which I also use. The downside of this is a more difficult setup, and only 1 computer can access it at a time. If you have 2 computers accessing the same iSCSI target at the same time, you will definitely corrupt the storage.

With this, I have 32TB of HDD inside the NAS that virtually runs at the speed of an SSD, at a fraction of the cost compared to true SSD. Not to mention the tons of other features of a NAS.

Edit: I forgot to mention that you need to pay attention to the block size of the shared folder. For the storage to handle small files well, a 4K or 8K block size (aka allocation unit size) is recommended. This can only be chosen at the time the folder is created and cannot be modified later.


----------



## colony nofi (Oct 31, 2022)

This is all super interesting information - thanks.
Just a few follow-up questions if you don't mind...
1) Which QNAP are you running?
2) Do you have any indication of performance vs a native SATA SSD?
The two performance numbers that end up being interesting (the way we've been testing) are
(a) load time of specific samples, and
(b) max voices you can play back of a specific set of samples before disk errors (and if you are getting CPU errors first, we use simpler sample libraries - even going as far as making our own test libraries with different SIZED samples - which does show interesting results).
We did end up doing more tests of sample libs over 10GbE (on Macs) and it never ended up being worthwhile. This was with an ALL-SSD array (8x4TB) in RAID6 (ZFS). We have 2 x 2TB NVMe in the unit, but the cache never made ANY difference (which is kind of expected with the way ZFS works).
We are using 10GbE network interconnects either native to the Macs (Mac Studio Ultra, M1 mini with 10GbE), direct-connected to the NAS over copper, or through a simple 10GbE QNAP switch (bear in mind we saw zero difference between having the switch in the middle or not!)
The use of sample libs in my tests was horrid. So it feels like the setup had issues.
Having said ALL of that, our final use case (project drive, all audio tracks, and SFX for sound post) is totally amazing with this setup. Running huge post sessions (with Nuendo file sizes that would give you conniptions on a 1GbE connection), things are great. We have tested 4 studios connected at once with zero impact on usability.

I have another NAS on order for my personal use - again 10GbE, but it's for backups / projects only as well, as I abandoned my ideas of using it for samples. Maybe, just maybe, I should reconsider. It does have 2 x NVMe slots and 8 other bays which could all be SSDs... although I was hoping to have at least 4 drives as spinning rust to give me maximum backup + snapshot space (4 x 18TB in RAID6).
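On the "max voices before disk errors" test: if reads are served strictly one at a time, a crude upper bound falls out of the arithmetic - the buffer's time budget divided by the per-read latency. A sketch with assumed (not measured) latencies:

```go
package main

import "fmt"

// maxVoices estimates how many streaming voices can be sustained when
// each voice needs one chunk read per audio buffer and reads are served
// serially (no I/O queuing), given the per-read latency in microseconds.
func maxVoices(bufferSamples, sampleRate int, readLatencyUS float64) int {
	budgetUS := float64(bufferSamples) / float64(sampleRate) * 1e6
	return int(budgetUS / readLatencyUS)
}

func main() {
	// Assumed ballpark latencies: NVMe ~100us, SATA SSD ~400us, and a
	// networked spinning-rust array ~5000us (illustrative only).
	for _, lat := range []float64{100, 400, 5000} {
		fmt.Printf("%5.0f us/read: ~%d voices (512-sample buffer @ 48kHz)\n",
			lat, maxVoices(512, 48000, lat))
	}
}
```

Under those assumptions a 512-sample buffer at 48kHz gives roughly a 10.7ms budget, so the voice ceiling collapses by an order of magnitude or two as per-read latency rises - which matches the pattern of spinning rust over a network failing long before local SSDs do.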


----------

