Building A Linux HTPC / Storage Server With The SilverStone CS381


  • #11
    Originally posted by bug77 View Post
    I'm still torn between buying or building a NAS for home use.
    Building is more flexible and you don't have to deal with weak ARM CPUs, but the cases available are just too big compared with off the shelf solutions.

    This case right here is not bad, but who needs 8+ disks at home?
    You'll probably want raidz2 these days. So there's only 6 disks with actual content. 6x2TB or 6x3TB isn't that much. Keep in mind the filesystems also require a small overhead.
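
    To put rough numbers on that (assuming an 8-bay case filled with 2 TB drives):

      usable space ≈ (8 drives - 2 parity) x 2 TB = 12 TB, minus a small slice for ZFS metadata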



    • #12
      Lots of articles in the tech press on building your own NAS these days.

      FWIW:

      Recycled a 20-year-old ATX case. Found a used server board on NewEgg. Bought a low-power i3 with AVX support. Noctua case fan. Used Mellanox 10Gb adapter.

      Used an LSI MegaRAID SAS/SATA controller alongside the six Intel SATA ports and two ASMedia SATA ports.

      Found a ton of cheap new Hyundai SSDs and put them in an IcyDock cage.

      Installed OpenMediaVault (based on Debian).

      The result: a very cheap NAS cabinet hanging off the 10Gb switch.



      • #13
        I looked hard at the CS381 but eventually decided to build a server around its predecessor, the DS380B. The older case only supports mITX, not mATX like the CS381, but it is half the price, readily available with cheap shipping, and my current (outgrown) server already has an mITX motherboard.

        Beware: a common complaint about the DS380B is that hard drives get too hot under load. The air from the side-mounted fans blows around the drive cage rather than through it. Fortunately, there's a simple fix you can 3D print and install to force air through the drive cage. This dramatically lowers drive temperatures and, IMO, makes this a reasonable chassis for a small homemade NAS.

        My DS380B arrives this week. I'll post an update to this article if I find something really objectionable while I build out my new server.



        • #14
          Originally posted by caligula View Post

          You'll probably want raidz2 these days. So there's only 6 disks with actual content. 6x2TB or 6x3TB isn't that much. Keep in mind the filesystems also require a small overhead.
          Much to my surprise, ZFS pools built on raidz or raidz2 vdevs can't currently be expanded one drive at a time. (There are changes coming to add this to ZFS, but they aren't available yet.) I thought I could add another drive just like you do on Synology and it would just work. It doesn't.
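
          To make that concrete, here's a minimal sketch (pool and device names are hypothetical) of what happens on OpenZFS releases that predate raidz expansion:

            # a hypothetical 6-disk raidz2 pool
            zpool create tank raidz2 sda sdb sdc sdd sde sdf

            # trying to widen it by attaching a 7th disk is rejected;
            # zpool attach only works on mirrors and single-disk vdevs here
            zpool attach tank sda sdg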

          Instead, if you have a raidz or raidz2, and you need to grow it, then you have to create another raidz(2) from a new stack of drives and then add that to your zpool. It works, but if you want to keep the same redundancy and parity overhead of your original 6x2TB raidz2 array then you'll have to add another six drives to create a second, identical, raidz2. So, adding space incrementally can require purchase and housing for many new drives. (And, it only gets worse as the width of the raidz2 array increases).
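
          Sketched against the same hypothetical pool, growing it means adding an entire second raidz2 vdev:

            # six more disks become a second raidz2 vdev; the pool then
            # stripes writes across both vdevs
            zpool add tank raidz2 sdg sdh sdi sdj sdk sdl
            zpool status tank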

          I've decided to abandon raidz(2) and fall back to a much simpler, RAID-10-like ZFS configuration. My zpool will instead have several pairs of mirrored drives. Yes, the redundancy overhead is higher, but I can extend the pool at any time by just adding two more drives as a mirrored pair. And if a drive fails and is replaced, the rebuild process is much easier on the array because only one drive has to be copied over to the replacement. If you use raidz(2), rebuilds require reconstructing the failed drive from all the other drives. Finally, write performance should be better than raidz(2).
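
          The mirrored layout (again, hypothetical names) grows two disks at a time:

            # a pool of two mirrored pairs (RAID-10-like)
            zpool create tank mirror sda sdb mirror sdc sdd

            # expansion later is just another pair
            zpool add tank mirror sde sdf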

          There's a lot of discussion online about this and, of course, I didn't save the links, so you'll have to do your own research if you want to check me on this. I found it surprising, but it convinced me to abandon RAID-6 (i.e. raidz2) and just go with RAID-10 for my home server: simpler maintenance, better performance, and easier future expansion.



          • #15
            Originally posted by zxy_thf View Post

            If you're feeling ARM is too weak for your NAS, the ideal NAS for you probably already needs over 4 HDDs plus an OS SSD.
            For potential expansions in the future it may not be a bad idea to build a system that can handle 8+ disks.

            Actually I'm already regretting my previous NAS build's upgradability.
            I was more thinking about its ability to run Plex using current (and maybe future) codecs.

            Originally posted by M@GOid View Post

            There is also the repair aspect to consider. A couple of months ago the Gamers Nexus channel on YT discussed the proprietary PSU in one of their NAS units. It broke and they had a lot of headaches adapting an ATX one in its place.

            So if you build one and down the road it needs fixing outside warranty, you can do it yourself, cheap and fast. Heck, even in warranty, you can use old spare parts as a stopgap until the RMA kicks in.

            Yeah, off the shelf comes with some built-in advantages. Usually I'm all about DIY (typing this from my custom built desktop PC), but in this case, the cases (pun intended) just seem too big for my needs.
            Originally posted by caligula View Post

            You'll probably want raidz2 these days. So there's only 6 disks with actual content. 6x2TB or 6x3TB isn't that much. Keep in mind the filesystems also require a small overhead.
            I have no idea what raidz2 is. I just need a 3-disk RAID5 with a fourth slot to use when migrating/growing the array. I'll probably need an OS drive, but these days it's hard to find a motherboard without an M.2 slot.
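
            For what it's worth, using that fourth slot to grow a Linux software RAID5 is a two-step mdadm operation (array and device names below are hypothetical):

              # add the new disk, then reshape from 3 to 4 devices
              mdadm --add /dev/md0 /dev/sdd
              mdadm --grow /dev/md0 --raid-devices=4

              # once the reshape finishes, enlarge the filesystem (ext4 example)
              resize2fs /dev/md0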



            • #16
              Originally posted by bug77 View Post
              Yeah, off the shelf comes with some built-in advantages. Usually I'm all about DIY (typing this from my custom built desktop PC), but in this case, the cases (pun intended) just seem too big for my needs.
              Well, Mini-ITX to the rescue. There are ITX cases where you can put up to four 3.5" HDDs. If you fancy a NAS-looking unit, there are cases available, but you need to look at the PSUs they use. Some, like this one, use 1U PSUs, which are easy to find from a good manufacturer like Seasonic.



              • #17
                Originally posted by elatllat View Post
                USB has the best future proofing; just add another drive to lvm/zfs/btrfs/distributedFS.
                You know, I tried that for a while. I started with a two-drive NAS in 2012. Then I added an external USB 3 drive enclosure.

                Oh man, did that suck. You might think 5 Gbps is fast enough for hard drives, but something about the USB protocol, plus maybe that enclosure's crappy JMicron chip, made the speed pretty terrible when accessing drives in parallel.

                I wouldn't use external disks either. So many power cables and little AC adapter bricks hanging around. And you know if you need to move a network cable or something, it snags on one and unplugs it. Then you have to re-add the drive to your array.



                • #18
                  This new SilverStone case tries to do a better job with airflow; that's a big improvement over the DS380B, which was horrible.
                  I think I still prefer the Fractal Design Node 804. Lots of options for more fans or radiators. And it still takes at least eight 3.5-inch HDDs. It also accepts either an mATX or mITX board.

                  I know the deficiencies with expanding ZFS, but for large drives it's still better than common RAID. Combatting bitrot, scrubs, snapshots: none of those happen with plain RAID. I figure that once I outgrow the raidz2, drives will be at less than half the $/TB I currently pay and I'll want to start a new zpool anyway. I estimate at least 5 years of life, with probably one drive replacement.
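
                  As a sketch of the bitrot protection (pool name hypothetical), a periodic scrub re-reads every block and verifies it against its checksum:

                    # read and verify all data; bad copies are repaired
                    # from redundancy
                    zpool scrub tank

                    # check progress and any errors found
                    zpool status tank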

                  I do suggest you move up to an enterprise board and ECC memory if using ZFS in a NAS. I currently use an Asus WS C246M Pro board. You can get a Xeon or go down to an i3 to use ECC memory. I have been burned by embedded chipsets, though I did like my Intel Atom C2750 for quite a while.

