Proper way to plug/unplug an eSATA hard disk?


  • #31
    Originally posted by crazycheese
    openmediavault, 2xN WD RE4 drives, xeon or opteron, 2-4-8gb ecc ram for 10/40/80 client machines.
    Thanks for the response.

    While I hadn't heard of that particular NAS package, I was more looking for advice on good hardware.
    Originally, I had thought to just use my tower with whatever software, but, as I said, I learned that hot-swap trays aren't available for my tower, and it does indeed seem to be recommended to get a dedicated storage unit (not rackmount, though, for my use case).
    I need at least 6TB of usable storage. With that much storage, I want some assurance of data integrity and safety (though I realise that a single RAID system isn't a replacement for backups), so I'm looking at RAID 10. RAID 10 seems to be a nice balance of speed and safety (though, frankly, speed isn't a big concern), and it reduces the risk of a multi-disk failure when a disk does die, because there is no parity to recalculate.
    The reason I want hot swap (even with a hot spare) is to reduce the probability of near-simultaneous disk failures destroying the array.
    For the RAID support, I've been a bit undecided between mdraid, a ZFS solution, or a high-end RAID card. Again, what I care about most is not losing the data.
    ZFS has a mature data scrubber, which is important because I've noticed a seemingly large number of corruptions in infrequently used files. Some RAID cards provide a version of this feature, but I've heard it isn't as reliable, and if a RAID card dies you can have problems recovering the array. A problem with ZFS, however, is that it doesn't seem to offer a RAID 10-like solution, so I'd end up with the array-rebuild problems caused by parity.
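    (For reference, the scrub itself is a one-liner on the ZFS side; "tank" below is just a placeholder pool name:)

        zpool scrub tank        # start an online scrub of the whole pool
        zpool status -v tank    # check scrub progress and any errors it has found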

    Comment


    • #32
      Originally posted by liam
      Thanks for the response.
      The reason I want hot swap (even with a hot spare) is to reduce the probability of near-simultaneous disk failures destroying the array.
      For the RAID support, I've been a bit undecided between mdraid, a ZFS solution, or a high-end RAID card.
      ZFS has a mature data scrubber, which is important because I've noticed a seemingly large number of corruptions in infrequently used files. Some RAID cards provide a version of this feature, but I've heard it isn't as reliable, and if a RAID card dies you can have problems recovering the array. A problem with ZFS, however, is that it doesn't seem to offer a RAID 10-like solution, so I'd end up with the array-rebuild problems caused by parity.
      You can do online scrubs with mdadm, and RAID 4/5/6 work great. RAID 10 is good too.
      You can't have too much redundancy or too many hot spares.

      All chips can die, therefore software RAID is the most reliable and low-cost option. Use multiple cheap SATA controllers and split the drives across them (e.g. for an 8-drive RAID 6, use 4 controllers). Although this isn't 100% redundant, a concurrent failure is statistically unlikely, unless it's a common-mode failure.
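
      Roughly what that looks like with mdadm, as a minimal sketch of an array with a hot spare plus an online scrub (the device names and array size are placeholders, adjust them to your drives):

          # create a 4-disk RAID 10 array with one hot spare (placeholder devices)
          mdadm --create /dev/md0 --level=10 --raid-devices=4 --spare-devices=1 \
                /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
          # kick off an online scrub (consistency check) on the running array
          echo check > /sys/block/md0/md/sync_action
          # watch scrub/rebuild progress
          cat /proc/mdstat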

      Hot-swap drive bays are SUPER common, and I've seen many dozens of models.

      Something like this:


      (link to ICY DOCK's removable SSD/HDD enclosures and SAS/SATA hot-swap mobile racks)

      Comment


      • #33
        Originally posted by liam
        Thanks for the response.

        While I hadn't heard of that particular NAS package, I was more looking for advice on good hardware.
        Originally, I had thought to just use my tower with whatever software, but, as I said, I learned that hot-swap trays aren't available for my tower, and it does indeed seem to be recommended to get a dedicated storage unit (not rackmount, though, for my use case).
        I need at least 6TB of usable storage. With that much storage, I want some assurance of data integrity and safety (though I realise that a single RAID system isn't a replacement for backups), so I'm looking at RAID 10. RAID 10 seems to be a nice balance of speed and safety (though, frankly, speed isn't a big concern), and it reduces the risk of a multi-disk failure when a disk does die, because there is no parity to recalculate.
        The reason I want hot swap (even with a hot spare) is to reduce the probability of near-simultaneous disk failures destroying the array.
        For the RAID support, I've been a bit undecided between mdraid, a ZFS solution, or a high-end RAID card. Again, what I care about most is not losing the data.
        ZFS has a mature data scrubber, which is important because I've noticed a seemingly large number of corruptions in infrequently used files. Some RAID cards provide a version of this feature, but I've heard it isn't as reliable, and if a RAID card dies you can have problems recovering the array. A problem with ZFS, however, is that it doesn't seem to offer a RAID 10-like solution, so I'd end up with the array-rebuild problems caused by parity.
        Gladly.

        ZFS has problems on several levels:
        - By its very design, ZFS is an effort to undermine the GPL and Linux (the license and the kernel), because stupid, elitist Solaris engineers thought it was the GPL that destroyed them, rather than better management ideas and a specific implementation. It is like swimming naked and proud in an African river and then accusing the crocodiles of eating you. Instead of understanding the idea behind open source, adapting, and profiting from it, they decided to close themselves up and hope for the best. Read here, the paragraph on "GPL incompatibility".
        - ZFS is (c) Oracle, and Oracle is well known for calling open-core software "open source". The difference is that they stay in control: they will refuse to improve the open version and will use it only to catch new ideas and collect bug reports for free.
        - You cannot add additional drives to expand the available disk space of an existing ZFS array. Oracle says patches are welcome, but I highly doubt they are.
        - ZFS is only supported on Solaris and BSD, because it wants to be that way. It is very clear that without ZFS acting like that, Solaris and BSD would have very little use compared to Linux.
        - ZFS is 128-bit and designed for very high-volume clusters. I don't think anyone needs that, except file hosters.
        - The only features that are a real reason to use ZFS are in-place repairs (no fsck needed) and checksumming of all files, and you can work around both with ext or XFS. The first is not really an issue if you set the server up properly, and I have never encountered the second; again, we are talking about really huge data storage. For me, just polling SMART is enough to tell me when something goes wrong, and I have never encountered such problems. CERN was concerned about flipping bits... on a volume space of several exabytes... I have 2TB mirrored with an attached backup, and no problems encountered.
        I use a plain Xeon there with a completely normal 4x SATA setup driven by mdadm (Linux software RAID), which can be set up very easily with openmediavault. I find Linux software RAID (together with LVM if you need to consider future disk expansion) to be a rather efficient and cheap solution; you don't need $300+ RAID controllers, and RAID 0, 1, 5 and 10 are all supported. Openmediavault is developed by the same person who developed FreeNAS; it is a mature, idiot-proof, well-automated, fire-and-forget, point-and-click distribution that is built and maintained in a sane, non-contradictory way, and you can always ssh into the box, where it behaves like normal Debian. The software is not entirely bug-free, but it is stable enough to be used as a dedicated NAS.
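        A rough sketch of the kind of mdraid + LVM layout I mean (device names, volume/filesystem names and the later expansion step are placeholders):

            # mirror two disks with mdadm, then put LVM on top so space can be grown later
            mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
            pvcreate /dev/md0
            vgcreate vg_data /dev/md0
            lvcreate -n lv_data -l 100%FREE vg_data
            mkfs.ext4 /dev/vg_data/lv_data
            # later expansion: build another mirror (md1), then
            #   vgextend vg_data /dev/md1
            #   lvextend -l +100%FREE /dev/vg_data/lv_data && resize2fs /dev/vg_data/lv_data
            # simple SMART health poll for a member disk
            smartctl -H /dev/sdb
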
        You could probably try to write a module for OMV to add ZFS support, as there is already a ZFS-in-kernel project.


        And to add: many, many modern standalone NAS boxes on the market are ARM-based boards powered by the Linux kernel... with mdraid... so much for all the talk about soft-RAID reliability. For example, the WD My Book World Edition.
        Last edited by crazycheese; 26 February 2012, 03:23 PM.

        Comment


        • #34
          drive cage?

          Originally posted by liam
          What would you suggest for home use?
          I bought a tower a while back and had intended to fill it with RAIDed drives, but then found out that to do hot swap you really need hot-swap trays, which aren't made for the cmstacker.
          So now I've been looking at http://www.newegg.com/Product/Produc...aidage&x=0&y=0 but reviews are a bit hard to come by. QNAP also apparently makes quite nice ones but again, reviews are a problem.
          I had thought to use software RAID, since I've heard nothing but bad things about hardware RAID, save for the never-specified high-end RAID cards.

          Sorry for the thread jacking
          I've had excellent success with this unit:

          (retailer's product listing for the drive cage)


          I have had 5 of them in daily use for a couple of years now and I have no complaints at all.

          And for RAID cards: again, nothing but great success with Areca cards. I can't vouch for their performance relative to others, but they meet my needs. Linux support is excellent; you can even get at the SMART data for the individual drives, at least with the command-line tools. The BIOS setup is a bit crude but quite effective and straightforward. I have one set up in RAID 5 and one in RAID 0, and they are both great. Hunt around on eBay and you can find used ones for cheap.
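
          For example, with smartmontools something along these lines reads a drive's SMART data through the Areca card (the /dev/sg device and the slot number are placeholders for whatever your system exposes):

              # query SMART data for the disk in Areca slot 1 via the controller's SCSI generic device
              smartctl -a -d areca,1 /dev/sg2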
          Last edited by frantaylor; 27 February 2012, 03:15 AM.

          Comment


          • #35
            Originally posted by crazycheese
            If they are well paid, they sure can.
            ??? I watched a "well paid" DEC field service guy attempt to pull a card out of an up-and-running VAX with 100 users logged in.

            If there is a point to be made, it's that people are fumble-fingered and forgetful, and you have to take these factors into account when you think about your data integrity.

            I'd also like to point out that if you wire up your RAID cage incorrectly, you might not find out about it until you eject the wrong failed drive. It's good to make sure you wired it up right.
            Last edited by frantaylor; 27 February 2012, 03:01 AM.

            Comment


            • #36
              Originally posted by Melchior
              All chips can die, therefore software RAID is the most reliable and low-cost option
              I won't dispute the low cost, but honestly I have seen hundreds of dead drives and maybe one dead disk controller in my experience.

              The extra reliability of hardware RAID comes from the fact that the IO controller, with its battery backup, is what's scribbling on your drives, not the main processor, so it may be less likely to corrupt your data. The IO processor is less likely to execute bad code and write random junk on your drives, especially during a power fluctuation.

              Maybe it's not so important in these days of multi-core processors, but it sure can't hurt to have some extra processing power to help sort out IO requests. With these hardware RAID cards you set the kernel IO scheduler to "noop", so there is less kernel processing required.
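
              Switching the scheduler is a one-liner per block device (sda here stands in for whichever disk sits behind the card; the available scheduler names can differ by kernel):

                  # list the available schedulers, then select noop for this device
                  cat /sys/block/sda/queue/scheduler
                  echo noop > /sys/block/sda/queue/scheduler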

              If price is your sticking point, then I suggest you look at the used market for a RAID card. This is server gear designed for many years of constant use, so it's not gonna die on you. You would be surprised at the low prices if you are willing to buy a two-year-old card. And of course you have to buy two, to have that hot spare at the ready. Mine is still in the box.
              Last edited by frantaylor; 27 February 2012, 03:46 AM.

              Comment


              • #37
                Originally posted by frantaylor
                ??? I watched a "well paid" DEC field service guy attempt to pull a card out of an up-and-running VAX with 100 users logged in.

                If there is a point to be made, it's that people are fumble-fingered and forgetful, and you have to take these factors into account when you think about your data integrity.

                I'd also like to point out that if you wire up your RAID cage incorrectly, you might not find out about it until you eject the wrong failed drive. It's good to make sure you wired it up right.
                Well, I understand your points. Maybe the cages are important in an enterprise-grade cluster, or if you run 2+ disks that you intend to hot swap, or if you have really shaky hands, or if you want good cooling. I mean, I've seen cages held together with chewing gum and packaging material... Both cages and eSATA cables have very firm connectors designed for 1000+ plug/unplug cycles, so I don't see why you couldn't just use eSATA for hot swapping.
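
                For completeness, the clean way I know of to detach and re-attach an eSATA disk from the OS side goes roughly like this (the device name, mount point and host number are placeholders):

                    # flush writes, unmount, and tell the kernel to drop the disk before pulling the cable
                    sync
                    umount /mnt/esata
                    echo 1 > /sys/block/sdb/device/delete
                    # after plugging the disk back in, rescan its SATA/eSATA host so it is rediscovered
                    echo "- - -" > /sys/class/scsi_host/host4/scan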

                Comment
