Btrfs Getting RAID 5/6 Fixes In Linux 4.12 Kernel

  • #31
    Originally posted by Zucca View Post
    I had a different experience with my raid10, but it's on 6 drives, so there's much more flexibility in various ways.
    Anyway, when I was trying to find out what was causing all kinds of errors emerging from ata1 to ata6 (it eventually turned out to be a kernel or hardware bug in the SATA controller), I started to unplug drives one after another, of course plugging one back in before pulling the second out (a hotswap cage made this easy). While one drive was unplugged, btrfs knew it couldn't access it anymore and "marked it" as failed; everything kept working without a problem, no read timeouts etc. I watched dmesg and there was a clear event when the drive was dropped from the pool. After I replugged a drive, I ran commands to add it back to the btrfs pool, replacing itself.
    This however isn't analogous to a case where one (or more) drives start to slowly die. I don't have experience with how many errors, and of what kind, need to occur before btrfs decides to mark a drive dead. I swap my drives so often (to bigger ones). :P
    What I've read and understood is that the read-only (ro) nightmare usually comes when you lose one drive and, instead of replacing it, balance the data over the remaining drives to preserve the redundancy --> not enough space --> lockup.
    Perhaps I should test raid10 again on a newer kernel. In my experience BTRFS does not yet recognize a drive that has been unplugged and replugged UNLESS it shows up as the same /dev/sdX device. If /dev/sdX changes to /dev/sdY for that drive, you have NOT restored the array to a working state. BTRFS also has no criteria for dropping a device: even if you get tons of read/write/corruption errors on a drive, it will happily keep trying to write to it and never mark it as unsuitable/defective.
    All of the above is based on my own experience; I have not tried with kernel 4.9 or newer, so perhaps it's worth a shot.
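
    For reference, the usual way to put a replugged drive back into the pool, and to inspect the per-device error counters, is roughly this (a sketch only; the mount point, device name and devid are placeholders):

        # show per-device read/write/corruption error counters
        btrfs device stats /mnt/pool

        # replace the device with itself after replugging;
        # look up the devid with 'btrfs filesystem show'.
        # -f overwrites the stale btrfs signature still on the drive.
        btrfs replace start -f <devid> /dev/sdX /mnt/pool
        btrfs replace status /mnt/pool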

    http://www.dirtcellar.net



    • #32
      I have been using Btrfs since Ubuntu 12.04 without any issues. Today I have two desktops with Arch Linux and two notebooks, one with Arch and one with openSUSE, all on Btrfs. My Arch boxes are set up with TESTING, STABLE and OLDSTABLE snapshots of the root fs, and I keep the last two kernels, which allows me to roll back the system and boot the snapshots. I have subvolumes for /home and the pacman cache, which means I keep that data even if I do a rollback. Read-only snapshots of /home keep my data safe from ransomware and allow incremental backups with "send | receive", as sketched below. If someone is considering Btrfs for a Samba server with Windows clients, take a look at "Btrfs Enhanced Server Side Copy". Btrfs also has Shadow Copy support, like ZFS has.
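
      The send/receive flow looks roughly like this (paths and snapshot names are made up; send requires read-only snapshots):

        # read-only snapshot, required for btrfs send
        btrfs subvolume snapshot -r /home /home/.snapshots/home-1

        # initial full transfer to the backup filesystem
        btrfs send /home/.snapshots/home-1 | btrfs receive /mnt/backup

        # later: take a new snapshot and send only the difference,
        # using the previous snapshot as the parent
        btrfs subvolume snapshot -r /home /home/.snapshots/home-2
        btrfs send -p /home/.snapshots/home-1 /home/.snapshots/home-2 | btrfs receive /mnt/backup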



      • #33
        I've been using 4 x 3TB WD Greens in raid5 on my backup server for two years now without a single glitch! I even pulled out a disk to see if I could break it, but it kept on working. I put it back and everything went fine. Everything just works. I keep reading about how this will fail, but my hands-on experience tells me otherwise.
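
        (If you want more than "it went fine" after an experiment like that, a scrub re-reads everything and verifies it against checksums; the mount point here is made up:)

          btrfs scrub start /mnt/backup
          btrfs scrub status /mnt/backup   # reports errors found and corrected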



        • #34
          Originally posted by mortn View Post
          I've been using 4 x 3TB WD Greens in raid5 on my backup server for two years now without a single glitch! I even pulled out a disk to see if I could break it, but it kept on working. I put it back and everything went fine. Everything just works. I keep reading about how this will fail, but my hands-on experience tells me otherwise.
          Now add a new 3TB disk to the array, or corrupt one of them. I'm currently recovering a btrfs RAID5 system that wasn't updated to kernel 4.12 yet :/
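
          For anyone in the same spot, the usual recovery path for a RAID5 array with a dead device is a degraded mount followed by a replace (a sketch only; device names, devid and mount point are placeholders):

            # mount with the failed device missing
            mount -o degraded /dev/sdb /mnt

            # replace the missing device (devid from 'btrfs filesystem show')
            # with the new disk; data is rebuilt from the remaining devices
            btrfs replace start <devid> /dev/sde /mnt
            btrfs replace status /mnt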



          • #35
            Originally posted by mortn View Post
            I've been using 4 x 3TB WD Greens in raid5 on my backup server for two years now without a single glitch! I even pulled out a disk to see if I could break it, but it kept on working. I put it back and everything went fine. Everything just works. I keep reading about how this will fail, but my hands-on experience tells me otherwise.
            You've been lucky, that's still not safe at all.



            • #36
              Originally posted by starshipeleven View Post
              You've been lucky, that's still not safe at all.
              Especially WD Green; that's terrifying.



              • #37
                I mainly use WD Greens too. I have disabled the head parking...
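
                For anyone wondering how: WD Greens park their heads after an "idle3" timeout of a few seconds, and idle3ctl from idle3-tools can read and disable it (the device name is a placeholder; the drive needs a power cycle before the change takes effect):

                  # show the current idle3 timer
                  idle3ctl -g /dev/sdX

                  # disable head parking entirely
                  idle3ctl -d /dev/sdX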
