Btrfs Getting RAID 5/6 Fixes In Linux 4.12 Kernel

  • Zucca
    replied
    I use WD greens mainly too. I have disabled the head parking...

  • rubdos
    replied
    Originally posted by starshipeleven View Post
    You've been lucky; that's still not safe at all.
    Especially WD Green; that's terrifying.

  • starshipeleven
    replied
    Originally posted by mortn View Post
    Been using 4 x 3tb WD green in raid5 on my backup server for 2 years now without a single glitch! I even pulled out a disk to see if I could break it, but it keeps on working. Put it back and everything went fine. Everything just works. I keep reading about how this will fail, but my hands-on experience tells me otherwise.
    You've been lucky; that's still not safe at all.

  • rubdos
    replied
    Originally posted by mortn View Post
    Been using 4 x 3tb WD green in raid5 on my backup server for 2 years now without a single glitch! I even pulled out a disk to see if I could break it, but it keeps on working. Put it back and everything went fine. Everything just works. I keep reading about how this will fail, but my hands-on experience tells me otherwise.
    Now add a new 3tb disk to the array, or corrupt one of them. I'm currently recovering a btrfs RAID5 system that wasn't updated to kernel 4.12 yet :/
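
    (For the curious, a minimal sketch of what "add a new disk to the array" involves, assuming the array is mounted at /mnt/backup and the new disk is /dev/sde; both names are made up for illustration. It just wraps the "btrfs device add" and "btrfs balance start" commands from btrfs-progs.)

    import subprocess

    MOUNT = "/mnt/backup"   # assumed mount point of the existing RAID5 array
    NEW_DEV = "/dev/sde"    # assumed new disk

    def run(*cmd):
        # Run one btrfs-progs command and abort if it fails.
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Add the new disk to the mounted filesystem.
    run("btrfs", "device", "add", NEW_DEV, MOUNT)

    # A full balance restripes existing data and metadata so the chunks
    # actually spread across the new device as well.
    run("btrfs", "balance", "start", MOUNT)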

  • mortn
    replied
    Been using 4 x 3tb WD green in raid5 on my backup server for 2 years now without a single glitch! I even pulled out a disk to see if I could break it, but it keeps on working. Put it back and everything went fine. Everything just works. I keep reading about how this will fail, but my hands-on experience tells me otherwise.

  • unicks
    replied
    I've been using Btrfs since Ubuntu 12.04 without any issues. Today I have two desktops with Arch Linux and two notebooks, one with Arch and one with openSUSE, all on Btrfs. My Arch boxes are set up with root-filesystem snapshots named TESTING, STABLE and OLDSTABLE, and I keep the last two kernels, which lets me roll back the system and boot the snapshots. I have subvolumes for /home and the pacman cache, which means I keep that data even if I do a rollback. Read-only snapshots of /home keep my data safe from ransomware and allow incremental backups with "send | receive". If someone is considering Btrfs for a Samba server with Windows clients, take a look at "Btrfs Enhanced Server Side Copy"; Btrfs also has Shadow Copy support, like ZFS.
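
    (A minimal sketch of the read-only-snapshot plus "send | receive" backup flow described above. The paths are assumptions for illustration only: /home is a subvolume, snapshots go under /home/.snapshots, and /mnt/backup is a Btrfs filesystem on another disk.)

    import subprocess
    from datetime import date

    HOME = "/home"                  # subvolume to back up (assumed)
    SNAPDIR = "/home/.snapshots"    # where read-only snapshots live (assumed)
    BACKUP = "/mnt/backup"          # Btrfs filesystem on another disk (assumed)

    def run(cmd, **kw):
        print("+", " ".join(cmd))
        return subprocess.run(cmd, check=True, **kw)

    snap = f"{SNAPDIR}/home-{date.today()}"

    # 1. Take a read-only snapshot; only a read-only snapshot can be used
    #    as a send source, and it can't be rewritten in place afterwards.
    run(["btrfs", "subvolume", "snapshot", "-r", HOME, snap])

    # 2. Pipe "btrfs send" into "btrfs receive" on the backup filesystem.
    #    Once a previous snapshot exists on both sides, adding "-p <parent>"
    #    to the send turns this into an incremental transfer.
    send = subprocess.Popen(["btrfs", "send", snap], stdout=subprocess.PIPE)
    run(["btrfs", "receive", BACKUP], stdin=send.stdout)
    send.stdout.close()
    send.wait()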

  • waxhead
    replied
    Originally posted by Zucca View Post
    I had a different experience with my RAID10, but it's on 6 drives, so there's much more flexibility in various ways.
    Anyway, when I was trying to find out what was causing all kinds of errors from ata1 to ata6 (it eventually turned out to be a kernel or hardware bug in the SATA controller), I started to unplug drives one after another, of course plugging each one back before pulling the next out (a hotswap cage made this easy). While a drive was unplugged, btrfs knew it couldn't access it anymore and "marked it" as failed; everything kept working without a problem, no read timeouts etc. I watched dmesg and there was a clear event when the drive was dropped from the pool. After I replugged a drive, I ran commands to add it back to the btrfs pool, replacing itself.
    This, however, isn't analogous to a case where one (or more) drive starts to slowly die. I don't have experience with how many errors, and what kind of errors, need to occur before btrfs decides to mark a drive dead. I swap my drives so often (for bigger ones). :P
    What I've read and understood is that the read-only nightmare usually comes when you lose one drive and don't replace it, but instead balance the data over the remaining drives to preserve redundancy --> not enough space --> lockup.
    Perhaps I should test RAID10 again on a newer kernel. In my experience, BTRFS does not yet recognize a drive that is unplugged and replugged UNLESS it shows up as the same /dev/sdX device. If /dev/sdX changes to /dev/sdY for that drive, you have NOT restored the array to a working state. Also, BTRFS has no criteria for dropping a device even if it racks up tons of read/write/corruption errors; it will happily keep trying to write to it and never marks it as unsuitable/defective.
    All of the above is based on my own experience; I have not tried with kernel 4.9 or newer, so perhaps it's worth a shot.
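
    (Since, as described above, Btrfs neither drops a failing device on its own nor re-adopts a replugged one, here is a minimal sketch of the manual path back to a working array. Device names and the mount point are assumptions; the underlying btrfs-progs commands are "device stats", "replace" and "scrub".)

    import subprocess

    MOUNT = "/mnt/array"      # assumed mount point
    FAILED = "/dev/sdc"       # drive that was unplugged or is throwing errors (assumed)
    REPLACEMENT = "/dev/sdf"  # healthy spare (assumed)

    def run(*cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Per-device read/write/corruption error counters. Btrfs records these
    # but never acts on them by itself.
    run("btrfs", "device", "stats", MOUNT)

    # Rebuild onto the replacement disk in the foreground (-B); this is the
    # supported path rather than expecting the array to heal after a replug.
    run("btrfs", "replace", "start", "-B", FAILED, REPLACEMENT, MOUNT)

    # Scrub afterwards (also foreground) to verify checksums across the
    # rebuilt array.
    run("btrfs", "scrub", "start", "-B", MOUNT)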

  • 89c51
    replied
    Originally posted by MartinK View Post
    Btrfs also does not work particularly well with the ancient Android kernels Sailfish OS has to use in order to reuse the Android hardware adaptation.
    This is and always will be the problem.


  • ldo17
    replied
    Originally posted by pcxmac View Post

    ... on spinning rust.
    It seems pretty fashionable nowadays to use this “rust” pejorative when referring to hard drives.

    Let me just point out that rust isn’t magnetic.

  • dragon321
    replied
    Thank you all for the answers.
