Btrfs Getting RAID 5/6 Fixes In Linux 4.12 Kernel
-
Been using 4 x 3TB WD Green in RAID 5 on my backup server for two years now without a single glitch! I even pulled out a disk to see if I could break it, but it kept on working. I put it back and everything went fine. Everything just works. I keep reading about how this will fail, but my hands-on experience tells me otherwise.
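If anyone wants to repeat that pull-a-disk test, I'd verify the array afterwards rather than trusting that it "went fine". Something like this (the mount point is just an example):

    # per-device error counters since the last reset
    btrfs device stats /mnt/backup

    # re-read all data, verify checksums, repair from parity (-B: run in foreground)
    btrfs scrub start -B /mnt/backup
    btrfs scrub status /mnt/backup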
-
I have been using Btrfs since Ubuntu 12.04 without any issues. Today I have two desktops with Arch Linux and two notebooks, one with Arch and one with openSUSE, all with Btrfs. My Arch boxes are set up with TESTING, STABLE and OLDSTABLE snapshots of the root fs, and I keep the last two kernels, which allows me to roll back the system and boot the snapshots. I have subvolumes for /home and the pacman cache, which means I keep that data even if I do a rollback. Read-only snapshots of /home keep my data safe from ransomware and allow incremental backups with "send | receive". If someone is considering Btrfs for a Samba server with Windows clients, take a look at "Btrfs Enhanced Server Side Copy". Btrfs also has Shadow Copy support, like ZFS.
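The send/receive workflow looks roughly like this; the snapshot names and the backup mount point are just examples from my setup:

    # take a read-only snapshot of /home (btrfs send requires a read-only snapshot)
    btrfs subvolume snapshot -r /home /home/.snapshots/home-day1

    # full backup: stream the snapshot to another Btrfs filesystem
    btrfs send /home/.snapshots/home-day1 | btrfs receive /mnt/backup

    # incremental backup: send only the difference against the previous snapshot
    btrfs subvolume snapshot -r /home /home/.snapshots/home-day2
    btrfs send -p /home/.snapshots/home-day1 /home/.snapshots/home-day2 | btrfs receive /mnt/backup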
-
Originally posted by Zucca:
I had a different experience with my raid10, but it's on 6 drives, so there's much more flexibility in various ways.
Anyway, when I was trying to find out what was causing all kinds of errors from ata1 to ata6 (it eventually turned out to be a kernel or hardware bug in the SATA controller), I started to unplug drives one after another, of course plugging one back in before pulling the second out (a hotswap cage made this easy). While one drive was unplugged, btrfs knew it couldn't access it anymore and "marked it" as failed; everything kept working without a problem, no read timeouts etc. I watched dmesg and there was a clear event when the drive was dropped from the pool. After I replugged a drive, I ran commands to add it back to the btrfs pool, replacing itself (see the commands sketched below).
This however isn't analogous to a case where one (or more) drives start to slowly die. I don't have experience with how many errors, and of what kind, need to occur before btrfs decides to mark a drive dead. I swap my drives too often (to bigger ones). :P
What I've read and understood is that the read-only nightmare usually comes when you lose one drive and, instead of replacing it, balance the data over the remaining drives to preserve the redundancy --> not enough space --> lockup.
All of the above is based on my own experience. I have not tried kernel 4.9 or newer, so perhaps it's worth a shot.
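The "replace itself" step Zucca mentions would be something like this; the devid, device paths and mount point are examples, and -f is needed because the replugged disk still carries the old filesystem signature:

    # find the devid of the dropped device
    btrfs filesystem show /mnt/pool

    # rebuild onto the replugged (or a new) disk in place
    btrfs replace start -f 3 /dev/sdc /mnt/pool
    btrfs replace status /mnt/pool

    # the other route after a permanent loss: mount degraded, add a disk,
    # then remove the missing one (this is the path that needs free space)
    mount -o degraded /dev/sdb /mnt/pool
    btrfs device add /dev/sdd /mnt/pool
    btrfs device remove missing /mnt/pool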