Btrfs Will Finally "Strongly Discourage" You When Creating RAID5 / RAID6 Arrays


  • #21
    Originally posted by mazumoto
    This is unfortunate... a few years ago I got the impression that most problems had been fixed (besides the write hole). I remember them changing the status for raid56 on their wiki from "experimental" to "mostly safe" or similar.
    I've actually been running a RAID5 array of 5 disks for a few years now, because I can't afford 3 more disks for RAID1. So I definitely hope they keep (or start) working on raid56 again
    Is RAID 5/6 really still a thing with btrfs or ZFS?
    Even RAID 0 and RAID 10 are questionable for SATA SSDs (higher sequential throughput, lower IOPS), and we're moving more and more toward NVMe.

    • #22
      Originally posted by AmericanLocomotive
      Do these bugs only exist when using software raid? What if you have a hardware raid5 controller?
      Those bugs are btrfs-specific, so any hardware RAID controller is unaffected. But if you don't use any RAID level with btrfs, you lose its bitrot-correction abilities.

      Hardware RAID controllers aren't really a thing any more; software RAID (ZFS / mdadm, not btrfs) has far more benefits. Just use a UPS, though.
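For anyone who hasn't driven mdadm before, a minimal sketch of the traditional software-RAID path being recommended here. Device names are hypothetical, and the commands are echoed so the sketch is safe to paste; drop the echos (and run as root) to use them for real, which wipes those disks.

```shell
# Hypothetical 4-disk RAID5 under mdadm, with a plain filesystem on top.
# Echoed rather than executed on purpose.
create="mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde"
echo "$create"
echo "mkfs.ext4 /dev/md0"
# Persist the array definition so it assembles at boot:
echo "mdadm --detail --scan >> /etc/mdadm.conf"
```

Note that mdadm gives you drive-failure protection only; unlike btrfs/ZFS it has no checksums, so it cannot tell which mirror holds the corrupted copy.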

      • #23
        Originally posted by skeevy420

        From one person who can't afford all the disks they'd like to another, you should look into migrating to ZFS. I've been rocking the same ZFS setup for many years now, across multiple different disks.

        Code:
        skeevy42@… ~ $ lsblk -f
        NAME     FSTYPE      FSVER  LABEL       UUID  FSAVAIL  FSUSE%  MOUNTPOINT
        sda
        ├─sda1   vfat        FAT32                    251.8M   0%      /boot/efi
        └─sda2   btrfs              GENTOO            439.3G   2%      /var
        sdb
        ├─sdb1   zfs_member  5000   multimedia
        └─sdb9
        sdc
        ├─sdc1   zfs_member  5000   multimedia
        └─sdc9
        sdd
        ├─sdd1
        ├─sdd2   ntfs
        └─sdd3   ntfs
        ....
        BTRFS can handle this just fine. No need for ZFS.

        • #24
          Originally posted by RussianNeuroMancer
          Thanks for sharing.

          • #25
            Originally posted by RussianNeuroMancer
            This is a nonsense article. RAID5/RAID6 is NOT going to stop working on some magical date based on theoretical statistics. At worst it becomes less feasible on large filesystems built from low-quality storage devices with poor URE ratings, in which case you make two smaller arrays instead.

            Contrary to popular belief RAID does *NOT* protect you against data corruption. It protects you against drive failure.

            There is a lot of retry logic both in hard drives (some drives try a few times, others try many times) and in the Linux block layer. An unrecoverable read error does not necessarily mean the sector is unrecoverable every time it is read; retry a few hundred times and perhaps you recover the data anyway. A corruption may also go completely unnoticed, and the URE may even happen in FREE SPACE not in use by the filesystem, so you might never see it.
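To put a number on those statistics: the usual "RAID5 is dead by year X" arithmetic just multiplies the advertised URE rate by the number of bits a rebuild has to read. A quick sketch, assuming a consumer-class 1-per-1e14-bits URE spec, a 10 TB rebuild, and independent errors (which, as noted above, real drives and their retry logic do not obey):

```shell
# P(at least one URE) ~= 1 - exp(-bits_read * rate) for small per-bit rates.
# 10 TB = 10 * 1e12 bytes * 8 bits/byte read during the rebuild.
p=$(awk 'BEGIN { bits = 10 * 1e12 * 8; rate = 1e-14; printf "%.3f", 1 - exp(-bits * rate) }')
echo "P(>=1 URE over a 10 TB rebuild) = $p"
```

The headline ~55% is exactly the kind of figure those doom articles rest on; halve the array size or improve the real-world error rate by one order of magnitude and it collapses, which is why a fixed calendar date makes no sense.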

            Besides, it is worth mentioning again that all btrfs "RAID" levels, including "RAID5/6", are not really RAID in the traditional sense. Proper, decent RAID is done in the MD layer, and for what it is meant to do, its reliability is simply unmatched.

            http://www.dirtcellar.net

            • #26
              Is btrfs RAID 5/6 even fixable without changing the on-disk format? All I know is that its RAID has been an issue for at least 5 years now.

              • #27
                Originally posted by darkbasic
                I don't trust RAID 1 on btrfs either; I remember a huuuge list of issues mentioned on the mailing list regarding its implementation. Most of them could be circumvented if the user is aware of them, but since they don't care about documentation you can't really expect anyone to know all the pitfalls. I use btrfs only in non-RAID configurations; for everything else I stick with ZFS.

                P.S.
                The write hole is the least of the problems in raid5.
                RAID-1 on btrfs is the ideal configuration and the most tested.

                I can't believe you would run btrfs in SINGLE-copy mode when you could be running RAID-1 with two copies. btrfs's recovery features are completely unavailable with a single copy: your only options after corruption are to reformat and restore from backup (for a metadata error) or to restore the affected file(s) (for a data error).

                I've successfully recovered from two separate drive failures in btrfs RAID-10. It was a lot more convenient to rebuild the 10 TB array than to restore it.

                I suppose by "pitfalls" you mean things like some data chunks being written in a SINGLE profile while mounted in degraded mode and requiring someone to know to rebalance everything after replacing the failed drive.

                I do think it would be nice for distributions like SUSE, Red Hat, and Fedora to ship boot scripts that automatically handle repairing degraded arrays, perhaps even with a GUI.
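For reference, a sketch of the manual dance being described (the devid, replacement device path, and mount point are hypothetical; the commands are echoed so the sketch is safe to paste, since the real ones need root and a genuinely degraded array):

```shell
# 1) Rebuild onto the replacement disk, assuming the failed disk was devid 2:
echo "btrfs replace start 2 /dev/sdX /mnt"
# 2) Convert any chunks that were written as SINGLE while mounted degraded
#    back to RAID1; the "soft" filter skips chunks already in the target profile.
balance="btrfs balance start -dconvert=raid1,soft -mconvert=raid1,soft /mnt"
echo "$balance"
```

Forgetting step 2 is exactly the pitfall mentioned above: the array looks healthy again, but some chunks silently have only one copy.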

                • #28
                  Uff... good that ZFS still fully supports RAID-Z.

                  • #29
                    Originally posted by kiffmet
                    Is btrfs RAID 5/6 even fixable without changing the on-disk format? All I know is that its RAID has been an issue for at least 5 years now.
                    I think it's hard to say, because none of the companies that actually employ btrfs developers seem interested in improving the RAID situation. They still think it's a good idea for a btrfs array to drop to read-only, and stay that way until manual intervention, even when there is no data loss, which completely flies in the face of RAID's purpose: improving availability in the face of hardware failure. As with the poor documentation, there's just a lot of apathy surrounding the parts of btrfs not actively used by Facebook, Oracle, or SUSE.

                    • #30
                      Originally posted by darkbasic
                      I don't trust RAID 1 on btrfs either; I remember a huuuge list of issues mentioned on the mailing list regarding its implementation. Most of them could be circumvented if the user is aware of them, but since they don't care about documentation you can't really expect anyone to know all the pitfalls. I use btrfs only in non-RAID configurations; for everything else I stick with ZFS.

                      P.S.
                      The write hole is the least of the problems in raid5.
                      I think the "problems" you are mentioning concern files marked as no-copy-on-write. I follow the mailing list too and cannot remember anything like what you describe. Running btrfs in single mode is only useful for detecting errors. You should try RAID1 on two devices once, inject random data into one of the devices, and watch how btrfs fixes it. You might be surprised by how well it works.
                      Last edited by waxhead; 07 March 2021, 06:24 PM.

                      http://www.dirtcellar.net
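The experiment suggested above is easy to script with loop devices instead of spare disks. A sketch (file and mount paths are hypothetical; the commands are echoed so it is safe to paste, since the real ones need root and overwrite the backing files):

```shell
# Build a btrfs RAID1 pair on two loopback files, corrupt one on purpose,
# then let scrub repair it from the intact mirror. Echoed, not executed.
echo 'truncate -s 1G /tmp/d0.img /tmp/d1.img'
echo 'losetup /dev/loop0 /tmp/d0.img; losetup /dev/loop1 /tmp/d1.img'
echo 'mkfs.btrfs -m raid1 -d raid1 /dev/loop0 /dev/loop1'
echo 'mount /dev/loop0 /mnt; cp /some/testfile /mnt; umount /mnt'
echo 'dd if=/dev/urandom of=/tmp/d1.img bs=1M count=4 seek=256 conv=notrunc'
scrub='mount /dev/loop0 /mnt; btrfs scrub start -B /mnt'
echo "$scrub"
```

`btrfs scrub start -B` runs in the foreground and reports corrected errors when it finishes; the checksums are what let btrfs know which mirror holds the bad copy.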
