Btrfs RAID 0/1/5/6/10 Benchmarks On Linux 4.12

  • Btrfs RAID 0/1/5/6/10 Benchmarks On Linux 4.12

    Phoronix: Btrfs RAID 0/1/5/6/10 Benchmarks On Linux 4.12

    With Btrfs RAID 5/6 seeing fixes in Linux 4.12, if you are re-evaluating the setup of a Btrfs native RAID array, here are some fresh benchmarks using four solid-state drives.

    http://www.phoronix.com/vr.php?view=24691

  • #2
    Is RAID 5/6 on BTRFS fully fixed yet? I know some fixes went through, but is it actually fixed completely? I personally prefer RAID 10, but it would be nice to see the 5/6 functionality become production-ready soon.


    • #3
      Originally posted by SuperIce97 View Post
      Is RAID 5/6 on BTRFS fully fixed yet? I know some fixes went through, but is it actually fixed completely? I personally prefer RAID 10, but it would be nice to see the 5/6 functionality become production-ready soon.
      No it's not. It's better but not safe yet.


      • #4
        During this testing process, we hadn't run into any reliability troubles with any of the Btrfs RAID levels tested.
        In all fairness, that would also require more extensive testing over longer periods of time.

        Originally posted by starshipeleven View Post
        No it's not. It's better but not safe yet.
        That's a bit misleading though. Nothing is. mdadm isn't safe either, nor is hardware RAID. They're more reliable, possibly, sure. But they're not "safe". Backups are always a requirement in that area.


        • #5
          Originally posted by necrophcodr View Post
          That's a bit misleading though. Nothing is. mdadm isn't safe either, nor is hardware RAID. They're more reliable, possibly, sure. But they're not "safe". Backups are always a requirement in that area.
          Backups are a requirement always, regardless of RAID type or technology. RAID is never ever a substitute for backing up your data.

          That said, I've found mdadm RAID1 mirror to be exceptionally reliable and robust, and it's the only RAID I use for my most important data.
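          A minimal sketch of such a two-disk mdadm mirror (device names /dev/md0, /dev/sdb, /dev/sdc and the mount point are placeholders; everything here needs root):

          ```shell
          # Assemble a two-disk RAID1 mirror from placeholder devices.
          mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
          mkfs.ext4 /dev/md0
          mount /dev/md0 /srv/data

          # Health checks: a degraded or rebuilding array shows up here.
          cat /proc/mdstat
          mdadm --detail /dev/md0
          ```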


          • #6
            For my own machine, I'm using lvm to make a RAID 1 with two 4 TB HDDs formatted in Ext4.

            Is the integrated RAID feature of Btrfs better than the one from lvm?
            (just focusing on Btrfs vs lvm, not Btrfs vs Ext4 since I'm pretty sure I don't need all the features of Btrfs)


            • #7
              Originally posted by phoronix View Post
              Phoronix: Btrfs RAID 0/1/5/6/10 Benchmarks On Linux 4.12

              With Btrfs RAID 5/6 seeing fixes in Linux 4.12, if you are re-evaluating the setup of a Btrfs native RAID array, here are some fresh benchmarks using four solid-state drives.

              http://www.phoronix.com/vr.php?view=24691
              Michael:
              Are these tests done using the same RAID level for both data and metadata? You can get very different results from mixing RAID levels for the two, for example metadata raid1 with data raid5. Also, when you post articles such as this one, I think you should point out that raid5/6 is still NOT stable, since the article could easily be misinterpreted as saying it is.

              It would also be nice if you had some reliability tests for BTRFS (and mdadm, lvm, zfs, etc.). I know that would be a lot of work, but perhaps something automated could be done. I'm thinking of tests that introduce corruption, fail disks, and so on.
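              For reference, the data/metadata split mentioned above is exposed directly by mkfs.btrfs; a sketch with placeholder device names (the four SSDs from the article would be the real arguments):

              ```shell
              # -d sets the data profile, -m the metadata profile; they are
              # independent, so raid5 data can sit alongside raid1 metadata.
              mkfs.btrfs -d raid5 -m raid1 /dev/sdb /dev/sdc /dev/sdd /dev/sde

              # After mounting, report which profiles are actually in use:
              btrfs filesystem df /mnt
              ```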


              • #8
                Originally posted by Creak View Post
                For my own machine, I'm using lvm to make a RAID 1 with two 4 TB HDDs formatted in Ext4.

                Is the integrated RAID feature of Btrfs better than the one from lvm?
                (just focusing on Btrfs vs lvm, not Btrfs vs Ext4 since I'm pretty sure I don't need all the features of Btrfs)
                I may be wrong, but I think Btrfs' RAID implementation replicates individual files, not entire volumes as LVM does, so in theory it's more flexible. In practice, of course, if you want Ext4 with RAID you can only use LVM, and if you want Btrfs with RAID you must use its own RAID support (I don't remember why, but there is some reason it should not be used on top of LVM).
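                As a sketch of that flexibility (placeholder devices and mount point, assuming root): Btrfs RAID needs no separate volume-manager layer, and profiles can even be converted while the filesystem is mounted:

                ```shell
                # Native Btrfs RAID1 across two whole disks, no LVM underneath.
                mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
                mount /dev/sdb /mnt

                # Later: add a third disk and rebalance so existing data is
                # redistributed across all members, online.
                btrfs device add /dev/sdd /mnt
                btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt
                ```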

                BTW, Btrfs' other features (snapshots, subvolumes, CoW, etc.) are EXTREMELY addictive. I also thought I didn't really need them, but after trying them once I can't imagine living without them. It's not even just for servers - I'm talking about a laptop used primarily as a development machine and for day-to-day usage - so beware! ;-)
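                For anyone curious, the snapshot workflow is only a couple of commands (subvolume names and paths here are placeholders):

                ```shell
                # Subvolumes are cheap, independently snapshottable trees.
                btrfs subvolume create /mnt/@home

                # CoW snapshot: near-instant, shares unmodified blocks
                # with the source subvolume.
                btrfs subvolume snapshot /mnt/@home /mnt/@home-snap
                # -r makes the snapshot read-only, e.g. as a stable
                # source for btrfs send.
                btrfs subvolume snapshot -r /mnt/@home /mnt/@home-snap-ro
                ```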


                • #9
                  Originally posted by necrophcodr View Post
                  That's a bit misleading though. Nothing is. mdadm isn't safe either, nor is hardware RAID. They're more reliable, possibly, sure. But they're not "safe". Backups are always a requirement in that area.
                  It's not misleading. That feature is still experimental and it still lacks basic stuff. I'm not an anti-btrfs troll; I use it every day (RAID1) and follow their mailing list.


                  • #10
                    Better question: are there any known bugs in Btrfs that eat your data? Did they manage to fix them all, or are some still left?
