RAID 5/6 Continues Being Improved For Btrfs With Linux 3.20


  • RAID 5/6 Continues Being Improved For Btrfs With Linux 3.20

    Phoronix: RAID 5/6 Continues Being Improved For Btrfs With Linux 3.20

    Chris Mason has sent in his pull request of the Btrfs file-system changes for the Linux 3.20 (4.0?) kernel...


  • #2
I hope that btrfs attempts to integrate the raid-n work from Andreas (of SnapRAID). However, given the difficulty btrfs seems to be having with implementing raid5/6, I'm not holding my breath for that feature.
    What boggles my mind, though, is why the work wasn't accepted by upstream md. That would've given the Linux stack a great advantage over any other enterprise solution (to my knowledge).



    • #3
      Originally posted by liam View Post
I hope that btrfs attempts to integrate the raid-n work from Andreas (of SnapRAID). However, given the difficulty btrfs seems to be having with implementing raid5/6, I'm not holding my breath for that feature.
      What boggles my mind, though, is why the work wasn't accepted by upstream md. That would've given the Linux stack a great advantage over any other enterprise solution (to my knowledge).
If you take a look at how btrfs does RAID, you will see it has a far more interesting set of advantages in its core design. Technically, this design can contain an arbitrary mix of RAID levels; various subvolumes or even individual files can use different RAID schemes. It is also far more flexible in how it allocates blocks: for example, it does not require drives to be exactly the same size, and it can work on an arbitrary mix of devices as long as there are enough of them to satisfy the requested storage scheme.

Currently this is not fully implemented, but the underlying structures were designed to handle it, and there are plans to implement features like this in the future. It looks like good architecture work from Mr. Mason.
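As a concrete sketch of that flexibility (device names and the mount point below are placeholders; try this only on disposable devices, as root): btrfs lets you pick data and metadata profiles independently and convert between them online with a balance.

```shell
# Placeholders throughout: /dev/sdb, /dev/sdc, /dev/sdd, /mnt.
# Create a two-device filesystem; the devices do not have to be the same size.
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
mount /dev/sdb /mnt

# Add a third device later and convert data chunks to raid5 online,
# while keeping metadata on raid1 -- profiles are set per chunk type.
btrfs device add /dev/sdd /mnt
btrfs balance start -dconvert=raid5 -mconvert=raid1 /mnt

# Show which profiles are actually in use.
btrfs filesystem df /mnt
```

Because allocation happens in chunks rather than whole-disk stripes, the conversion can proceed while the filesystem stays mounted and in use.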



      • #4
        Current status?

        I'd like to know if there's any documentation or blog posts etc regarding the current state of RAID 5/6 in btrfs. I'd like to use it but have been avoiding it until it's "ready".



        • #5
RAID is old.

          I'm tired of software RAID; I can't even remember how many times my 4-disk raid1 server re-synchronized hdXA to hdXB for every stupid reason (power failure, shutdown problems due to graphics driver bugs, etc.).
          The only RAID configuration I can accept is one using RAID cards with battery backup.
          raid5 or raid6? Come on! These are stupidly complex configurations, and for what? Better to go with raid1 or raid10 on fast, cheap HBAs suited to getting maximum performance from SSDs.
          Anyway, I'm moving away from RAID; there are better solutions: object-level replication with better sync strategies in case of failure.
          Ceph and Gluster offer object-level replication, great availability, and scale-out/scale-up possibilities.

The wiki page http://en.wikipedia.org/wiki/Btrfs talks about object-level RAID, but no implementation is available right now. What a shame.



          • #6
            Originally posted by SystemCrasher View Post
If you take a look at how btrfs does RAID, you will see it has a far more interesting set of advantages in its core design. Technically, this design can contain an arbitrary mix of RAID levels; various subvolumes or even individual files can use different RAID schemes. It is also far more flexible in how it allocates blocks: for example, it does not require drives to be exactly the same size, and it can work on an arbitrary mix of devices as long as there are enough of them to satisfy the requested storage scheme.

            Currently this is not fully implemented, but the underlying structures were designed to handle it, and there are plans to implement features like this in the future. It looks like good architecture work from Mr. Mason.
That's why btrfs improves only slowly and has difficulty competing with ZFS: btrfs is far too ambitious with its features.



            • #7
              Originally posted by sp82 View Post
I'm tired of software RAID; I can't even remember how many times my 4-disk raid1 server re-synchronized hdXA to hdXB for every stupid reason (power failure, shutdown problems due to graphics driver bugs, etc.).
Not sure what the problem was there, but surely you're not talking about btrfs raid? I have a btrfs raid1 with two disks, and so far I've had no issues whatsoever. That device is always on, without a battery or any sort of power backup, and lately we've had a few occasions where the power in our flat went out. I do regular btrfs scrubs to detect possible data corruption, and the only time something needed to be fixed was when the raid was still powered by a RasPi, due to its very weak USB capabilities (frequent USB disconnects every day, with the 2 disks connected through USB).

I'd say btrfs raid is pretty awesome. "Normal" software/hardware raid doesn't know which of the 2 possible copies (in a 2-disk raid1 setup) is corrupted and which is clean, while btrfs can easily tell and fix it because of checksums.
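For reference, the checksum-verification-and-repair described above is driven by the scrub commands (the mount point `/mnt` is a placeholder):

```shell
# /mnt is a placeholder for the btrfs mount point.
# Foreground scrub (-B): reads all data and metadata and verifies checksums;
# on raid1, a block whose checksum fails is rewritten from the good mirror.
btrfs scrub start -B /mnt

# Cumulative per-device error counters (read, write, checksum).
btrfs device stats /mnt
```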

I have some doubts about stability, though; seeing all the btrfs patches landing in each and every kernel release makes me feel a little uncomfortable. Let's hope they reach feature completeness soon, followed by a year or two of heavy bug fixing (if required); then the code base can go into maintenance mode and become very reliable.



              • #8
                Originally posted by sp82 View Post
I'm tired of software RAID; I can't even remember how many times my 4-disk raid1 server re-synchronized hdXA to hdXB for every stupid reason
Welcome to AHCI. This is a hardware issue. Luckily, even remotely modern kernels will read the RAID autodetect partition information and assemble the raid even if the underlying device names have changed. That assumes you set up the partition information correctly, of course.
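A hedged sketch of that name-independent assembly in md terms (device names are placeholders, and Debian derivatives keep the config at /etc/mdadm/mdadm.conf rather than /etc/mdadm.conf): mdadm identifies array members by the UUID in their superblocks, not by device name.

```shell
# Show the md superblock on a member device, including the array UUID.
mdadm --examine /dev/sda1

# Assemble every array found by scanning superblocks -- matching is done
# by UUID, so it works even if the /dev/sdX names changed across boots.
mdadm --assemble --scan

# Record the array identity so assembly never depends on device names.
mdadm --detail --scan >> /etc/mdadm.conf
```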

raid5 or raid6? Come on! These are stupidly complex configurations, and for what? Better to go with raid1 or raid10 on fast, cheap HBAs suited to getting maximum performance from SSDs.
                That's a desktop user talking, clearly. RAID5/6 are generally not applicable to desktop users. Home users might choose to build a NAS/media server using it, I suppose.

Anyway, I'm moving away from RAID; there are better solutions: object-level replication with better sync strategies in case of failure. Ceph and Gluster offer object-level replication, great availability, and scale-out/scale-up possibilities.
                HAHAHAHA!! And RAID is complex, you say?!



                • #9
                  Originally posted by SystemCrasher View Post
If you take a look at how btrfs does RAID, you will see it has a far more interesting set of advantages in its core design. Technically, this design can contain an arbitrary mix of RAID levels; various subvolumes or even individual files can use different RAID schemes. It is also far more flexible in how it allocates blocks: for example, it does not require drives to be exactly the same size, and it can work on an arbitrary mix of devices as long as there are enough of them to satisfy the requested storage scheme.

                  Currently this is not fully implemented, but the underlying structures were designed to handle it, and there are plans to implement features like this in the future. It looks like good architecture work from Mr. Mason.
I'm not advocating md over btrfs in terms of design. I was only pointing out a well-written, fast library that sees much use and supports raid redundancy with up to, IIRC, 5 lost disks. Given btrfs's design I'd be amazed if they could integrate this work as-is, but the actual implementation is tricky, hence why I'm sad it's not making its way upstream.
                  I'm aware of btrfs's many positive qualities, but AIUI the move from raid6 to anything beyond is quite difficult, so I'd be happily surprised if their design allowed for arbitrary levels of redundancy in any meaningful way.
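To make the single-parity case concrete: raid5 survives one lost disk because its parity is a plain byte-wise XOR, so the missing block falls out of XOR-ing everything that survives. Going beyond two parities requires Reed-Solomon-style coding over a Galois field, which is what the SnapRAID work implements. A minimal, purely illustrative sketch of the XOR case (not btrfs or md code):

```python
def xor_parity(blocks):
    """RAID5-style parity: the byte-wise XOR of all data blocks."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

def reconstruct(surviving_blocks, parity):
    """Recover a single lost data block: XOR the parity with the survivors."""
    return xor_parity(list(surviving_blocks) + [parity])

# Three data blocks on three "disks", parity stored on a fourth.
data = [b"AAAA", b"BBBB", b"CCCC"]
p = xor_parity(data)

# Simulate losing the second disk and recover its block.
recovered = reconstruct([data[0], data[2]], p)
assert recovered == b"BBBB"
```

One XOR parity can only solve for one unknown, which is exactly why each extra tolerated disk failure needs an additional, independent parity equation.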



                  • #10
                    Originally posted by Shaman666 View Post
Welcome to AHCI. This is a hardware issue. Luckily, even remotely modern kernels will read the RAID autodetect partition information and assemble the raid even if the underlying device names have changed. That assumes you set up the partition information correctly, of course.



                    That's a desktop user talking, clearly. RAID5/6 are generally not applicable to desktop users. Home users might choose to build a NAS/media server using it, I suppose.



                    HAHAHAHA!! And RAID is complex, you say?!

I'm saying that raid5 and raid6 are more complex than raid10.

