Btrfs Adds Degenerate RAID Support, Performance Improvements With Linux 5.15


  • Btrfs Adds Degenerate RAID Support, Performance Improvements With Linux 5.15

    Phoronix: Btrfs Adds Degenerate RAID Support, Performance Improvements With Linux 5.15

    The Btrfs file-system updates have landed now in Linux 5.15 mainline with some exciting new features and improvements...

    https://www.phoronix.com/scan.php?pa...nux-5.15-Btrfs

  • #2
    Under the native Btrfs RAID [de]generate modes, RAID0 can function off a single device and RAID10 can function with two devices rather than needing two devices for RAID0 and four devices for RAID10. This Btrfs RAID degenerate mode was added to assist when converting or removing devices from an array while preserving the profile type.
    I'll admit to being a little bit shocked that it doesn't work that way already -- isn't that the point of using a mirrored RAID type?
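The device-count change the article describes can be summarized in a small sketch (a simplified model, not kernel code; the minimum-device numbers come from the article text, and the function name is made up for illustration):

```python
# Minimum device counts for btrfs block-group profiles, before and after
# the Linux 5.15 "degenerate" change: RAID0 drops from 2 devices to 1,
# RAID10 from 4 devices to 2.
MIN_DEVICES = {
    # profile: (pre-5.15 minimum, 5.15 "degenerate" minimum)
    "raid0":  (2, 1),
    "raid10": (4, 2),
}

def can_keep_profile(profile: str, devices: int, degenerate: bool = True) -> bool:
    """Return True if `devices` drives are enough to keep `profile`."""
    pre, post = MIN_DEVICES[profile]
    return devices >= (post if degenerate else pre)
```

This is why removing a device from a two-device RAID0 (or shrinking a four-device RAID10 to two) can now preserve the profile instead of forcing a conversion.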



    • #3
Michael, this is somewhat off topic, though not entirely, because it will impact the performance and reliability of end users' experience with any filesystem. Have you been following the recently revealed bait-and-switch tactics in the SSD space, where properly spec'd hardware is sent to reviewers and the electronics are then swapped for cheaper, off-spec ICs (like TLC storage being changed to QLC without notification or a model number change) in the units actually shipped to customers?

      https://www.tomshardware.com/news/sa...-ssd-parts-too
      https://www.extremetech.com/computin...and-the-p2-ssd

      If you've reported on it already, I apologize. I missed it.



      • #4
        I'll admit to being a little bit shocked that it doesn't work that way already -- isn't that the point of using a mirrored RAID type?
That's because it makes no sense in any real-world usage: RAID 0 is not mirroring, it's striping, and on a single disk there can't be any striping.

RAID 10 with 2 drives is just a RAID 1 (a mirror on 2 disks without any striping layer on top of it).
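Capacity-wise the equivalence is easy to see with a toy model (a rough illustration in terms of data copies; it ignores metadata, chunk-level allocation, and mixed device sizes, so it is not how btrfs actually computes free space):

```python
# Number of copies each btrfs "RAID" profile keeps of the data.
PROFILE_COPIES = {"single": 1, "raid0": 1, "raid1": 2, "raid10": 2}

def usable_bytes(profile: str, device_sizes: list) -> int:
    """Crude approximation of usable capacity: raw space / copy count."""
    return sum(device_sizes) // PROFILE_COPIES[profile]
```

With two equal drives, raid1 and raid10 give the same usable capacity, which is the point being made above: on two devices there is nothing left to stripe over.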



        • #5
          Originally posted by Quidnam View Post

          I'll admit to being a little bit shocked that it doesn't work that way already -- isn't that the point of using a mirrored RAID type?
          I think it is worth reminding everyone that BTRFS "RAID" is not like regular RAID.

RAID0 in this case means ONE instance of the data, striped over as many devices as possible.
RAID10 is TWO instances of the data, striped over as many devices as possible.

Therefore RAID0 can work with only one device (effectively making it a SINGLE device), and RAID10 with only two devices (effectively making it a two-device RAID1).

It is important to remember that if you add devices to a single-device RAID0 setup, BTRFS will stripe new data across as many devices as it can, which means that if you lose one disk your data is toast. The same goes for "RAID10": if you lose more than one device your data might be toast.
BTRFS RAID10 is actually technically both worse and better than regular RAID10 because of how it places stripes across devices.

RAID0 with one device is really just the same as SINGLE, and RAID10 with two devices is really the same as RAID1 with two devices. I think the issue here stems from earlier arguments where people wanted BTRFS to "fall back" to the SINGLE or RAID1 profile to keep working somewhat. This would create multiple storage profiles on the filesystem, and a RAID10 that was "degraded" to RAID1, for example, would not know to spread its copies over more storage devices if they became available later. E.g. a rebalance would not create the expected storage profile if some chunks auto-degrade to RAID1.
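The "spread over as many devices as possible" point above can be illustrated with a tiny simulation (a toy model, not btrfs's actual allocator; real btrfs picks the devices with the most free space, and the random pairing here is an assumed stand-in):

```python
import random
from itertools import combinations

def allocate_chunks(num_devices, num_chunks, copies=2, seed=0):
    """Place each chunk's copies on distinct devices, spread over the
    whole array (simplified: random distinct pairs per chunk)."""
    rng = random.Random(seed)
    return [tuple(rng.sample(range(num_devices), copies))
            for _ in range(num_chunks)]

def survives(chunks, failed_devices):
    """Data survives only if every chunk keeps at least one live copy."""
    return all(any(dev not in failed_devices for dev in chunk)
               for chunk in chunks)

chunks = allocate_chunks(num_devices=6, num_chunks=200)

# Any single-device failure is survivable: each chunk has two copies
# on distinct devices.
single_ok = all(survives(chunks, {d}) for d in range(6))

# But because chunks are spread over ALL devices, in practice almost any
# two-device failure hits both copies of some chunk.
two_dev_fatal = [pair for pair in combinations(range(6), 2)
                 if not survives(chunks, set(pair))]
```

This is the trade-off against a traditional RAID10 with fixed mirror pairs, where some multi-device failures (on different pairs) remain survivable.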

I agree that this should have been fixed a long time ago, and I am also constantly frustrated by the "RAID" terminology used by BTRFS.

Another thing to keep in mind is that BTRFS was really intended to support different storage profiles per subvolume. It would be great to have, for example, /var/cache set to RAID0 and other parts of the filesystem RAID10 or RAID1. When/if that happens, BTRFS really needs to use the correct profile type and not auto-degrade to other profiles, as that would obviously make a mess. I am not too familiar with the details of RAID10 in BTRFS, but there may be benefits to having BTRFS allocate more like a traditional RAID10, as that could potentially let BTRFS survive more than one disk failure.

Anyway, the good news is that these (non-critical) things, which should have been fixed ages ago, are slowly but surely being taken care of.

          I personally have a wish-list that I hope will happen sometime in the future:
1. Per-subvolume storage profiles
2. A dirty-chunk bitmap (for a fast scrub after an unclean unmount)
3. Read-mirror optimizations
4. The ability to make storage device groups and assign subvolumes to them by weight
5. An option to reserve SPARE SPACE (a spare device makes zero sense for BTRFS)
6. Auto-redundancy (make extra copies that can be dropped if the space is needed). Unused space is wasted space, just like with memory. This could help read speed and recovery if you have lots of free space anyway.

          http://www.dirtcellar.net



          • #6
            Is RAID5 on Btrfs still broken?



            • #7
              Originally posted by waxhead View Post
              I personally have a wish-list that I hope will happen sometime in the future:
1. Per-subvolume storage profiles
2. A dirty-chunk bitmap (for a fast scrub after an unclean unmount)
3. Read-mirror optimizations
4. The ability to make storage device groups and assign subvolumes to them by weight
5. An option to reserve SPARE SPACE (a spare device makes zero sense for BTRFS)
6. Auto-redundancy (make extra copies that can be dropped if the space is needed). Unused space is wasted space, just like with memory. This could help read speed and recovery if you have lots of free space anyway.
              I'd add subvolume encryption.



              • #8
                Originally posted by jacob View Post
                Is RAID5 on Btrfs still broken?
                https://btrfs.wiki.kernel.org/index.php/Status#RAID56



                • #9
                  Originally posted by jacob View Post
                  Is RAID5 on Btrfs still broken?
AFAIK, yes, and it is pretty much discouraged for SSDs and large HDDs, so I'm not sure it even makes sense to fix, as even a correct implementation has a decent chance of full data loss.
mdraid or LVM can do it for those who really need/want it.
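For context on what is at stake with RAID5: parity is a byte-wise XOR of the data blocks in a stripe, so any one missing block is recoverable; but if a crash leaves data and parity updated non-atomically (the "write hole"), a later reconstruction silently produces garbage. A minimal sketch of the XOR math (illustrative only, not btrfs's implementation):

```python
def parity(blocks):
    """RAID5-style parity: byte-wise XOR of equal-length blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

def reconstruct(surviving_blocks, parity_block):
    """XOR-ing the surviving blocks with the parity recovers the one
    missing block, since each present block cancels itself out."""
    return parity(surviving_blocks + [parity_block])

data = [b"aaaa", b"bbbb", b"cccc"]          # three data blocks in one stripe
p = parity(data)                            # written alongside the data
rebuilt = reconstruct([data[0], data[2]], p)  # device holding data[1] died
```

If `p` had been written but `data[1]` had not (or vice versa) before a crash, `reconstruct` would still return *something*, just not the right bytes; that is the failure mode the write hole describes.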



                  • #10
I think that the "degenerate RAID" feature is very useful for home users of RAID1 arrays (small and medium desktops), where it assists in replacing failed disk drives (assuming a user needs to go buy a new drive on failure). Unfortunately, the btrfs developers seem to have concentrated their efforts on RAID0 arrays first.

AFAIK, on btrfs RAID1 arrays, "degraded" mode still allows only read-only access. This is unlike most hardware RAID controllers and unlike mdadm software RAID. So when a drive in a btrfs RAID1 fails, the user needs to run to the nearest hardware store to buy a replacement (an OS cannot function well in read-only mode).
                    Last edited by xfcemint; 31 August 2021, 07:06 PM.
