Btrfs Adds Degenerate RAID Support, Performance Improvements With Linux 5.15

  • linner (Senior Member) replied:
    The last time I tried BTRFS RAID5/6 arrays I tested by purposefully failing drives and it completely broke. Totally unreliable in my experience.


  • DanglingPointer (Phoronix Member) replied:
    Originally posted by waxhead:

    RAID5/6 still has the write hole, but if you use RAID1c3 or RAID1c4 for metadata and remember to scrub immediately after an unclean unmount you should be good, at least in theory. I would not yet use it myself (and I have tested, working backups).
    Have been using RAID5 on multiple workstation servers for almost 8 years now. No data loss! However, I've got a UPS.

    I do have RAID1 for metadata and RAID5 for data, and scrub quarterly. I also use space_cache=v2.
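A minimal sketch of the setup described above; the device names and mount point are placeholders, so adjust them to your system before running anything:

```shell
# Create the array: RAID1 metadata, RAID5 data (three placeholder disks).
mkfs.btrfs -m raid1 -d raid5 /dev/sda /dev/sdb /dev/sdc

# Mount with the v2 free-space cache.
mount -o space_cache=v2 /dev/sda /mnt

# Periodic scrub, e.g. driven by a cron job or systemd timer.
btrfs scrub start /mnt
btrfs scrub status /mnt
```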


  • waxhead (Premium For Life) replied:
    Originally posted by jacob:
    Is RAID5 on Btrfs still broken?
    RAID5/6 still has the write hole, but if you use RAID1c3 or RAID1c4 for metadata and remember to scrub immediately after an unclean unmount you should be good, at least in theory. I would not yet use it myself (and I have tested, working backups).
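The layout suggested here can be sketched as follows (placeholder devices; RAID1c3 needs at least three disks):

```shell
# Three-copy metadata, parity data.
mkfs.btrfs -m raid1c3 -d raid5 /dev/sda /dev/sdb /dev/sdc
mount /dev/sda /mnt

# After an unclean unmount (crash, power loss), scrub immediately so
# write-hole parity inconsistencies are detected and repaired from the
# redundant metadata copies.
btrfs scrub start -B /mnt   # -B: run in the foreground and report results
```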


  • pal666 (Senior Member) replied:
    Originally posted by Quidnam:
    I'll admit to being a little bit shocked that it doesn't work that way already -- isn't that the point of using a mirrored RAID type?
    RAID0 is not a mirror.


  • xfcemint (Senior Member) replied:
    I think that the "degenerate RAID" feature is very useful for home users of RAID1 arrays (small and medium desktops), where it assists in replacing failed disk drives (it is assumed that a user needs to go buy a new drive on drive failure). Unfortunately, the btrfs developers seem to have concentrated their efforts on RAID0 arrays first.

    AFAIK, on RAID1 btrfs arrays, the "degenerate" mode still allows only read-only access. This is unlike most hardware RAID controllers and unlike mdadm software RAID. So, when a drive in a btrfs RAID1 fails, the user needs to run to the nearest hardware store to buy a replacement (an OS cannot function well in read-only mode).
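For reference, the usual replacement procedure on kernels that permit a degraded read-write mount looks roughly like this (device names and the devid are placeholders):

```shell
# Two-device btrfs RAID1 with one disk failed and removed:
# the remaining device must be mounted with -o degraded.
mount -o degraded /dev/sda /mnt

# Replace the missing device (numeric devid as reported by
# `btrfs filesystem show`) with the new drive.
btrfs replace start 2 /dev/sdc /mnt
btrfs replace status /mnt
```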
    Last edited by xfcemint; 31 August 2021, 07:06 PM.


  • mppix (Senior Member) replied:
    Originally posted by jacob:
    Is RAID5 on Btrfs still broken?
    AFAIK yes, and it is pretty much discouraged for SSDs and large HDDs, so I'm not sure it even makes sense to fix: even a correct implementation has a decent chance of full data loss.
    mdraid or LVM can do it for those who really need/want it.
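The mdraid route mentioned here would look something like this sketch (placeholder devices; btrfs then sits on top as a single-device filesystem):

```shell
# Conventional RAID5 with mdadm over three disks.
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc

# A write-intent bitmap narrows the resync window after a crash.
mdadm --grow /dev/md0 --bitmap=internal

# Put btrfs (or any filesystem) on the md device.
mkfs.btrfs /dev/md0

# Check array state.
cat /proc/mdstat
```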


  • _r00t- (Junior Member) replied:
    Originally posted by jacob:
    Is RAID5 on Btrfs still broken?
    https://btrfs.wiki.kernel.org/index.php/Status#RAID56


  • mppix (Senior Member) replied:
    Originally posted by waxhead:
    I personally have a wish-list that I hope will happen sometime in the future:
    1. Per-subvolume storage profiles
    2. Dirty chunks bitmap (for fast scrub after an unclean unmount)
    3. Read-mirror optimizations
    4. Ability to make storage device groups and assign subvolumes by weight to them
    5. An option to reserve SPARE SPACE (a spare device makes zero sense for BTRFS).
    6. Auto-redundancy (make extra copies that can be dropped if the space is needed). Unused space is wasted space, just like with memory. This could help read speed and recovery if you have lots of free space anyway.
    I'd add subvolume encryption.


  • jacob (Senior Member) replied:
    Is RAID5 on Btrfs still broken?


  • waxhead (Premium For Life) replied:
    Originally posted by Quidnam:

    I'll admit to being a little bit shocked that it doesn't work that way already -- isn't that the point of using a mirrored RAID type?
    I think it is worth reminding everyone that BTRFS "RAID" is not like regular RAID.

    RAID0 in this case means ONE instance of the data - striped over as many devices as possible.
    RAID10 is TWO instances of the data, striped over as many devices as possible.

    Therefore RAID0 can work with only one device (effectively making it SINGLE device), and RAID10 with only two devices (effectively making it a two device RAID1).

    It is important to remember that if you add devices to a single-device RAID0 setup, BTRFS will - with new data - stripe across as many devices as it can, which means that if you lose one disk your data is toast. Same with "RAID10": if you lose more than TWO devices your data might be toast.
    BTRFS RAID10 is actually technically worse (and better) than regular RAID10, since it can put the start of the stripe on different devices for different chunks.

    RAID0 with one device is really just the same as SINGLE, and RAID10 with two devices is really the same as RAID1 with two devices. I think the issue here has been earlier arguments where people wanted BTRFS to "fall back" to the SINGLE or RAID1 profile to keep working somewhat. This would create multiple storage profiles on the filesystem, and a RAID10 that was "degraded" to RAID1, for example, would not know to spread its copies over more storage devices if they became available later. E.g. a rebalance would not create the storage profile that is expected if some chunks auto-degrade to RAID1.
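The per-chunk profiles described above can be inspected and rewritten directly; a sketch with a placeholder mount point:

```shell
# Show which profiles (single/RAID0/RAID1/RAID10/...) the Data,
# Metadata and System chunks currently use.
btrfs filesystem df /mnt
btrfs filesystem usage /mnt   # per-device allocation breakdown

# A balance with explicit convert filters rewrites every chunk into one
# profile - which is why auto-degraded mixed profiles would otherwise
# linger until the next full rebalance.
btrfs balance start -dconvert=raid10 -mconvert=raid1c3 /mnt
```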

    I agree that this should have been fixed a long time ago, and I am also constantly frustrated by the "RAID" terminology used by BTRFS.

    Another thing to keep in mind is that BTRFS was really intended to support different storage profiles per subvolume. It would be great to have, for example, /var/cache set to RAID0 and other parts of the filesystem RAID10 or RAID1. When/if that happens, BTRFS really needs to use the correct profile type and not auto-degrade to other profiles, as that would obviously make a mess. I am not too familiar with the details of RAID10 in BTRFS, but there may be benefits to having BTRFS try to allocate more like a traditional RAID10, as that could potentially make BTRFS survive more than two disk failures.

    Anyway, the good news is that these (non-critical) things, which should have been fixed ages ago, are slowly but surely being taken care of.

    I personally have a wish-list that I hope will happen sometime in the future:
    1. Per-subvolume storage profiles
    2. Dirty chunks bitmap (for fast scrub after an unclean unmount)
    3. Read-mirror optimizations
    4. Ability to make storage device groups and assign subvolumes by weight to them
    5. An option to reserve SPARE SPACE (a spare device makes zero sense for BTRFS).
    6. Auto-redundancy (make extra copies that can be dropped if the space is needed). Unused space is wasted space, just like with memory. This could help read speed and recovery if you have lots of free space anyway.

