Btrfs RAID 5/6 Sub-Page Support Readied For Linux 5.19

  • #11
    Originally posted by Chugworth View Post
If Btrfs could ever get reliable RAID5/6 and native encryption, then I wouldn't have much need for ZFS anymore. That said, the Btrfs implementation of send/receive still makes me a little nervous in that it seems to work at the file level, whereas ZFS works at the block level. That also makes me think that if Btrfs ever did get encryption, it wouldn't be as slick as the ZFS implementation. But as it stands they're two different tools with two different uses. Btrfs works great as a root filesystem in a single-disk or mirrored scenario, and ZFS works great as a data-storage filesystem.
At this point I think it's very much the other way around. With ZFS landing the ability to add disks to an existing RAID, I don't see many feature reasons to prefer Btrfs.



    • #12
      Originally posted by commodore256 View Post
      Does this mean Raid5/6 won't melt your Btr?
Still broken. As others have noted, it'll take yet another large overhaul of the on-disk format, at minimum, to fix it.

Considering RAID 5/6 has been just as broken for a decade now, I've stopped hoping that it'll ever get fixed. I don't even think this is so much a "fix" to RAID 5/6 as it is the porting of a separate feature (sub-page support) to it, while leaving it just as broken as it was before.



      • #13
        Originally posted by Developer12 View Post

        At this point I think it's very much the other way around. With ZFS landing the ability to add additional disks to a RAID, I don't see many feature reasons to prefer BTRFS.
ZFS is also (slowly) adding reflink support, which was one of the biggest advantages of Btrfs.
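For context, a reflink copy shares all of its extents with the source until either file is modified, so cloning is near-instant regardless of file size. On Btrfs you can try it today with coreutils (paths hypothetical):
Code:
# cp --reflink=always /data/vm-image.qcow2 /data/vm-clone.qcow2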



        • #14
          Originally posted by Developer12 View Post

          This exists on ZFS in the form of L2 ARC. The ability to expand ZFS RAIDs with additional disks is also nearing completion (final testing and review).
True, but the disks still have to be the same size, whereas with Btrfs you can mix sizes.
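For illustration (devices and sizes hypothetical), Btrfs happily builds a RAID1 out of unequal disks and simply places the two copies of each chunk on whichever devices have the most free space:
Code:
# mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc
# btrfs filesystem usage /mnt/btrfs-filesystem
With a 4 TB and an 8 TB disk you still only get about 4 TB usable, since the two copies must land on different devices, but adding a third disk makes the extra space on the larger drive usable as well.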
          Last edited by mdedetrich; 05 May 2022, 07:26 AM.



          • #15
            Originally posted by Chugworth View Post
If Btrfs could ever get reliable RAID5/6 and native encryption, then I wouldn't have much need for ZFS anymore. That said, the Btrfs implementation of send/receive still makes me a little nervous in that it seems to work at the file level, whereas ZFS works at the block level. That also makes me think that if Btrfs ever did get encryption, it wouldn't be as slick as the ZFS implementation. But as it stands they're two different tools with two different uses. Btrfs works great as a root filesystem in a single-disk or mirrored scenario, and ZFS works great as a data-storage filesystem.
I thought the same and took Btrfs for a spin in my NAS mirror. It is miles behind ZFS. Even something as simple as a mirror of two disks is implemented in such a cumbersome way that I had a hard time grasping wtf I was doing. ZFS was probably the simplest and most powerful piece of software I ever laid hands on. Even if Btrfs catches up, I won't use it for something where I care about the data and about being able to maintain the pool in the long run. I am not saying that Btrfs is bad, but it cannot compete with ZFS at this point, and other than ZFS's licensing issues (including the necessity of using a DKMS module), there has never been a reason to choose Btrfs over ZFS.



            • #16
              Originally posted by plantroon View Post

I thought the same and took Btrfs for a spin in my NAS mirror. It is miles behind ZFS. Even something as simple as a mirror of two disks is implemented in such a cumbersome way that I had a hard time grasping wtf I was doing. ZFS was probably the simplest and most powerful piece of software I ever laid hands on. Even if Btrfs catches up, I won't use it for something where I care about the data and about being able to maintain the pool in the long run. I am not saying that Btrfs is bad, but it cannot compete with ZFS at this point, and other than ZFS's licensing issues (including the necessity of using a DKMS module), there has never been a reason to choose Btrfs over ZFS.
ZFS has a very simple and organized approach to keeping snapshots, whereas in Btrfs each snapshot is basically treated as its own subvolume and can be moved around like a directory. There are advantages and disadvantages to both ways. The Btrfs approach comes in handy for a root filesystem. If I want to roll back to an earlier root snapshot, then while the system is active I can just:
              1. Rename the active root subvolume
              2. Create a new root subvolume from an existing root snapshot
              3. Set the new root subvolume as the default subvolume
              4. Copy the correct boot files for that snapshot in case there were changes
              5. Reboot
              6. Delete the old root subvolume
I've never really investigated what the ZFS approach to root rollbacks is, but with the way that ZFS works I don't see how it could be as easy as the Btrfs approach. You couldn't roll back the root dataset while it's in use, and if you clone to a new dataset, the other existing snapshots would still be associated with the old dataset.
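A minimal sketch of those steps as shell commands, assuming a hypothetical layout where the top-level subvolume (subvolid=5) is mounted at /mnt/toplevel, the active root is the subvolume @, and snapshots live under snapshots/:
Code:
# mount -o subvolid=5 /dev/sda2 /mnt/toplevel
# cd /mnt/toplevel
# mv @ @old
# btrfs subvolume snapshot snapshots/root-known-good @
# btrfs subvolume set-default "$(btrfs subvolume show @ | awk '/Subvolume ID/ {print $3}')" .
# reboot
Step 4 (copying the matching kernel/initramfs into /boot) depends on your bootloader, and the final btrfs subvolume delete /mnt/toplevel/@old happens after the reboot.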



              • #17
                Originally posted by Chugworth View Post
ZFS has a very simple and organized approach to keeping snapshots, whereas in Btrfs each snapshot is basically treated as its own subvolume and can be moved around like a directory. There are advantages and disadvantages to both ways. The Btrfs approach comes in handy for a root filesystem. If I want to roll back to an earlier root snapshot, then while the system is active I can just:
                1. Rename the active root subvolume
                2. Create a new root subvolume from an existing root snapshot
                3. Set the new root subvolume as the default subvolume
                4. Copy the correct boot files for that snapshot in case there were changes
                5. Reboot
                6. Delete the old root subvolume
I've never really investigated what the ZFS approach to root rollbacks is, but with the way that ZFS works I don't see how it could be as easy as the Btrfs approach. You couldn't roll back the root dataset while it's in use, and if you clone to a new dataset, the other existing snapshots would still be associated with the old dataset.
True, Btrfs has this snapshot flexibility that ZFS lacks. I never used root-volume snapshots myself; I can only see myself using them back in the days when I was playing around with Linux distros and messing my system up every few days.

Btw, ZFS makes each dataset's snapshots available in a special .zfs directory at the root of the dataset.
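For example (pool and dataset names hypothetical), a snapshot's contents can be browsed read-only straight out of that hidden directory:
Code:
# zfs snapshot tank/data@before-upgrade
# ls /tank/data/.zfs/snapshot/before-upgrade/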

I hate how mounting works in Btrfs when you have a mirror, though; that was my biggest gripe with it.



                • #18
                  Originally posted by jochendemuth View Post
                  Yeah - good one. I keep taking that for granted in my ZFS pool.
Oh, is that documented somewhere? I seem to find only questions about whether mixed pools have TRIM support. You did get that I meant effectively SSD speeds with HDD storage size?



                  • #19
                    Originally posted by plantroon View Post

[...] Even something as simple as a mirror of two disks is implemented in such a cumbersome way that I had a hard time grasping wtf I was doing [...]
Could you elaborate a bit? It seems to me that Btrfs handles RAID1 quite well:

1) create a RAID1 filesystem from scratch:
                    Code:
                    # mkfs.btrfs -d raid1 -m raid1 /dev/sda /dev/sdb
2) transform an existing filesystem by adding a 2nd disk and converting the profile to RAID1:
                    Code:
                    # btrfs dev add /dev/sdb /mnt/btrfs-filesystem
                    # btrfs bal start -dconvert=raid1 -mconvert=raid1 /mnt/btrfs-filesystem
What is cumbersome is figuring out which profile is currently in use....
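For what it's worth, the closest thing to a profile readout is the per-type lines in these commands (mount point hypothetical):
Code:
# btrfs filesystem df /mnt/btrfs-filesystem
# btrfs filesystem usage /mnt/btrfs-filesystem
Both report allocations per profile, e.g. "Data, RAID1" and "Metadata, RAID1"; after an interrupted conversion you can even see multiple profiles listed for the same type, which is exactly the confusing case.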



                    • #20
                      Originally posted by ferry View Post

Oh, is that documented somewhere? I seem to find only questions about whether mixed pools have TRIM support. You did get that I meant effectively SSD speeds with HDD storage size?
Developer12 answered this in post #10:

                      Originally posted by Developer12
                      This exists on ZFS in the form of L2 ARC.
I simply used a fast SSD to create an L2 ARC on top of my pool of HDDs. Thanks to the relatively recent addition of persistent L2 ARC, I have a nice hot/cold storage setup for reads. Easily sufficient to saturate my 10G network.
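Setting that up is a one-liner (pool and device names hypothetical); in OpenZFS 2.0+ the cache device survives reboots thanks to the l2arc_rebuild_enabled module parameter, which defaults to on:
Code:
# zpool add tank cache /dev/nvme0n1
# cat /sys/module/zfs/parameters/l2arc_rebuild_enabled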

ZFS does not have a write cache in the traditional sense, so tweaking a bunch of ZIL parameters let me trick ZFS into accepting bursts of writes large enough for my use cases. YMMV.
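The post doesn't say which knobs were involved; as an assumption, the usual suspects for absorbing write bursts are the OpenZFS dirty-data and transaction-group module parameters rather than the ZIL itself:
Code:
# echo $((4 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_dirty_data_max
# echo 10 > /sys/module/zfs/parameters/zfs_txg_timeout
Raising the dirty-data ceiling lets more writes accumulate in RAM before the write throttle kicks in, and a longer txg timeout batches them into fewer, larger syncs.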

