XFS Copy-On-Write Support Being Improved, Always CoW Option


  • #21
    Originally posted by macemoneta View Post
    BTRFS is supported on pretty much every distribution at this point. It's simply not the default at installation (except for SUSE). Google using BTRFS for Chromebooks is arguably "acceptance". Major corporations use it. We've been running it (RAID1 and single) 24/7/365 for 7-8 years without issue, through multiple hardware failures.
    And I've never had any issues with ext4 and yet there were many people affected by corruption bugs.

    btrfs is not reliable. People keep saying it is, but there's always a new problem that breaks it.

    Comment


    • #22
      Originally posted by doublez13 View Post

      I was not aware Google was using BTRFS on the Chromebooks. Do you have a link?
      No, I have three Chromebooks (Acer, ASUS, Lenovo); just run 'mount' in crosh shell if you have dev mode enabled. Even without dev mode, if you have crostini you can see the use of BTRFS in the VM as well:

      Code:
      $ mount | grep -i btrfs
      /dev/vdb on / type btrfs (rw,relatime,discard,...
      /dev/vdb on /dev/wl0 type btrfs (rw,relatime,discard,...

      Comment


      • #23
        Originally posted by profoundWHALE View Post

        And I've never had any issues with ext4 and yet there were many people affected by corruption bugs.

        btrfs is not reliable. People keep saying it is, but there's always a new problem that breaks it.
        BTRFS is highly reliable. The problem is that in-development features are used by people who then complain when they lose data. It's like the people who run Chromebooks on the dev channel (or even canary), then complain when they have to powerwash. BTRFS features that are stable are documented in the BTRFS wiki. Use stable features and you won't have a problem.

        Last edited by macemoneta; 19 February 2019, 11:19 PM.

        Comment


        • #24
          Originally posted by starshipeleven View Post
          Possible, but not as bad as btrfs as it's not also doing checksums and other stuff, so its CoW is lighter.
          That's funny. To me checksums are a major feature of btrfs, especially for single-drive or JBOD systems.

          Originally posted by starshipeleven View Post
          ZFS has much much much better caching systems (basically integrates something like bcache), its RAID actually works correctly (i.e. it reads from both drives in RAID1), and it is more optimized overall.
          Why is it important to read from both drives? With checksums you already know your data is OK, and you can possibly make better use of the available read bandwidth. Scrubbing should be done periodically anyway.

          Personally I have only positive experience with btrfs as a desktop filesystem. It is extremely nice to be able to snapshot the rootfs before a major upgrade :-) Also btrfs found bit errors on a failing (single) drive by using its checksums. My VM file directory is marked nodatacow and everything just works nicely.
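
          As a sketch, that snapshot-before-upgrade and nodatacow workflow looks roughly like this (the paths and subvolume layout are illustrative, and note that chattr +C only takes effect on files created after the flag is set on the directory):

          ```shell
          # take a read-only snapshot of the root subvolume before a major upgrade
          sudo btrfs subvolume snapshot -r / /.snapshots/pre-upgrade

          # verify the snapshot is listed
          sudo btrfs subvolume list /

          # mark a (preferably empty) VM image directory nodatacow so disk images
          # are updated in place instead of being CoW'd on every write
          sudo chattr +C /var/lib/libvirt/images
          ```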

          Comment


          • #25
            Originally posted by quikee View Post
            AFAIK this is already what journal based FS does, but a little bit more complicated (journal writes) so that a FS can't be left in a corrupt state when a power loss happens (fsck just clears the partial writes).
            Journaled filesystems protect only metadata. So that in case of unclean shutdown your FS isn't fucked, but since files are updated in place you will lose data.
            And this is done for performance, as you can try setting ext4 to be fully journaled (both metadata and data) with "data=journal" mount option (in fstab or in the mount command) but it will tank performance A LOT.
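
            For illustration, full data journaling on ext4 would be enabled with an fstab line like this (the device and mount point are made up; expect a large hit to write throughput):

            ```shell
            # /etc/fstab — journal both metadata and data on this ext4 volume
            /dev/sdb1  /data  ext4  defaults,data=journal  0  2
            ```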

            CoW filesystems are supposed to be able to do that job with less performance penalty, as they are designed from scratch to do it, it's not a feature bolted on after the fact, as journaling is.

            Log-structured filesystems are not CoW but actually provide full protection due to how they work. (F2FS for normal drives, and pretty much each and every raw flash filesystem: YAFFS, JFFS, UBIFS, LogFS; and also UDF, which was designed for and used on optical drives)

            Comment


            • #26
              Originally posted by Veto View Post
              That's funny. To me checksums are a major feature of btrfs, especially for single-drive or JBOD systems.
              Checksumming is mostly useless on single drives or any other device where you don't have parity to fix issues it finds. I mean ok it will warn you of issues, but imho btrfs makes sense once you are in RAID1 or better.

              For dumb data drives I use ext4 and the par2drive script (a wrapper for par2cmdline tool) https://github.com/bobafetthotmail/par2drive which checksums and stores a small amount of parity (which is enough to deal with random corruption).
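
              The underlying par2cmdline workflow (independent of the par2drive wrapper, which I haven't inspected) looks like this; the file names and the 10% redundancy level are just examples:

              ```shell
              # create recovery data with ~10% redundancy alongside the file
              par2 create -r10 archive.par2 important.dat

              # later: check the file against its recovery data
              par2 verify archive.par2

              # if verify reports damage, repair using the stored parity
              par2 repair archive.par2
              ```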


              Why is it important to read from both drives?
              it boosts read speed by reading some chunks from drive A while other chunks of the same file are read from drive B. Similar to how RAID0 works.

              Of course this works only on read in a RAID1 as on write you still have to write two full copies.

              And yes, mdadm (software RAID) does it, ZFS does it.

              Scrubbing should be done periodically anyway.
              This is a place where you would actually want btrfs to read from both drives (at the same time or not), as you thought I was saying.
              But no, it does not: scrub still works the same as a normal read, pulling each block from one drive or the other at random, not from both.

              Comment


              • #27
                Originally posted by starshipeleven View Post
                Journaled filesystems protect only metadata. So that in case of unclean shutdown your FS isn't fucked, but since files are updated in place you will lose data.
                And this is done for performance, as you can try setting ext4 to be fully journaled (both metadata and data) with "data=journal" mount option (in fstab or in the mount command) but it will tank performance A LOT.

                CoW filesystems are supposed to be able to do that job with less performance penalty, as they are designed from scratch to do it, it's not a feature bolted on after the fact, as journaling is.

                Log-structured filesystems are not CoW but actually provide full protection due to how they work. (F2FS for normal drives, and pretty much each and every raw flash filesystem: YAFFS, JFFS, UBIFS, LogFS; and also UDF, which was designed for and used on optical drives)
                There is a catch: you will still find logs in one form or another in ZFS and bcachefs. ZFS's logs are the ZIL and SLOG, and bcachefs is going the journal route. The lack of a journal/log cripples btrfs in some benchmarks.

                The fact that CoW can result in higher IO traffic in some cases undermines the advantage as well.

                Write-ahead logging, which is in theory possible with a journal and which PostgreSQL uses at the database level, could also provide rollback snapshots without being copy-on-write, and possibly at lower IO cost.

                File system design is a set of trade-offs without any real magic bullets.



                Comment


                • #28
                  Originally posted by oiaohm View Post

                  Kent Overstreet, the developer of bcachefs, has spoken against both btrfs and ZFS. Apparently you have not read this.

                  Let's disregard here the documented ZFS design mistake that btrfs copies.

                  That part of what you said applies equally to ZFS and btrfs; I will explain more later.

                  This bit ignores that ZFS has a license that is incompatible in many countries, so Fedora has not been willing to try ZFS, full stop. That is another reason to go with XFS.

                  SUSE is a major EU distribution with paid resources, so btrfs development is going to continue.

                  You have not really asked why Red Hat turned away from btrfs, or whether the same reasons apply to ZFS. When you ask those questions you will find a list like the one Kent Overstreet made against both btrfs and ZFS.

                  Let's start with an issue a developer working with Red Hat has raised against btrfs.

                  Does this problem affect ZFS? Yes, it does.

                  I will give you the problems.

                  1) Your only backup program is cp, and you need to extract one snapshot from btrfs or ZFS onto a new drive. Because the source drive is failing you have no time and cannot install any other software.

                  The way LVM currently performs snapshots, and the way XFS is lining up to perform them, lets you extract a snapshot with only the cp command. Yes, XFS, due to its design.

                  2) Checksum offloading to a RAID controller: how would you do that with btrfs or ZFS?

                  This is one of the big differences of the XFS path: instead of embedding a stack of features like RAID into the filesystem driver, they are working on improving communication with block devices such as RAID controllers. Yes, hardware RAID controllers can checksum every block of data read from the hard drive.

                  I could keep going.

                  You need to re-read my post in the context of the reply it was intended for.

                  Comment


                  • #29
                    Last night I played with Solaris 11.4 and napp-it.
                    Frustrating; don't do this. I'm not writing this for the first time: Solaris ZFS is a different beast from ZoL, ZoF or whatever, and I'm not in favor of the last two. My humble opinion is that ZoL and ZoF are a waste of the time of talented devs, managers, etc.
                    Until the big red evil fails and the whole of Solaris is GPL/GNU'd, it is better to concentrate on XFS or a similar FS.
                    ZFS is designed with other Solaris features in mind; Linux deserves the same, not a reverse-engineering effort full of legal ambiguity.

                    Comment


                    • #30
                      Originally posted by starshipeleven View Post
                      Checksumming is mostly useless on single drives or any other device where you don't have parity to fix issues it finds. I mean ok it will warn you of issues, but imho btrfs makes sense once you are in RAID1 or better.

                      ...

                      it boosts read speed by reading some chunks from drive A while other chunks of the same file are read from drive B. Similar to how RAID0 works.

                      Of course this works only on read in a RAID1 as on write you still have to write two full copies.

                      And yes, mdadm (software RAID) does it, ZFS does it.
                      This is a place where you would actually want btrfs to read from both drives (at the same time or not), as you thought I was saying.
                      But no, it does not: scrub still works the same as a normal read, pulling each block from one drive or the other at random, not from both.
                      btrfs does support the DUP profile for both data and metadata. Btrfs also tries to place the two copies physically apart from each other on the drive, which increases the chance that one copy is still good in case of a physical defect.

                      Scrub does read both copies. A normal read only reads one copy, which is good for performance. (Right now btrfs uses the PID modulo the number of copies to pick a mirror; this could be improved by looking at queue lengths. As it is, many processes can end up reading from the same disk while the other disk sits idle. In practice this works quite well, but it sucks in theory.)
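
                      That PID-based mirror selection can be sketched in a couple of lines of shell (a simplification of what the kernel's read path does; the numbers are arbitrary):

                      ```shell
                      # choose which RAID1 copy to read: reader's PID modulo the number of copies
                      pid=4321
                      num_copies=2
                      mirror=$(( pid % num_copies ))
                      echo "process $pid reads from copy $mirror"
                      # prints: process 4321 reads from copy 1
                      ```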

                      http://www.dirtcellar.net

                      Comment
