Bcachefs Lands More Fixes Ahead Of Linux 6.7 Stable Debut


  • #41
    Originally posted by varikonniemi View Post

    Indeed, fix them before they eat someone's data. Unlike ZFS that was eating data for decades before someone noticed
    The bug from last month was very difficult to trigger and required very specific conditions (it only caused programs to read zeroes when reading back recent in-flight writes), and was only very recently found when aggravated by other changes in both ZFS and coreutils. It's a strong testament to ZFS' reliability that we're down to such hard-to-trigger bugs, with this possibly being the first ever to risk actual data corruption.
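    To illustrate the symptom described above, here is a minimal, hypothetical check for the failure mode (a copy growing all-zero blocks its source doesn't have). This is NOT the actual OpenZFS reproducer, which raced parallel copies against in-flight writes; it only sketches the verification step:

    ```python
    import os

    BLOCK = 4096

    def zero_blocks(path):
        """Return offsets of all-zero BLOCK-sized chunks in a file."""
        offsets = []
        with open(path, "rb") as f:
            off = 0
            while chunk := f.read(BLOCK):
                # count(0) == len(chunk) means the chunk is entirely zero bytes
                if chunk.count(0) == len(chunk):
                    offsets.append(off)
                off += len(chunk)
        return offsets

    # Source with no zero blocks, copied byte-for-byte: the copy must not
    # contain zero blocks the source lacks. The ZFS bug made fresh,
    # still-dirty data read back as zeroes during exactly this kind of copy.
    with open("src.bin", "wb") as f:
        f.write(os.urandom(8 * BLOCK))
    with open("src.bin", "rb") as fin, open("dst.bin", "wb") as fout:
        fout.write(fin.read())

    assert zero_blocks("dst.bin") == zero_blocks("src.bin") == []
    ```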

    It's now been patched, and a whole class of testcases added to the ztest testsuite to catch issues like it in the future. When was the last time someone subjected BTRFS to a comprehensive battery of torture tests, much less hooked it up to CI?

    That might well have been the first time a bug in ZFS could lead to data corruption in its entire history. It seems every few months we see another round of critical BTRFS fixes grace the pages of Phoronix. It was only last cycle that another rash of them landed: https://www.phoronix.com/news/Btrfs-Linux-6.6 BcacheFS gets a pass for being so new, but there's little that can be said in defence of BTRFS's track record.

    There's no hope left for BTRFS, but I might expect BcacheFS to eventually supersede ZFS.
    Last edited by Developer12; 23 December 2023, 10:38 PM.



    • #42
      Originally posted by Developer12 View Post
      The bug from last month was very difficult to trigger and required very specific conditions (it only caused programs to read zeroes when reading back recent in-flight writes), and was only very recently found when aggravated by other changes in both ZFS and coreutils. It's a strong testament to ZFS' reliability that we're down to such hard-to-trigger bugs, with this possibly being the first ever to risk actual data corruption.
      As far as I understand, this is/was a bug that was just uncovered more reliably, but it had been there for a long time.

      It's now been patched, and a whole class of testcases added to the ztest testsuite to catch issues like it in the future. When was the last time someone subjected BTRFS to a comprehensive battery of torture tests, much less hooked it up to CI?
      It's called xfstests (because of its origin) and is run all the time by different developers/CI:


      That might well have been the first time a bug in ZFS could lead to data corruption in its entire history.
      Some time ago, when I was more involved with this (~10 years ago), there were bug reports and posts in forums about damaged filesystems/zpools, with the same advice over and over again: recreate the filesystem and restore the backup. It's not the first time ZFS has eaten data.



      • #43
        Originally posted by PuckPoltergeist View Post
        It's called xfstests (because of its origin) and is run all the time by different developers/CI:
        Btw. bcachefs seems so superior it doesn't even need tests. At least I didn't find them



        • #44
          Originally posted by PuckPoltergeist View Post

          Btw. bcachefs seems so superior it doesn't even need tests. At least I didn't find them
          I have to retract this. Xfstests covers this too, just with no bcachefs-specific tests



          • #45
            Originally posted by PuckPoltergeist View Post
            As far as I understand, this is/was a bug that was just uncovered more reliably, but it had been there for a long time.
            It's been theoretically possible for a long time, but it was never found because it was absurdly hard to trigger on anything before the changes that were made in 2.1.4. Even then, it didn't become very likely until further changes landed in 2.2, and it was only actually noticed in the presence of further changes in coreutils' access semantics.
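            For context on the coreutils side: cp (9.0 and later) probes sparseness with lseek(SEEK_DATA/SEEK_HOLE) rather than reading every byte, and the ZFS bug could misreport dirty, not-yet-synced data as a hole, so cp wrote zeroes. A minimal sketch of the probe itself (Linux-specific; "demo.bin" is just a scratch file, not part of any tool):

            ```python
            import os

            with open("demo.bin", "wb") as f:
                f.write(b"x" * 4096)

            fd = os.open("demo.bin", os.O_RDONLY)
            try:
                data_off = os.lseek(fd, 0, os.SEEK_DATA)  # first byte of data
                hole_off = os.lseek(fd, 0, os.SEEK_HOLE)  # implicit hole at EOF
            finally:
                os.close(fd)

            # On a healthy filesystem the 4 KiB of data is visible as data;
            # the buggy path instead reported a hole over it.
            assert data_off == 0 and hole_off >= 4096
            ```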

            To put things in perspective, this bug has probably taken longer to find than the unsoundness in the Linux kernel's core arm64 atomic primitives that was found by marcan recently.

            Originally posted by PuckPoltergeist View Post
            It's called xfstests (because of its origin) and is run all the time by different developers/CI:
            Stale. Looks like the last time there was any BTRFS-related activity was 9+ years ago. All of the following activity (the best Google could dig up) is from circa 2014:




            Originally posted by PuckPoltergeist View Post
            Some time ago, when I was more involved with this (~10 years ago), there were bug reports and posts in forums about damaged filesystems/zpools, with the same advice over and over again: recreate the filesystem and restore the backup. It's not the first time ZFS has eaten data.
            When people who've lost data to BTRFS within the last few _months_ come out of the woodwork every time it's brought up, I'm going to need to see some receipts for that.
            By contrast, the guys from Joyent have said on record that the only time they EVER lost a customer's data on ZFS was when someone manually wrote garbage all over actively-running kernel code.
            Last edited by Developer12; 24 December 2023, 11:38 PM.



            • #46
              Originally posted by Developer12 View Post
              Stale. Looks like the last time there was any BTRFS-related activity was 9+ years ago. All of the following activity (the best Google could dig up) is from circa 2014:
              Nice try kiddo:




              • #47
                Again, let's return to the discussion once there's a first confirmed instance of a bcachefs bug eating data.
                Last edited by varikonniemi; 25 December 2023, 12:20 PM.



                • #48
                  Originally posted by PuckPoltergeist View Post

                  Nice try kiddo:

                  Are you stupid? Yeah, that's the *entire* xfstests repo, dumbass. Did you mean to say that ext4 sees more attention?

                  Show me the last time someone made a BTRFS-specific change of any type. Even just to hook up testing of a setting.

                  Show me the last time BTRFS passed the tests without failures or regressions.



                  • #49
                    Originally posted by Developer12 View Post

                    Are you stupid? Yeah, that's the *entire* xfstests repo, dumbass. Did you mean to say that ext4 sees more attention?

                    Show me the last time someone made a BTRFS-specific change of any type. Even just to hook up testing of a setting.

                    Show me the last time BTRFS passed the tests without failures or regressions.
                    I dunno, it seems active enough to me: https://github.com/kdave/xfstests/co...er/tests/btrfs Is there something else I should be looking for?



                    • #50
                      Originally posted by Developer12 View Post

                      Are you stupid? Yeah, that's the *entire* xfstests repo, dumbass. Did you mean to say that ext4 sees more attention?

                      Show me the last time someone made a BTRFS-specific change of any type. Even just to hook up testing of a setting.

                      Show me the last time BTRFS passed the tests without failures or regressions.
                      I was thinking you'd be able to read git history when it's presented with a nice GUI. Sorry sweetie, my fault

                      But Quackdoc has already pointed to the relevant facts in the Btrfs maintainer's repo.

