OpenZFS 2.2.1 Released Due To A Block Cloning Bug Causing Data Corruption

  • #21
    Currently there are at least two fundamental issues in OpenZFS; one may also affect the older, stable 2.1.x branch.

    System information:
      Distribution Name: Gentoo
      Distribution Version: (rolling)
      Kernel Version: 6.5.11
      Architecture: amd64
      OpenZFS Version: 2.2.0
      Reference: https://bugs.gentoo.org/917224
    ...

    I build and regularly test ZFS from the master branch. A few days ago I built and tested the commit specified in the headline of this issue, deploying it to three machines. On two of them (the ones ...


    2.2.1 doesn't fix that.
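
    For anyone who wants to sanity-check their own files: the corrupted copies reportedly contained runs of zeroed bytes, so a crude first pass is to flag files whose leading block is all zeros. A rough sketch in Python (a heuristic only; legitimately sparse or zero-filled files will false-positive):

    Code:
    import sys

    CHUNK = 4096  # inspect the first 4 KiB block of each file

    def leading_block_zeroed(path):
        """Return True if the file's first 4 KiB block is present and all zeros."""
        with open(path, "rb") as f:
            block = f.read(CHUNK)
        return len(block) == CHUNK and block == b"\x00" * CHUNK

    for path in sys.argv[1:]:
        try:
            if leading_block_zeroed(path):
                print(f"suspicious (leading zero block): {path}")
        except OSError as exc:
            print(f"skipped {path}: {exc}", file=sys.stderr)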
    Last edited by Yoshi; 23 November 2023, 04:55 AM. Reason: fixed typo



    • #22
      Originally posted by curfew View Post
      This statement is nonsensical. It was found because it ate someone's data. If it didn't, well, then it wouldn't even be a real bug because it doesn't affect anyone.

      You should've finished off with "hopefully it was found and fixed before too many people unknowingly updated to the broken version."
      You are incorrect. Updates introduce things that are, by nature, not as well tested as things that have been in use for years without anyone noticing problems.

      Define "real bug", please.

      http://www.dirtcellar.net



      • #23
        Originally posted by woddy View Post
        With Btrfs I have never lost any data, while it turns out that OpenZFS users have lost data ... stop saying that Btrfs is unreliable.
        Fallacy of composition: Just because you (one member of the population) never lost any data using Btrfs does not mean no one in that population has. You seem to conclude from your own experience with Btrfs that there are fewer data losses with Btrfs than OpenZFS in a given population of users.





        • #24
          There's just one truly robust fs under Linux and that's ext4|3|2. Everything else is for those who love to play with fire.



          • #25
            Originally posted by sophisticles View Post
            One of the reasons why I stick with the tried and true ext4 on all Linux installs I do. Played around with XFS, ReiserFS, BTRFS, and use exFAT when I need to be able to share files with a Windows install, but for pure Linux it's ext4 all the way.

            The only time I ever had a problem was LUKS over ext4, but then again I have come to despise full disk encryption and recommend to everyone not to use it, regardless of whether it's BitLocker, True/Veracrypt, LUKS, or whatever.
            Can you provide more detail/background on the issue with LUKS and ext4?

            I don't quite understand what you mean by 'LUKS over ext4', which, to me, implies running a LUKS layer 'on top of' an ext4 filesystem, which is the opposite way round to what I would expect. I tend to think of the hardware as the bottom layer, then the (optional) partitions, then LUKS, then LVM, then a filesystem. I don't use RAID as I don't need the performance.

            It is, of course, quite possible to use a file and a loopback block device to offer up some blocks you can then run LUKS on - I do this occasionally - so if this is what you were doing (ext4 file -> loopback -> LUKS) then I'd like to know what you experienced, as I've had no problems that I know of caused by LUKS.
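
            For reference, the stack you describe (ext4 file -> loopback -> LUKS) can be scripted; here's a minimal sketch, assuming root plus the usual util-linux and cryptsetup tools, with made-up names for the container file and the device-mapper mapping:

            Code:
            import subprocess

            def run(*cmd):
                """Echo a command, then execute it, raising on failure."""
                print("+", " ".join(cmd))
                subprocess.run(cmd, check=True)

            # 1. A plain file on the ext4 filesystem backs the container.
            run("truncate", "-s", "1G", "container.img")

            # 2. Attach it to a free loop device; losetup prints the device it chose.
            loopdev = subprocess.run(
                ["losetup", "--find", "--show", "container.img"],
                check=True, capture_output=True, text=True).stdout.strip()

            # 3. LUKS goes on top of the loop device (luksFormat asks for a passphrase).
            run("cryptsetup", "luksFormat", loopdev)
            run("cryptsetup", "open", loopdev, "loopvault")

            # 4. Finally, a filesystem inside the encrypted container.
            run("mkfs.ext4", "/dev/mapper/loopvault")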



            • #26
              Originally posted by avis View Post
              There's just one truly robust fs under Linux and that's ext4|3|2. Everything else is for those who love to play with fire.
              It is funny, when all the CoW features etc. are supposed to make on-disk data more robust and resilient against sudden power loss.
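
              To be fair, the underlying idea is sound: never overwrite live data in place. The same crash-safety pattern shows up at the application level as write-new-then-atomic-rename; here's a minimal sketch of that analogous technique (not ZFS or Btrfs internals):

              Code:
              import os, tempfile

              def atomic_write(path, data):
                  """Write a new copy, fsync it, then atomically swap the name.
                  A crash leaves either the old or the new version on disk,
                  never a half-written file."""
                  directory = os.path.dirname(os.path.abspath(path))
                  fd, tmp = tempfile.mkstemp(dir=directory)
                  try:
                      with os.fdopen(fd, "wb") as f:
                          f.write(data)
                          f.flush()
                          os.fsync(f.fileno())  # push data to disk, not just the page cache
                      os.replace(tmp, path)     # atomic rename on POSIX
                  except BaseException:
                      os.unlink(tmp)
                      raise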



              • #27
                Originally posted by avis View Post
                There's just one truly robust fs under Linux and that's ext4|3|2. Everything else is for those who love to play with fire.
                It might be robust by itself, but using a filesystem without full checksumming today is a non-negligible risk.
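
                To illustrate what full checksumming buys you: with a checksum per block, corruption can be detected (and, with redundancy, repaired) instead of being returned to the application as valid data. A toy userspace version of the detection half (my own sketch, nothing ZFS-specific; the block size is arbitrary):

                Code:
                import hashlib, json, sys

                BLOCK = 128 * 1024  # checksum granularity, loosely mirroring a ZFS recordsize

                def block_hashes(path):
                    """SHA-256 of every block, so a verify pass can localize corruption."""
                    hashes = []
                    with open(path, "rb") as f:
                        while block := f.read(BLOCK):
                            hashes.append(hashlib.sha256(block).hexdigest())
                    return hashes

                path = sys.argv[1]
                mode = sys.argv[2] if len(sys.argv) > 2 else "store"
                if mode == "verify":
                    with open(path + ".sums") as f:
                        stored = json.load(f)
                    current = block_hashes(path)
                    if len(stored) != len(current):
                        print("file length changed since checksums were stored")
                    for i, (old, new) in enumerate(zip(stored, current)):
                        if old != new:
                            print(f"block {i} corrupted (byte offset {i * BLOCK})")
                else:
                    with open(path + ".sums", "w") as f:
                        json.dump(block_hashes(path), f)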



                • #28
                  That's pretty funny. I have a zpool running on a couple of SSDs to run some VMs for a lab setup, and my ingest process for cataloging ISOs involves cp-ing. I was so perplexed as to why I kept consistently getting checksum issues, but that explains it. I updated to the fixed version after verifying I was affected by checking the feature flags, and the ingest process works once again. It's probably at least partially on me for using a bleeding-edge OpenZFS version, and I'll make sure not to do that again. I usually run an LTS kernel + the newest out-of-tree drivers, but for filesystems that's a mistake.
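
                  For anyone else wanting to check exposure: as I understand the 2.2 feature flags, the question is whether block_cloning is active on your pools. Something along these lines (a sketch; pool names are discovered rather than assumed):

                  Code:
                  import subprocess

                  def zpool(*args):
                      """Run a zpool subcommand and return its stdout."""
                      return subprocess.run(["zpool", *args], check=True,
                                            capture_output=True, text=True).stdout

                  # "active" means cloned blocks exist on disk; "enabled" means the
                  # feature is available but has not been used yet.
                  for pool in zpool("list", "-H", "-o", "name").split():
                      state = zpool("get", "-H", "-o", "value",
                                    "feature@block_cloning", pool).strip()
                      print(f"{pool}: feature@block_cloning = {state}")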

                  All that said, I'm still apprehensive about using Btrfs because I've gotten burned three separate times using it. I'm sure it's much better now, but this is the first time I've gotten burned by ZFS, and it really wasn't a big deal. If I were using ZFS on root, which IMO is at least partially insane to do with an out-of-tree FS, then I'd also make sure to use a known rock-solid version, as many exist.



                  • #29
                    Originally posted by AlanTuring69 View Post
                    All that said, I'm still apprehensive about using Btrfs because I've gotten burned three separate times using it. I'm sure it's much better now, but this is the first time I've gotten burned by ZFS, and it really wasn't a big deal. If I were using ZFS on root, which IMO is at least partially insane to do with an out-of-tree FS, then I'd also make sure to use a known rock-solid version, as many exist.
                    I'm pretty sure that, given any filesystem, no matter how old or stable it claims to be, there's someone somewhere with a horror story involving data loss.
                    Btrfs got a bad reputation because it was merged into mainline and declared stable too soon, and that's it.



                    • #30
                      Originally posted by cynic View Post

                      I'm pretty sure that, given any filesystem, no matter how old or stable it claims to be, there's someone somewhere with a horror story involving data loss.
                      Btrfs got a bad reputation because it was merged into mainline and declared stable too soon, and that's it.
                      I've not lost a single file to fat32/ntfs/ext{2|3|4}, the only FSes that I trust.

                      I've used exfat a lot as well but I cannot vouch for it yet because I've never stored anything serious on it.

                      A friend of mine, around 15 years ago, lost quite a lot of files to NTFS, but that was not an FS fault per se. His hardware was misbehaving; he had over 30 hard resets in a row in a single day, at which point the FS became inconsistent and chkdsk couldn't revive it.
                      Last edited by avis; 23 November 2023, 03:15 PM.

