Btrfs Sends In Fixes For Linux 6.10 & Restores "norecovery" Mount Option

  • #11
I remember when all the Sturm und Drang on Phoronix involved btrfs.

    Now it involves bcachefs ... and btrfs has become ho-hum.

    Oh my! How time flies when we are having fun!



    • #12
If btrfs gets MUCH better recovery tools, then I'll consider using it again.

      I recently lost data because my system CRASHED/LOCKED UP, and the two btrfs partitions I had mounted corrupted themselves with NO POSSIBILITY OF RECOVERY. Yes, ALL the tools failed to recover the data, and I spent an hour researching how to recover it...

      MEANWHILE, EXT4 can easily clean a partition on startup without ANY data loss; I even tested this. HELL, even NTFS is better, because you can run Windows' chkdsk tool on it and it will almost always fix any issues.

      So yeah, goodbye to btrfs for me until it gets A LOT better at recovery options, including automatic boot-time recovery!
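For anyone in the same spot, the usual btrfs recovery ladder in btrfs-progs looks roughly like this. Device and path names are illustrative; start with the least invasive option, and treat `--repair` strictly as a last resort:

```shell
# Read-only rescue mount: skips log replay and tolerates damaged trees
# (rescue=all needs a reasonably recent kernel, roughly 5.9+)
mount -o ro,rescue=all /dev/sdX1 /mnt

# Pull files off a filesystem that won't mount at all (-D = dry run first)
btrfs restore -D /dev/sdX1 /tmp/ignored
btrfs restore /dev/sdX1 /mnt/recovered

# Check consistency without modifying anything
btrfs check --readonly /dev/sdX1

# Targeted fixes for specific kinds of damage
btrfs rescue super-recover /dev/sdX1   # restore a bad superblock from a copy
btrfs rescue zero-log /dev/sdX1        # clear a corrupted log tree

# Last resort only; can make a badly damaged filesystem worse
btrfs check --repair /dev/sdX1
```

These all require root and a real (or loop-backed) block device, so run them from a rescue environment with the filesystem unmounted where applicable.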



      • #13
        Originally posted by theriddick View Post
If btrfs gets MUCH better recovery tools, then I'll consider using it again.

        I recently lost data because my system CRASHED/LOCKED UP, and the two btrfs partitions I had mounted corrupted themselves with NO POSSIBILITY OF RECOVERY. Yes, ALL the tools failed to recover the data, and I spent an hour researching how to recover it...

        MEANWHILE, EXT4 can easily clean a partition on startup without ANY data loss; I even tested this. HELL, even NTFS is better, because you can run Windows' chkdsk tool on it and it will almost always fix any issues.

        So yeah, goodbye to btrfs for me until it gets A LOT better at recovery options, including automatic boot-time recovery!
        If those two partitions crashed so hard that btrfs couldn't recover any data, then EXT4 and NTFS would not have helped either. The only reason you were able to clean an EXT4 partition without data loss is that the damage wasn't as severe that time.



        • #14
          Originally posted by F.Ultra View Post

          If those two partitions crashed so hard that btrfs couldn't recover any data, then EXT4 and NTFS would not have helped either. The only reason you were able to clean an EXT4 partition without data loss is that the damage wasn't as severe that time.
          False; I already tested this. BTRFS is a failure when it comes to recovery. The ONLY thing that happened to BTRFS was a crash; the same thing happened to NTFS, which needed chkdsk to fix, while ext4 repaired itself on boot automagically.

          THIS IS NOT the first time I've had this issue with BTRFS; in fact it has happened several times in the past, it's just been a while since it happened last. And it will be the last time, no more!

          BTRFS can just go die, afaic.

          OH, BTW, there is a massive record of these sorts of btrfs failures online. You know what people say? YEAH, don't use it; they found out the hard way as well. Its recovery tools are shockingly useless! Go do the research yourself and stop making BS up!



          • #15
            Originally posted by timofonic View Post

            All this is very weird. Why was it deprecated? Why was it renamed? What about non-Btrfs filesystems? What if someone wants to use the same systemd feature on a non-Btrfs filesystem?

            Can someone please explain this to us mere mortals? It's very weird to me.

            Thanks in advance!
            At the time of the deprecation, I didn't even notice that other
            filesystems also support the same "norecovery" option.

            The original problem with "norecovery" is that it doesn't follow the
            regular "no*" mount-option pattern, where each "no" option has a
            corresponding enabling one: btrfs has "datacow"/"nodatacow",
            "datasum"/"nodatasum", "ssd"/"nossd", "acl"/"noacl",
            "barrier"/"nobarrier".
            The name was changed to be more in line with how other rescue operations work on BTRFS, and it was deprecated years and years ago. Like a lot of things, nobody noticed the change until it affected something. It seems it all came down to being pedantic about the lack of a "recovery" option corresponding to "norecovery", while not being thorough enough to check how other filesystems do things. Hyperfocusing on a single task, in other words.
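Concretely, the deprecated spelling and its replacement look like this (device and mount-point names are illustrative; "rescue=nologreplay" is the documented successor, and the 6.10 fix restores "norecovery" as an alias of it):

```shell
# Deprecated btrfs spelling (removed for a while, restored as an alias in 6.10)
mount -o ro,norecovery /dev/sdX1 /mnt

# Current btrfs spelling: skip log-tree replay on a read-only mount
mount -o ro,rescue=nologreplay /dev/sdX1 /mnt

# Other filesystems kept the old name, e.g. XFS; ext4 treats "norecovery"
# as a synonym for its "noload" option
mount -o ro,norecovery /dev/sdY1 /mnt
```

Note that both spellings only make sense together with "ro": skipping log replay on a writable mount would leave the filesystem inconsistent.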

            I got a good laugh out of this comment on github:

            Originally posted by poettering
            I figure one has to check for some mount option first that definitely doesn't exist. If that test succeeds nonetheless, we are on a kernel where btrfs wasn't taught the new mount API yet, and hence assume norecovery is the way to go.

            What a clusterfuck. I wish kernel folks had any sense of what "we don't break userspace" actually means.



            • #16
              Originally posted by theriddick View Post

              False; I already tested this. BTRFS is a failure when it comes to recovery. The ONLY thing that happened to BTRFS was a crash; the same thing happened to NTFS, which needed chkdsk to fix, while ext4 repaired itself on boot automagically.

              THIS IS NOT the first time I've had this issue with BTRFS; in fact it has happened several times in the past, it's just been a while since it happened last. And it will be the last time, no more!

              BTRFS can just go die, afaic.

              OH, BTW, there is a massive record of these sorts of btrfs failures online. You know what people say? YEAH, don't use it; they found out the hard way as well. Its recovery tools are shockingly useless! Go do the research yourself and stop making BS up!
              You cannot call this false unless you also invented a time machine, went back to before your system crashed, changed the fs to EXT4, and then checked whether you could recover from that very specific situation.

              Personal anecdote: I had an ageing PSU that couldn't handle the sudden power spikes of my 7900XTX, so I had several total system lockups per day for weeks until I understood that the issue was the PSU. None of my btrfs drives ever needed recovery after any of those, even though I was often doing write-intensive work at the time of the crash. At work I've also had large servers with 40+ drives in a single btrfs filesystem survive hardware failures of drives with zero hiccups.

              On the other hand, I have had several EXT4 and NTFS filesystems (the latter from the Windows 2000 days and earlier, when I still had to use Windows for work) die with zero possibility of recovery.

              Btrfs is the most reliable fs I have ever used in my 42 years of computing.



              • #17
                Originally posted by theriddick View Post
                I recently lost data because my system CRASHED/LOCKED UP, and the two btrfs partitions I had mounted corrupted themselves with NO POSSIBILITY OF RECOVERY. Yes, ALL the tools failed to recover the data, and I spent an hour researching how to recover it...
                which kernel version?



                • #18
                  Originally posted by cynic View Post

                  which kernel version?
                  I think it was 6.8 at the time, but I've had it happen in the past on different kernel versions. It's just btrfs's inability to reliably recover partitions with data corruption, which is common when a system hard-locks. This was on an SSD + NVMe, btw.

                  F.Ultra, I'm just going to ignore your drivel hereafter. A system lockup is a system lockup, and this is not an isolated incident that only I'VE experienced when using BTRFS.
                  Last edited by theriddick; 25 May 2024, 01:36 AM.



                  • #19
                    Originally posted by theriddick View Post
                    [...]
                    THIS IS NOT the first time I've had this issue with BTRFS, in-fact it has happened several times in past, its just been a while since it happened last. And it will be the last time, no more!


                    Originally posted by F.Ultra View Post
                    [...]
                    Personal anecdote is that I had an ageing PSU that couldn't handle the sudden power spikes of my 7900xtx so I had several total system lockups per day for weeks until I understood that the issue was the PSU. None of my btrfs drives even needed recovery after any of those and yet I was often doing write intensive work at the time of crash. At work I've also had large servers with 40+ drives in a single btrfs partition survive hardware failure of drives with zero hiccups.
                    I have the same experiences as F.Ultra; for me BTRFS is rock solid, and I used it even with a broken power supply that crashed my hard disk. Yes, the system went read-only, but there was no data loss or filesystem damage.

                    theriddick, if you are experiencing so many problems with BTRFS, maybe you have a hardware problem? Or some uncommon configuration?



                    • #20
                      Originally posted by theriddick View Post

                      I think it was 6.8 at the time, but I've had it happen in the past on different kernel versions. It's just btrfs's inability to reliably recover partitions with data corruption, which is common when a system hard-locks. This was on an SSD + NVMe, btw.

                      F.Ultra, I'm just going to ignore your drivel hereafter. A system lockup is a system lockup, and this is not an isolated incident that only I'VE experienced when using BTRFS.
                      F.Ultra is 100% correct. And *if* btrfs in fact was unrecoverable (which I sincerely doubt), the difference is that btrfs won't hand you garbage, while with ext4 or (god forbid) NTFS you might get the files back but not know whether they are corrupted. And even if you do get the files out OK, other metadata may be corrupted.
                      Personally, I have mostly good experience with NTFS (on Windows), but I have had undeletable files, missing files, and corrupted directories (and filenames) that I never had on btrfs.

                      I started really using btrfs back in 2013, and I am probably a bit more paranoid than most people when it comes to filesystems. I do have backups, and I tested btrfs by injecting random bits and bytes at random positions to see whether it could recover, and it does so perfectly. The exception is the btrfs version of "raid"5/6, which *is* a fiasco and is not recommended for use.
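A rough sketch of that kind of corruption-injection test, on loop devices rather than real disks (requires root and btrfs-progs; sizes and offsets are arbitrary illustrations):

```shell
# Two loop-backed "disks" so there is a redundant copy to heal from
truncate -s 1G disk1.img disk2.img
DEV1=$(losetup -f --show disk1.img)
DEV2=$(losetup -f --show disk2.img)
mkfs.btrfs -d raid1 -m raid1 "$DEV1" "$DEV2"
mount "$DEV1" /mnt
cp -r /usr/share/doc /mnt/        # some test data
umount /mnt

# Inject garbage into one mirror, well past the primary superblock at 64 KiB
dd if=/dev/urandom of="$DEV1" bs=4K count=64 seek=1024 conv=notrunc

# Checksums catch the damage; scrub repairs it from the intact mirror
mount "$DEV1" /mnt
btrfs scrub start -B /mnt
btrfs device stats /mnt           # corruption counters show what was fixed
```

The key point the test demonstrates is that btrfs can only *repair* (rather than merely detect) corruption when a redundant copy exists, which is what the raid1 profiles provide here.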

                      I *think* your problem might be that you use btrfs on a single disk with the metadata storage profile set to single instead of dup (duplicated). That used to be the default, and it was a bad default too. But seriously, using btrfs (or any other filesystem, for that matter) on a single disk can never be very robust by nature. For btrfs to work the way I think most people expect, you really need three or more disks or storage devices and the btrfs "raid"1 storage profile for both data and metadata.
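Checking and changing the storage profiles being discussed can be sketched like this (device and mount-point names are illustrative):

```shell
# Show the current data/metadata profiles of a mounted filesystem
btrfs filesystem df /mnt

# New single-disk filesystem with duplicated metadata
mkfs.btrfs -m dup -d single /dev/sdX1

# Convert an existing filesystem's metadata from single to dup in place
btrfs balance start -mconvert=dup /mnt

# Multi-device setup with raid1 for both data and metadata
mkfs.btrfs -d raid1 -m raid1 /dev/sdX1 /dev/sdY1 /dev/sdZ1
```

Note that `mkfs.btrfs` has defaulted to `-m dup` on single rotational disks for a long time; the single-metadata default mainly bit older filesystems and some SSD installs.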

                      I am running a project with a friend; that machine has a btrfs filesystem with 11 drives and 216 fixed read errors, and it has had countless power failures. The machine is mostly built out of "junk" parts from 2010, with some hard drives of about the same age. No failures, and everything works.

                      I have another box with currently 21 drives, and yet another with 26 drives, all btrfs. No issues.

                      And finally, I have a dodgy webserver with a 5-disk btrfs filesystem that serves my website (www.dirtcellar.net), which as of now has MILLIONS (yes, millions) of fixed read and write errors, because two of those disks are terrible and should have been replaced years ago. I am too lazy to bother, since btrfs keeps it running (and yes, I have backups!).
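The "fixed read and write errors" mentioned above come from btrfs's checksum-based self-healing; a sketch of how to trigger and inspect it (mount point illustrative):

```shell
# Verify every block against its checksum; with redundant profiles
# (dup/raid1/...), bad copies are rewritten from a good one
btrfs scrub start -B /mnt     # -B: run in the foreground, print a summary
btrfs scrub status /mnt       # progress and error counts of the last scrub

# Per-device error counters: read/write/flush/corruption/generation
btrfs device stats /mnt
btrfs device stats -z /mnt    # -z: reset the counters after printing
```

Running a scrub from a periodic timer (monthly is a common choice) is how setups like the one described keep failing disks limping along.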

                      The desktop I am writing this on also runs btrfs, on 4 SSDs. I used to have 8 SSDs, but had to remove 4 of them (online, just for the record) because I needed a 5.25" bay for other stuff.

                      This is my experience with btrfs from 11 years of serious use. I only lost a btrfs filesystem once, and that was with a non-LTS 5.2 kernel that had an ugly regression. I still managed to recover what I wanted from the filesystem without resorting to backups.

                      Now, I am not saying your experience with btrfs is untrue, but I doubt it was unrecoverable, and I also doubt you set btrfs up correctly. I can only speak for myself, but when I run a total of 67 (was 71) storage devices managed by btrfs without issues, it sounds a bit odd that I have run into none at all while you have. May I ask: how many devices have you tried btrfs on?

                      http://www.dirtcellar.net

