DragonFlyBSD's HAMMER2 Gets Basic FSCK Support


    Phoronix: DragonFlyBSD's HAMMER2 Gets Basic FSCK Support

    DragonFlyBSD's now-default HAMMER2 file-system has gained initial file-system checking "fsck" support...


  • #2
    I don't think a filesystem is complete if it doesn't have some checking tool, like fsck or chkdsk.

    • #3
      So it's the same problem btrfs had for years: a fsck that can only verify that yes, it is broken, but not fix it, because supposedly it can't break?

      Yay..

      • #4
        Originally posted by carewolf View Post
        So it's the same problem btrfs had for years: a fsck that can only verify that yes, it is broken, but not fix it, because supposedly it can't break?

        Yay..
        Btrfs (and ZFS) can also fix a lot of issues with scrubbing. Also, AFAIK ZFS still does not have a fsck either.

        I guess writing a fsck for a next-gen filesystem isn't that easy, especially if you lack reports and feedback from the field. So far only btrfs has one, because its developers (mostly Qu Wenruo from SUSE) collect the reports from people with broken filesystems and take the time to write repair code for those cases in the btrfs tools.
        Last edited by starshipeleven; 24 September 2019, 03:56 AM.
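
        For anyone unfamiliar with the distinction being drawn here, the scrub-versus-check workflow looks roughly like this (the mount point and device paths are placeholders, and `--repair` in particular should only ever be a last resort):

        ```shell
        # Online scrub: re-reads everything on a mounted filesystem, verifies
        # checksums, and repairs bad blocks from a redundant copy if one
        # exists (e.g. DUP or RAID1 profiles).
        sudo btrfs scrub start -B /mnt/data   # -B blocks until the scrub finishes
        sudo btrfs scrub status /mnt/data     # summary of errors found/corrected

        # Offline check: read-only verification of an unmounted filesystem.
        sudo btrfs check /dev/sdb1

        # Last resort only, and only on expert advice -- it rewrites metadata:
        # sudo btrfs check --repair /dev/sdb1
        ```

        The key difference: scrub can only fix what redundancy allows it to fix, while check is the "real fsck" that walks the filesystem structures themselves.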

        • #5
          Originally posted by carewolf View Post
          So it's the same problem btrfs had for years: a fsck that can only verify that yes, it is broken, but not fix it, because supposedly it can't break?

          Yay..
          The btrfs method is to read a copy of the data off the disk, then rebuild the filesystem. In my experience so far, this is only necessary on hardware that is not the greatest. If your hardware writes data correctly, and flushes it to permanent storage correctly, then btrfs will work.

          For example, my laptop uses some Hynix NVMe drive, and from the look of the failure, it did not properly save data that it claimed had been written. When the laptop's battery died because it had been left running in my laptop bag, the result was corrupted btrfs metadata. If there had been multiple copies of the metadata, btrfs could have recovered, but since there was only one copy, it had no way to figure out the right versions.

          So I took my backups (you have backups, right?), also made a copy with "btrfs restore", and compared them. It's possible that I lost some data, but nothing I'd notice, and nothing was corrupted or older than the backup copies.
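
          The recovery steps described above can be sketched like this (the device, rescue, and backup paths are made up for the example):

          ```shell
          # Pull whatever is still readable off the damaged, unmounted device
          # into a scratch directory; btrfs restore never writes to the device.
          sudo btrfs restore /dev/nvme0n1p2 /mnt/rescue/

          # Compare the rescued tree against the last backup; -n is a dry run,
          # -c compares file contents by checksum rather than just size/mtime.
          rsync -avnc /mnt/rescue/ /mnt/backup/
          ```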

          • #6
            Originally posted by Zan Lynx View Post

            The btrfs method is to read off a copy of the data, then rebuild the filesystem. In my experience so far, this is only necessary when using hardware that is not the greatest. If your hardware writes data correctly, and flushes it to permanent storage correctly, then btrfs will work.
            Coping with the real world is part of a file system's job. That includes recovering as much as possible when the real world comes by and corrupts the data or metadata. Btrfs had the problem for years that it would just assert and give up whenever something wasn't perfect, which deservedly earned it the nickname bitrotFS. Now that it has a real fsck it is finally usable as a real FS, but it took many years.

            So I took my backups (you have backups right?)
            Why revert to the disk state from a month ago, when I could just use ext4 until btrfs stopped sucking?

            • #7
              Originally posted by carewolf View Post
              Why revert to the disk state from a month ago, when I could just use ext4 until btrfs stopped sucking?
              If the hardware pulled the same trick on your ext4, you'd have no idea that your SSD had silently reverted to an old version of a set of blocks that ought to have been written.

              I guess if you don't see an error message in the kernel, it's all OK.

              • #8
                Originally posted by Zan Lynx View Post

                If the hardware pulled the same trick on your ext4, you'd have no idea that your SSD had silently reverted to an old version of a set of blocks that ought to have been written.

                I guess if you don't see an error message in the kernel, it's all OK.
                If I can keep reading 99.999% of the files, then yes, it is all OK. I can move all the data on the damaged drive to a new drive and restore the one broken file from backup. If one bad byte in one piece of metadata makes the whole disk refuse to work, then it is not OK, that is just fucking stupid. The latter is what happened to me twice with btrfs, two years apart: the first was a btrfs data-corruption bug, the second a partial hardware failure correctly caught by SMART but not correctable.

                And stop defending old bugs that have already been fixed. They are fixed because they were bugs.
                Last edited by carewolf; 24 September 2019, 05:07 PM.
