Btrfs Enjoys More Performance With Linux 6.3 - Including Some 3~10x Speedups


  • #71
    kreijack Thank you!



    • #72
      Originally posted by EphemeralEft View Post

      They probably meant how the backup software finds changes for incremental backups. It's probably relying on file metadata rather than reading every file for every backup. If the file is corrupted but the metadata is unchanged then the corrupted copy wouldn't be stored in the backup.
      OK, fine, in that case you're right.

      Anyway, there are still many cases where you can lose the good copy: if you move the file, if you rename it, or if you do the first backup when the source file is already corrupted.

      I think my original point is still valid: a backup cannot guarantee the integrity of your data the way a checksummed filesystem does.
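
      For what it's worth, that's the kind of corruption a periodic scrub is meant to surface, since it re-reads everything and verifies the checksums. A minimal sketch (the mount point is just an example):

      Code:
      # re-read all data and metadata and verify checksums
      btrfs scrub start /mnt/data

      # check progress and whether any checksum errors were found
      btrfs scrub status /mnt/data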







      • #73
        Originally posted by dreamcat4 View Post
        I'm kinda split between btrfs and zfs...

        On one hand, with btrfs (as your root drive / boot disk), you can set up distros and tools which do rollback / roll-forward snapshotting for system updates, in a really well supported and maintained way, on your chosen distro and working with your system's update software or tooling. Great feature.

        However, in most (or nearly all) other respects I much more firmly prefer zfs over btrfs. And it's not the way a person prefers pistachio ice cream over coconut. No, it's that certain features or functionality I know and can make work with zfs just don't seem to be possible in btrfs land, for whatever underlying technical reason(s).

        And if not actually broken, then the zfs version just stands out as plainly superior to the btrfs equivalent. For example: one of the features I believe in very strongly is dual boot disks in raid1. Why? Well, for zero downtime of course! You don't want to have to restore your boot disk from backup because.... well, for one thing that requires another PC, and for another it stops you from actually using your computer, which might be essential or necessary for, oh IDK... ordering a new boot disk to replace the failed one. Or any number of unexpected reasons.

        OK, so I have my 2 identical media, and I have 3 options to make them mirror drives (raid1): zfs, btrfs, or mdadm. But with the btrfs-based solution, when 1 of the 2 disks fails and drops out, the filesystem automatically goes into read-only mode (and maybe it requires a reboot or whatever). Then I cannot keep on using my PC! I can see all my files, and it is indeed an easier recovery process, because instead of restoring from backups I can just buy new media, install the new disk in the computer, and it should then be recoverable in simple ways.

        However, compare this to zfs, where I can have the same raid1 dual disks (mirror). When the time comes and one of those disks inevitably fails, the system will keep on chugging along in a degraded state. I might need to set up some special system notification or alarm to make me aware that it happened, but I can still keep on with business as usual. I order my new media, the new hard drive, and for the next 2-3 days I am still chugging along in the degraded state, no problem! So long as I am not so unlucky as to be caught out by an unlikely failure of both disks at the same time.

        Or I might have a replacement disk already on hand (more likely, thinking ahead). OK then! So what's the downtime? Well, probably literally zero if it's SATA 2.5", because I can hotplug those disks without even requiring a system reboot. And that is about as good as it can be. But let's say I do need to reboot; then I can still keep on chugging along while the new disk is being synced up. No problem!

        Now, a lot of people use mdadm for dual boot disks. However, it's more hassle to set up, and mdadm needs to be used in combination with either zfs or btrfs anyhow. So it seems kinda silly to layer mdadm on top when btrfs and zfs already have the same general capability, since I need them for other purposes anyway.

        There are also other considerations around specific features, like somebody mentioned earlier in this thread, such as how zfs being better suited for a Proton game drive, and all that other stuff. I won't bother getting into it because this post is too long already.

        But just to recap - those are the 2 main features where neither side wins outright: the boot snapshots (system update roll-forward / rollback) vs. the dual mirrored boot disk support.

        I suppose if I could do boot snapshots on zfs as well as on btrfs, then that point would become a draw instead of such a clear win for btrfs. But btrfs clearly has better general, out-of-the-box support for that feature among popular distros, etc. So it wouldn't be a general 'win', just a personal "I can hack this for my own needs" rather than "this is what other people can generally do, see, you should do that". No no. Others need a good out-of-the-box experience that is well supported.
        You can mount btrfs rw in a degraded state, but it requires a special mount option.



        • #74
          ah ok



          • #75
            Originally posted by dreamcat4 View Post
            ah ok
            It's the "degraded" mount option to be precise.

            But you sound dismissive for some reason. I just pointed out that one of the assumptions you made (that you can't mount a degraded btrfs) is wrong, which might affect your preference for zfs over btrfs.
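
            For anyone following along, a minimal sketch of what that looks like (the device and mount point are just placeholders):

            Code:
            # mount a raid1 filesystem read-write with one member missing
            mount -o degraded /dev/sdb1 /mnt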



            • #76
              Originally posted by fong38 View Post

              Output of compsize pointed to the sd card on my Deck (compress-force=zstd:15):
              Code:
              Processed 143972 files, 2353000 regular extents (3084188 refs), 77993 inline.
              Type       Perc     Disk Usage   Uncompressed Referenced
              TOTAL       86%      437G         505G         514G
              none       100%      287G         287G         290G
              zlib        39%      120M         302M         310M
              zstd        68%      149G         217G         224G
              prealloc   100%      3.8M         3.8M         872K
              I don't have the exact numbers for deduplication, but I do remember it being around another 15-20 GB of saved storage (with duperemove -b16k). Source games in particular tend to deduplicate well.
              So no earth-shattering benefits from compression then. That said, every single GB is of course welcome, the read overhead should be small, and the write overhead is something we can ignore since it only comes into play when installing a game anyway. Thanks for the numbers!
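
              For anyone who wants to reproduce numbers like these, a rough sketch (device, mount point and compression level are just examples, not taken from the post above):

              Code:
              # mount the game drive with forced zstd compression
              mount -o compress-force=zstd:15 /dev/sdc1 /mnt/games

              # report actual on-disk size vs. uncompressed size per compression type
              compsize /mnt/games

              # offline deduplication with a 16k block size (-d actually submits the dedupe requests)
              duperemove -dr -b 16k /mnt/games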



              • #77
                Originally posted by binarybanana View Post

                You can mount btrfs rw in a degraded state, but it requires a special mount option.
                But you should be very careful with that:
                https://btrfs.wiki.kernel.org/index....RW_if_degraded



                • #78
                  Originally posted by Berniyh View Post
                  But you should be very careful with that:
                  https://btrfs.wiki.kernel.org/index....RW_if_degraded
                  That's good to know. Still, for me, mounting ro only is no deal breaker as I can use overlayfs to store changes somewhere else while the disks are busy repairing. Or could/would, if I was using btrfs raid stuff and this was hitting me.
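
                  Something like this, assuming the degraded btrfs is read-only at /mnt/btrfs and the upper/work dirs live on another writable filesystem (all paths invented for illustration):

                  Code:
                  # writes land in the upper dir; the read-only btrfs stays untouched underneath
                  mkdir -p /srv/overlay/upper /srv/overlay/work /mnt/merged
                  mount -t overlay overlay -o lowerdir=/mnt/btrfs,upperdir=/srv/overlay/upper,workdir=/srv/overlay/work /mnt/merged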



                  • #79
                    Originally posted by binarybanana View Post
                    That's good to know. Still, for me, mounting ro only is no deal breaker as I can use overlayfs to store changes somewhere else while the disks are busy repairing. Or could/would, if I was using btrfs raid stuff and this was hitting me.
                    Well, like 2 of the other guys said (or the same guy in 2 different comments), it is not as good as the way zfs handles it. And apparently there can be some risky situations during recovery. I can't remember exactly what he said, but you know....

                    Compare that to zfs degraded pools, which are normally just pretty darn fine / dependable.



                    • #80
                      Originally posted by dreamcat4 View Post

                      Well, like 2 of the other guys said (or the same guy in 2 different comments), it is not as good as the way zfs handles it. And apparently there can be some risky situations during recovery. I can't remember exactly what he said, but you know....

                      Compare that to zfs degraded pools, which are normally just pretty darn fine / dependable.
                      I just tested this, and it seems like those restrictions (other than requiring degraded as a mount option) don't apply any more, or only in a weird way. I bound two files (disk images) to a pair of loop devices, and I can mount one of the devices alone; other than a message in dmesg it doesn't do anything weird. I can mount it, write to it, and unmount it as often as I like. But the other device I can't mount even once without its partner present, because it gives me an error, which seems wrong?

                      The command to create the file system was:
                      Code:
                      mkfs.btrfs -f -L a -d raid1 -m raid1 /dev/loop0 /dev/loop1
                      Then

                      Code:
                      # losetup -d /dev/loop1
                      # mount -t btrfs /dev/loop0 /mnt -o degraded
                      mount: /mnt/foo: wrong fs type, bad option, bad superblock on /dev/loop0, missing codepage or helper program, or other error.
                      But

                      Code:
                      # losetup -d /dev/loop0
                      # mount -t btrfs /dev/loop1 /mnt -o degraded
                      # echo $?
                      0
                      That means if the right HDD dies you're safe, but if the wrong one dies you're screwed. Or maybe I did something wrong; I've never used btrfs RAID, but supposedly RAID0/1 were OK?
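
                      For completeness, one usual way to get back to full redundancy after a degraded raid1 mount would be something along these lines (the devid and replacement device path are assumptions for this loop-device test, not taken from the thread):

                      Code:
                      # replace the missing member (devid 2 here) with a new device
                      btrfs replace start 2 /dev/loop2 /mnt

                      # convert any chunks written with the "single" profile while degraded
                      # back to raid1 ("soft" only touches chunks that need converting)
                      btrfs balance start -dconvert=raid1,soft -mconvert=raid1,soft /mnt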

