Bcachefs Gets "Bad@$$" Snapshots, Still Aiming For Mainline Linux Kernel Integration


  • geearf
    replied
    Originally posted by S.Pam View Post
    No, I don't believe it is available in the fs itself.
    That's too bad.



  • evil_core
    replied
    Originally posted by some_canuck View Post
    zfs is still better
    In comparison to the totally unstable BCacheFS, certainly yes, but in comparison to BTRFS I'm not sure.

    Maybe ZFS was perfectly stable in the past, but after aggressive optimizations and the addition of features in the last few years, it has become an even more bug-ridden clusterfuck than BTRFS.
    There are many open bug reports about ZFS corrupting itself (when using basic features like encryption, snapshotting and replication) after 0.7.9 (0.8.x and 2.x.x are still affected), and the devs have no clue how to fix them:


    Issue-report excerpts (system information from four affected setups):
      • Debian 9, Linux kernel 4.19.0-0.bpo.6-amd64, AMD64, ZFS zfs-0.8.0-596_g4d5b4a33d, SPL 0.8.0-596_...
      • Debian testing (bullseye), Linux kernel 5.10.19, amd64, ZFS 2.0.3-1
      • Arch Linux rolling, kernel 5.14.8-arch1-1, x86_64, OpenZFS zfs-2.1.1-1
      • Debian Buster, Linux kernel 5.10.0-0.bpo.5-amd64, amd64, ZFS 2.0.3-1~bpo10+1, SPL 2.0.3-1~bpo10+...


    Is ZFS a filesystem we can rely on? Probably not. Are other filesystems better? I'm not sure.
    I'd guess XFS+mdadm+LUKS would have far fewer bugs, because it's a much simpler solution with a smaller code base, but it totally lacks silent-corruption protection.
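
    For reference, a minimal sketch of the stack described above; the device names, mount point, and RAID level are placeholders, not a recommendation:

      # Hypothetical XFS-on-LUKS-on-mdadm stack (placeholder devices).
      mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
      cryptsetup luksFormat /dev/md0            # encrypt the array
      cryptsetup open /dev/md0 cryptdata        # unlock as /dev/mapper/cryptdata
      mkfs.xfs /dev/mapper/cryptdata            # plain XFS on top
      mount /dev/mapper/cryptdata /mnt/data     # /mnt/data must already exist

    As the post says, nothing in this stack checksums file data, so silent corruption passes through undetected.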



  • S.Pam
    replied
    Originally posted by geearf View Post

    But it won't give me the compression level used (1-20 for zstd), will it?
    No, I don't believe it is available in the fs itself.
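
    What the fs does expose per file is the property interface, which can report a forced compression algorithm but never the zstd level. A quick illustration, assuming a hypothetical /mnt/data/file:

      # Prints e.g. "compression=zstd" if the property was explicitly set
      # on the file; prints nothing if it wasn't. The zstd level is not
      # recorded either way.
      btrfs property get /mnt/data/file compression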



  • geearf
    replied
    Originally posted by cynic View Post

    you can use compsize to check if a file is already compressed and with what algorithm, without the need to store that information in a DB
    But it won't give me the compression level used (1-20 for zstd), will it?



  • cynic
    replied
    Originally posted by geearf View Post

    Yeah, it is somewhat doable that way, and I wrote a script for that, but it is a hassle, as I have to know exactly how a file was compressed before trying to recompress it (I can easily store the result of the script in a DB, but if a restart happens between the recompression and the DB call, it's pretty much lost work, and I definitely would not want to block the computer on the recompression of a really big file). Also, that's likely my fault, but I do not know how to do it per chunk rather than per file.
    you can use compsize to check if a file is already compressed and with what algorithm, without the need to store that information in a DB
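
    For example, on a hypothetical /mnt/data/bigfile, compsize prints a summary along these lines (values invented):

      compsize /mnt/data/bigfile
      # Processed 1 file, 8 regular extents (8 refs), 0 inline.
      # Type       Perc     Disk Usage   Uncompressed Referenced
      # TOTAL       47%      4.7M         10M          10M
      # zstd        47%      4.7M         10M          10M

    It reports the algorithm per extent, but not the zstd level, which is why the level question above still gets a "no".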



  • cynic
    replied
    Originally posted by S.Pam View Post

    No, you need to use defrag to recompress. Balance does not alter the data itself.
    yup, you're right! thanks for pointing out!



  • S.Pam
    replied
    Originally posted by cynic View Post

    hum ok, I got what you meant now.

    rebalance does rewrite all data on the disk, applying the compression option you specified at mount. So, if you just changed the compression option and wanted to apply it to all files, a balance would just work.

    what you want to do is probably doable with a user space script, but I'm not 100% sure.
    No, you need to use defrag to recompress. Balance does not alter the data itself.
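
    A minimal sketch of that, with a placeholder path:

      # Rewrite everything under /mnt/data, recompressing with zstd.
      # Caveat: defragmenting unshares extents, so data shared with
      # snapshots or reflinks will consume extra space afterwards.
      btrfs filesystem defragment -r -czstd /mnt/data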



  • geearf
    replied
    Originally posted by cynic View Post

    hum ok, I got what you meant now.

    rebalance does rewrite all data on the disk, applying the compression option you specified at mount. So, if you just changed the compression option and wanted to apply it to all files, a balance would just work.

    what you want to do is probably doable with a user space script, but I'm not 100% sure.
    Yeah, it is somewhat doable that way, and I wrote a script for that, but it is a hassle, as I have to know exactly how a file was compressed before trying to recompress it (I can easily store the result of the script in a DB, but if a restart happens between the recompression and the DB call, it's pretty much lost work, and I definitely would not want to block the computer on the recompression of a really big file). Also, that's likely my fault, but I do not know how to do it per chunk rather than per file.
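
    A rough sketch of that kind of script, with invented paths, using btrfs defragment for the recompression and a plain state file in place of the DB; the state entry is appended only after the rewrite finishes, so a crash in between merely redoes some work rather than losing track of it:

      #!/bin/sh
      # Hypothetical background recompressor; /mnt/data and the state
      # file location are placeholders.
      STATE=/var/lib/recompress.done
      touch "$STATE"
      find /mnt/data -type f | while read -r f; do
          grep -qxF "$f" "$STATE" && continue    # already recompressed
          # idle IO class + lowest CPU priority keep it unobtrusive
          ionice -c3 nice -n 19 btrfs filesystem defragment -czstd "$f" \
              && printf '%s\n' "$f" >> "$STATE"
      done

    This still works per file rather than per chunk, matching the limitation just mentioned.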



  • cynic
    replied
    Originally posted by geearf View Post
    2 - Hmmm, I don't know what rebalancing would do for recompression, so maybe I'm wrong here, but this is what I'd like: write to disk as fast as possible (maybe using zstd:2, maybe not using compression; if it could be automatic it'd be great, but setting it manually is ok), then have it marked as not compressed (or not compressed enough) and recompress it with a low-priority thread in the background, so that it does not affect CPU or IO too much. The whole thing should be atomic and per chunk, of course.

    (3 - If BTRFS had tiered storage support, you might not need 2 when you have tiers: simply write quickly to the fastest device, then let it slowly propagate to the slower device with much better compression.)
    hum ok, I got what you meant now.

    rebalance does rewrite all data on the disk, applying the compression option you specified at mount. So, if you just changed the compression option and wanted to apply it to all files, a balance would just work.

    what you want to do is probably doable with a user space script, but I'm not 100% sure.



  • geearf
    replied
    Originally posted by cynic View Post

    Josef Bacik just published a patchset that (besides other improvements) is going to fix this. It won't be mainlined very soon though.



    what do you mean by a background thread for recompression?
    isn't a rebalance enough for your use case?


    1 - Oh awesome! I believe he told me ~half a decade ago that he was working on something that would solve this, but I kind of gave up on it after a couple of years. What's blocking it from being mainlined? Is it too dangerous? It's not like I'm in a hurry though, since I now use bcache for all my slow BTRFS drives, and it works fine (a setup sketch is at the end of this post).

    2 - Hmmm, I don't know what rebalancing would do for recompression, so maybe I'm wrong here, but this is what I'd like: write to disk as fast as possible (maybe using zstd:2, maybe not using compression; if it could be automatic it'd be great, but setting it manually is ok), then have it marked as not compressed (or not compressed enough) and recompress it with a low-priority thread in the background, so that it does not affect CPU or IO too much. The whole thing should be atomic and per chunk, of course.

    (3 - If BTRFS had tiered storage support, you might not need 2 when you have tiers: simply write quickly to the fastest device, then let it slowly propagate to the slower device with much better compression.)

    Thank you!

    edit: ok, I read it at https://lore.kernel.org/linux-btrfs/...panda.com/T/#t and I see why it's going to take a while.
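
    For point 1's bcache setup, a minimal sketch with placeholder devices (an SSD partition as cache, a slow disk as backing store):

      # Creating cache and backing device in one command attaches them
      # automatically; the composite device then appears as /dev/bcache0.
      make-bcache -C /dev/nvme0n1p1 -B /dev/sda
      mkfs.btrfs /dev/bcache0
      mount /dev/bcache0 /mnt/slow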
    Last edited by geearf; 10 November 2021, 06:46 PM.

