Bcachefs Gets "Bad@$$" Snapshots, Still Aiming For Mainline Linux Kernel Integration
-
Originally posted by some_canuck View Post
zfs is still better
Maybe ZFS was perfectly stable in the past, but after the aggressive optimizations and feature additions of the last few years, it has become an even bigger bug-ridden clusterfuck than BTRFS.
There are many open bug reports about ZFS silently corrupting itself (when using basic features like encryption, snapshotting, and replication) after 0.7.9 (0.8.x and 2.x.x are still affected), and the devs have no clue how to fix them:
System information Type Version/Name Distribution Name Debian Distribution Version 9 Linux Kernel 4.19.0-0.bpo.6-amd64 Architecture AMD64 ZFS Version zfs-0.8.0-596_g4d5b4a33d SPL Version 0.8.0-596_...
System information Type Version/Name Distribution Name Debian Distribution Version testing (bullseye) Linux Kernel 5.10.19 Architecture amd64 ZFS Version 2.0.3-1 Describe the problem you're observi...
System information Type Version/Name Distribution Name Arch Linux Distribution Version rolling Kernel Version 5.14.8-arch1-1 Architecture x86_64 OpenZFS Version zfs-2.1.1-1 Describe the problem you...
System information Type Version/Name Distribution Name Debian Distribution Version Buster Linux Kernel 5.10.0-0.bpo.5-amd64 Architecture amd64 ZFS Version 2.0.3-1~bpo10+1 SPL Version 2.0.3-1~bpo10+...
Is ZFS a filesystem we can rely on? Probably not. Are other filesystems better? (Not sure.)
My guess is that XFS+mdadm+LUKS would have far fewer bugs, because it's a much simpler solution with a smaller code base, but it completely lacks silent-corruption protection.
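For reference, that simpler stack really is just a few layered commands. A sketch, assuming two disks /dev/sda and /dev/sdb and a mount point /mnt (all placeholders; everything here is destructive and needs root):

```shell
# Build a RAID1 array out of the two disks (device names are placeholders)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
# Layer LUKS encryption on top of the array
cryptsetup luksFormat /dev/md0
cryptsetup open /dev/md0 cryptmd
# XFS on the decrypted mapping, then mount it
mkfs.xfs /dev/mapper/cryptmd
mount /dev/mapper/cryptmd /mnt
```

As noted above, nothing in this stack checksums the data itself, so silent corruption passes through undetected.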
-
Originally posted by geearf View Post
Yeah, it is somewhat doable that way, and I wrote a script for it, but it is a hassle: I have to know exactly how a file was compressed before trying to recompress it. (I can easily store the script's result in a DB, but if a restart happens between the recompression and the DB call, that work is pretty much lost, and I definitely would not want to block the computer on the recompression of a really big file.) Also, that's likely my fault, but I do not know how to do it per chunk rather than per file.
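One way to make that DB bookkeeping survive a restart is to journal the job *before* doing the work and only mark it done after the output is atomically in place, so anything still "pending" after a crash can simply be retried. A minimal sketch (the schema, paths, and lzma standing in for zstd are all illustrative, not geearf's actual script):

```python
# Crash-safe recompression journal: record the job as 'pending' before
# touching the file, write the output via an atomic rename, and only
# then mark the job 'done'. A restart can retry every 'pending' row.
import lzma
import os
import sqlite3

def recompress(db_path, src):
    """Recompress src to src + '.xz', journaling state in SQLite."""
    db = sqlite3.connect(db_path)
    db.execute("CREATE TABLE IF NOT EXISTS jobs (path TEXT PRIMARY KEY, state TEXT)")
    # 1. Journal first: a crash mid-recompression leaves a 'pending' row.
    db.execute("INSERT OR REPLACE INTO jobs VALUES (?, 'pending')", (src,))
    db.commit()
    dst = src + ".xz"
    tmp = dst + ".tmp"
    with open(src, "rb") as f, lzma.open(tmp, "wb", preset=9) as out:
        out.write(f.read())
    os.replace(tmp, dst)  # atomic rename: dst is either complete or absent
    # 2. Only mark done once the output is safely in place.
    db.execute("UPDATE jobs SET state = 'done' WHERE path = ?", (src,))
    db.commit()
    db.close()
    return dst
```

This still works per file, not per chunk — per-chunk recompression would need filesystem support, which is exactly the gap being discussed.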
-
Originally posted by cynic View Post
hum ok, I got what you meant now.
rebalance does rewrite all the data on the disk, applying the compression option you specified at mount time. So, if you just changed the compression option and wanted to apply it to all files, a balance would just work.
what you want to do is probably doable with a user-space script, but I'm not 100% sure.
-
Originally posted by geearf View Post
2 - Hmmm, I don't know what rebalancing would do for recompression, so maybe I'm wrong here, but this is what I'd like: write to disk as fast as possible (maybe using zstd:2, maybe not using compression; if it could be automatic that would be great, but setting it manually is OK), then have the data marked as not-compressed(-enough) and recompress it with a low-priority thread in the background, so that it does not affect CPU or IO too much. The whole thing should be atomic and per-chunk of course.
(3 - If BTRFS had tiered-storage support, you might not need 2: when you have tiers, simply write quickly to the fastest device, then let the data slowly propagate to the slower device with much better compression.)
rebalance does rewrite all the data on the disk, applying the compression option you specified at mount time. So, if you just changed the compression option and wanted to apply it to all files, a balance would just work.
what you want to do is probably doable with a user-space script, but I'm not 100% sure.
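For completeness, the commonly documented way to rewrite existing file data under a new compression setting combines a remount with a recursive defragment; a sketch (mount point and compression level are placeholders, and both commands need root on a mounted btrfs):

```shell
# Change the compression option for future writes
mount -o remount,compress=zstd:2 /mnt
# Recompress existing extents explicitly: defragment with -c forces
# the data to be rewritten with the given algorithm
btrfs filesystem defragment -r -czstd /mnt
# A balance also rewrites data chunks, as described above
btrfs balance start -dusage=100 /mnt
```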
-
Originally posted by cynic View Post
Josef Bacik just published a patchset that (besides other improvements) is going to fix this. It won't be mainlined very soon, though.
what do you mean by a background thread for recompression?
isn't a rebalance enough for your use case?
2 - Hmmm, I don't know what rebalancing would do for recompression, so maybe I'm wrong here, but this is what I'd like: write to disk as fast as possible (maybe using zstd:2, maybe not using compression; if it could be automatic that would be great, but setting it manually is OK), then have the data marked as not-compressed(-enough) and recompress it with a low-priority thread in the background, so that it does not affect CPU or IO too much. The whole thing should be atomic and per-chunk of course.
(3 - If BTRFS had tiered-storage support, you might not need 2: when you have tiers, simply write quickly to the fastest device, then let the data slowly propagate to the slower device with much better compression.)
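Done in user space rather than in the filesystem, the "write fast, recompress in the background" idea from point 2 might look roughly like this (lzma stands in for zstd since it is in the Python stdlib; the queue and the sleep throttle are illustrative, not a btrfs feature):

```python
# Writers do a fast initial write, enqueue the path, and move on;
# a throttled daemon thread recompresses at a high level later.
import lzma
import queue
import threading
import time

# Files waiting for better compression.
todo = queue.Queue()

def recompress_worker(throttle_s=0.01):
    """Drain the queue, recompressing each file at a high preset and
    sleeping between files so the worker stays low-impact."""
    while True:
        path = todo.get()
        with open(path, "rb") as f:
            data = f.read()
        with lzma.open(path + ".xz", "wb", preset=9) as out:
            out.write(data)
        todo.task_done()          # lets todo.join() track completion
        time.sleep(throttle_s)    # crude CPU/IO throttle

worker = threading.Thread(target=recompress_worker, daemon=True)
worker.start()
```

This only throttles by sleeping; real per-thread IO priority would need something like ionice, and per-chunk atomicity is exactly what user space cannot provide without filesystem support.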
Thank you!
edit: ok I read it at https://lore.kernel.org/linux-btrfs/...panda.com/T/#t and I see why it's going to take a while.
Last edited by geearf; 10 November 2021, 06:46 PM.