Bcachefs Gets "Bad@$$" Snapshots, Still Aiming For Mainline Linux Kernel Integration
-
Originally posted by S.Pam:
Isn't the point of btrees that you do not use a journal?
Anyway, bcachefs is essentially a journalled filesystem with a built-in per-extent overlayfs. It's not a copy-on-write filesystem in the same sense as btrfs (it doesn't use copy-on-write data structures internally).
Last edited by intelfx; 06 November 2021, 09:44 AM.
-
Originally posted by cynic:
(and doing great!)
I also wish there was a background thread for recompression, but maybe that's asking too much (Kent told me he might do it in the future, though with a single dev who knows when that'll be).
-
Originally posted by DrYak:
Kent Overstreet (and the bcachefs fans) complain that btrfs took a decade to develop and still isn't mature in all its features.
-
I am happy about this new file system; however, I have been using Btrfs for 4 years and I have to say I find it fantastic.
In openSUSE everything is pre-set for automated snapshots. Until a year ago I used ext4 for the data partition, but now that partition is on Btrfs too. Checksums, snapshots, etc. are fundamental things for me, as is reliability, and in 4 years I have never had any problems.
So unless this new file system lets me do other things that are useful to me and not available in Btrfs, I will continue to use Btrfs.
-
Originally posted by geearf:
I also wish there was a background thread for recompression, but maybe that's asking too much
Originally posted by geearf:
(Kent told me he might do it in the future, though with a single dev who knows when that'll be).
Originally posted by pgeorgi:
I suppose the difference is that bcachefs has been developed out of tree for a decade, while btrfs was fast-tracked into Linux upstream long before it was ready.
*No point bringing in vaporware or something that doesn't even store files yet.
Originally posted by cynic:
I think there are at least a couple of full-time people working on it (and doing great!), but it's still not a lot.
*The boundaries of what a project is are a bit ambiguous for my claim, I know. Is Linux a single project? Not in the sense I'm using it right now. I mean more like a given driver in the case of the kernel.
-
Originally posted by sinepgib:
I'd like that for deduplication. Though I'm not positive it can't do that right now; I just haven't delved enough to find out how to use it.
What worries me about these one-man-army projects is that the single dev may eventually stop working on it for whatever reason, and a really promising project gets abandoned. Forming a dev community is always critical for success, even if the vision is provided by a single individual. I don't know that this isn't the case here, though.
1- Yeah, that'd be good, assuming deduplication doesn't cause too much fragmentation though.
2- Yup, that's pretty much what he did with bcache, and now the guy in charge of it is nice but not as knowledgeable as Kent was...
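For what it's worth, offline deduplication on btrfs is already possible from user space via the kernel's dedupe ioctl; a common tool for it is duperemove. A sketch of how that is typically invoked (the mount point and hashfile path are placeholders, not anything from this thread):

```shell
# Scan recursively and only report duplicate extents (no changes made):
duperemove -r /mnt/data

# Actually deduplicate (-d), keeping a hash database so later runs
# only need to rescan changed files:
duperemove -dr --hashfile=/var/tmp/dedupe.hash /mnt/data
```

Since dedupe shares extents rather than moving data, fragmentation behaviour depends on how the duplicates were laid out in the first place.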
-
Originally posted by geearf:
Not that great when my ~10 TB partitions take minutes to load without help from bcache.
Originally posted by geearf:
I also wish there was a background thread for recompression, but maybe that's asking too much (Kent told me he might do it in the future, though with a single dev who knows when that'll be).
Isn't a rebalance enough for your use case?
-
Originally posted by cynic:
Josef Bacik just published a patchset that (besides other improvements) is going to fix this. It won't be mainlined very soon, though.
What do you mean by a background thread for recompression?
Isn't a rebalance enough for your use case?
2- Hmmm, I don't know what rebalancing would do for recompression, so maybe I'm wrong here, but this is what I'd like: write to disk as fast as possible (maybe using zstd:2, maybe not using compression; if it could be automatic that'd be great, but setting it manually is OK), then have the data marked as not compressed (enough) and recompress it with a low-priority thread in the background, so that it does not affect CPU or IO too much. The whole thing should be atomic and per chunk, of course.
(3- If Btrfs had tiered storage support, you might not need 2- when you have tiers: simply write quickly to the fastest device, then let the data slowly propagate to the slower device with much better compression.)
Thank you!
Edit: OK, I read it at https://lore.kernel.org/linux-btrfs/...panda.com/T/#t and I see why it's going to take a while.
Last edited by geearf; 10 November 2021, 06:46 PM.
-
Originally posted by geearf:
2- Hmmm, I don't know what rebalancing would do for recompression, so maybe I'm wrong here, but this is what I'd like: write to disk as fast as possible (maybe using zstd:2, maybe not using compression; if it could be automatic that'd be great, but setting it manually is OK), then have the data marked as not compressed (enough) and recompress it with a low-priority thread in the background, so that it does not affect CPU or IO too much. The whole thing should be atomic and per chunk, of course.
(3- If Btrfs had tiered storage support, you might not need 2- when you have tiers: simply write quickly to the fastest device, then let the data slowly propagate to the slower device with much better compression.)
A rebalance rewrites all data on the disk, applying the compression option you specified at mount. So if you just changed the compression option and want to apply it to all files, a balance would just work.
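Concretely, the recompress-everything-via-balance approach described above would look something like this (the mount point and zstd level are placeholders, and a full balance on a large filesystem takes a long time):

```shell
# Change the compression option for new writes, then rewrite every
# extent so the new option takes effect on existing data too:
mount -o remount,compress=zstd:3 /mnt/data
btrfs balance start --full-balance /mnt/data
```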
What you want to do is probably doable with a user-space script, but I'm not 100% sure.
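A minimal sketch of such a script, assuming GNU find/xargs and btrfs-progs; the target directory, the one-day age cutoff, and the zstd choice are all placeholder policy, and by default it only prints the commands it would run. `btrfs filesystem defragment -c` rewrites a file's extents with the requested compression, and ionice/nice keep the pass at idle priority:

```shell
#!/bin/sh
# Background recompression sketch for btrfs (hypothetical policy:
# recompress files that have not been touched for a day).
# DRY_RUN=1 (the default) prints the commands instead of running them.

recompress() {
    target="$1"
    algo="${ALGO:-zstd}"     # defragment -c accepts zlib, lzo or zstd
    run=""
    [ "${DRY_RUN:-1}" = "1" ] && run="echo"
    # Rewrite extents one file at a time at idle CPU/IO priority;
    # defragment applies the compression algorithm given with -c.
    find "$target" -type f -mtime +0 -print0 |
        xargs -0 -r -n1 $run ionice -c3 nice -n19 \
            btrfs filesystem defragment -c"$algo"
}
```

Run as `DRY_RUN=0 recompress /mnt/data` from cron or a systemd timer to actually rewrite; whether this counts as "atomic and per chunk" comes down to how defragment handles each extent internally, not anything the script controls.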