Bcachefs Gets "[email protected]$$" Snapshots, Still Aiming For Mainline Linux Kernel Integration


  • #21
    Originally posted by S.Pam View Post
    Isn't the point of btrees that you do not use a journal?
    A btree is just a hierarchical data structure with fast search.

    Comment


    • #22
      Originally posted by S.Pam View Post
      Isn't the point of btrees that you do not use a journal?
      That's the point of copy-on-write, not btrees. Ext4 also uses btrees internally.

      Anyway, bcachefs is essentially a journalled filesystem with built-in per-extent overlayfs. It's not a copy-on-write filesystem in the same sense as btrfs is (it doesn't use copy-on-write data structures internally).
      Last edited by intelfx; 06 November 2021, 09:44 AM.

      Comment


      • #23
        Originally posted by cynic View Post
        (and doing great!)
        Not that great when my ~10 TB partitions take minutes to mount without help from bcache.
        I also wish there was a background thread for recompression, but maybe that's asking too much (Kent told me he might do it in the future, though with a single dev who knows when that'll be).

        Comment


        • #24
          Originally posted by DrYak View Post
          Kent Overstreet (and the bcachefs fans) complain that btrfs took a decade to develop and still isn't mature in all its features.
          They're going to use it anyway eventually. People keep complaining about that in Wayland threads as well, but they will also have to make that switch at some point.

          Comment


          • #25
            I am happy about this new filesystem; however, I have been using Btrfs for 4 years and I have to say that I find it fantastic.
            In openSUSE everything is pre-set for automated snapshots. Until a year ago I used ext4 for the data partition, but now the data partition is also on Btrfs; checksums, snapshots, etc. are fundamental things for me, as is reliability, and in 4 years I have never had any problems.
            So unless this new filesystem lets me do other things that are useful to me and not available in Btrfs, I will continue to use Btrfs.

            Comment


            • #26
              Originally posted by geearf View Post
              I also wish there was a background thread for recompression, but maybe that's asking too much
              I'd like that for deduplication. Though I'm not positive it can't do that already; I just haven't delved in enough to find out how to use it.
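For what it's worth, btrfs can already do out-of-band deduplication via the FIDEDUPERANGE ioctl, driven by third-party tools such as duperemove (not part of btrfs-progs). A minimal sketch, where the hashfile path and mount point are made-up examples and the helper only builds the command line:

```shell
# Sketch: out-of-band dedup of a btrfs mount with duperemove.
# duperemove hashes file extents and submits FIDEDUPERANGE ioctls
# for matching ranges; the kernel then shares the extents.
dedupe_cmd() {
    # -r: recurse into subdirectories
    # -d: actually submit dedupe requests (omit for a dry run)
    # --hashfile: persist extent hashes so reruns are incremental
    echo "duperemove -rd --hashfile=/var/tmp/dedupe.hash $1"
}

dedupe_cmd /mnt/data
```

Running duperemove without -d first gives a report of duplicate extents without changing anything on disk.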

              Originally posted by geearf View Post
              (Kent told me he might do it in the future, though with a single dev who knows when that'll be).
              What worries me about these one-man-army projects is that the single dev may eventually stop working on it for whatever reason and a really promising project gets abandoned. Forming a dev community is always critical for success, even if the vision is provided by a single individual. I don't know whether that's the case here, though.


              Originally posted by pgeorgi View Post
              I suppose the difference is that bcachefs is developed out of tree for a decade, while btrfs was fast-tracked into Linux upstream long before it was ready.
              In my ideal world, all sufficiently developed* filesystems would be included in staging, and once the on-disk layout is stabilized (or a proper versioning scheme is implemented) and enough testing has been done to call it "safe", they would be promoted to the regular fs subsystem.

              *No point bringing in vaporware or something that doesn't even store files yet.

              Originally posted by cynic View Post

              I think there are at least a couple of full-time people working on it (and doing great!), but it's still not a lot.
              I'd argue you do not need a lot of people for a single project*, and that in some cases adding people actually brings problems. Although that may be the case more for cathedral and corporate environments than it is for bazaars.

              *The boundaries of what a project is are a bit ambiguous for my claim, I know. Is Linux a single project? Not in the sense I'm using it right now. I mean more like a given driver in the case of the kernel.

              Comment


              • #27
                Originally posted by sinepgib View Post
                I'd like that for deduplication. Though I'm not positive it can't do that already; I just haven't delved in enough to find out how to use it.


                What worries me about these one-man-army projects is that the single dev may eventually stop working on it for whatever reason and a really promising project gets abandoned. Forming a dev community is always critical for success, even if the vision is provided by a single individual. I don't know whether that's the case here, though.

                1- Yeah, that'd be good, assuming deduplication doesn't cause too much fragmentation, though.

                2- Yup, that's pretty much what he did with bcache, and now the guy in charge of it is nice but not as knowledgeable as Kent was...

                Comment


                • #28
                  Originally posted by geearf View Post
                  Not that great when my ~10 TB partitions take minutes to mount without help from bcache.
                  Josef Bacik just published a patchset that (besides other improvements) is going to fix this. It won't be mainlined very soon, though.

                  Originally posted by geearf View Post
                  I also wish there was a background thread for recompression, but maybe that's asking too much (Kent told me he might do it in the future, though with a single dev who knows when that'll be).
                  What do you mean by a background thread for recompression?
                  Isn't a rebalance enough for your use case?



                  Comment


                  • #29
                    Originally posted by cynic View Post

                    Josef Bacik just published a patchset that (besides other improvements) is going to fix this. It won't be mainlined very soon, though.



                    What do you mean by a background thread for recompression?
                    Isn't a rebalance enough for your use case?


                    1- Oh, awesome! I believe he told me about half a decade ago that he was working on something that would solve this, but I kind of gave up on it after a couple of years. What's the blocker for mainlining it? Is it too dangerous? It's not like I'm in a hurry, though, since I now use bcache for all my slow Btrfs drives and it works fine.

                    2- Hmm, I don't know what rebalancing would do for recompression, so maybe I'm wrong here, but this is what I'd like: write to disk as fast as possible (maybe using zstd:2, maybe not using compression; if it could be automatic that'd be great, but setting it manually is OK), then have the data marked as not compressed (enough) and recompress it with a low-priority thread in the background, so that it does not affect CPU or I/O too much. The whole thing should be atomic and per-chunk, of course.

                    (3- If Btrfs had tiered storage support, you might not need 2- when you have tiers: simply write quickly to the fastest device, then let the data slowly propagate to the slower device with much better compression.)

                    Thank you!

                    edit: OK, I read it at https://lore.kernel.org/linux-btrfs/...panda.com/T/#t and I see why it's going to take a while.
                    Last edited by geearf; 10 November 2021, 06:46 PM.

                    Comment


                    • #30
                      Originally posted by geearf View Post
                      2- Hmm, I don't know what rebalancing would do for recompression, so maybe I'm wrong here, but this is what I'd like: write to disk as fast as possible (maybe using zstd:2, maybe not using compression; if it could be automatic that'd be great, but setting it manually is OK), then have the data marked as not compressed (enough) and recompress it with a low-priority thread in the background, so that it does not affect CPU or I/O too much. The whole thing should be atomic and per-chunk, of course.

                      (3- If Btrfs had tiered storage support, you might not need 2- when you have tiers: simply write quickly to the fastest device, then let the data slowly propagate to the slower device with much better compression.)
                      Hm, OK, I get what you mean now.

                      A rebalance rewrites all the data on the disk, applying the compression option you specified at mount. So if you just changed the compression option and want to apply it to all files, a balance would just work.

                      What you want to do is probably doable with a user-space script, but I'm not 100% sure.
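Such a script could lean on btrfs' defragment-with-compression, which rewrites extents compressed with the requested algorithm. A minimal sketch, assuming btrfs-progs (`btrfs filesystem defragment -c`) and util-linux (`ionice`); the path and zstd choice are examples, not from the thread, and the helper only builds the command line:

```shell
# Sketch: recompress a btrfs tree in the background at idle priority.
# btrfs filesystem defragment -czstd rewrites file extents compressed
# with zstd; ionice/nice keep the rewrite from competing with real work.
recompress_cmd() {
    # ionice -c3: idle I/O scheduling class (only runs when disk is idle)
    # nice -n19: lowest CPU priority
    # -r: recurse; -czstd: recompress extents with zstd while rewriting
    echo "ionice -c3 nice -n19 btrfs filesystem defragment -r -czstd $1"
}

recompress_cmd /mnt/data
```

One caveat worth noting: defragmenting rewrites extents, so on a snapshotted filesystem it can unshare data between snapshots and inflate disk usage.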

                      Comment
