Bcachefs Lands Big Scalability Improvement, Disables Debug Option By Default


  • #31
    Originally posted by oleid:

    Apparently at least compression. You can only compress the whole file system.
    While it's nice that you can fiddle with compression per dataset in ZFS, a lot of people would be fine just setting the top level parent dataset to lz4 and not thinking about it again. In most cases it will be faster than no compression while still giving some nice space savings. For me at least, setting the recordsize per dataset is one of the killer knobs that ZFS allows you to turn. Your typical large file WORM data on a NAS? 1M is great. Your dataset for KVM VMs? 64K. The performance deltas can be pretty extreme when you tune these things properly for your workload.
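The tuning described above can be sketched with standard `zfs` commands (pool and dataset names here are hypothetical, not from the thread):

```shell
# Enable lz4 compression once at the top-level dataset; children inherit it.
zfs set compression=lz4 tank

# Large sequential WORM files on a NAS share: big records amortize metadata.
zfs create -o recordsize=1M tank/media

# KVM virtual machine images: smaller records reduce read-modify-write overhead.
zfs create -o recordsize=64K tank/vms

# Verify what a dataset actually inherited or overrode.
zfs get compression,recordsize tank/media
```

Because properties inherit downward, the "set it once at the top and forget it" approach for compression coexists cleanly with per-dataset `recordsize` overrides.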



    • #32
      Originally posted by pWe00Iri3e7Z9lHOX2Qx:

      While it's nice that you can fiddle with compression [...]
      Sorry, I meant encryption. I agree, per-dataset compression is not that relevant.
      Last edited by oleid; 09 November 2023, 12:57 AM.



      • #33
        Originally posted by skeevy420:

        You can do that with ZFS without homed... and if it has the same password as your home user, you can do that with PAM when you log in... it's in the Arch Wiki if you're curious...
        Seems to work, thanks! But oh boy, ZFS really is a lot more complicated than btrfs. On the other hand, certain things are more polished.
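For readers following along, the Arch-Wiki-style setup referenced above looks roughly like this (dataset names are examples; `pam_zfs_key` is the PAM module shipped with OpenZFS):

```shell
# Create an encrypted home dataset whose passphrase matches the login password.
zfs create -o encryption=on -o keyformat=passphrase \
    -o mountpoint=/home/alice tank/home/alice

# Then, in the distribution's PAM stack (e.g. /etc/pam.d/system-auth on Arch),
# have pam_zfs_key load/unload the key at login/logout:
#   auth     optional  pam_zfs_key.so  homes=tank/home
#   session  optional  pam_zfs_key.so  homes=tank/home
```

This is a sketch, not a drop-in config; the exact PAM file and module options depend on the distribution, so check the Arch Wiki page the poster mentions.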



        • #34
          Originally posted by skeevy420:

          It won't. While it's nice that this exists, it has similar mount and subvolume option limitations as that other buttery smooth in-tree fs where subvolumes inherit the parent mount's options so they don't always have per-subvolume options. It wouldn't surprise me if Bcachefs needs a 2.0 to fix that. BTRFS will likely need the same.
          The question is whether dynamic trees in Bcachefs require a change to the on-disk format or not. According to the author, that seems to be all that is needed for per-volume compression keys.



          • #35
            Can you run indexing benchmarks next time you test bcachefs? I've heard filelight and git really excel on this filesystem.



            • #36
              Originally posted by oleid:
              bcachefs seems to support compression, but how about encryption like ext4 via fscrypt interface?
              Fscrypt is not encryption, it is a compliance checkbox.

              Exposing metadata means it can only protect locally unique data (like saved passwords and session cookies). If an adversary has knowledge of the sizes and directory structure of some group of files from another source (such as if you uploaded them somewhere), they can detect the presence of those files on your "encrypted" ext4.
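The fingerprinting attack described above can be illustrated with a short sketch (the function and the size values are made up for illustration): fscrypt encrypts file contents and names, but per-file sizes and the directory tree shape remain visible, so an adversary who knows the sizes of a file set from elsewhere can test for its presence without any key material.

```python
from collections import Counter

def contains_known_set(observed_sizes, known_sizes):
    """Return True if every size in the known file set appears (with
    multiplicity) among the sizes observed on the encrypted filesystem.
    This uses only metadata that fscrypt leaves in the clear."""
    observed = Counter(observed_sizes)
    known = Counter(known_sizes)
    return all(observed[size] >= count for size, count in known.items())

# The adversary knows the exact sizes of a leaked document bundle...
leaked = [1_048_576, 20_480, 20_480, 337]
# ...and can enumerate file sizes on the target's "encrypted" ext4.
disk = [4096, 337, 1_048_576, 20_480, 8_192_000, 20_480]
print(contains_known_set(disk, leaked))  # → True: bundle likely present
```

Real attacks would also use the directory structure for stronger matching, but even bare size multisets are enough to flag distinctive file sets.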

