An Exciting Btrfs Update With Encoded I/O, Fsync Performance Improvements

  • #31
    Both BTRFS and ZFS are current gen, and the future belongs to bcachefs!

    It's the only tiered filesystem, and it supports fancy caching, encryption, erasure coding, a built-in volume manager, etc.



    • #32
      Originally posted by portablenuke View Post

      Things ZFS can do BTRFS can't: exporting space as a block device, parity RAID, convert a folder into a dataset, have nice tools.

      ZFS has the ability to expose space in the pool as a block device. Creating an 8GB ZFS volume is analogous to creating an 8GB logical volume with LVM. BTRFS doesn't have this ability.
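      For reference, the two analogous commands look roughly like this (the pool name `tank` and VG name `vg0` are made-up examples; a sketch, not meant to be run as-is):

```shell
# ZFS: carve an 8 GB zvol out of the pool "tank";
# it shows up as the block device /dev/zvol/tank/vol0
zfs create -V 8G tank/vol0

# LVM equivalent: an 8 GB logical volume in volume group "vg0",
# appearing as /dev/vg0/vol0
lvcreate -L 8G -n vol0 vg0
```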

      BTRFS is limited to RAID 1, RAID 10, or none. ZFS's RAIDZ can do parity striping across disks.

      ZFS can convert an existing folder into a data set and back. BTRFS requires the normal "rename and swap directories" dance when creating a subvolume from a normal directory.
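      The "rename and swap directories" dance mentioned above looks roughly like this on btrfs (directory names are hypothetical; the reflink copy is cheap on btrfs because no data blocks are duplicated):

```shell
# turn the plain directory "data" into a subvolume
btrfs subvolume create data.new
cp -a --reflink=always data/. data.new/   # metadata-only copy on btrfs
mv data data.old && mv data.new data      # swap the names
rm -rf data.old                           # drop the original directory
```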

      ZFS's tools cover the features nicely. BTRFS tools don't have everything covered. Specifically, qgroups are pretty raw.


      Things BTRFS can do ZFS can't: shrink the pool, use the full space between non-symmetric disks.

      ZFS pools are expected to grow, and the pool will allocate only as much space as the smallest disk even if the disk is larger.
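      The mixed-size advantage can be put in numbers with the usual btrfs raid1 rule of thumb (a sketch; the helper name and disk sizes are made up, sizes in TB, metadata overhead ignored):

```shell
# Rough usable-space estimate for btrfs raid1 on mixed-size disks.
# Rule of thumb: every extent is stored twice on different devices, so
# usable space is min(total/2, total - largest_disk).
btrfs_raid1_usable() {
  local total=0 largest=0 d
  for d in "$@"; do
    total=$(( total + d ))
    if (( d > largest )); then largest=$d; fi
  done
  local half=$(( total / 2 ))
  local rest=$(( total - largest ))
  if (( rest < half )); then echo "$rest"; else echo "$half"; fi
}

btrfs_raid1_usable 4 2 2   # 4: extents pair up across all three disks
btrfs_raid1_usable 8 2 2   # 4: only 4 TB of the 8 TB disk can be mirrored
```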


      I'm sure there is more, but these are big differences I've come across.
      nice, thanks for the answer!

      Another thing that btrfs can do and ZFS can't (I believe) is change the redundancy level on the fly.
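      For the record, that on-the-fly profile change is done with balance convert filters (the mount point is hypothetical; a sketch only):

```shell
# convert data and metadata to raid1 while the filesystem stays mounted
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/array
# watch progress
btrfs balance status /mnt/array
```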



      • #33
        Originally posted by Britoid View Post
        All these fancy ZFS features that if you use on anything other than enterprise-level SSDs you'll get extreme amounts of wear.

        BTRFS will likely never support raid5 or raid6 properly; it just doesn't fit well with its filesystem design. But btrfs raid works at the file level, so you can use raid1 with 4 drives and get usable space and redundancy similar to raid5.
        No you cannot: btrfs raid1 stores two copies of every extent, so your available space will always be 50%, while with raid5 you get (disks - 1) disks' worth of available space.
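        The difference is easy to put in numbers (a sketch with 4 x 4 TB drives, metadata overhead ignored):

```shell
# Usable space for 4 equal drives of 4 TB each.
n=4; size=4
raid1=$(( n * size / 2 ))    # every extent stored twice: 8 TB usable
raid5=$(( (n - 1) * size ))  # one drive's worth of parity: 12 TB usable
echo "raid1: ${raid1} TB, raid5: ${raid5} TB"
```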



        • #34
          Originally posted by Danny3 View Post
          If only they would upgrade the Zstd code too to the upstream version, everything would be perfect!
          I think Xanmod uses the updated version, if you are comfortable with using that. It had higher performance in the last Phoronix benchmark too, so best of both worlds.



          • #35
            Every time BTRFS is mentioned, someone from the ZFS arena appears claiming it's better. The OpenZFS driver for Windows hasn't received an update in 3 years, btw.

            I still sometimes boot into Windows to do special Windows-only tasks (I don't want to); WinBtrfs works pretty well, with the exception of a crash bug in specific cases which is being investigated/fixed at the moment.

            ZFS does sound like a better fit for server/workstation use where someone is doing fancy RAID and partitioning stuff. For the general desktop user, btrfs is just better imo.
            Last edited by theriddick; 22 March 2022, 05:10 PM.



            • #36
              Originally posted by portablenuke View Post
              ZFS has the ability to expose space in the pool as a block device. Creating an 8GB ZFS volume is analogous to creating an 8GB logical volume with LVM. BTRFS doesn't have this ability.
              In what way is exposing space in the pool as a block device different from creating a file in a filesystem and using it as a block device with the help of loopback?
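              The loopback alternative being referred to is the standard one (the file name is hypothetical; attaching loop devices typically needs root):

```shell
# create an 8 GB sparse backing file on the filesystem
truncate -s 8G backing.img
# attach it to the first free loop device and print its name (e.g. /dev/loop0)
losetup --find --show backing.img
# detach when done (substitute the device losetup printed)
losetup -d /dev/loop0
```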

              It is not BTRFS-related. IIRC, a similar observation was made by Dave Chinner (in a slightly different way) about supporting snapshots of a filesystem mounted on a loopback device backed by a reflinked file.



              • #37
                Originally posted by atomsymbol View Post

                ZFS supports using SSDs as caches for HDDs. The same can be achieved with bcache + btrfs/ext4/...
                True, this is one way of solving the performance issue when you mix cow/sync/raid5/6...

                Moreover, another gain is that setting up btrfs with raid plus bcache is a delicate operation when you want to keep all the redundancy that raid offers even in the case that a cache disk fails. ZFS takes care of these details; with btrfs+bcache the details are in the hands of the admin.



                • #38
                  skeevy420 and others: I'd like to know your setup for partitioning etc. You mentioned your root is EXT4 and you use ZFS for the others. I'd appreciate it if people could elaborate. Thanks!



                  • #39
                    I use BTRFS RAID-5. I have it scrubbing every week (specifically, two hours every day, followed by a scrub cancel to pause it and a resume the next day), which helps me sleep a little better at night.
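                    That pause/resume scheme maps onto the scrub subcommands like this (the mount point is hypothetical; the commands would typically be driven from cron or systemd timers):

```shell
btrfs scrub start /mnt/array    # kick off the weekly scrub
btrfs scrub cancel /mnt/array   # pause it after the two-hour window
btrfs scrub resume /mnt/array   # pick up where it left off the next day
btrfs scrub status /mnt/array   # check progress / last results
```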

                    I've had problems with BTRFS (most notably, delayed writes not being written, leading to data loss that could only be recovered because most of the content from the last six hours had been cached elsewhere), but I've not had problems with the RAID part. Indeed, upgrades like xxhash64 as a hashing option made it efficient, and RAID1 not using all disks can be useful too, if you don't need raid1c4 redundancy with four disks. (Hardware RAID would still be nice to have, but mostly thanks to its battery-backed write cache.)



                    • #40
                      Originally posted by Britoid View Post
                      All these fancy ZFS features that if you use on anything other than enterprise-level SSDs you'll get extreme amounts of wear.

                      BTRFS will likely never support raid5 or raid6 properly; it just doesn't fit well with its filesystem design. But btrfs raid works at the file level, so you can use raid1 with 4 drives and get usable space and redundancy similar to raid5.

                      I don't see them as "fancy features"; I've been using this on my workstation for the past (nearly) 4 years. I have 3 Intel 660p's in my Kubuntu installation: 1 for read caching, 2 for mirrored write caching in front of my 2 x 4TB drives.

                      I don't believe 660p's are "enterprise grade" and they don't have the largest TBW... I'd say it's doing just fine, and my workstation is performing like a beast where I get to enjoy the space of 4TB in a mirrored pool. I'm considering an upgrade of 2 more 4TB drives, exporting the pool and then putting 4 x 4TB into RAIDZ with my SSDs doing read/write caching.

                      This isn't to shit on BTRFS (which I used from 2011 to 2013), but OpenZFS has got all my practical needs covered really well!

