
An Exciting Btrfs Update With Encoded I/O, Fsync Performance Improvements

  • #31
    Originally posted by Danny3 View Post
    If only they would upgrade the Zstd code too to the upstream version, everything would be perfect!
    I think Xanmod uses the updated version, if you are comfortable with using that. It had higher performance in the last Phoronix benchmark too, so best of both worlds.

    • #32
      Every time BTRFS is mentioned, someone from the ZFS arena appears claiming it's better. The OpenZFS driver for Windows hasn't received an update for three years, by the way.

      I still sometimes boot into Windows for Windows-only tasks (I don't want to); WinBtrfs works pretty well there, with the exception of a crash bug in specific cases which is being investigated/fixed at the moment.

      ZFS does sound like a better fit for server/workstation use where someone is doing fancy RAID and partitioning setups. For the general desktop user, btrfs is just better, IMO.
      Last edited by theriddick; 22 March 2022, 05:10 PM.

      • #33
        Originally posted by portablenuke View Post
        ZFS has the ability to expose space in the pool as a block device. Creating an 8GB ZFS volume is analogous to creating an 8GB logical volume with LVM. BTRFS doesn't have this ability.
        In what way is exposing space in the pool as a block device different from creating a file in a filesystem and using it as a block device with the help of loopback?

        It is not BTRFS-related, but IIRC a similar observation was made by Dave Chinner (in a slightly different form) about supporting snapshots of a filesystem mounted on a loopback device backed by a reflinked file.
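        For comparison, the two approaches being contrasted above can be sketched roughly as follows (pool, dataset, and file names are hypothetical, and both halves require root):

        ```shell
        # ZFS: expose 8 GB of pool space as a block device (a zvol)
        zfs create -V 8G tank/blockvol          # appears as /dev/zvol/tank/blockvol

        # any filesystem: back a block device with a plain file via loopback
        truncate -s 8G /srv/blockvol.img        # sparse 8 GB file
        losetup --find --show /srv/blockvol.img # prints the loop device, e.g. /dev/loop0
        ```

        The practical difference is that a zvol participates in the pool's redundancy, compression, and snapshots natively, while a loop-backed file only inherits those properties indirectly from the filesystem that holds it.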

        • #34
          Originally posted by atomsymbol

          ZFS supports using SSDs as caches for HDDs. The same can be achieved with bcache + btrfs/ext4/...
          True, this is one way of solving the performance issue when you mix CoW/sync/RAID5/6/7...

          Moreover, setting up btrfs with RAID plus bcache is a delicate operation if you want to keep all the redundancy that RAID offers even in the case that a cache disk fails. ZFS takes care of these details; with btrfs+bcache, the details are the admin's responsibility.
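          As a rough sketch of what that admin responsibility looks like (device names are hypothetical, root is required, and a real setup needs care so the cache never becomes a single point of failure for both RAID legs):

          ```shell
          # create one cache set and two backing devices (bcache-tools)
          make-bcache -C /dev/nvme0n1            # cache set
          make-bcache -B /dev/sdb /dev/sdc       # backing devices -> /dev/bcache0, /dev/bcache1

          # attach both backing devices to the cache set by its UUID
          echo <cset-uuid> > /sys/block/bcache0/bcache/attach
          echo <cset-uuid> > /sys/block/bcache1/bcache/attach

          # build btrfs RAID1 on top of the cached devices
          mkfs.btrfs -d raid1 -m raid1 /dev/bcache0 /dev/bcache1
          ```

          Note that a single cache SSD in writeback mode holds dirty data for both legs of the RAID1 at once, which is exactly the failure coupling the post above warns about.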

          • #35
            skeevy420 and others: I'd like to know your setup for partitioning, etc. You mentioned your root is EXT4 and you use ZFS for the others. I'd appreciate it if people could elaborate. Thanks!

            • #36
              I use BTRFS RAID-5. I have it scrubbing every week (specifically, two hours every day, followed by a scrub cancel to pause it and a resume the next day), which helps me sleep a little better at night.
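              A minimal cron sketch of that pause/resume pattern (the mount point and times are hypothetical; `btrfs scrub start`, `cancel`, and `resume` are the relevant subcommands):

              ```shell
              # crontab: scrub two hours a day, resuming where the last window stopped;
              # if there is nothing to resume, start a fresh scrub instead
              0 2 * * *  btrfs scrub resume /mnt/array || btrfs scrub start /mnt/array
              0 4 * * *  btrfs scrub cancel /mnt/array
              ```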

              I've had problems with BTRFS - most notably, delayed writes not being written, leading to data loss that could only be recovered because most of the content from the last six hours had been cached elsewhere - but I've not had problems with the RAID part. Indeed, upgrades like xxhash64 as a hashing option have made it an efficient option, and RAID1 not using all disks can be useful too, if you don't need raid1c4 redundancy with four disks. (Hardware RAID would still be nice to have, but mostly thanks to its battery-backed write cache.)
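              The options mentioned above can all be chosen at mkfs time; a sketch, assuming four hypothetical disks, root privileges, and a btrfs-progs/kernel recent enough for xxhash checksums and the raid1c4 profile:

              ```shell
              # four disks: two-copy RAID1 for data, four-copy raid1c4 for metadata,
              # and xxhash64 checksums instead of the default crc32c
              mkfs.btrfs --csum xxhash -d raid1 -m raid1c4 /dev/sdb /dev/sdc /dev/sdd /dev/sde

              # after mounting, check how much usable space the profile combination yields
              btrfs filesystem usage /mnt/array
              ```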

              • #37
                Originally posted by Britoid View Post
                All these fancy ZFS features that, if you use them on anything other than enterprise-level SSDs, you'll get extreme amounts of wear.

                BTRFS will likely never support RAID5 or RAID6 properly; it just doesn't fit well with its filesystem design. But btrfs RAID is file-level, so you can use RAID1 with 4 drives and get usable space and redundancy similar to RAID5.

                I don't see them as "fancy features"; I've used this on my workstation for the past (nearly) 4 years. I have 3 Intel 660p's in my Kubuntu installation: 1 for read caching, 2 for mirrored write caching for my 2 x 4TB drives.

                I don't believe 660p's are "enterprise grade", and they don't have the largest TBW... I'd say it's doing just fine, and my workstation is performing like a beast where I get to enjoy the space of 4TB in mirrored RAID. I'm considering an upgrade of 2 more 4TB drives, exporting the pool and then putting 4 x 4TB into RAID-Z with my SSDs doing read/write cache.

                This isn't to shit on BTRFS (which I used from 2011 to 2013), but OpenZFS has all my practical needs covered really well!
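                That layout (one read-cache SSD, two SSDs as a mirrored write log, two mirrored 4TB drives) maps onto standard zpool commands roughly like this (pool and device names are hypothetical):

                ```shell
                # two 4TB drives as a mirror
                zpool create tank mirror /dev/sda /dev/sdb

                # one SSD as L2ARC read cache
                zpool add tank cache /dev/nvme0n1

                # two SSDs as a mirrored SLOG (intent log for synchronous writes)
                zpool add tank log mirror /dev/nvme1n1 /dev/nvme2n1
                ```

                Strictly speaking, the SLOG only absorbs synchronous writes, so "mirrored write caching" is an approximation of what it does.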

                • #38
                  It's going to be real fun to see the BTRFS reactions when OpenZFS lands the patchset for adding disks to a RAID-5/6 later this year.

                  Not only is RAID-5/6 terminally broken on BTRFS, but expansion has been probably the only real feature BTRFS has had over ZFS.

                  At this rate somebody'll evolve RedoxFS into the next ZFS replacement and merge that into the kernel before BTRFS's RAID is usable. In other words: the end of the universe.

                  • #39
                    Originally posted by Developer12 View Post
                    It's going to be real fun to see the BTRFS reactions when OpenZFS lands the patchset for adding disks to a RAID-5/6 later this year.

                    Not only is RAID-5/6 terminally broken on BTRFS, but expansion has been probably the only real feature BTRFS has had over ZFS.

                    At this rate somebody'll evolve RedoxFS into the next ZFS replacement and merge that into the kernel before BTRFS's RAID is usable. In other words: the end of the universe.
                    Most of the anti-ZFS posts in this thread read like Linus-inspired pedantic dick waving that misses the forest for the trees. At the end of the day, ZFS is used for huge storage arrays of mission-critical data in production. Nobody in their right mind would do that with BTRFS; I wouldn't even do that with the 8 disks in the old T630 sitting next to me. There's a reason even solutions targeted more at the SMB (not the protocol) space, like Proxmox or TrueNAS SCALE, use ZFS.

                    This doesn't mean BTRFS is bad; it just doesn't handle this use case well at all, and storing lots of data in a redundant/efficient way is an extremely important use case. BTRFS is great for root - I'm typing this on a Tumbleweed system and was glad when Fedora made it the default. But claiming that ZFS doesn't do anything important that BTRFS can't is just asinine.

                    • #40
                      Originally posted by Developer12 View Post
                      It's going to be real fun to see the BTRFS reactions when OpenZFS lands the patchset for adding disks to a RAID-5/6 later this year.

                      Not only is RAID-5/6 terminally broken on BTRFS, but expansion has been probably the only real feature BTRFS has had over ZFS.
                      When I asked in the other BTRFS thread, someone mentioned that the extent tree v2 changes allow the RAID5/6 issues to be fixed (although the patches specific to RAID5/6 haven't been submitted yet). In that case it isn't terminally broken. It would be good if someone could actually confirm this.
