Arch Linux Powered CachyOS Now Defaults To Btrfs Rather Than XFS

  • #11
    Originally posted by muncrief View Post
    I'm curious as to why the default filesystem was changed from ZFS to Btrfs. From what I've read Btrfs is much more prone to errors and far less mature than ZFS. Am I incorrect?
    From what I understand they support ZFS, but it was never the default.

    Btrfs and ZFS are different tools for different tasks. If you're running a NAS with multiple drives then ZFS is the obvious choice. But on a desktop or laptop with one (or two) drives, Btrfs makes more sense.

    Comment


    • #12
      Originally posted by Chugworth View Post
      From what I understand they support ZFS, but it was never the default.

      Btrfs and ZFS are different tools for different tasks. If you're running a NAS with multiple drives then ZFS is the obvious choice. But on a desktop or laptop with one (or two) drives, Btrfs makes more sense.
      Thank you for your response Chugworth. That makes sense. Both my desktop and media server have a large amount of storage, around 6 TB and 20 TB respectively, and I rely on ZFS to detect silent corruption so ZFS is more appropriate in my case. But if Btrfs is reliable I can see that it makes more sense for the average user. It's built-in to the kernel as well, so that may have played into the default change. But so long as CachyOS continues to transparently support ZFS I will continue to use it and support it financially.

      Comment


      • #13
        Originally posted by Chugworth View Post
        From what I understand they support ZFS, but it was never the default.

        Btrfs and ZFS are different tools for different tasks. If you're running a NAS with multiple drives then ZFS is the obvious choice.
        Correction: if you are prepared to throw entire racks of disks into your NAS, then ZFS is the obvious choice

        Btrfs is far more flexible than ZFS when it comes to changing the topology of your pool. If you have, like me, started with just a single HDD and later expanded that storage to 2, then 3, then 4 disks — then I just do not see how you would have used ZFS without recreating the pool at each point.
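That kind of incremental Btrfs expansion is just a device add plus a rebalance; a minimal sketch, assuming a mounted filesystem (device name and mountpoint are illustrative):

```shell
# Add a new disk to an existing, mounted Btrfs filesystem
btrfs device add /dev/sdd /mnt/pool

# Rebalance so existing data and metadata spread across all devices;
# the convert filters can also change profiles (e.g. single -> raid1)
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/pool
```

The balance step is the expensive part, but the filesystem stays online and usable throughout.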


        In other words, ZFS might be infinitely more advanced (and don't get me wrong, it is), but all that is worth nothing if you can't use it within the physical (and, thus, financial) constraints you possess.
        Last edited by intelfx; 09 June 2024, 05:44 PM.

        Comment


        • #14
          Originally posted by Chugworth View Post
          From what I understand they support ZFS, but it was never the default.

          Btrfs and ZFS are different tools for different tasks. If you're running a NAS with multiple drives then ZFS is the obvious choice. But on a desktop or laptop with one (or two) drives, Btrfs makes more sense.
          I use CachyOS with ZFS on my desktop, and it makes a lot of sense assuming you know what you're doing. Take compression alone. Btrfs offers one global mount option, and then you have to run chattr on every file or directory you want treated differently from what's set in fstab, and keep track of everything you've done manually. With ZFS you can create multiple datasets per pool, each with its own settings. That lets you do things like dedicating high-compression datasets to PKGDEST, SRCDEST, or even the entirety of /usr. Do that and you can remove the package compression steps from makepkg while still storing the package and its sources with zstd-19 instead of the lz4 that $HOME uses...well, my $HOME. It's zstd-3 by default. You also avoid double-compressing, i.e. putting an already compressed file on a compressed file system. To see what's set, all you have to run is "zfs get compression" and you'll get something that looks like this:

          zpcachyos compression on default
          zpcachyos/ROOT compression lz4 local
          zpcachyos/ROOT/cos compression lz4 local
          zpcachyos/ROOT/cos/home compression lz4 local
          zpcachyos/ROOT/cos/root compression lz4 local
          zpcachyos/ROOT/cos/varcache compression lz4 local
          zpcachyos/ROOT/cos/varlog compression lz4 local
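For contrast, here's roughly what the two approaches described above look like on the command line (the paths and dataset names are illustrative, not what CachyOS ships):

```shell
# Btrfs: one global mount option in /etc/fstab, e.g.
#   UUID=...  /  btrfs  compress=zstd:3  0 0
# then per-directory/per-file overrides by hand:
btrfs property set /srv/packages compression zstd   # per-directory override
chattr +c /srv/packages/big.tar                     # legacy per-file flag

# ZFS: compression is a dataset property, inherited by child datasets
zfs set compression=lz4 zpcachyos/ROOT/cos
zfs set compression=zstd-19 zpcachyos/ROOT/cos/usr  # hypothetical dataset
zfs get compression zpcachyos                       # inspect the whole tree
```

With ZFS the setting lives in the pool metadata and follows the dataset; with Btrfs the overrides have to be tracked and reapplied manually.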


          Mind you, the CachyOS ZFS setup is extremely barebones; that's literally it, with lz4 set, and I think it could be configured better for desktop needs and user protections. By that I mean:
          • zpcachyos/ROOT/cos/varlog should probably have a quota set to prevent runaway logs from filling the drive. Or at least recommend that the user set one.
          • varlog and varcache should be moved to /ROOT/cos/VAR/varblah so all of /var can be LZ4. It's the guaranteed always read/write directories with potentially active data.
          • zpcachyos/ROOT/cos/usr should be created and set to Zstd-19. There's only a compression write penalty on package upgrades since it's read-only outside of that.
          • /usr/local should be symlinked to /var/usrlocal if the aforementioned is done; what Silverblue does.
          • /opt should be Zstd-19'd, too. It's almost always used as proprietary /usr with AUR stuff. Again, is essentially read-only outside of upgrading.
          • Hell, most of ROOT outside of /srv, /var, and /root could be Zstd-19 and nobody would notice outside of packages installing around 75-79mbps.
          • zpcachyos/HOME/$USER pointing towards /home/$USER would be nice. That would allow $HOME/.cache and $HOME to use different compressors.
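          A minimal sketch of the layout proposed above, assuming the CachyOS dataset names and a post-install setup (the quota value and username are illustrative, and migrating existing data into the new datasets is left out):

```shell
# Cap log growth so runaway logs can't fill the pool
zfs set quota=4G zpcachyos/ROOT/cos/varlog

# High compression for mostly-read-only trees; only written during upgrades
zfs create -o compression=zstd-19 zpcachyos/ROOT/cos/usr
zfs create -o compression=zstd-19 zpcachyos/ROOT/cos/opt

# Per-user home dataset so $HOME and $HOME/.cache can be tuned separately
zfs create zpcachyos/HOME
zfs create -o compression=zstd-3 -o mountpoint=/home/alice zpcachyos/HOME/alice
```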
          The way the Calamares ZFS module is written, none of that compression or quota stuff can be accomplished until post-install. We can only define one set of options for datasets, and my non-programming ass only knows that something between line 266 and the end needs changing to accommodate that. Perhaps in the header file, too. That's about as much as I know. Dear Chat GPT

          I've wondered how much $HOME/.cache and compression affect things like KDE window animations and Steam shader compiles...but that's why I use lz4 if there's any chance of RW happening.

          Desktop or server, the power of per-dataset options is what makes ZFS a superior choice over most everything else.

          Comment


          • #15
            Originally posted by muncrief View Post
            I'm curious as to why the default filesystem was changed from ZFS to Btrfs. From what I've read Btrfs is much more prone to errors and far less mature than ZFS. Am I incorrect?
            XFS!!! XFS to btrfs!

            Update: Oh! I did not see the rest of the replies in the second page. Never mind.

            Comment


            • #16
              Originally posted by intelfx View Post

              Correction: if you are prepared to throw entire racks of disks into your NAS, then ZFS is the obvious choice

              Btrfs is far more flexible than ZFS when it comes to changing the topology of your pool. If you have, like me, started with just a single HDD and later expanded that storage to 2, then 3, then 4 disks — then I just do not see how you would have used ZFS without recreating the pool at each point.

              In other words, ZFS might be infinitely more advanced (and don't get me wrong, it is), but all that is worth nothing if you can't use it within the physical (and, thus, financial) constraints you possess.
              Well, I did. Attaching a 2nd disk to a single disk to create a mirror or RAID0 is trivial. Google it.

              Going from that to a 3 disk RAID required some trickery, but it wasn't difficult. Add 3rd disk, detach 2nd disk, create fake disk, create pool with 3rd, 2nd, and fake, copy data, replace fake with first disk. See here for better instructions.

              Nowadays there's RAIDZ Expansion and Reflow.
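              The steps above, roughly, in commands. This is a hedged sketch: device names, the pool names, and the sparse-file size are illustrative, and the data copy in the middle is elided.

```shell
# Single disk -> mirror: trivial, one command
zpool attach tank /dev/sda /dev/sdb

# 2-disk mirror -> 3-disk raidz1 via a fake disk:
zpool detach tank /dev/sdb                  # break the mirror
truncate -s 4T /tmp/fake.img                # sparse file, same size as the real disks
zpool create tank2 raidz1 /dev/sdc /dev/sdb /tmp/fake.img
zpool offline tank2 /tmp/fake.img           # never actually write to the fake disk
# ...copy data from tank to tank2 (e.g. zfs send/recv), then destroy tank...
zpool replace tank2 /tmp/fake.img /dev/sda  # resilver onto the freed disk

# OpenZFS 2.3+: raidz expansion, no tricks needed
zpool attach tank2 raidz1-0 /dev/sdd
```

Note that between the detach and the final resilver the data has no redundancy, which is exactly the objection raised in the next reply.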

              Comment


              • #17
                Originally posted by skeevy420 View Post

                Well, I did. Attaching a 2nd disk to a single disk to create a mirror or RAID0 is trivial. Google it.
                I can imagine that attaching a second disk to create a mirror is trivial, but are you sure about RAID0?

                Anyway, good, that's one very trivial special case.

                Originally posted by skeevy420 View Post
                Going from that to a 3 disk RAID required some trickery, but it wasn't difficult. Add 3rd disk, detach 2nd disk, create fake disk, create pool with 3rd, 2nd, and fake, copy data, replace fake with first disk. See here for better instructions.
                And what's the redundancy of your pool while you are doing this "trickery"?

                (In case you are not sure, the answer is "none". Bonus points for calculating the extra I/O incurred by this operation and the total probability of encountering an uncorrectable error during this I/O.)

                Originally posted by skeevy420 View Post
                Nowadays there's RAIDZ Expansion and Reflow.
                1. This doesn't help with converting single/mirror into a raidz (I don't consider the disaster above a "solution", especially not for a filesystem that prides itself on reliability and data safety);
                2. This will forever degrade the performance of your pool due to an extra level of indirection, and will waste space occupied by the old (pre-reflow) data unless you rewrite it.
                Thanks, but I'll pass.
                Last edited by intelfx; 09 June 2024, 07:38 PM.

                Comment


                • #18
                  What is that default desktop picture Michael put in the article? I don't recognize the DE or WM?

                  Comment


                  • #19
                    Originally posted by kylew77 View Post
                    What is that default desktop picture Michael put in the article? I don't recognize the DE or WM?
                    That's GNOME

                    Comment


                    • #20
                      Originally posted by intelfx View Post

                      I can imagine that attaching a second disk to create a mirror is trivial, but are you sure about RAID0?

                      Anyway, good, that's one very trivial special case.
                      Yep, I'm sure. Unlike a mirror, you can't undo it once done, but if you have two HDDs of the same size it can be worth doing.

                      And what's the redundancy of your pool while you are doing this "trickery"?

                      (In case you are not sure, the answer is "none". Bonus points for calculating the extra I/O incurred by this operation and the total probability of encountering an uncorrectable error during this I/O.)
                      1. This doesn't help with converting single/mirror into a raidz (I don't consider the disaster above a "solution", especially not for a filesystem that prides itself on reliability and data safety);
                      2. This will forever degrade the performance of your pool due to an extra level of indirection, and will waste space occupied by the old (pre-reflow) data unless you rewrite it.
                      Thanks, but I'll pass.
                      There's not any redundancy before or after either, so what's your point? There's added safety and fault tolerance with extra disks, BTRFS, ZFS, or pick your poison, but not redundancy. Redundancy is a disk or two separate from what you're using.

                      If you're working within the limitations of a desktop form factor and your motherboard, sometimes you have to do what you're able, performance penalties be damned. The good part about ZFS is that your data should be safe during all of it. Mine was. Anecdotally, lots of other people have done that routine and their data was safe, too. It's just a disk to disk transfer and the file system should be irrelevant if your hardware isn't bad or your power doesn't go out.

                      And Expansion isn't the best option, I'll admit, but if you're running low on disk storage space or can't afford to buy 3 disks to create a new pool to add a 3rd disk in the most efficient manner possible or you're unwilling to do fake disk hacks, at least Expansion is an option.

                      Comment
