Btrfs Sees Minor Performance Optimizations With Linux 6.12


  • #11
    Originally posted by ahrs View Post
    Not by default, it's still considered experimental and yes that's a conservative approach. There's nothing wrong with investing in ZFS but they could have deployed BTRFS by default today just like Fedora and OpenSUSE (at least until the ZFS support is in good shape to be made the default).
    Perhaps, but they appear to be driving hard on ZFS for Canonical...

    I'm running TW on my desktop with the default BTRFS, which is OK, and one of my notebooks runs CachyOS with bcachefs, which gets used from time to time (and updated at least once a week; it's an Arch base, and bad things tend to happen if you, say, let it go for 8 months with no updates... generally recoverable, but a PITA). I can't do direct comparisons since the desktop is booting (for now) off of spinning rust and the notebook is NVMe. I plan to re-image the desktop to a SATA SSD as time permits, which will let me do a little checking of how well that works; otherwise it'll have to be a fresh install. Really, the main difference I notice between OSes and between SSD/NVMe is boot time: once things are cached in memory there is no real noticeable difference.

    TW on rust, though, is excruciatingly slow, and sleep got broken on the 020924 update and is still broken, so it's even worse now: leave it on all the time, or shut it down and reboot. By broken sleep I mean that it sleeps fine, wakes up, displays SDDM, but then reboots. I haven't had the time (or didn't want to spend the time) to check further in the logs. I was only able to ssh in once and noticed nothing amiss in dmesg, but since then I've not been fast enough to ssh in, and I'm not sure the network actually comes back before the reboot.



    • #12
      Originally posted by Danny3 View Post
      Too bad that big mainstream distros like Debian, Ubuntu, OpenSUSE and Fedora don't use it by default!
      BTRFS features at this point greatly outweigh whatever downsides it may still have.

      As for its compression mode, I wonder how far out of sync the in-kernel Zstd code is again, compared to upstream, which is at version 1.5.6.
      It has piss-poor compression compared to OpenZFS. By that I mean there's no native LZ4, none of the faster Zstd levels are available, and you have to pick one compressor for the entire file system, so you can't do advanced file system management like disabling compression in makepkg, setting the file system holding /path/to/packages to Zstd 19, and using LZ4 for places like /var and /home. It also means you'll compress your package with makepkg and then BTRFS will try to compress it again, which causes disk write lag from the Zstd compression testing, since it doesn't have Early Abort like OpenZFS. That compression tip applies to any program that saves compressed files.

      I say all that while my makepkg directory is using OpenZFS with LZ4 and makepkg is set to use Zstd with:

      Code:
      COMPRESSZST=(zstd -c -T0 --ultra -22 --long --auto-threads=logical -)
      That compresses packages faster than OpenZFS with Zstd 19. --long adds a 2GB memory requirement for decompressing the files in exchange for 1-5% more compression. -T0 only uses 8 threads by default on my 7800X3D; setting --auto-threads=logical lets it use all 16 threads.
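      For anyone curious, the ZFS side of that boils down to per-dataset properties. A rough sketch; the pool and dataset names here are made up for the example:

      Code:
      # keep the build dir on cheap LZ4 since makepkg already does the heavy Zstd work
      zfs set compression=lz4 tank/makepkg
      # heavier compression only where it pays off
      zfs set compression=zstd-19 tank/packages
      # verify
      zfs get compression tank/makepkg tank/packages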

      If you're overly keen you can use two compressors with BTRFS: the one set in the mount options, plus chattr to flag files one at a time for zlib-3. Using chattr for compression disables COW on that file, which kind of defeats the purpose of using BTRFS and a COW file system...

      If you need a single file system where one codec, set one way, fits every single use case, BTRFS can work well. The problem with that scenario for desktops is that Zstd will be set somewhere between 2 and 8: 2 for an SSD, 8 for an HDD. An NVMe drive would need Zstd at something like the negative "fast" levels (-500 or beyond), which BTRFS doesn't expose, to not be write-speed throttled by the compressor. If you use an NVMe root with BTRFS and any form of compression enabled, you'll have compressor-throttled writes.
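      To be concrete about the "one codec set one way" part, this is the usual mount-option form; the UUID and level are placeholders:

      Code:
      # /etc/fstab - one compressor and one level for the whole filesystem
      UUID=xxxx-xxxx  /  btrfs  defaults,noatime,compress=zstd:2  0 0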

      The only compressor that should be used when high-speed writes matter is LZ4, since anything else, including writes without compression, takes a write penalty. That limits us to F2FS and OpenZFS, or DM-VDO plus something else with its own compression disabled.
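      For the F2FS option, LZ4 is selected at mount time and files are opted in per extension or via chattr. A sketch, assuming a kernel built with F2FS compression support; the device and paths are placeholders:

      Code:
      # enable LZ4 compression on an f2fs volume
      mount -t f2fs -o compress_algorithm=lz4,compress_extension=txt /dev/nvme0n1p2 /mnt
      # opt a directory in manually so new files under it get compressed
      chattr -R +c /mnt/data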

      The fact that we have LZ4 via DM-VDO for any Linux file system makes BTRFS's built-in compression rather moot given the chattr/COW limitations. Nowadays people will almost always be better off using LZ4 via DM-VDO underneath BTRFS, disabling BTRFS compression entirely, and having makepkg or any other program use a specialized Zstd command, like I do with OpenZFS and its native LZ4.
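      That stacking looks roughly like this with LVM's VDO integration; take it as a sketch, with the volume group name and sizes invented for the example:

      Code:
      # VDO-backed logical volume: dm-vdo handles compression (and dedup) below the fs
      lvcreate --type vdo -n vdo0 -L 400G -V 800G vg0
      # plain btrfs on top, with btrfs's own compression left off
      mkfs.btrfs /dev/vg0/vdo0
      mount -o noatime /dev/vg0/vdo0 /mnt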

      The BTRFS downsides really suck if you've used OpenZFS for any significant amount of time, long enough to make you overthink how data is written, where, and why.



      • #13
        Originally posted by skeevy420 View Post

        ...
        I have a hard time finding any use for zstd compression levels above 6. The gains were not that substantial (even with recordsize=1M), and my system always crawled to a near halt whenever I dared to write anything into subvols with a high zstd level set (on a 7840HS, that is). Perhaps this works well for backups of specific files and such, though I can already use borg for that purpose on btrfs.
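        For context, this is the kind of per-dataset tuning I mean; the pool and dataset names are placeholders:

        Code:
        # OpenZFS sets the zstd level per dataset
        zfs set compression=zstd-6 tank/archive
        zfs set recordsize=1M tank/archive
        zfs get compression,recordsize tank/archive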

        On the other hand, I really missed the ability to run offline deduplication tools like duperemove, and reflinks - which are possible now, although hidden behind a flag, explicitly requiring cp --reflink=always when copying across subvols, thus also not automatically working with Dolphin and such, and being less space-efficient than btrfs due to the implementation.
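        On btrfs that workflow is just the following (paths here are placeholders):

        Code:
        # offline dedupe of duplicate game assets on btrfs
        duperemove -rd /mnt/games
        # explicit reflink copy: instant, shares extents until either copy is modified
        cp --reflink=always base_game.pak modded_game.pak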

        For my use cases (a bunch of games sharing a lot of identical assets like Valve games, multiple unmodded and modded copies of the same game, Unity projects, etc.), those features translate into much more significant space savings than higher compression levels and block sizes do.



        • #14
          <rant>
          CoW filesystems are wrong
          </rant>



          • #15
            Originally posted by coder View Post
            Finally! I was naively expecting folios to be somewhat of an overnight win, but I guess each filesystem driver needs to be updated to use them.
            Moving to folios can be done quickly, but actually enabling the full benefits means large folios, which takes more effort.

            That being said, it's important to note that there's been a bug lurking in large folios in XFS, which is looking like a core issue. See the thread avis mentioned recently.



            Until this is identified I'd actually be pretty nervous about upgrading to this new kernel, although this bug doesn't seem to corrupt data, just livelock the system. [edit: just catching up with the thread, it seems this issue has been pinpointed to a bug in xarrays]
            Last edited by fitzie; 18 September 2024, 03:18 PM. Reason: add note on latest update from the lore thread



            • #16
              I'd really like to see btrfs finally implement the originally promised hybrid (SSD+HDD) hot relocate feature. How hard could it be? The driver on a multi-disk FS has to decide where to place a new file anyway (on the SSD), while files of low interest (no recent read/writes) could be moved away to the HDD.



              • #17
                Originally posted by Gryffus View Post
                <rant>
                CoW filesystems are wrong
                </rant>
                You could drop the rant and explain why

                Though personally I've never used one and continue not to.

                My FS history is quite stupid really:

                FAT32 -> NTFS
                ext2 -> ext3 -> dabbling with XFS for a short while, it was crap in the early 00s -> back to ext4.



                • #18
                  Originally posted by skeevy420 View Post
                  It has piss-poor compression compared to OpenZFS...
                  Actually BTRFS has 4 ways of setting compression

                  1. in fstab with the compress=zstd:level|lzo|zlib
                  2. with btrfs filesystem defrag -czstd object
                  3. with chattr +c object
                  4. with btrfs property set object compression zstd

                  That is way too many if you ask me, and I honestly think that only number 4 is the correct way of setting, changing or clearing the desired compression setting.
                  (And yes, I know that simply setting the property is not enough and you have to read/write to apply the compression)
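                  For example, roughly (the path is just a placeholder):

                  Code:
                  # method 4: set compression as a per-object property
                  btrfs property set /mnt/data/logs compression zstd
                  btrfs property get /mnt/data/logs compression
                  # existing data only gets compressed once it is rewritten, e.g. via defrag
                  btrfs filesystem defrag -r -czstd /mnt/data/logs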

                  You can actually set different compression algorithms on different objects such as /home, /var, /usr, /here, /there, etc...
                  And I think you are incorrect that using option 3 disables COW on the file. +C disables COW, +c enables compression.



                  http://www.dirtcellar.net



                  • #19
                    Originally posted by waxhead View Post

                    Actually BTRFS has 4 ways of setting compression

                    1. in fstab with the compress=zstd:level|lzo|zlib
                    2. with btrfs filesystem defrag -czstd object
                    3. with chattr +c object
                    4. with btrfs property set object compression zstd

                    That is way too many if you ask me, and I honestly think that only number 4 is the correct way of setting, changing or clearing the desired compression setting.
                    (And yes, I know that simply setting the property is not enough and you have to read/write to apply the compression)

                    You can actually set different compression algorithms on different objects such as /home, /var, /usr, /here, /there, etc...
                    And I think you are incorrect that using option 3 disables COW on the file. +C disables COW, +c enables compression.

                    The GP is right in that all 4 of those methods are more or less shit. ZFS' properties are indeed better — more flexible, more coherent and more consistent than Btrfs' 10 half-assed ways of adjusting the filesystem behavior.

                    The good news is that the mechanisms are there (even more, the mechanisms are better than what ZFS has). Someone just needs to invent some actually good APIs and tooling to manipulate those mechanisms instead of the existing "made by Predators for Aliens" crap. I actually have some private patches to that end, but nobody will accept them in their current form, so they stay private (until I get some free time and energy on my hands). Perhaps I should start a Patreon page...
                    Last edited by intelfx; 18 September 2024, 08:46 PM.



                    • #20
                      Originally posted by browseria View Post

                      Fedora switched to BTRFS by default in Fedora 33, which was in 2020 - that's 4 years now.
                      OpenSUSE did it even earlier than that - in January 2018 - that's 6 years ago.
                      I don't know what you are talking about.

                      ref. https://fedoramagazine.org/btrfs-coming-to-fedora-33/
                      Ubuntu doesn't use it by default, but offers it as an installation option and it's fully integrated (installation into subvolumes, etc).

