Btrfs Zstd Compression Benchmarks On Linux 4.14


  • #11
    I've got a server which is primarily a file server; CPU usage and the like aren't my major concerns, but compression ratios are. When the amount of data you're storing is measured in terabytes, you can fit a lot more on your disks if the compression is more aggressive.

    I started out with lzo because I didn't know any better when I first used btrfs: I just saw it on the Arch wiki or in an online tutorial, tried it, and it worked (and I got noticeably more data on my disks). I've since enabled zlib on the disk I added most recently. I'm very happy with the results.
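    For anyone curious, enabling it is just a mount option; a minimal sketch of what I use (the device and mount point here are illustrative):

        # /etc/fstab entry enabling zlib compression on a btrfs data disk
        /dev/sdb1  /mnt/data  btrfs  compress=zlib  0  0

        # or switch an already-mounted filesystem; only files written
        # afterwards are compressed with the newly chosen algorithm
        sudo mount -o remount,compress=zlib /mnt/data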
    Last edited by kaprikawn; 14 November 2017, 08:34 AM. Reason: typo



    • #12
      I wonder as well about the compression ratio on a file server in a real-world scenario. In my case, I'm interested in whether it is worth switching from lzo to zstd.
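      I suppose one way to get a hard number on my own data before deciding would be the compsize tool, which reports actual on-disk versus uncompressed sizes per compression type; something like this (the path is illustrative):

        # summarise compression ratio for everything under a path
        sudo compsize /srv/files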



      • #13
        Originally posted by phoronix View Post
        Here are some benchmarks of Zstd Btrfs compression compared to the existing LZO and Zlib compression mount options.
        The 'Compile Bench' test shows lower numbers with the btrfs defaults and, at the same time, higher CPU utilization. How is that possible? Which cpufreq governor was in use? Performance, or something else?
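        For reference, the active governor can be read straight from sysfs:

          # print the cpufreq governor currently in use on each core
          cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor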



        • #14
          The real advantage of zstd over LZO is the compression ratio.



          • #15
            Fedora just released 4.14 through the standard update channels, and I have been pleased with btrfs with zstd compression enabled. I used the standard compression, zlib, for a while, and while I was happy with the compression ratio, the performance on my laptop's small SSD would leave me hanging from time to time. After switching from zlib to zstd, I have not noticed any slowdowns, and it feels almost as fast as when I had ext4 on the system. Zstd is an impressive compression algorithm, and these benchmarks don't do it justice for everyday usage.
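            For anyone wanting to do the same: the switch itself is just a mount option change, and existing files stay zlib-compressed until they are rewritten, so as I understand it you can recompress them with defragment (the paths here are illustrative; note that defragmenting can unshare reflinked or snapshotted data):

              # remount with zstd (or set compress=zstd in /etc/fstab)
              sudo mount -o remount,compress=zstd /

              # optionally recompress existing files in place
              sudo btrfs filesystem defragment -r -czstd /home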



            • #16
              Originally posted by s_j_newbury View Post

              There was a patch enabling lz4 floating around for a while. I gave it a go and found it worked well, but lz4hc support was buggy and would fail on decompression at times. I believe the code wasn't accepted upstream because of the unresolved lz4hc issue, which presumably could also have affected standard lz4 compression, even though it never showed up there.
              From the btrfs wiki:
              The LZ4 algorithm was considered but has not brought significant gains.



              • #17
                Originally posted by kaprikawn View Post
                I've got a server which is primarily a file server; CPU usage and the like aren't my major concerns, but compression ratios are. When the amount of data you're storing is measured in terabytes, you can fit a lot more on your disks if the compression is more aggressive.

                I started out with lzo because I didn't know any better when I first used btrfs: I just saw it on the Arch wiki or in an online tutorial, tried it, and it worked (and I got noticeably more data on my disks). I've since enabled zlib on the disk I added most recently. I'm very happy with the results.
                Do you notice any improvement in startup times for the system, or maybe for apps, compared to no compression at all?



                • #18
                  Originally posted by Hi-Angel View Post
                  Do you notice any improvement in startup times for the system, or maybe for apps, compared to no compression at all?
                  The system is a headless server, sorry, so I can't help with these questions. The disks I have btrfs on are just big HDDs with videos and ROMs and other stuff on them that I access through Samba or NFS. I don't even remember whether the OS disk is formatted with btrfs.



                  • #19
                    Originally posted by kaprikawn View Post

                    The system is a headless server, sorry, so I can't help with these questions. The disks I have btrfs on are just big HDDs with videos and ROMs and other stuff on them that I access through Samba or NFS. I don't even remember whether the OS disk is formatted with btrfs.
                    You're not really going to gain anything, and it may even slow things down, if your content is already highly compressed. If you store your ROMs uncompressed that can work, but then deduplication is an even bigger win if you have many similar ROMs.
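                    If you go that route, an offline deduplicator such as duperemove can be pointed at the ROM directory afterwards; roughly (the path is illustrative):

                      # scan recursively (-r) for duplicate extents and submit
                      # them to the kernel for block-level deduplication (-d)
                      sudo duperemove -dr /mnt/roms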



                    • #20
                      Originally posted by s_j_newbury View Post

                      You're not really going to gain anything, and it may even slow things down, if your content is already highly compressed. If you store your ROMs uncompressed that can work, but then deduplication is an even bigger win if you have many similar ROMs.
                      Btrfs disables compression for files where compressing brings no benefit. As of now it is determined like this: btrfs tries to compress the first n bytes (I don't remember the exact n), and if that doesn't shrink them, it marks the file as not worth compressing. This sometimes gives false positives, so it may change in the future, but at least it does sort out cases like the one you mentioned.
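                      For completeness, there is also a force variant of the mount option that bypasses this heuristic entirely; roughly (the mount point is illustrative):

                        # compress=zstd lets the kernel skip files it judged
                        # incompressible; compress-force=zstd compresses every extent
                        sudo mount -o remount,compress-force=zstd /mnt/data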
