Linux 6.8 Looks To Upgrade Its Zstd Code For Better Compression Performance


  • #11
    Originally posted by Weasel View Post
    This is terrible, as decompression performance is far more important than compression. Should be looked into ASAP.
    Author of the patch set here. To be clear there is a ~1% regression in decompression speed. We regularly see 1-2% differences in decompression speed as we make releases, both slight improvements and slight regressions. These are mostly noise, due to slight differences in compilation, and are impossible to control. If these small differences add up to large regressions over time, we look into them, otherwise they tend to cancel out.

    • #12
      Originally posted by terrelln View Post

      Author of the patch set here. To be clear there is a ~1% regression in decompression speed. We regularly see 1-2% differences in decompression speed as we make releases, both slight improvements and slight regressions. These are mostly noise, due to slight differences in compilation, and are impossible to control. If these small differences add up to large regressions over time, we look into them, otherwise they tend to cancel out.
      Hi, can you explain why it is not yet possible to choose negative compression levels in the kernel? Things like file compression or zram need faster zstd levels.

      Thanks.

      • #13
        terrelln
        Sorry for going off topic. Is it possible to use Zstd decompression on old CPUs such as the 68000 and the MIPS chips in the PSX and N64? SH4, etc.

        • #14
          Originally posted by mat1210 View Post
          the article mentions a minor increase in decompression time, but the linked patch mentions a fix for it further down?
          This is my interpretation as well; a nice improvement in decompression time.

          Comparison of the two updates:
          Component   Level   D. time delta (patch 1)   "Read time delta" (patch 2)
          Btrfs       1       +1.8%                     -7.0%
          Btrfs       3       +3.2%                     -3.9%
          Btrfs       5       +0.0%                     -4.7%
          Btrfs       7       +0.4%                     -5.5%
          Btrfs       9       +1.5%                     -2.4%
          Squashfs    1       +1.0%                     -9.1%
          Note 1: I didn't copy the compression results over to this chart, which was posted in (what I call) patch 1.
          Note 2: Different wording is used for the benchmarks in patch 1 vs patch 2; it looks like the same benchmark to me, but I might be mistaken.
          Note 3: I assume "patch 1" was benchmarked against the previously current Zstd version in the Linux kernel and "patch 2" was benchmarked against "patch 1".


          Source: https://lore.kernel.org/lkml/2023112...gmail.com/t/#u


          Originally posted by terrelln View Post

          Author of the patch set here. To be clear there is a ~1% regression in decompression speed. We regularly see 1-2% differences in decompression speed as we make releases, both slight improvements and slight regressions. These are mostly noise, due to slight differences in compilation, and are impossible to control. If these small differences add up to large regressions over time, we look into them, otherwise they tend to cancel out.
          Or am I interpreting the patch notes wrong?


          Also, I see you are testing with an i9-9900K; do you know whether the same results are seen on AMD CPUs or even on different architectures?
          Last edited by Eudyptula; 21 November 2023, 09:08 PM.

          • #15
            Originally posted by HD7950 View Post

            Hi, can you explain why it is not yet possible to choose negative compression levels in the kernel? Things like file compression or zram need faster zstd levels.

            Thanks.
            Our team was just discussing this, as this thread shows that there is demand. The version of zstd in the kernel does support negative compression levels. Now it is up to the users of zstd to support selecting negative compression levels. I'd be happy to review patches to do that. We plan on chatting with the btrfs folks, and working on exposing negative compression levels there.
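
            For readers who want to see what a negative level looks like in practice, here is a minimal userspace sketch using libzstd's public API (the in-kernel API is separate; the payload and the -5 level are just illustrative choices):

            Code:
            #include <stdio.h>
            #include <string.h>
            #include <zstd.h>

            int main(void)
            {
                /* Illustrative payload; in practice this would be file or zram data. */
                const char src[] = "example payload for a quick negative-level demo";
                char dst[256];

                ZSTD_CCtx *cctx = ZSTD_createCCtx();
                if (!cctx)
                    return 1;

                /* Negative levels trade compression ratio for speed.
                 * ZSTD_minCLevel() reports the fastest (most negative) level available. */
                printf("fastest supported level: %d\n", ZSTD_minCLevel());
                ZSTD_CCtx_setParameter(cctx, ZSTD_c_compressionLevel, -5);

                size_t csize = ZSTD_compress2(cctx, dst, sizeof(dst), src, strlen(src));
                if (ZSTD_isError(csize)) {
                    fprintf(stderr, "compression failed: %s\n", ZSTD_getErrorName(csize));
                    ZSTD_freeCCtx(cctx);
                    return 1;
                }
                printf("compressed %zu -> %zu bytes at level -5\n", strlen(src), csize);

                ZSTD_freeCCtx(cctx);
                return 0;
            }
            The kernel-side work mentioned above is about letting subsystems such as btrfs pass a negative level down to the same machinery; the selection mechanism (for example a mount option) is the part that still needs to be wired up.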

            • #16
              Originally posted by timofonic View Post
              terrelln
              Sorry for going off topic. Is it possible to use Zstd decompression on old CPUs such as the 68000 and the MIPS chips in the PSX and N64? SH4, etc.
              Yes, it should be possible, and I don't think there should be any issues getting Zstd working, as long as you have sufficient memory available. If you run into problems with Zstd on these platforms, please open an issue so we can fix it.
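
              Memory is usually the real constraint on hardware that old. As a rough userspace sketch (the window-log knobs and the size-estimation helper shown here are library features; the 64 KiB figure is just an example), decoder memory can be bounded by limiting the window at compression time and capping it at decompression time:

              Code:
              /* The estimate helpers live in zstd.h's static-linking-only section. */
              #define ZSTD_STATIC_LINKING_ONLY
              #include <stdio.h>
              #include <zstd.h>

              int main(void)
              {
                  /* Compress with a small window so the decoder needs little RAM. */
                  ZSTD_CCtx *cctx = ZSTD_createCCtx();
                  ZSTD_CCtx_setParameter(cctx, ZSTD_c_compressionLevel, 19);
                  ZSTD_CCtx_setParameter(cctx, ZSTD_c_windowLog, 16); /* 64 KiB window */

                  /* The decompression context itself is small and its size can be queried. */
                  printf("DCtx size: %zu bytes\n", ZSTD_estimateDCtxSize());

                  /* A decoder can also refuse frames whose window exceeds a chosen limit. */
                  ZSTD_DCtx *dctx = ZSTD_createDCtx();
                  ZSTD_DCtx_setParameter(dctx, ZSTD_d_windowLogMax, 16);

                  ZSTD_freeCCtx(cctx);
                  ZSTD_freeDCtx(dctx);
                  return 0;
              }
              On targets like the 68000 or the PSX's MIPS core, the window size chosen at compression time is what largely determines how much RAM the decoder needs.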

              • #17
                Originally posted by Eudyptula View Post
                This is my interpretation as well; a nice improvement in decompression time.

                Initial patch:
                Component   Level   D. time delta (patch 1)   "Read time delta" (patch 2)
                Btrfs       1       +1.8%                     -7.0%
                Btrfs       3       +3.2%                     -3.9%
                Btrfs       5       +0.0%                     -4.7%
                Btrfs       7       +0.4%                     -5.5%
                Btrfs       9       +1.5%                     -2.4%
                Squashfs    1       +1.0%                     -9.1%
                Note 1: I didn't copy the compression results over to this chart, which was posted in (what I call) patch 1.
                Note 2: Different wording is used for the benchmarks in patch 1 vs patch 2; it looks like the same benchmark to me, but I might be mistaken.

                Source: https://lore.kernel.org/lkml/2023112...gmail.com/t/#u



                Or am I interpreting the patch notes wrong?

                Also, I see you are testing with an i9-9900K; do you know whether the same results are seen on AMD CPUs or even on different architectures?
                The "initial patch" results are actually the overall results for the entire series. The first patch's table shows a regression in decompression speed, which the second patch mitigates.

                I didn't benchmark the kernel (de)compression on AMD or AArch64. We do benchmark our releases on AArch64 and AMD, and we saw similar results for the v1.5.5 release, but I don't know if I have an exact number. If you do notice speed regressions on any architecture, please open an issue so that we can investigate. We pay the most attention to x86-64 speed, so it is definitely possible that regressions on other architectures slip through the cracks.
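
                For anyone who wants to sanity-check their own CPU, below is a rough, self-contained sketch of a decompression throughput measurement using the userspace library (the synthetic input, level 3, and the iteration count are arbitrary choices; the zstd CLI's built-in benchmark mode is the easier route in practice):

                Code:
                #include <stdio.h>
                #include <stdlib.h>
                #include <time.h>
                #include <zstd.h>

                int main(void)
                {
                    /* Synthetic, mildly compressible input; real runs should use real data. */
                    const size_t src_size = 1 << 20;
                    char *src = malloc(src_size);
                    for (size_t i = 0; i < src_size; i++)
                        src[i] = (char)(i % 64);

                    size_t bound = ZSTD_compressBound(src_size);
                    char *comp = malloc(bound);
                    size_t csize = ZSTD_compress(comp, bound, src, src_size, 3);

                    char *out = malloc(src_size);
                    ZSTD_DCtx *dctx = ZSTD_createDCtx();

                    /* Time repeated decompression of the same frame. */
                    const int iters = 200;
                    struct timespec t0, t1;
                    clock_gettime(CLOCK_MONOTONIC, &t0);
                    for (int i = 0; i < iters; i++)
                        ZSTD_decompressDCtx(dctx, out, src_size, comp, csize);
                    clock_gettime(CLOCK_MONOTONIC, &t1);

                    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
                    printf("decompression: %.1f MB/s\n", (double)src_size * iters / secs / 1e6);

                    ZSTD_freeDCtx(dctx);
                    free(src); free(comp); free(out);
                    return 0;
                }
                Numbers from a toy loop like this are only comparable between runs on the same machine with the same build, which is also why small release-to-release deltas tend to be treated as noise.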

                • #18
                  Originally posted by HD7950 View Post

                  I totally agree, and let me add that negative compression levels shouldn't exist. To avoid confusion, Zstd:1 should be the fastest compression mode and Zstd:64 (for example) the slowest.

                  Actually zstd:1 is not the best choice for an NVMe drive using Btrfs compression, as you can see here.
                  That's interesting, thanks.

                  However, this synthetic benchmark doesn't take into account some issues that emerge during real-life usage of compressed Btrfs filesystems.
                  In particular, compressed data chunks are limited to 128K (if I'm not wrong), and that creates a lot of metadata and a high degree of fragmentation.

                  This might be a minor issue on modern solid state drives, but it kills performance on HDDs.

                  • #19
                    Originally posted by HD7950 View Post
                    Hi, can you explain why it is not yet possible to choose negative compression levels in the kernel? Things like file compression or zram need faster zstd levels.
                    I think you already can: in Arch's mkinitcpio you can set any compression option with COMPRESSION_OPTIONS=(-1 --fast), for example. Haven't tested it, though.

                    • #20
                      Originally posted by terrelln View Post
                      Author of the patch set here. To be clear there is a ~1% regression in decompression speed. We regularly see 1-2% differences in decompression speed as we make releases, both slight improvements and slight regressions. These are mostly noise, due to slight differences in compilation, and are impossible to control. If these small differences add up to large regressions over time, we look into them, otherwise they tend to cancel out.
                      If it's noise, it's OK. I know stuff like code alignment can mess it up, and it's more or less out of your control.

                      Are you sure it's noise, though? I mean, did you touch the decompression code? If you did, did you try to bisect exactly where it happens? If it's completely unrelated, then yeah, it's okay to treat it as noise, since it means it's just unlucky alignment or a compiler issue.

                      Sometimes it's not very obvious.
