Linux 6.8 Looks To Upgrade Its Zstd Code For Better Compression Performance


  • Linux 6.8 Looks To Upgrade Its Zstd Code For Better Compression Performance

    Phoronix: Linux 6.8 Looks To Upgrade Its Zstd Code For Better Compression Performance

    Back in Linux 6.2 the in-kernel Zstd compression/decompression code was updated against the Zstd 1.5 upstream state. Now for the Linux 6.8 kernel in the new year the plan is for updating to Zstd 1.5.5 that should provide better compression performance...


  • #2
    That's great. Better and faster Zstd helps in a lot of places. I just wish BTRFS, Zswap, Zram, etc. would add negative-level support, since not everything has LZ4 and Zstd:1 isn't fast enough for use cases where raw throughput matters more than saving storage.
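
For reference, Btrfs today only accepts positive zstd levels (1-15) in its mount options. The commented-out line is a purely hypothetical sketch of what negative-level (fast mode) support might look like; the UUID and mount point are placeholders:

```
# /etc/fstab -- real syntax today: btrfs accepts zstd levels 1-15
UUID=xxxxxxxx  /data  btrfs  compress=zstd:1,noatime  0 0

# hypothetical syntax if negative (fast) levels were supported,
# mirroring the zstd CLI's --fast levels -- NOT valid today:
# UUID=xxxxxxxx  /data  btrfs  compress=zstd:-3,noatime  0 0
```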

    Comment


    • #3
      The article mentions a minor increase in decompression time, but doesn't the linked patch mention a fix for it just below?

      Comment


      • #4
        Originally posted by skeevy420 View Post
        That's great. Better and faster Zstd helps in a lot of places. I just wish BTRFS, Zswap, Zram, etc would add negative level support since not everything has LZ4 and Zstd:1 isn't fast enough for use cases where raw throughput is more important than more storage.
        I totally agree, and let me add that negative compression levels shouldn't exist. To avoid confusion, Zstd:1 should be the fastest compression mode and Zstd:64 (for example) the slowest.

        Actually, zstd:1 is not the best choice for an NVMe drive using Btrfs compression, as you can see here.

        Comment


        • #5
          Originally posted by HD7950 View Post

          I totally agree, and let me add that negative compression levels shouldn't exist. To avoid confusion, Zstd:1 should be the fastest compression mode and Zstd:64 (for example) the slowest.

          Actually, zstd:1 is not the best choice for an NVMe drive using Btrfs compression, as you can see here.
          Totally. That'd be a great change for Zstd 2.0.

          Comment


          • #6
            However, there is a minor increase in time for read+decompression times.
            This is terrible, as decompression performance is far more important than compression. Should be looked into ASAP.

            Comment


            • #7
              Originally posted by HD7950 View Post
              I totally agree, and let me add that negative compression levels shouldn't exist. To avoid confusion, Zstd:1 should be the fastest compression mode and Zstd:64 (for example) the slowest.
              While that is probably the most intuitive way, it also limits your ability to add faster compression methods without losing backwards compatibility.

              My idea would be to use floats. Zstd 0 would equal no compression at all, and 1 would be the standard (the best compromise between size and speed with the most basic algorithm). That way you can add 0.1, 0.13, etc. to extend it on the faster side and 5, 64, 22222 for slower modes. And that still gives you the possibility to sneak in newer, finer-grained options like 5.1234567, 5.1234568, ... while being sure that Zstd 1 or 5 will always give the same results.

              Otherwise you regularly have to recreate the number system.
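
The float-level scheme described above can be sketched as a small registry, purely as an illustration. Note that zstd itself actually uses signed integer levels (negative values exposed via `--fast`), so everything here, including the mode names, is hypothetical:

```python
# Sketch of the float-level idea: levels are ordered floats,
# 0.0 = store (no compression), 1.0 = the baseline trade-off.
# New modes can be registered between existing levels without
# renumbering anything already in use, so "level 1" or "level 5"
# always keeps its meaning.

registry = {
    0.0: "store",     # no compression
    1.0: "baseline",  # default speed/ratio trade-off
    5.0: "slow",
}

def register(level: float, name: str) -> None:
    """Add a new mode; existing levels keep their meaning."""
    if level in registry:
        raise ValueError(f"level {level} already defined")
    registry[level] = name

def resolve(level: float) -> str:
    """Pick the nearest defined mode at or below the request."""
    candidates = [lvl for lvl in registry if lvl <= level]
    return registry[max(candidates)]

# Faster-than-baseline modes slot in between 0 and 1:
register(0.1, "ultra-fast")
register(0.5, "fast")
# Finer-grained slow modes slot in between existing ones:
register(5.5, "slower")

assert resolve(1.0) == "baseline"    # stable across additions
assert resolve(0.3) == "ultra-fast"  # rounds down to 0.1
```

The key property is that inserting new levels never shifts existing ones, which is exactly the backwards-compatibility concern raised above.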

              Comment


              • #8
                Last Zstd release was in April.

                Is that normal?

                Comment


                • #9
                  Originally posted by Anux View Post
                  While that is probably the most intuitive way, it also limits your ability to add faster compression methods without losing backwards compatibility.

                  My idea would be to use float. Zstd 0 would equal to no compression at all and 1 should be the standard (best compromise between size and speed with the most basic algorithm). That way you can add 0.1 and 0.13 etc to extend it on the faster side and 5, 64, 22222 for slower modes. And that still gives you the possibility to sneak in newer finer grained options like 5.1234567, 5.1234568, ... while you can be sure that Zstd 1 or 5 will always give the same results.

                  Otherwise you regularly have to recreate the number system.
                  Levels 1-N tied to feature flags for new algorithms is what I'd do, similar to how zstd --ultra enables higher numbers. Just making stuff up for argument's sake: --quick could enable a faster, less-compressing algorithm and --slow a higher-compressing one. Those could then be implemented in the kernel the same way Zstd levels are used now, like "zstd:20:fast", "zstd:64:slow", or "zstd:22:ultra" in an fstab entry. If all the Zstd options were added to the kernel and implemented in such a manner, then LZ4 could even be used on BTRFS with "zstd:format=lz4".
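
As a toy illustration of the option syntax imagined above (none of these strings exist in the kernel today; "fast", "slow", "ultra", and "format=" are invented here for argument's sake), a parser might look like:

```python
# Hypothetical parser for mount-option strings like
# "zstd:20:fast" or "zstd:format=lz4" as imagined above.
# This mirrors how btrfs parses "compress=zstd:N" today,
# extended with made-up variant and format fields.

def parse_compress_opt(opt: str) -> dict:
    """Split 'algo[:level][:variant]' / 'algo:format=X' strings."""
    parts = opt.split(":")
    result = {"algo": parts[0], "level": None, "variant": None}
    for part in parts[1:]:
        if part.startswith("format="):
            # format= overrides the on-disk algorithm entirely
            result["algo"] = part.split("=", 1)[1]
        elif part.isdigit():
            result["level"] = int(part)
        else:
            result["variant"] = part
    return result

assert parse_compress_opt("zstd:20:fast") == {
    "algo": "zstd", "level": 20, "variant": "fast",
}
assert parse_compress_opt("zstd:format=lz4") == {
    "algo": "lz4", "level": None, "variant": None,
}
```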

                  Comment


                  • #10
                    Originally posted by Weasel View Post
                    This is terrible, as decompression performance is far more important than compression. Should be looked into ASAP.
                    With zram, I could see some nuance, but for files being written, it's pretty much guaranteed they'll be written far less frequently than they're read. That's the case for the lion's share of files. I'd actually love it if filesystems and packages offered something akin to LZ4HC for core files, programs, and other files that just don't get touched often. Sure, upgrades would be slower for whoever is compressing, but the ratios and especially the decompression speed are golden.

                    Comment
