Updated Zstd Implementation Merged For Linux 6.2


  • #11
    Originally posted by Danny3 View Post
    Finally!
    I've been asking for this for almost a year.
    But weren't the Zstd developers saying that they plan to release another Zstd version this month and then update the kernel to that?
    It's a shame they didn't do that.
    Like discussed here:
    https://github.com/facebook/zstd/iss...ent-1267381027
    I wonder if in-tree filesystems will have problems because of this, since zstd’s constants were changed in 1.5.0 such that the output from the compressor is not the same as it was in older versions. At least in ZFS, it is assumed that compress(decompress(old_compress(data))) = old_compress(data). This makes updating zstd potentially problematic without treating it as a different version of the compressor internally, which is something that ZFS implemented alongside zstd support so that future updates could be done.
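
    For anyone curious what that assumption looks like in practice, here is a minimal sketch in Python using the third-party zstandard bindings (the bindings, the level, and the sample payload are illustrative assumptions on my part; ZFS does this in C inside its own module):

    # Minimal sketch of the round-trip stability assumption described above:
    # recompressing the decompressed payload must reproduce the original
    # compressed bytes exactly. Assumes the third-party "zstandard" bindings
    # (pip install zstandard). A library upgrade that changes the compressor's
    # internal constants can break this equality even though decompression
    # still works fine.
    import zstandard as zstd

    def is_roundtrip_stable(old_blob: bytes, level: int = 3) -> bool:
        """Check compress(decompress(old_blob)) == old_blob for one zstd version."""
        data = zstd.ZstdDecompressor().decompress(old_blob)
        return zstd.ZstdCompressor(level=level).compress(data) == old_blob

    if __name__ == "__main__":
        payload = b"example payload " * 1024              # hypothetical sample data
        blob = zstd.ZstdCompressor(level=3).compress(payload)
        print(is_roundtrip_stable(blob))                   # True within a single zstd version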



    • #12
      Originally posted by Danny3 View Post

      LOL!
      Somebody has good observation skills!

      But when you have only 119 GiB of storage and not a high-end CPU, your only hope is BTRFS + Zstd compression, but that must not slow everything down, especially when extracting / compressing files or copying / moving folders with lots of things inside.

      And if you can install and play a few games that would be great!

      But to be able to do all that, considering the storage and CPU limitations, you'd better have the latest and greatest improvements in BTRFS and Zstd.

      BTRFS kept getting them, but Zstd did not.

      Now with Linux 6.2 I'm very happy that they both have them.

      Too bad that 6.1 is the LTS one as 6.2 seems to be really wonderful and the one that makes pretty much everyone happy!
      I use ZFS with zstd and a 1M recordsize on my machine for that. I get higher compression than btrfs can provide even with zstd, since btrfs is limited to compressing in 64KB blocks at a time, which harms the compression ratio. You can test this by manually using zstd to compress the Linux kernel at different block sizes. You should see that zstd compresses it much more efficiently with a 1MB block size than with a 64KB block size.
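
      A rough sketch of that test in Python, using the third-party zstandard bindings (the bindings, level 3, and the tarball path are illustrative assumptions; this mimics independent per-block compression, not what btrfs or ZFS literally run):

      # Compress a file in independent blocks and compare total output size at
      # 64 KiB vs 1 MiB block sizes; larger blocks should compress noticeably better.
      import zstandard as zstd

      def compressed_size(path: str, block_size: int, level: int = 3) -> int:
          """Total size after compressing each block of the file independently."""
          cctx = zstd.ZstdCompressor(level=level)
          total = 0
          with open(path, "rb") as f:
              while True:
                  block = f.read(block_size)
                  if not block:
                      break
                  total += len(cctx.compress(block))
          return total

      if __name__ == "__main__":
          path = "linux-6.1.tar"  # hypothetical uncompressed kernel tarball
          for bs in (64 * 1024, 1024 * 1024):
              print(f"{bs // 1024:>5} KiB blocks -> {compressed_size(path, bs):,} bytes")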

      That is not 100% accurate, since it does not do the padding that ZFS and btrfs do to round up to, say, the nearest 4KB sector, but it should still do a good job of showing the difference between the two in terms of space savings.
      Last edited by ryao; 20 December 2022, 01:53 AM.



      • #13
        Originally posted by ryao View Post

        I wonder if in-tree filesystems will have problems because of this, since zstd’s constants were changed in 1.5.0 such that the output from the compressor is not the same as it was in older versions. At least in ZFS, it is assumed that compress(decompress(old_compress(data))) = old_compress(data). This makes updating zstd potentially problematic without treating it as a different version of the compressor internally, which is something that ZFS implemented alongside zstd support so that future updates could be done.
        If that's true, then ZFS is incompatible with compression. That's never a viable assumption.



        • #14
          Originally posted by piorunz View Post
          Does Btrfs benefit from this also? Meaning, does filesystem compression use the updated Zstd, or does Btrfs have its own zstd built into its code?
          AFAICT it uses the kernel implementation, so it will benefit from this upgrade.
          Last edited by cynic; 20 December 2022, 07:47 AM.



          • #15
            Originally posted by cynic View Post

            AFAICT it uses the kernel implementation, so it will benefit from this upgrade.
            That's excellent news if true.



            • #16
              Originally posted by ryao View Post

              I use ZFS with zstd and a 1M recordsize on my machine for that. I get higher compression than btrfs can provide even with zstd, since btrfs is limited to compressing in 64KB blocks at a time, which harms the compression ratio. You can test this by manually using zstd to compress the Linux kernel at different block sizes. You should see that zstd compresses it much more efficiently with a 1MB block size than with a 64KB block size.

              That is not 100% accurate, since it does not do the padding that ZFS and btrfs do to round up to, say, the nearest 4KB sector, but it should still do a good job of showing the difference between the two in terms of space savings.
              I alternate between LZ4 and Zstd-19 on my ZFS volumes (1M recordsize, of course)... one for write-heavy and the other for read-heavy workloads. Since it's a "hidden" feature: have you tried recordsizes up to 16M? I've thought about it, but I've only seen minor bits of anecdotal data on Reddit.



              • #17
                I feel for all these people excited about using Zstd Zram so they can literally download more RAM. Seriously, I feel bad for those folks. The way I figure, having so little RAM that Zstd's ~2.5x compression ratio actually matters compared to LZ4's ~1.5x, so that you're compelled to limit your RAM speed to SSD levels or slower, must be a sucky position to be in; especially if it's a system that can't be upgraded. IMHO, once you have 16GB or more RAM, Zram for swap is more of a "just in case some shit needs swap" precaution than something that's actually necessary. That's why I use a 4GB LZ4 Zram with my 32GB of memory. I care about throughput over space savings, so all I need is some compression that enhances throughput, not a compressor that will slow throughput for better compression. Zstd is great, but it isn't designed for high-performance IO... just the O. The I is rather limited using the non-fast values.

                Zswap... I have mixed feelings about using Zstd there, since it uses RAM plus a backing drive, and the backing drive is likely slower than RAM. In that case we might as well throttle the compressor to the write speed of the lowest-common-denominator storage, which will probably mean Zstd 1 or 2 with the in-kernel options, assuming an SSD and a preference towards more throughput.
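
                If anyone wants to check whether that ~2.5x vs ~1.5x gap actually shows up for their workload, here is a minimal sketch that reads the zram stats from sysfs (it assumes a configured /dev/zram0 and the mm_stat column layout from the kernel's zram admin guide, where the first two fields are orig_data_size and compr_data_size in bytes):

                # Report the effective compression ratio of an active zram device.
                def zram_ratio(dev: str = "zram0") -> float:
                    with open(f"/sys/block/{dev}/mm_stat") as f:
                        fields = f.read().split()
                    orig, compr = int(fields[0]), int(fields[1])   # bytes stored vs bytes after compression
                    return orig / compr if compr else 0.0

                if __name__ == "__main__":
                    print(f"effective compression ratio: {zram_ratio():.2f}x")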
                Last edited by skeevy420; 20 December 2022, 10:21 AM.



                • #18
                  Originally posted by skeevy420 View Post
                  I feel for all these people excited about using Zstd Zram so they can literally download more RAM. Seriously, I feel bad for those folks. The way I figure, having so little RAM that Zstd's ~2.5x compression ratio actually matters compared to LZ4's ~1.5x, so that you're compelled to limit your RAM speed to SSD levels or slower, must be a sucky position to be in; especially if it's a system that can't be upgraded. IMHO, once you have 16GB or more RAM, Zram for swap is more of a "just in case some shit needs swap" precaution than something that's actually necessary. That's why I use a 4GB LZ4 Zram with my 32GB of memory. I care about throughput over space savings, so all I need is some compression that enhances throughput, not a compressor that will slow throughput for better compression. Zstd is great, but it isn't designed for high-performance IO... just the O. The I is rather limited using the non-fast values.
                  That's just you. More RAM is always good, even if it's "slower".



                  • #19
                    Originally posted by Danny3 View Post

                    But when you have only 119 GiB of storage and not a high-end CPU, your only hope is BTRFS + Zstd compression, but that must not slow everything down, especially when extracting / compressing files or copying / moving folders with lots of things inside.
                    Out of curiosity, do you tune the compressor for each individual partition/volume or do you just go with the default settings? For example, on my ZFS raid, games are stored on a Zstd-19 compressed volume with a whopping 85 MB/s write speed, but KDE kdesrc-build uses LZ4 for its build directory so I don't impact compile speeds.



                    • #20
                      Originally posted by Weasel View Post
                      That's just you. More RAM is always good, even if it's "slower".
                      There's a point of diminishing returns in regards to using Zram or Zswap on a high-memory system. For me that point is 16GB, with a setup of 4GB LZ4 Zram, 4GB /tmp, and 8GB RAM. If I'm on a system with more than that, then swap is more of a precaution than a necessity, so there isn't much point in adding an IO limiter as an out-of-memory precaution. If I'm on a system with less than 16GB, then Zstd becomes an option, since I'm starting to delve into the realm of actually running out of memory, and the extra space that Zstd can offer can be helpful and even necessary at the expense of slower memory/swapping write speeds.

