
Updated Zstd Implementation Merged For Linux 6.2


  • #21
    Originally posted by skeevy420 View Post
    There's a point of diminishing returns with Zram or Zswap on a high-memory system. For me that's 16GB, set up as a 4GB lz4 Zram, a 4GB /tmp, and 8GB of plain RAM. If I'm on a system with more than that, swap is more of a precaution than a necessity, so there's no point in adding an IO limiter as a form of out-of-memory insurance. If I'm on a system with less than 16GB, Zstd becomes an option, since I'm starting to delve into the realm of actually running out of memory, and the extra space Zstd can offer can be helpful and even necessary at the expense of slower memory/swapping write speeds.
    1 word: tmpfs.

    Here's a quickie: have you ever stored throwaway VMs and containers entirely in tmpfs?

    If not, then you're clearly using your PC completely differently than me.
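
    For anyone curious, a minimal sketch of that kind of setup (the mount point, sizes, and image path are all made up for illustration):

    mount -t tmpfs -o size=32G tmpfs /mnt/scratch    # RAM-backed, vanishes on unmount/reboot
    qemu-img create -f qcow2 /mnt/scratch/throwaway.qcow2 20G
    qemu-system-x86_64 -enable-kvm -m 4G \
        -drive file=/mnt/scratch/throwaway.qcow2,format=qcow2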

    Comment


    • #22
      Originally posted by Weasel View Post
      1 word: tmpfs.

      Here's a quickie: have you ever stored throwaway VMs and containers entirely in tmpfs?

      If not, then you're clearly using your PC completely differently than me.
      Yes. I've copied entire games into tmpfs. On my last system I used to run a 30GB modded Skyrim from tmpfs. IMHO, VMs and other large things run from RAM are niche write-once, read-many use cases that benefit from higher compression ratios. It's the same logic as my games being stored under Zstd-19; see the sketch below.

      Still, I think that for generic, unknown use cases LZ4 is better*, especially if the system has RAM to spare and we're talking about my original use case: a compressor for a backing swap drive that is likely unnecessary. If you have a niche use like copying 30GB of disk images or games or ??? into RAM before running them, by all means use a different compressor geared for that use.

      *But "better" is subjective and varies by use case, and we're talking about two very different ones: a very minimally used Zswap versus an intentionally saturated tmpfs. Of course those wildly different scenarios use different tunings for "better".
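
      As a sketch of that per-workload split (the pool and dataset names here are made up; zstd-19 and lz4 are both valid OpenZFS compression settings):

      zfs create -o recordsize=1M -o compression=zstd-19 tank/games    # write once, read many
      zfs create -o compression=lz4 tank/scratch                       # write heavy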

      Comment


      • #23
        Well sure, LZ4 is better when you access the swap more frequently. But in general, my use case for ZRAM is to free up some of my RAM by compressing its barely-used portions (i.e. the original intention of swap).

        I could use a specialized compressed filesystem, mind you, but I prefer swap because it's transparent with tmpfs. I don't have to explicitly put the stuff I want compressed (and don't access frequently) into zram; it just happens transparently and automatically for anything that's barely accessed, so it's convenient.
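
        For reference, a minimal zram-as-swap setup along those lines (device name, size, and priority are illustrative):

        modprobe zram
        zramctl /dev/zram0 --algorithm zstd --size 8G    # or lz4 for cheaper compression
        mkswap /dev/zram0
        swapon --priority 100 /dev/zram0                 # prefer zram over any disk swap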

        Comment


        • #24
          Originally posted by Danny3 View Post
          Finally!
          I've been asking for this for almost a year.
          But weren't the Zstd developers saying that they plan to release another Zstd version this month and then update the kernel to that?
          It's a shame they didn't do that.
          As discussed here:
          https://github.com/facebook/zstd/iss...ent-1267381027
          The timing didn't work out, unfortunately. It was either take v1.5.2 in the 6.2 merge window, or wait until 6.3. Even if we had the next release out right now, it would have to bake in linux-next for some time before we can update it, and would miss this merge window. But once the release is complete, we can aim for the 6.3 or 6.4 merge window.

          Comment


          • #25
            Originally posted by ryao View Post

            I use ZFS with zstd and a 1M recordsize on my machine for that. I get higher compression than btrfs can provide even with zstd since btrfs is limited to compressing in 64KB blocks at a time, which harms the compression ratio. You can test this by manually using zstd to compress the Linux kernel at different block sizes. You should see zstd compresses it much more efficiently with a 1MB block size than a 64KB block size.

            That is not 100% accurate, since it does not do padding the way ZFS and btrfs do to round up to, say, the nearest 4KB sector, but it should still do a good job of showing the difference between the two in terms of space savings.
            Correction: btrfs uses a maximum of 128KB blocks for storing the compressed data. The uncompressed data can be much larger, depending on the compression ratio.
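
            Incidentally, the block-size test ryao describes can be approximated from the shell like this (file and directory names are made up; split cuts the archive into pieces that zstd then compresses as independent frames):

            tar cf linux.tar linux/                      # any large source tree works
            for bs in 64K 1M; do
                mkdir -p pieces-$bs
                split -b $bs linux.tar pieces-$bs/chunk.
                echo "$bs: $(zstd -19 -c pieces-$bs/chunk.* | wc -c) compressed bytes"
            done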

            Comment


            • #26
              Originally posted by skeevy420 View Post

              I alternate between LZ4 and Zstd-19 on my ZFS volumes (1M recordsize, of course): one for write-heavy and the other for read-heavy workloads. Since it's a "hidden" feature: have you tried recordsizes up to 16M? I've thought about it, but I've only seen minor bits of anecdotal data on Reddit.
              Compression on that is limited to 1M chunks if I recall correctly. It was a surprise to me when I learned that it had been implemented that way.
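
              If memory serves, anything above 1M is also gated behind a module parameter, so a hypothetical test (the dataset name is made up) would look like:

              echo 16777216 > /sys/module/zfs/parameters/zfs_max_recordsize
              zfs set recordsize=16M tank/media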

              Comment


              • #27
                Originally posted by S.Pam View Post

                Correction. Btrfs uses maximum of 128k blocks for storing the compressed data. The uncompressed data can be much larger, depending on the compression ratio.
                When I last read the documentation, I saw a 64KB limit. Thanks for letting me know that it is 128KB.

                Comment


                • #28
                  Originally posted by terrelln View Post

                  The timing didn't work out, unfortunately. It was either take v1.5.2 in the 6.2 merge window, or wait until 6.3. Even if we had the next release out right now, it would have to bake in linux-next for some time before we can update it, and would miss this merge window. But once the release is complete, we can aim for the 6.3 or 6.4 merge window.
                  I figured as much. Thanks a lot for your work on zstd; I'm benefiting daily from what you've accomplished.

                  Comment


                  • #29
                    Originally posted by terrelln View Post

                    The timing didn't work out, unfortunately. It was either take v1.5.2 in the 6.2 merge window, or wait until 6.3. Even if we had the next release out right now, it would have to bake in linux-next for some time before we can update it, and would miss this merge window. But once the release is complete, we can aim for the 6.3 or 6.4 merge window.
                    Well, I'm glad that at least v1.5.2 got into 6.2; that's great.
                    Thank you very much for all the hard work!

                    Comment


                    • #30
                      Originally posted by skeevy420 View Post

                      Out of curiosity, do you tune the compressor for each individual partition/volume, or do you just go with the default settings? For example, on my ZFS raid, games are stored on a Zstd-19 compressed volume with a whopping 85 MB/s write speed, but KDE's kdesrc-build uses LZ4 for its build directory so I don't impact compile speeds.
                      No.

                      Unfortunately, I didn't know how to do that.
                      I remember tweaking some btrfs config file before installing the system, but I just put "zstd" in, as shown in some tutorial; I didn't know how or where to put a level.
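
                      For anyone in the same boat: with btrfs the level can go straight into the mount options (the kernel has accepted compress=zstd:LEVEL since 5.1). A hypothetical /etc/fstab line, with a placeholder UUID:

                      UUID=xxxx-xxxx  /  btrfs  defaults,compress=zstd:3  0  0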

                      Comment
