Updated Zstd Implementation Merged For Linux 6.2

  • skeevy420
    replied
    Originally posted by opengears
    Does anyone know what the current timeline for ZSTD is? When can we expect a new version?
    Kernel or upstream Zstd itself?

    It looks like upstream Zstd might get a 1.5.4 release sometime soon. It's been nearly a year since the last release, but there has been some talk about 1.5.4 in various Git issues lately. Take that with a grain of salt.



  • opengears
    replied
    Does anyone know what the current timeline for ZSTD is? When can we expect a new version?
    Last edited by opengears; 12 January 2023, 03:18 PM.



  • NobodyXu
    replied
    Originally posted by skeevy420

    For whatever reason I thought that levels could be set on a per-directory basis with btrfs property set...but that only uses the default value of 3.
    Oh, you are right. Btrfs does support a per-file compression algorithm setting, but it does not support compression levels yet.
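    From memory, it looks roughly like this (the path is just a placeholder; check the btrfs-progs man pages for the exact syntax):

    # Request zstd (at the filesystem's default level) for one file or directory
    btrfs property set /mnt/data/bigfile compression zstd

    # Check what is currently set
    btrfs property get /mnt/data/bigfile compression

    Note there is nowhere to pass a level here, which is why it falls back to the default.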



  • skeevy420
    replied
    Originally posted by NobodyXu

    The btrfs compression algorithm/level is set in fstab (basically at mount time), so you cannot tune it per subvolume.
    For whatever reason I thought that levels could be set on a per-directory basis with btrfs property set...but that only uses the default value of 3.



  • NobodyXu
    replied
    Originally posted by Danny3

    No.

    Unfortunately I didn't know how to do that.
    I remember tweaking some file called Btrfs before installing the system, but I just put "zstd" as shown in some tutorial; I didn't know how or where to put a level.
    The btrfs compression algorithm/level is set in fstab (basically at mount time), so you cannot tune it per subvolume.
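    It's a mount option, so something along these lines in /etc/fstab (the UUID and mount point here are just placeholders):

    # compress=zstd defaults to level 3; compress=zstd:N picks an explicit level (1-15)
    UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /home  btrfs  defaults,compress=zstd:3  0  0

    There is also compress-force=zstd:N if you want to skip the "does this look compressible" heuristic.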



  • Danny3
    replied
    Originally posted by skeevy420

    Out of curiosity, do you tune the compressor for each individual partition/volume or do you just go with the default settings? For example, on my ZFS raid, games are stored on a Zstd-19 compressed volume with a whopping 85 MB/s write speed, but KDE's kdesrc-build uses LZ4 for its build directory so I don't impact compile speeds.
    No.

    Unfortunately I didn't know how to do that.
    I remember tweaking some file called Btrfs before installing the system, but I just put "zstd" as shown in some tutorial; I didn't know how or where to put a level.



  • Danny3
    replied
    Originally posted by terrelln

    The timing didn't work out, unfortunately. It was either take v1.5.2 in the 6.2 merge window, or wait until 6.3. Even if we had the next release out right now, it would have to bake in linux-next for some time before we can update it, and would miss this merge window. But once the release is complete, we can aim for the 6.3 or 6.4 merge window.
    Well, I'm glad that at least v1.5.2 got into 6.2, and that's great.
    Thank you very much for all the hard work!



  • nadir
    replied
    Originally posted by terrelln

    The timing didn't work out, unfortunately. It was either take v1.5.2 in the 6.2 merge window, or wait until 6.3. Even if we had the next release out right now, it would have to bake in linux-next for some time before we can update it, and would miss this merge window. But once the release is complete, we can aim for the 6.3 or 6.4 merge window.
    I figured as much. Thanks a lot for your work on zstd. I'm benefiting daily from what you've accomplished.



  • ryao
    replied
    Originally posted by S.Pam

    Correction: Btrfs uses a maximum of 128K blocks for storing the compressed data. The uncompressed data can be much larger, depending on the compression ratio.
    When I last read the documentation, I saw a 64KB limit. Thanks for letting me know that it is 128KB.



  • ryao
    replied
    Originally posted by skeevy420

    I alternate between LZ4 and Zstd-19 on my ZFS volumes (1M recordsize, of course)...one for write-heavy and the other for read-heavy workloads. Since it's a "hidden" feature: have you tried recordsizes up to 16M? I've thought about it, but I've only seen minor bits of anecdotal data on Reddit.
    Compression on that is limited to 1M chunks if I recall correctly. It was a surprise to me when I learned that it had been implemented that way.
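    If you do want to experiment above 1M, the knobs look roughly like this (the dataset names are placeholders; double-check zfsprops(7) and the module parameter documentation for your OpenZFS version):

    # Heavily compressed, read-mostly dataset
    zfs set compression=zstd-19 tank/games
    zfs set recordsize=1M tank/games

    # Fast compression for a write-heavy build directory
    zfs set compression=lz4 tank/build

    # recordsize values above 1M additionally require raising the module limit, e.g.:
    # echo 16777216 > /sys/module/zfs/parameters/zfs_max_recordsize

    Whether larger records actually help depends on the workload, so benchmark before committing to it.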

