Updated Zstd Implementation Merged For Linux 6.2

  • S.Pam
    replied
    Originally posted by ryao View Post

    I use ZFS with zstd and a 1M recordsize on my machine for that. I get higher compression than btrfs can provide even with zstd since btrfs is limited to compressing in 64KB blocks at a time, which harms the compression ratio. You can test this by manually using zstd to compress the Linux kernel at different block sizes. You should see zstd compresses it much more efficiently with a 1MB block size than a 64KB block size.

    That is not 100% accurate since it does not do padding like ZFS and btrfs do to round up to, say, the nearest 4KB sector, but it should still do a good job of showing the difference between the two in terms of space savings.
    Correction: btrfs uses a maximum of 128K blocks for storing the compressed data; the uncompressed data can be much larger, depending on the compression ratio.
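    As a rough way to see the block-size effect ryao describes, here is a small Python sketch using the third-party zstandard package. It splits a file into fixed-size blocks and compresses each independently; there is no 4K sector padding, and the block sizes and level are just assumptions, so treat the numbers as indicative only.

    Code:
    import sys
    import zstandard  # third-party package: pip install zstandard

    def compressed_size(path, block_size, level=3):
        """Compress `path` in independent blocks and return the total compressed bytes."""
        cctx = zstandard.ZstdCompressor(level=level)
        total = 0
        with open(path, "rb") as f:
            while True:
                block = f.read(block_size)
                if not block:
                    break
                total += len(cctx.compress(block))
        return total

    if __name__ == "__main__":
        path = sys.argv[1]  # e.g. a kernel tarball
        for bs in (64 * 1024, 1024 * 1024):
            print(f"{bs // 1024:>5} KiB blocks -> {compressed_size(path, bs)} bytes")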

  • terrelln
    replied
    Originally posted by Danny3 View Post
    Finally!
    I've been asking for this for almost a year.
    But weren't the Zstd developers saying that they plan to release another Zstd version this month and then update the kernel to that?
    It's a shame they didn't do that.
    Like discussed here:
    https://github.com/facebook/zstd/iss...ent-1267381027
    The timing didn't work out, unfortunately. It was either take v1.5.2 in the 6.2 merge window or wait until 6.3. Even if we had the next release out right now, it would have to bake in linux-next for some time before we could update it, and it would miss this merge window. But once the release is complete, we can aim for the 6.3 or 6.4 merge window.

  • Weasel
    replied
    Well sure, LZ4 is better when you access the swap more frequently. But in general, my use case for ZRAM is to free up RAM by compressing its barely-used portions (i.e. the original intention of swap).

    I could use a specialized compressed filesystem, mind you, but I prefer swap because it's transparent with tmpfs. I don't have to explicitly put the stuff I want compressed (and don't access frequently) into zram; it just happens transparently and automatically for whatever is barely accessed, so it's convenient.
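    For reference, this kind of zram swap can be set up with util-linux's zramctl; a minimal Python sketch (needs root, and the 4G size and lz4 algorithm are just the values discussed in this thread):

    Code:
    import subprocess

    def run(*cmd):
        """Run a command, fail loudly, and return its stdout."""
        return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout.strip()

    # Allocate the next free /dev/zramN as a 4 GiB device compressed with lz4.
    dev = run("zramctl", "--find", "--size", "4G", "--algorithm", "lz4")

    run("mkswap", dev)                        # format the zram device as swap
    run("swapon", "--priority", "100", dev)   # prefer it over disk-backed swap
    print(f"zram swap active on {dev}")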

  • skeevy420
    replied
    Originally posted by Weasel View Post
    1 word: tmpfs.

    Here's a quickie: have you ever stored throwaway VMs and containers entirely in tmpfs?

    If not, then you clearly use your PC completely differently than I do.
    Yes. I've copied entire games into tmpfs. On my last system I used to run a 30GB modded Skyrim from tmpfs. IMHO, VMs and other large things running from RAM are niche write-once, read-many use cases that benefit from higher compression ratios. It's the same logic as my games being stored under Zstd-19.

    Still, though, I think that for generic, unknown use cases LZ4 is better*, especially if the system has RAM to spare and we're talking about my original use case of a compressor for a backing swap drive that is probably unnecessary anyway. If you have a niche use like copying 30GB of disk images or games or ??? into RAM before running it, by all means use a different compressor geared for that.

    *But "better" is subjective and varies by use case, and we're talking about two very different ones: a very minimally used zswap versus an intentionally saturated tmpfs. Of course such wildly different scenarios call for different tunings of "better".
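    For anyone curious what "running a game from tmpfs" looks like in practice, a minimal sketch (the paths and the 32G size are hypothetical, and mounting needs root):

    Code:
    import os
    import shutil
    import subprocess

    MOUNT_POINT = "/mnt/ramgame"   # hypothetical mount point
    GAME_DIR = "/games/skyrim"     # hypothetical install on persistent storage
    SIZE = "32G"                   # must fit in RAM with headroom

    os.makedirs(MOUNT_POINT, exist_ok=True)
    # Mount a RAM-backed tmpfs capped at SIZE (requires root).
    subprocess.run(["mount", "-t", "tmpfs", "-o", f"size={SIZE}", "tmpfs", MOUNT_POINT],
                   check=True)
    # Copy the game in; everything under MOUNT_POINT now lives in RAM.
    shutil.copytree(GAME_DIR, os.path.join(MOUNT_POINT, "skyrim"))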

  • Weasel
    replied
    Originally posted by skeevy420 View Post
    There's a point of diminishing returns with Zram or Zswap on a high-memory system. For me it's 16GB, split as a 4GB lz4 Zram, a 4GB /tmp, and 8GB of plain RAM. If I'm on a system with more than that, swap is more of a precaution than a necessity, so there's no point in adding an I/O limiter as an out-of-memory safeguard. If I'm on a system with less than 16GB, then Zstd becomes an option, since I'm starting to get into the realm of actually running out of memory, and the extra space Zstd offers can be helpful and even necessary, at the expense of slower memory/swapping write speeds.
    1 word: tmpfs.

    Here's a quickie: have you ever stored throwaway VMs and containers entirely in tmpfs?

    If not, then you clearly use your PC completely differently than I do.

  • skeevy420
    replied
    Originally posted by Weasel View Post
    That's just you. More RAM is always good, even if it's "slower".
    There's a point of diminishing returns with Zram or Zswap on a high-memory system. For me it's 16GB, split as a 4GB lz4 Zram, a 4GB /tmp, and 8GB of plain RAM. If I'm on a system with more than that, swap is more of a precaution than a necessity, so there's no point in adding an I/O limiter as an out-of-memory safeguard. If I'm on a system with less than 16GB, then Zstd becomes an option, since I'm starting to get into the realm of actually running out of memory, and the extra space Zstd offers can be helpful and even necessary, at the expense of slower memory/swapping write speeds.

  • skeevy420
    replied
    Originally posted by Danny3 View Post

    But when you have only 119 GiB of storage and not a high-end CPU, your only hope is BTRFS + Zstd compression, but that must not slow everything down, especially when extracting/compressing files or copying/moving folders with lots of things inside.
    Out of curiosity, do you tune the compressor for each individual partition/volume or do you just go with the default settings? For example, on my ZFS RAID, games are stored on a Zstd-19 compressed volume with a whopping 85 MB/s write speed, but KDE's kdesrc-build uses an LZ4 volume for its build directory so I don't impact compile speeds.
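    For context, that kind of per-dataset tuning is just OpenZFS properties; a rough sketch, with hypothetical pool/dataset names:

    Code:
    import subprocess

    # Hypothetical datasets; the property values mirror the setup described above.
    DATASETS = {
        "tank/games": {"compression": "zstd-19", "recordsize": "1M"},  # read-mostly, favor ratio
        "tank/build": {"compression": "lz4"},                          # write-heavy, favor speed
    }

    for dataset, props in DATASETS.items():
        for prop, value in props.items():
            # zfs set applies per dataset; only newly written data uses the new settings.
            subprocess.run(["zfs", "set", f"{prop}={value}", dataset], check=True)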

  • Weasel
    replied
    Originally posted by skeevy420 View Post
    I feel for all these people excited about using Zstd Zram so they can literally download more RAM. Seriously, I feel bad for those folks. The way I figure, if you have so little RAM that Zstd's ~2.5x compression ratio actually matters compared to LZ4's ~1.5x, so that you're compelled to limit your RAM speed to an SSD's or slower, that must be a sucky position to be in, especially if it's a system that can't be upgraded. IMHO, once you have 16GB or more of RAM, Zram for swap is more of a "just in case some shit needs swap" precaution than something that's actually necessary. That's why I use a 4GB LZ4 Zram with my 32GB of memory. I care about throughput over space savings, so all I need is some compression that enhances throughput, not a compressor that slows throughput for extra compression. Zstd is great, but it isn't designed for high-performance IO... just the O. The I is rather limited at the non-fast levels.
    That's just you. More RAM is always good, even if it's "slower".

  • skeevy420
    replied
    I feel for all these people excited about using Zstd Zram so they can literally download more RAM. Seriously, I feel bad for those folks. The way I figure, if you have so little RAM that Zstd's ~2.5x compression ratio actually matters compared to LZ4's ~1.5x, so that you're compelled to limit your RAM speed to an SSD's or slower, that must be a sucky position to be in, especially if it's a system that can't be upgraded. IMHO, once you have 16GB or more of RAM, Zram for swap is more of a "just in case some shit needs swap" precaution than something that's actually necessary. That's why I use a 4GB LZ4 Zram with my 32GB of memory. I care about throughput over space savings, so all I need is some compression that enhances throughput, not a compressor that slows throughput for extra compression. Zstd is great, but it isn't designed for high-performance IO... just the O. The I is rather limited at the non-fast levels.

    Zswap... I have mixed feelings about using Zstd there, since it uses RAM plus a backing drive, and the backing drive is likely slower than RAM. In that case we might as well throttle the compressor to the write speed of the slowest storage involved, which will probably mean Zstd level 1 or 2 with the in-kernel options, assuming an SSD and a preference for throughput.
    Last edited by skeevy420; 20 December 2022, 10:21 AM.
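    For reference, zswap's compressor can be switched at runtime through its module parameters; a minimal sketch, assuming a kernel with zswap and the chosen compressor built in (needs root):

    Code:
    from pathlib import Path

    ZSWAP = Path("/sys/module/zswap/parameters")

    def set_param(name, value):
        """Write a zswap module parameter; takes effect immediately."""
        (ZSWAP / name).write_text(f"{value}\n")

    set_param("enabled", "Y")             # turn zswap on
    set_param("compressor", "zstd")       # or "lz4" for higher throughput
    set_param("max_pool_percent", "20")   # cap the compressed pool at 20% of RAM

    # Read back the current settings to confirm.
    for p in sorted(ZSWAP.iterdir()):
        print(p.name, "=", p.read_text().strip())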

  • skeevy420
    replied
    Originally posted by ryao View Post

    I use ZFS with zstd and a 1M recordsize on my machine for that. I get higher compression than btrfs can provide even with zstd since btrfs is limited to compressing in 64KB blocks at a time, which harms the compression ratio. You can test this by manually using zstd to compress the Linux kernel at different block sizes. You should see zstd compresses it much more efficiently with a 1MB block size than a 64KB block size.

    That is not 100% accurate since it does not do padding like ZFS and btrfs do to round up to, say, the nearest 4KB sector, but it should still do a good job of showing the difference between the two in terms of space savings.
    I alternate between LZ4 and Zstd-19 on my ZFS volumes (1M recordsize, of course): one for write-heavy data and the other for read-heavy data. Since it's a "hidden" feature: have you tried recordsizes up to 16M? I've thought about it, but I've only seen small bits of anecdotal data on Reddit.
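    As far as I understand it, going past 1M involves raising the zfs_max_recordsize module parameter and having the large_blocks pool feature enabled; a rough sketch with hypothetical pool/dataset names (needs root, and I haven't benchmarked it myself):

    Code:
    import subprocess
    from pathlib import Path

    # Raise the module-level cap on recordsize to 16M (requires root).
    Path("/sys/module/zfs/parameters/zfs_max_recordsize").write_text(str(16 * 1024 * 1024))

    # Records larger than 128K need the large_blocks pool feature.
    subprocess.run(["zpool", "set", "feature@large_blocks=enabled", "tank"], check=True)

    # Only newly written files pick up the larger recordsize.
    subprocess.run(["zfs", "set", "recordsize=16M", "tank/games"], check=True)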
