LZ4 Compression For Hibernation Images Queued For Linux 6.9: Faster Restore Times


  • #21
    Originally posted by Anux View Post
    While I don't see the point in not exposing those options, I didn't really demand it. Just using zstd with its standard settings results in -3, which is also a pretty fast setting.

    With LZ4, and to a lesser extent LZO, there is no point in giving options because they don't really do anything; LZO only starts to do something at -6 and up, and that mostly results in much longer compression times and slightly smaller files.
    Zstd, on the other hand, can be heavily tuned for super-fast speed or high compression, and anything in between, which in theory would let the devs run small benchmarks on the user's machine and select the options that give the fastest wake-up times.


    Exactly, but you can't be specific, because every combination of CPU and SSD/HDD has a different perfect decompressor. On one PC it might be LZ4; on another (let's say with a classic HDD) it might be zstd -10. Hence my desire to do some automated testing at installation to select the best compression.
    Just changing the default to another one will always result in some people having a worse experience.
    Except you can somewhat generalize codecs, since it isn't hard to Google up nearly a decade of LZ4 and LZO benchmark data, or half a decade for Zstd. Plus we're stuck using the default settings, so saying use LZ4 on one PC and use Zstd -10 on another isn't possible. We're not talking about BTRFS or OpenZFS with their minimal compressor tunables.

    Limited to that specific scenario with fixed hardware, fixed OS settings, and fixed compressor settings, LZ4 is always faster at compression and decompression than LZO and Zstd. When it comes to compression ratios, LZO has a very slight edge over LZ4, while Zstd has the edge over LZO. For decompression, LZ4 is always fastest, followed by Zstd and then LZO. That holds for any CPU and memory combination from the x86_64 era. The only time they'll post similar numbers and that generalization won't work is when there's a disk bottleneck... and even then the general rule is that reading compressed data from a slow drive is faster than reading the same data uncompressed, simply because there are fewer bytes to pull off the disk. In that case, under a severe bottleneck, LZ4 is still the fastest and generally the best one to use in terms of raw throughput and system responsiveness.
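
    For illustration, here's a rough Python sketch of the kind of per-machine probe that the install-time benchmarking idea above (or those published benchmark runs) boils down to. It assumes the third-party python-lz4 and python-zstandard bindings, and the sample file is just a hypothetical stand-in for whatever data you'd actually measure:

```python
# Rough per-machine probe: time compression/decompression and report the ratio.
# Assumes the third-party "lz4" and "zstandard" packages (pip install lz4 zstandard).
import time

import lz4.frame
import zstandard


def bench(name, compress, decompress, data, runs=5):
    blob = compress(data)

    start = time.perf_counter()
    for _ in range(runs):
        compress(data)
    c_ms = (time.perf_counter() - start) / runs * 1000

    start = time.perf_counter()
    for _ in range(runs):
        decompress(blob)
    d_ms = (time.perf_counter() - start) / runs * 1000

    ratio = len(data) / len(blob)
    print(f"{name:8s} ratio={ratio:5.2f} compress={c_ms:7.1f} ms decompress={d_ms:7.1f} ms")


if __name__ == "__main__":
    # Placeholder sample; a real probe would use data resembling a hibernation image.
    with open("/boot/vmlinuz-linux", "rb") as f:  # hypothetical sample file
        data = f.read()

    bench("lz4", lz4.frame.compress, lz4.frame.decompress, data)

    zc = zstandard.ZstdCompressor(level=3)  # zstd's default level, i.e. "-3"
    zd = zstandard.ZstdDecompressor()
    bench("zstd -3", zc.compress, zd.decompress, data)
```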

    Speaking of Zstd, LZ4, and OpenZFS: LZ4 is so fast that OpenZFS first runs LZ4 to determine whether data is compressible at all, and only hands it off to Zstd if it compresses. They do that because using LZ4 as a compressibility test turns out to be faster than the heuristic Zstd itself uses to decide whether data can be compressed. Go figure.
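
    A toy sketch of that early-abort idea, just to show the shape of it; this is not how OpenZFS actually implements it, the 12.5% saving threshold is an arbitrary example value, and it assumes the same python-lz4 and python-zstandard bindings:

```python
# Toy version of "use LZ4 as a compressibility test, then hand off to Zstd".
# Not OpenZFS's actual implementation; the 12.5% saving threshold is arbitrary.
# Assumes the third-party "lz4" and "zstandard" packages.
import os

import lz4.frame
import zstandard


def compress_block(block: bytes, min_saving: float = 0.125) -> bytes:
    """Return a Zstd-compressed block, or the original bytes if a quick
    LZ4 pass suggests the data is effectively incompressible."""
    probe = lz4.frame.compress(block)
    if len(probe) > len(block) * (1.0 - min_saving):
        # LZ4 barely shrank it (or grew it): store it uncompressed rather
        # than spend time on the slower, stronger compressor.
        return block
    return zstandard.ZstdCompressor(level=3).compress(block)


if __name__ == "__main__":
    repetitive = b"hibernation image page " * 50_000
    random_ish = os.urandom(len(repetitive))  # essentially incompressible

    print(len(compress_block(repetitive)), "bytes out of", len(repetitive))
    print(len(compress_block(random_ish)), "bytes out of", len(random_ish))
```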



    • #22
      Originally posted by skeevy420 View Post
      Plus we're stuck using the default settings, so saying use LZ4 on one PC and use Zstd -10 on another isn't possible.
      Sure it is possible; if I can do it manually on Arch for my initramfs, then I see no reason why it can't be done for hibernation.

      We're not talking about BTRFS or OpenZFS with their minimal compressor tunables.
      What matters is only whether the decompressor is available in the kernel.

      Limited to that specific scenario with fixed hardware, fixed OS settings, and fixed compressor settings, LZ4 is always faster at compression and decompression than LZO and Zstd.
      Yes, it is for the pure decompression step, but not necessarily for the whole operation (read the data, then decompress it). In some cases LZ4 could even be slower than uncompressed data (slow CPU with a fast NVMe drive).

      and even then the general rule is that reading compressed data from a slow drive is faster than reading the same data uncompressed, simply because there are fewer bytes to pull off the disk.
      Yes, but we're looking for the fastest combination of reading the data and decompressing it.

      The first contribution to the time is the image size in MB divided by the disk's read speed; the second is the decompression time. The optimal solution depends on the hibernation image size, the disk read speed, and the decompression speed (which in turn depend on your disk and CPU).

      In the case of kernel boot times we are talking about differences of around 100 ms, but with hibernation we have > 1 GB of data and restore times become much longer (5 to 10 seconds). Compression time is actually not that important as long as we stay on low zstd settings.
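
      To put rough numbers on that trade-off, here's a back-of-the-envelope model in Python. All the ratios and speeds are made-up illustrative values, and it assumes reading and decompressing happen one after the other with no overlap, which is a simplification:

```python
# Back-of-the-envelope restore-time model. All numbers are made-up examples,
# and reading and decompressing are assumed to run sequentially (no overlap).

def restore_time(image_mb, ratio, read_mb_s, decompress_mb_s):
    """Seconds to read a compressed hibernation image and decompress it.

    image_mb         uncompressed image size in MB
    ratio            compression ratio (uncompressed size / compressed size)
    read_mb_s        disk read speed in MB/s
    decompress_mb_s  decompression speed in MB/s of output data
    """
    read = (image_mb / ratio) / read_mb_s
    decompress = image_mb / decompress_mb_s
    return read + decompress


image = 4096  # 4 GB hibernation image

# Slow HDD (~120 MB/s): the stronger compressor wins because reading dominates.
print("HDD  lz4 :", round(restore_time(image, 2.0, 120, 3000), 1), "s")
print("HDD  zstd:", round(restore_time(image, 3.0, 120, 1200), 1), "s")

# Fast NVMe (~3000 MB/s): the faster decompressor wins instead.
print("NVMe lz4 :", round(restore_time(image, 2.0, 3000, 3000), 1), "s")
print("NVMe zstd:", round(restore_time(image, 3.0, 3000, 1200), 1), "s")
```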

