
Patches Revived For A Zstd-Compressed Linux Kernel While Dropping LZMA & BZIP2


  • ermo
    replied
    Originally posted by bofh80

Does the lz4 implementation include the -hc option, or is that a different port? i was never sure. i'd like to see it tested. unless i'm terribly confused, it should give a smaller file size while still decompressing just as fast. i'll look into it more when i have time, i guess.

Michael, would these different options for kernel compression be hard to test / benchmark?
    Looking at
    Code:
    ./scripts/Makefile.lib
    the LZ4 compression of the build artifacts is done with
    Code:
    lz4c -l -c1
This is the legacy format, not the newer LZ4 frame format, which supports LZ4HC as I understand it. I don't know if there's a patch available that makes the kernel's LZ4 decoder support the newer HC-compatible format.

In terms of improving kernel boot speed, the couple of papers I've seen both suggest that the combined time for loading and then decompressing an LZ4HC-compressed kernel is the fastest option by some margin.

The way I see it, why not cut the options down to just xz, zstd and lz4hc? Drop bzip2, gzip and lzo, since they serve little purpose given that xz > bzip2, zstd > gzip and lz4hc > lzo. But that's just my off-the-cuff judgement -- there could very well be people who need one of the legacy formats for some reason.
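Something like the following would give a rough comparison on a built image (an untested sketch -- the flags only approximate what ./scripts/Makefile.lib passes, and timing userspace tools is only a proxy for the in-kernel decompressors):
Code:
# compress an uncompressed vmlinux with each codec, then time decompression
for cmd in "gzip -9" "bzip2 -9" "lzop -9" "xz -9" "lz4 -9" "zstd -19"; do
    tool=${cmd%% *}                               # e.g. "xz -9" -> "xz"
    $cmd -c vmlinux > vmlinux.$tool
    ls -l vmlinux.$tool                           # compressed size
    time $tool -d -c < vmlinux.$tool > /dev/null  # decompression time
done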



  • hotaru
    replied
    Originally posted by jntesteves View Post

You could equally parallelize the workload with any of these codecs, even though they all work sequentially. This isn't a property specific to bzip2.
    no, you couldn't. in bzip2, the blocks are compressed independently, so they don't need to be processed sequentially. this is a property that, among the algorithms available for kernel compression, is specific to bzip2.
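this is exactly what lbzip2 exploits. a quick sketch, assuming lbzip2 is installed:
Code:
# lbzip2 spreads the ~900 kB blocks across all cores, in both directions,
# even for .bz2 files produced by plain single-threaded bzip2
lbzip2 -9 -k vmlinux                      # parallel compression -> vmlinux.bz2
lbzip2 -d -c vmlinux.bz2 > /dev/null      # parallel decompression, too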



  • jntesteves
    replied
    Originally posted by hotaru View Post
    not with bzip2. check the lbzip2 implementation.

    not as much as using lz4 will.

    if you have a lot of slow cores, a parallel implementation of bzip2 is similar in speed to lz4, but with significantly better compression ratio.
You could equally parallelize the workload with any of these codecs, even though they all work sequentially. This isn't a property specific to bzip2.
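Most modern tools already do this by splitting the input into independent chunks. A sketch (-T0 means "use all cores" in both tools):
Code:
# chunked multi-threaded compression; matches can't cross chunk boundaries,
# which is why the ratio drops slightly versus a single thread
xz -9 -T0 -k vmlinux
zstd -19 -T0 -k vmlinux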



  • hotaru
    replied
    Originally posted by log0 View Post
    Compression by its nature is a serial process.
    not with bzip2. check the lbzip2 implementation.

    Originally posted by log0 View Post
    Going multi-threaded will hurt compression ratio.
    not as much as using lz4 will.

    if you have a lot of slow cores, a parallel implementation of bzip2 is similar in speed to lz4, but with significantly better compression ratio.
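easy enough to check, assuming you have compressed copies of the same image lying around (a sketch):
Code:
# wall-clock decompression: all cores via lbzip2 vs. one core via lz4
time lbzip2 -d -c vmlinux.bz2 > /dev/null
time lz4 -d -c vmlinux.lz4 > /dev/null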



  • Weasel
    replied
But LZMA1 sometimes compresses better (less container overhead) than xz -- easy to check, see the sketch below.

    I hope this proposal gets trashed because it's retarded.
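xz can still emit the old LZMA1 container via --format=lzma (a sketch):
Code:
# same compressor family, two containers: .lzma (LZMA1) vs .xz (LZMA2)
xz -9 -c --format=lzma vmlinux > vmlinux.lzma
xz -9 -c vmlinux > vmlinux.xz
ls -l vmlinux.lzma vmlinux.xz    # .lzma is often marginally smaller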



  • log0
    replied
    Originally posted by hotaru View Post

    can it be decompressed in parallel on multiple cores?
    Compression by its nature is a serial process. Going multi-threaded will hurt compression ratio.

    lz4 decompression beats memcpy. So I'd argue it is fast enough.
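The lz4 CLI even ships an in-memory benchmark mode, so anyone can verify this without disk I/O in the way (a sketch; -b1 benchmarks level 1 on the given file):
Code:
# prints ratio plus compression/decompression throughput in MB/s
lz4 -b1 vmlinux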



  • rene
    replied
    yep, you're welcome! ;-)



  • hotaru
    replied
    Originally posted by caligula View Post

    Fastest? Ever tried lz4?
    can it be decompressed in parallel on multiple cores?



  • ermo
    replied
I recently tried switching the compression on my custom-built kernels for a couple of old C2Q 9400s from xz to lz4. The size increased by ~2x and the decompression speed by ~3x -- it was the difference between wait .. wait .. wait .. GO! and wa..GO! after the bootloader screen.

Size-wise, going from 5 -> 10 MB on my systems (for the kernel image in /boot) isn't an issue, but I can see why distros (which typically have bigger kernels and more downloads) might not want to switch away from xz.
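For anyone wanting to try the same switch, it's a one-option change in the kernel config (a sketch using the tree's scripts/config helper; the symbols sit under the "Kernel compression mode" choice):
Code:
# flip the compression choice from xz to lz4, then rebuild
scripts/config --disable KERNEL_XZ --enable KERNEL_LZ4
make olddefconfig && make -j$(nproc)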



  • caligula
    replied
    Originally posted by hotaru View Post
    why would they remove the fastest one, the only one that allows multithreaded compression and decompression? sure, the implementation in the kernel sucks and only uses a single thread, but the solution is to fix the implementation, not doom everyone to horribly slow single-threaded decompression forever.
    Fastest? Ever tried lz4?

