Patches Revived For A Zstd-Compressed Linux Kernel While Dropping LZMA & BZIP2

  • #1

    Phoronix: Patches Revived For A Zstd-Compressed Linux Kernel While Dropping LZMA & BZIP2

    Adding an option to support Zstd-compressed Linux kernel images has been talked about for more than a year, and it now looks like that Facebook-backed high-performance compression algorithm could soon finally be mainlined for kernel images...

    http://www.phoronix.com/scan.php?pag...-Image-EOY2018

  • #2
    This link details a lot of arguments for why LZMA2 and XZ are poor quality archive formats, and why LZMA1 is superior to them.

    http://www.nongnu.org/lzip/xz_inadequate.html

    Comment


    • #3
      BZIP2 is basically useless these days except for compatibility: the decompressor is so slow that it's not appropriate in an embedded setting, and the compression is worse than LZMA2 (and often worse than DEFLATE from zopfli). Outside of an embedded setting, you're often better off with simpler compression anyway, given the capacity and speed of desktop/server storage devices.
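Claims like this are easy to sanity-check with the Python standard library, which ships bindings for all three of these codecs. A minimal sketch (the sample payload is made up; ratios depend heavily on the input data):

```python
import bz2
import lzma
import zlib

# Made-up repetitive payload purely for illustration.
data = b"some structured, repetitive sensor record #%d\n" * 2000

sizes = {
    "raw":  len(data),
    "zlib": len(zlib.compress(data, 9)),       # DEFLATE, level 9
    "bz2":  len(bz2.compress(data, 9)),        # bzip2, max block size
    "xz":   len(lzma.compress(data, preset=9)),  # LZMA2 in an .xz container
}

for name, n in sizes.items():
    print(f"{name:5s} {n:8d} bytes")
```

On real workloads the ranking can differ from the folklore, which is exactly the point of measuring.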

      Comment


      • #4
        Originally posted by microcode View Post
        BZIP2 is basically useless these days except for compatibility: the decompressor is so slow that it's not appropriate in an embedded setting, and the compression is worse than LZMA2 (and often worse than DEFLATE from zopfli). Outside of an embedded setting, you're often better off with simpler compression anyway, given the capacity and speed of desktop/server storage devices.

        Not entirely true. At work, we're logging proprietary sensor data (basically CAN frames) plus some headers plus some more stuff.

        Guess what? bzip2 compresses best - better than xz. Files are some 10% smaller. It's quite slow, true, but if the data rate is low, compressing live data on some older ARM is fast enough.

        Previously, we used gzip as well, but I replaced it with zstd. xz is too slow for our purpose.

        Fun fact: if I preprocess the log data with a large Burrows-Wheeler transform, xz manages to close the gap to bzip2.
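This kind of observation is cheap to reproduce (or refute) on your own data. A minimal sketch with synthetic fixed-size records; the frame layout here is invented for illustration and is not the poster's actual format:

```python
import bz2
import lzma
import os
import struct

# Synthetic fixed-size "sensor frames": a timestamp, a small ID, and an
# 8-byte payload drawn from a tiny alphabet (made-up layout for illustration).
frames = b"".join(
    struct.pack("<IH8s", t, t % 16, bytes(b % 4 for b in os.urandom(8)))
    for t in range(20000)
)

bz2_size = len(bz2.compress(frames, 9))
xz_size = len(lzma.compress(frames, preset=9))
print(f"raw {len(frames)}  bz2 {bz2_size}  xz {xz_size}")
```

Swapping in a real log file for `frames` tells you in seconds whether bzip2's block-sorting (it applies a Burrows-Wheeler transform internally) actually wins on your data.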

        Comment


        • #5
          What about support for Zstd compression on Btrfs in GRUB?

          Comment


          • #6
            Originally posted by EmbraceUnity View Post
            This link details a lot of arguments for why LZMA2 and XZ are poor quality archive formats, and why LZMA1 is superior to them.

            http://www.nongnu.org/lzip/xz_inadequate.html
            The arguments are very similar to the ARJ vs. PKZIP debate in the '90s. ARJ users argued that ARJ was so much more secure, safe, and recoverable than PKZIP. Either way, we all know how that went...

            Comment


            • #7
              Why would they remove the fastest one, and the only one that allows multithreaded compression and decompression? Sure, the implementation in the kernel sucks and only uses a single thread, but the solution is to fix the implementation, not doom everyone to horribly slow single-threaded decompression forever.

              Comment


              • #8
                Originally posted by oleid View Post
                Not entirely true. At work, we're logging proprietary sensor data (basically CAN frames) plus some headers plus some more stuff.

                Guess what? bzip2 compresses best - better than xz. Files are some 10% smaller. It's quite slow, true, but if the data rate is low, compressing live data on some older ARM is fast enough.

                Previously, we used gzip as well, but I replaced it with zstd. xz is too slow for our purpose.

                Fun fact: if I preprocess the log data with a large Burrows-Wheeler transform, xz manages to close the gap to bzip2.
                Hot damn, I suppose there's always something. I'll have to remember to try bzip2 next time I'm compressing something structured like that.

                Comment


                • #9
                  Originally posted by hotaru View Post
                  Why would they remove the fastest one, and the only one that allows multithreaded compression and decompression? Sure, the implementation in the kernel sucks and only uses a single thread, but the solution is to fix the implementation, not doom everyone to horribly slow single-threaded decompression forever.
                  Fastest? Ever tried lz4?

                  Comment


                  • #10
                  • #10
                    I recently tried switching the compression on my custom-built kernels for a couple of old Core 2 Quad Q9400 systems from xz to lz4. The size increased by ~2x and the decompression speed increased by ~3x -- it was the difference between wait .. wait .. wait .. GO! and wa..GO! after the bootloader screen.

                    Size-wise, going from 5 -> 10 MB on my systems (for the kernel image in /boot) isn't an issue, but I can see why distros (which typically have bigger kernels and more downloads) might not want to switch away from xz.
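The size-versus-decompression-speed trade-off in #10 is easy to measure on any machine. A rough sketch timing two stdlib decompressors (lz4 isn't in the Python standard library, so zlib stands in for the "fast but larger output" end of the spectrum, and the patterned payload is only a stand-in for a real kernel image):

```python
import bz2
import lzma
import time
import zlib

# ~2 MB of patterned data as a stand-in for a kernel image
# (made up; real vmlinux contents compress differently).
payload = bytes(range(256)) * 8000

results = {}
for name, compress, decompress in [
    ("zlib", lambda d: zlib.compress(d, 9), zlib.decompress),
    ("xz", lambda d: lzma.compress(d, preset=6), lzma.decompress),
]:
    blob = compress(payload)
    t0 = time.perf_counter()
    assert decompress(blob) == payload  # round-trip check
    results[name] = (len(blob), time.perf_counter() - t0)

for name, (size, secs) in results.items():
    print(f"{name:4s} size={size:8d} bytes  decompress={secs * 1000:.2f} ms")
```

The same pattern extended with a real kernel image and an lz4 binding would quantify exactly the "2x size, 3x speed" trade noted above.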

                    Comment
