Ubuntu Moving Ahead With Compressing Their Kernel Image Using LZ4

  • #11
    I've been compressing my kernels with xz for years; it was certainly the smallest option when I picked it back then. They're 8 MB each with the firmware and drivers baked in - no modules
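For anyone curious where that choice lives: kernel image compression is a build-time option, so a `.config` fragment along these lines (option names from mainline Kconfig) selects xz:

```
# "Kernel compression mode" under General setup in menuconfig
CONFIG_KERNEL_XZ=y
# alternatives include CONFIG_KERNEL_GZIP, CONFIG_KERNEL_LZO, CONFIG_KERNEL_LZ4
```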



    • #12
      Another Canonical breakthrough!



      • #13
        Originally posted by FireBurn View Post
        I've been compressing my kernels with xz for years; it was certainly the smallest option when I picked it back then. They're 8 MB each with the firmware and drivers baked in - no modules
        I use xz on Arch installations too. I don't use the autodetect hook for portability and because it bugged out once so I'll never trust autodetect again. xz keeps the non-autodetect initrd nice and small.

        Way to go, Dimitri John Ledkov! I remember you from the Ubuntu mailing list. One of the few progressives left at Canonical, so it's nice to see something you do finally not be blocked by grumpy people.
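For reference, on Arch that compressor choice is a one-liner in `/etc/mkinitcpio.conf` (`COMPRESSION` and `COMPRESSION_OPTIONS` are real mkinitcpio settings; the `-9e` level here is just an illustrative choice):

```
# /etc/mkinitcpio.conf
COMPRESSION="xz"
COMPRESSION_OPTIONS=(-9e)
```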



        • #14
          According to this page: https://github.com/facebook/zstd

          Decompress speed:
          lz4: 4220 MB/s
          gzip-1: 440 MB/s
          zstd-1: 1360 MB/s

          So even gzip-1 should be good enough unless you have an NVMe drive that can do 1 GB/s+

          Personally I would rather save the space than gain a fraction of a second in speed...

          zstd-1 would be optimal though. (Edit: from the candidates in that table)

          Edit1:
          Let's take a hypothetical image that is 20 MB in size:
          Compress with zstd-1:
          6.93 MB
          Compress with lz4:
          9.52 MB
          Compress with gzip-1:
          7.29 MB

          Load time:
          zstd-1:
          5.099 ms
          lz4:
          2.255 ms
          gzip-1:
          16.571 ms

          Saving 14 ms during boot (only on NVMe) feels like optimizing the wrong thing ...

          Edit2:
          Ok, so those values are for an i9-9900K and so on.

          So let's say my crappy laptop is 10x slower than that machine. (The crappy laptop probably doesn't even have an SSD, in which case (decompress speed / compression ratio) is king, as Compartmentalisation rightly pointed out.)

          Now the load times (assume an SSD for the crappy laptop so disk I/O doesn't break the math):
          Load time:
          zstd-1:
          50.99 ms
          lz4:
          22.55 ms
          gzip-1:
          165.71 ms

          So now gzip-1 is 143.16 ms slower than lz4 at boot. Still not even close to a second ...
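If it helps, my back-of-envelope math above can be sketched like this (ratios and decompression speeds copied from the zstd README benchmark table for the i9-9900K; "load time" here is just compressed size divided by decompression speed, ignoring disk reads):

```python
# Ratios and decompression speeds from the zstd README benchmark table.
ratios = {"zstd-1": 2.884, "lz4": 2.101, "gzip-1": 2.743}
speeds_mb_s = {"zstd-1": 1360, "lz4": 4220, "gzip-1": 440}
image_mb = 20.0  # hypothetical image size

for name, ratio in ratios.items():
    compressed_mb = image_mb / ratio
    decompress_ms = compressed_mb / speeds_mb_s[name] * 1000
    print(f"{name}: {compressed_mb:.2f} MB compressed, {decompress_ms:.3f} ms")
```

Multiply the millisecond column by 10 for the hypothetical slow laptop below.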
          Last edited by Raka555; 06-06-2019, 04:38 PM.



          • #15
            Originally posted by Raka555 View Post
            According to this page : https://github.com/facebook/zstd

            Decompress speed:
            lz4: 4220 MB/s
            gzip-1: 440 MB/s
            zstd-1: 1360 MB/s

            So even gzip-1 should be good enough unless you have an NVMe drive that can do 1 GB/s+

            Personally I would rather save the space than gain a fraction of a second in speed...

            zstd-1 would be optimal though.
            Note those numbers were gathered on a Core i9-9900K CPU @ 5.0GHz. A laptop is going to be significantly slower.



            • #16
              Originally posted by Raka555 View Post
              According to this page : https://github.com/facebook/zstd

              Decompress speed:
              lz4: 4220 MB/s
              gzip-1: 440 MB/s
              zstd-1: 1360 MB/s

              So even gzip-1 should be good enough unless you have an NVMe drive that can do 1 GB/s+

              Personally I would rather save the space than gain a fraction of a second in speed...

              zstd-1 would be optimal though.
              zstd -19 would be optimal for most use cases, or zstd -2 if you have to compress really quickly. *signed by the zstd evangelists*

              Seriously though, you missed that the image goes from disk to memory. Say you get 50% compression: then you only need to read half the size from the disk, which is why a compressed initramfs loads faster than a plain image. On an NVMe disk there is less need to compress it, but on a slow HDD you really want compression.
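A toy model of that point (the disk speeds here are made-up illustrative numbers, not benchmarks; lz4's ratio and speed are from the table above): total load time is the disk read of the compressed image plus the decompression, so compression pays off most when the disk is slow:

```python
def load_ms(image_mb, ratio, decomp_mb_s, disk_mb_s):
    """Read the compressed image from disk, then decompress it."""
    compressed_mb = image_mb / ratio
    return (compressed_mb / disk_mb_s + compressed_mb / decomp_mb_s) * 1000

image_mb = 20.0
for disk, disk_mb_s in [("HDD ~100 MB/s", 100), ("NVMe ~2000 MB/s", 2000)]:
    plain = image_mb / disk_mb_s * 1000               # uncompressed: read only
    lz4 = load_ms(image_mb, 2.101, 4220, disk_mb_s)   # lz4 ratio/speed from above
    print(f"{disk}: plain {plain:.1f} ms, lz4 {lz4:.1f} ms")
```

On the slow disk compression saves roughly 100 ms of the 200 ms read; on fast NVMe the gap nearly vanishes, which is exactly the point above.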



              • #17
                Originally posted by smitty3268 View Post

                Note those numbers were gathered on a Core i9-9900K CPU @ 5.0GHz. A laptop is going to be significantly slower.
                Everyone has one...

                That's what you need to get a decent experience when running GNOME and a modern browser.



                • #18
                  Originally posted by Compartmentalisation View Post

                  Seriously though, you missed that the image goes from disk to memory. Say you get 50% compression: then you only need to read half the size from the disk, which is why a compressed initramfs loads faster than a plain image. On an NVMe disk there is less need to compress it, but on a slow HDD you really want compression.
                  I assumed an SSD, as that is the only case where faster decompression would matter; otherwise the storage is the bottleneck and your argument comes into play.



                  • #19
                    Heh, I have been building kernels ... in this way for 5 years

                    https://www.netext73.pl/



                    • #20
                      Now I'm wondering what the kernel time from systemd-analyze means in the output on my laptop:
                      Startup finished in 3.371s (kernel) + 11.628s (userspace) = 14.999s
                      Is that time counted from when the kernel decompressor starts, or only after the kernel is decompressed?
                      Is the systemd-analyze tool able to show a difference between these compression / decompression algorithms?

