Ubuntu Moving Ahead With Compressing Their Kernel Image Using LZ4


  • Raka555
    replied
    Originally posted by phuclv

    One of my previous employees switched to LZ4 years ago for embedded ARM devices after benchmarking various compression options and finding that the space-time trade-off was good enough.
    Time for him/her to re-do that investigation with zstd in the mix.



  • phuclv
    replied
    Originally posted by Raka555
    So now it is 138.02 ms faster at boot time. Still not even close to a second ...
    One of my previous employees switched to LZ4 years ago for embedded ARM devices after benchmarking various compression options and finding that the space-time trade-off was good enough.



  • hotaru
    replied
    Originally posted by LoveRPi
    In the modern era of SSD and eMMC based drives, lz4 is optimally suited. It's the only compression and decompression method that can saturate these devices.
    bzip2 would be better if the kernel implemented multi-threaded decompression: a much better compression ratio, and with a decent number of cores it's faster than lz4.
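    The multi-threaded bzip2 idea can be sketched in Python (a toy illustration, not how the kernel decompressor actually works): if the image is stored as independent bzip2 streams, those streams can be decompressed concurrently and reassembled. CPython's bz2 module releases the GIL while (de)compressing, so even plain threads can overlap the work; the chunk size and worker count below are arbitrary assumptions.

    ```python
    import bz2
    from concurrent.futures import ThreadPoolExecutor

    def compress_chunked(data: bytes, chunk_size: int = 1 << 20) -> list[bytes]:
        """Compress each chunk as an independent bzip2 stream so the chunks
        can later be decompressed in parallel (costs a little ratio)."""
        chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
        return [bz2.compress(c) for c in chunks]

    def decompress_parallel(streams: list[bytes], workers: int = 4) -> bytes:
        """Decompress independent bzip2 streams concurrently and reassemble."""
        with ThreadPoolExecutor(max_workers=workers) as pool:
            return b"".join(pool.map(bz2.decompress, streams))

    payload = b"pretend this is a kernel image " * 100_000  # ~3 MB stand-in
    assert decompress_parallel(compress_chunked(payload)) == payload
    ```

    Splitting into independent streams trades a bit of compression ratio for parallelism, which is exactly the trade-off a multi-threaded kernel decompressor would have to make.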



  • uxmkt
    replied
    Originally posted by Raka555
    According to this page : https://github.com/facebook/zstd
    Decompress speed:
    lz4: 4220 MB/s
    gzip-1: 440 MB/s
    zstd-1: 1360 MB/s
    Cannot reproduce those supposed decompression speeds. (Then again, I'm using a binary executable corpus, not the Silesia text corpus!)
    zstd 1.4.0 gives about 3300-3600 MB/s across all its 19 levels, and lz4 1.9.1 gives 2400-3000 MB/s across all 12 levels (gzip: 1600-1700 MB/s). (CPU: Ryzen 1700, all 16 threads busy with decompression.)
    Last edited by uxmkt; 06-07-2019, 05:49 AM.
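    For anyone who wants to repeat this at home, a minimal single-threaded benchmark can be put together with Python's stdlib codecs. lz4 and zstd are not in the stdlib; the third-party `lz4` and `zstandard` packages would plug into the same loop (that pairing is an assumption), and absolute numbers will differ from the C CLI tools — the point is only the relative comparison on your own corpus.

    ```python
    import bz2, lzma, time, zlib

    def bench_decompress(name, compress, decompress, data, repeats=3):
        """Return (compression ratio, decompression MB/s) for one codec."""
        blob = compress(data)
        start = time.perf_counter()
        for _ in range(repeats):
            out = decompress(blob)
        elapsed = time.perf_counter() - start
        assert out == data  # sanity: round-trip must be lossless
        ratio = len(blob) / len(data)
        mb_s = len(data) * repeats / elapsed / 1e6
        print(f"{name:8s} ratio={ratio:.2f} decompress={mb_s:,.0f} MB/s")
        return ratio, mb_s

    # Small synthetic corpus; substitute a real kernel image for meaningful numbers.
    corpus = bytes(range(256)) * 4_000
    for name, c, d in [
        ("gzip-1", lambda b: zlib.compress(b, 1), zlib.decompress),
        ("bzip2",  bz2.compress, bz2.decompress),
        ("xz",     lzma.compress, lzma.decompress),
    ]:
        bench_decompress(name, c, d, corpus)
    ```

    As uxmkt's numbers show, the corpus matters as much as the codec, so bench against the actual payload you intend to compress.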



  • discordian
    replied
    Originally posted by Danny3
    Now I'm wondering what the kernel time from systemd-analyze means in the command output on my laptop:
    Startup finished in 3.371s (kernel) + 11.628s (userspace) = 14.999s
    Is the time counted from when the kernel decompressor starts or is the time counted after the kernel is decompressed?
    Is systemd-analyze tool able to show a difference between these compression / decompression algorithms?
    Decompression happens before any logs are available; the most you could get is some messages from an early serial driver (which has to be configured at compile time). You would have to measure it yourself.
    The decompressor is really a layer around the kernel image, kind of like a self-extracting zip/rar archive. On one of my ARM boards this is painfully obvious: the bootloader cannot initialize the voltage regulators, so the CPU runs at a laughable fraction of its speed while decompressing.

    You could, however, see the time it takes to decompress the initramdisk in the logs.



  • Danny3
    replied
    Now I'm wondering what the kernel time from systemd-analyze means in the command output on my laptop:
    Startup finished in 3.371s (kernel) + 11.628s (userspace) = 14.999s
    Is the time counted from when the kernel decompressor starts or is the time counted after the kernel is decompressed?
    Is systemd-analyze tool able to show a difference between these compression / decompression algorithms?



  • ext73
    replied
    heh I have been building kernels ... in this way for 5 years

    https://www.netext73.pl/



  • Raka555
    replied
    Originally posted by Compartmentalisation

    Seriously though, you missed that it goes from disk to memory. Say you have something with a 50% compression ratio: then you only need to read 50% of the size from disk, which is why a compressed initramfs loads faster than a plain image. It's on NVMe disks that you have the least need to compress it, but you'd really want to compress it on a slow HDD.
    I assumed SSD, as that is the only case where faster decompression would matter; otherwise the storage is the bottleneck and your argument comes into play.



  • Raka555
    replied
    Originally posted by smitty3268

    Note those numbers were gathered on a Core i9-9900K CPU @ 5.0GHz. A laptop is going to be significantly slower.
    Everyone has one...

    That's what you need to get a decent experience when running GNOME and a modern browser.



  • Compartmentalisation
    replied
    Originally posted by Raka555
    According to this page : https://github.com/facebook/zstd

    Decompress speed:
    lz4: 4220 MB/s
    gzip-1: 440 MB/s
    zstd-1: 1360 MB/s

    So even gzip-1 should be good enough unless you have an NVMe drive that can do 1 GB/s+.

    Personally I would rather save the space than get a fraction-of-a-second speed increase...

    zstd-1 would be optimal though.
    zstd -19 would be optimal for most use cases, or zstd -2 if you have to compress really quickly. *signed by the zstd evangelists*

    Seriously though, you missed that it goes from disk to memory. Say you have something with a 50% compression ratio: then you only need to read 50% of the size from disk, which is why a compressed initramfs loads faster than a plain image. It's on NVMe disks that you have the least need to compress it, but you'd really want to compress it on a slow HDD.
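    That disk-vs-CPU argument is easy to put into numbers. The sketch below models load time as read time plus decompression time; every figure in it (70 MB image, 50% ratio, 4 GB/s lz4 decompression, the disk throughputs) is an illustrative assumption, not a measurement:

    ```python
    def load_time_s(image_mb, ratio, disk_mb_s, decomp_mb_s=None):
        """Seconds to read a (possibly compressed) image from disk and,
        if compressed, decompress it in memory afterwards."""
        read = image_mb * ratio / disk_mb_s
        decomp = image_mb / decomp_mb_s if decomp_mb_s else 0.0
        return read + decomp

    IMAGE_MB = 70                      # assumed uncompressed kernel+initramfs size
    LZ4_RATIO, LZ4_MB_S = 0.50, 4000   # assumed ~50% ratio, ~4 GB/s decompression

    for disk, speed in [("HDD", 120), ("SATA SSD", 550), ("NVMe", 3000)]:
        plain = load_time_s(IMAGE_MB, 1.0, speed) * 1000
        lz4 = load_time_s(IMAGE_MB, LZ4_RATIO, speed, LZ4_MB_S) * 1000
        print(f"{disk:9s} plain={plain:6.1f} ms  lz4={lz4:6.1f} ms")
    ```

    With these assumed numbers, compression cuts the HDD load time nearly in half, while on a 3 GB/s NVMe drive the decompression step costs more than the read time it saves — both sides of this thread in one table.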

