Zstd'ing The Kernel Might See Mainline With Linux 5.9 For Faster Boot Times

  • atomsymbol
    replied
    Originally posted by yoshi314 View Post
    smaller (compressed) kernel with fast unpack speed = faster boot. zstd is designed to decompress quickly, pretty much on par with gzip. it actually seems to pull ahead in decompression in benchmarks.

    on low-end embedded hardware things might be different, though.
    Just a note/observation:

    The math expression for evaluating Linux kernel compressors should be: T_read + T_decompress. Unless there is a good reason to leave T_read out of the equation, it should always be present.

    With an NVMe SSD and no compression (T_decompress == 0), T_read is very small. Assuming the size of vmlinux.bin is 32 MB, T_read_32MB_nvme = 32 MB / (2 GB/s) = 16 milliseconds.

    With a SATA SSD, T_read_32MB_sata = 32 MB / (500 MB/s) = 64 milliseconds.

    In conclusion: if /boot is on an NVMe SSD, then either avoid compressing vmlinux.bin altogether, or use the fastest decompressor irrespective of how efficient the compression algorithm is.
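
    To make the trade-off concrete, here is a minimal Python sketch of the T_read + T_decompress model above. The compression ratios and decompression throughputs are placeholder assumptions (not benchmarks), and decompression time is charged against the uncompressed size; plug in numbers for your own hardware.

    ```python
    # Sketch of the T_read + T_decompress model. All ratios and throughputs
    # below are illustrative placeholders, not measured benchmarks.
    UNCOMPRESSED_MB = 32  # assumed vmlinux.bin size, as in the example above

    # name -> (assumed compression ratio, assumed decompression MB/s of output)
    COMPRESSORS = {
        "none": (1.0, None),   # no decompression step (T_decompress == 0)
        "gzip": (2.7, 300.0),
        "zstd": (2.9, 700.0),
        "xz":   (3.3, 80.0),
    }

    def boot_cost_ms(read_mb_per_s, ratio, decomp_mb_per_s):
        """Return T_read + T_decompress in milliseconds."""
        t_read = (UNCOMPRESSED_MB / ratio) / read_mb_per_s * 1000.0
        t_decomp = 0.0 if decomp_mb_per_s is None else UNCOMPRESSED_MB / decomp_mb_per_s * 1000.0
        return t_read + t_decomp

    for media, read_bw in (("nvme", 2000.0), ("sata", 500.0)):
        for name, (ratio, decomp) in COMPRESSORS.items():
            print(f"{media:5s} {name:5s} {boot_cost_ms(read_bw, ratio, decomp):7.1f} ms")
    ```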

  • CommunityMember
    replied
    Originally posted by NateHubbard View Post
    Although I think this is a nice development, I'm not entirely understanding why so many people are so excited by this particular use of zstd.
    In addition to the low-end IoT use cases, every (fraction of a) second shaved off boot time can be significant for cloud-scale services that may boot thousands of systems and VMs per minute to spin up new and/or replacement instances for their workloads.

  • discordian
    replied
    Originally posted by NateHubbard View Post
    Although I think this is a nice development, I'm not entirely understanding why so many people are so excited by this particular use of zstd.
    Even using xz for the kernel, it appears to decompress and start booting pretty instantaneously.
    I'm not sure I'd even notice the difference here.
    Again, I am glad they will be adding the option though.
    Think of static kernels with tons of modules, and think of embedded SoCs (my RK3288 runs at a tiny fraction of its normal clock until the external voltage regulators are configured, and the bootloader doesn't handle them).

    But the initrd support is more critical IMHO, because I'd want to eradicate all (de)compressors except zstd and lz4.
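
    As a quick sanity check on which (de)compressors a given kernel build actually carries, one can grep its config. A minimal Python sketch, assuming the running kernel exposes /proc/config.gz (CONFIG_IKCONFIG_PROC) and that the zstd series adds CONFIG_KERNEL_ZSTD / CONFIG_RD_ZSTD as proposed:

    ```python
    # Minimal sketch: list the kernel-image and initramfs compression options
    # enabled in the running kernel's config. Assumes /proc/config.gz exists
    # (CONFIG_IKCONFIG_PROC); the CONFIG_KERNEL_ZSTD / CONFIG_RD_ZSTD names are
    # those used by the zstd patch series and may differ until it is merged.
    import gzip
    import re

    pattern = re.compile(r"^(CONFIG_(?:KERNEL|RD)_[A-Z0-9]+)=(y|m)$")

    with gzip.open("/proc/config.gz", "rt") as cfg:
        for line in cfg:
            match = pattern.match(line.strip())
            if match:
                print(f"{match.group(1)}={match.group(2)}")
    ```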

  • NateHubbard
    replied
    Although I think this is a nice development, I'm not entirely understanding why so many people are so excited by this particular use of zstd.
    Even using xz for the kernel, it appears to decompress and start booting pretty instantaneously.
    I'm not sure I'd even notice the difference here.
    Again, I am glad they will be adding the option though.

  • R41N3R
    replied
    Finally! I've wanted to use this for a very long time :-)

  • tildearrow
    replied
    gzip is like H.264 (fast but not that efficient)
    xz is like AV1 (very efficient but extremely slow)
    zstd is like VP9 or H.265 (fast and efficient)

  • discordian
    replied
    Originally posted by evergreen View Post

    I managed to get the decompression code down to ~15 KB with just a few macro definitions and compilation flags (documented on the project's page).
    Really? I didn't get anywhere near that when I tried (back then). I suppose you give up a good chunk of speed with those settings.

  • LoveRPi
    replied
    Originally posted by NotMine999 View Post
    I didn't think size matters anymore since current computers have gigabytes of memory and terabytes of storage ... with the obvious exception for parts of the embedded world.
    The embedded world is still waiting for LZ4 on BTRFS.

  • stormcrow
    replied
    Originally posted by yoshi314 View Post
    smaller (compressed) kernel with fast unpack speed = faster boot. zstd is designed to decompress quickly, pretty much on par with gzip. it actually seems to pull ahead in decompression in benchmarks.

    on low-end embedded hardware things might be different, though.
    On embedded devices, whether that's an advantage will depend on whether the CPU (or another processor) has an optimized implementation of zstd/gzip/whatever. In nearly any case though, a compressed kernel is often necessary, as some systems won't boot if the kernel exceeds a certain file size limit; this includes some more familiar x86-64 based systems.

  • NotMine999
    replied
    I didn't think size matters anymore since current computers have gigabytes of memory and terabytes of storage ... with the obvious exception for parts of the embedded world.

    We are talking Linux kernels here, code that is typically a few MB in size, depending on what features have been compiled into it. Compressing that to make it smaller so it will load faster? WTF

    To me it sounds like the storage subsystem is slow or poorly optimized at loading a few MB of code. Perhaps that is where this exercise should look.

    Now I am assuming here that the CPU is adequate for the task, as decently performing Intel and AMD processors can be bought for 100 to 200 USD.

    Perhaps the code being loaded and executed is not optimal for the target processor in question? This exercise should also look at that.

    All this bikeshedding over compression algorithms ... for what? It's just a band-aid. Such time and energy are better spent on more important things.
