Ubuntu 19.10 To Boot Faster Thanks To LZ4 Compression


  • skeevy420
    replied
    Originally posted by markg85:

    Sorry, but your TL;DR is moot. You compare it to xz, which is not what we're talking about.
    Anyhow, I did look at your link and you do get very impressive results!
    I'm not adventurous enough to try out kernel patches, so I'll just take your word for it.

    To be clear though: booting my machine with NVMe is already insanely fast, mere seconds from cold boot to login. I'm very sure the initramfs part takes about half a second... So ehh :P
    Yeah well, it's fast. Compression could make it a tiny bit faster, but still nothing I'd actually notice, I think.
    On my PC with spinning drives, ZSTD does make a difference over XZ; it's barely noticeable, but enough to mention because it could matter for really low-end devices.

    ZSTD compared to LZ4 on boot: I can't tell a difference between the two until I look at raw numbers from benchmarks.

    It's how well ZSTD compresses at the really high --fast modes that is really piquing my interest. My ramdisks would love that.



  • skeevy420
    replied
    Originally posted by markg85:
    I can see this being a benefit on SSDs and HDDs, as the decompression speed in memory simply outperforms the time it would have taken to read the raw data from the source.

    But I wonder how that equation holds when looking at NVMe M.2 with 3.2 GB/s!
    Raw throughput is not what one should look at here: 3.2 GB/s is insanely fast, but you only get that with large files.
    You don't get anywhere near that number when you have loads of small files, which initramfs has!
    The more realistic number is the 4k random read on those storage devices, which is closer to ~250 MB/s. Still darn impressive!
    But would using a compressed LZ4 (or zstd) image be faster in this case? I really don't know.
    I'm "guessing" the compressed version could win, but probably not by large margins. This likely depends entirely on the 4k performance of your NVMe.
    LZ4 compresses at ~4500 Mbps and decompresses at around 5500 Mbps. ZSTD can be tuned to compress faster and to decompress at LZ4's speed while maintaining higher compression ratios (based on zstd compressing the 5.2 kernel image, not what's in the kernel). The kernel & BTRFS use ZSTD 1.3.3, whereas upstream ZSTD is at 1.4.2, and 1.3.4 is the version that introduced the --fast settings. It's backwards compatible, so --fast-compressed data works with those kernel patches above, but that's moot in regards to ramdisks and SSD on-the-fly compression, where --fast would likely have a greater benefit than standard ZSTD.

    Since they both can operate within the range of an SSD's top speed, especially when tuned, ZSTD wins: it always had the higher compression ratio when both compressors were tuned so that they had similar compression/decompression speeds.

    I really, really wish the kernel and BTRFS had some sort of native zstd_extra_fast mode that set --fast=something_really_high by default, because that would really shine for both ramdisks and SSDs wanting to use compression.
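    As a back-of-the-envelope sketch of the argument above (all sizes, ratios, and speeds are illustrative assumptions, not benchmarks): if a tuned ZSTD really does reach LZ4-class decompression speed while compressing smaller, the smaller on-disk read is what decides it:

```python
# Time to load a compressed image = read the compressed bytes + decompress to RAM.
# Every figure below is an illustrative assumption, not a measurement.

def load_time(image_mb, ratio, read_mbps, decomp_mbps):
    """Seconds to read a compressed image from disk and decompress it."""
    return (image_mb / ratio) / read_mbps + image_mb / decomp_mbps

IMAGE_MB = 60      # assumed uncompressed initramfs size
SSD_READ = 500     # assumed SATA SSD sequential read, MB/s

# Assumption from the post: tuned ZSTD matches LZ4's decompression speed
# but achieves a higher compression ratio.
lz4_s  = load_time(IMAGE_MB, ratio=2.0, read_mbps=SSD_READ, decomp_mbps=4500)
zstd_s = load_time(IMAGE_MB, ratio=2.8, read_mbps=SSD_READ, decomp_mbps=4500)

print(f"lz4 : {lz4_s * 1000:.0f} ms")
print(f"zstd: {zstd_s * 1000:.0f} ms")
```

    At equal decompression speed the comparison reduces to which format reads fewer bytes, which is why the higher ratio wins once both decompressors outrun the drive.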



  • starshipeleven
    replied
    Originally posted by uid313:
    On my development machine I have Apache and PostgreSQL installed, so they get started every time I boot even though I rarely use them, instead of being started on demand through socket activation.
    That's your own problem, though. Nothing prevents you from dropping the default service units (designed for server usage) and writing new ones that fit your own usage, using systemd's socket activation feature.
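    A minimal sketch of what that looks like, using a hypothetical `myapp` daemon (unit names, the port, and the binary path are illustrative, and the daemon must actually support socket activation):

```ini
# /etc/systemd/system/myapp.socket -- systemd listens on the port at boot
[Unit]
Description=Socket for myapp (illustrative example)

[Socket]
ListenStream=8080

[Install]
WantedBy=sockets.target

# /etc/systemd/system/myapp.service -- only started on the first connection
[Unit]
Description=myapp (illustrative example)

[Service]
ExecStart=/usr/local/bin/myapp
```

    After `systemctl enable --now myapp.socket`, the daemon itself stays stopped until something actually connects to port 8080, so it no longer adds to boot time.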



  • markg85
    replied
    Originally posted by skeevy420:
    Huh, here's a link to my benchmarks on this and the patches I made based on kilobyte's work.

    The TL;DR is that ZSTD has ~half the compression speed and the same decompression speed as LZ4 when tuned with --fast=2 to --fast=4, with compression ratios that equal xz. Perfectly good enough for the kernel on boot, and I personally use --fast=2 when I zstd-compress my kernel images.

    Not benchmarked is when one uses insane-sounding values like --fast=500000. IIRC, it starts becoming as fast as or faster than lz4, while still compressing better than lz4, somewhere around --fast=1000 or --fast=10000. Yes, those are real & valid values...

    ZSTD kicks ass in this department.

    Try it for yourself if you don't believe my benchmarks.
    Sorry, but your TL;DR is moot. You compare it to xz, which is not what we're talking about.
    Anyhow, I did look at your link and you do get very impressive results!
    I'm not adventurous enough to try out kernel patches, so I'll just take your word for it.

    To be clear though: booting my machine with NVMe is already insanely fast, mere seconds from cold boot to login. I'm very sure the initramfs part takes about half a second... So ehh :P
    Yeah well, it's fast. Compression could make it a tiny bit faster, but still nothing I'd actually notice, I think.



  • uid313
    replied
    I doubt the boot speed is affected as much by the decompression speed of the kernel image as it is by waiting for services such as daemons to start.

    On my development machine I have Apache and PostgreSQL installed, so they get started every time I boot even though I rarely use them, instead of being started on demand through socket activation.
    Last edited by uid313; 09-10-2019, 08:06 AM.



  • mathew7
    replied
    Originally posted by markg85:
    I can see this being a benefit on SSDs and HDDs, as the decompression speed in memory simply outperforms the time it would have taken to read the raw data from the source.

    But I wonder how that equation holds when looking at NVMe M.2 with 3.2 GB/s!
    Raw throughput is not what one should look at here: 3.2 GB/s is insanely fast, but you only get that with large files.
    You don't get anywhere near that number when you have loads of small files, which initramfs has!
    The more realistic number is the 4k random read on those storage devices, which is closer to ~250 MB/s. Still darn impressive!
    But would using a compressed LZ4 (or zstd) image be faster in this case? I really don't know.
    I'm "guessing" the compressed version could win, but probably not by large margins. This likely depends entirely on the 4k performance of your NVMe.
    It does not matter how many small files it has: the initramfs is a single file that is read ("burst") into memory and decompressed, and... I'm not sure whether it's then accessed directly from the decompressed memory or copied into a new RAM filesystem (either way, it's all in RAM).
    With sizes of ~60 MB, it's a non-issue for any internal drive (less than 1 s difference between HDD and NVMe). The ~25% larger file is a problem on CD-ROMs, where old systems may read at less than 1 MB/s (1x CD is 150 KB/s, 1x DVD is 1.3 MB/s).
    The point of the article is that even with older drives, the extra time spent reading the larger LZ4 file is less than the time saved by LZ4's faster decompression. I.e.: x/media_speed + x/non_lz4_decompress_speed > 1.25x/media_speed + 1.25x/lz4_decompress_speed.
    Assuming anything slower than a ~30x CD drive is "sacrificed", LZ4 should be faster on any system.
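    That inequality is easy to check numerically. A sketch with assumed sizes and speeds (only the 3.2 GB/s figure comes from the thread; every other number is an illustrative assumption):

```python
# Total load time = read the compressed file + decompress it.
# Sizes in MB, speeds in MB/s; all values are illustrative assumptions.

def total_time(size_mb, media_mbps, decomp_mbps):
    return size_mb / media_mbps + size_mb / decomp_mbps

XZ_MB, XZ_DECOMP = 48, 80        # assumed xz image size and decompress speed
LZ4_MB, LZ4_DECOMP = 60, 4500    # 25% larger image, much faster decompress

for media in (1, 150, 500, 3200):  # slow optical, HDD, SATA SSD, NVMe
    xz_t = total_time(XZ_MB, media, XZ_DECOMP)
    lz4_t = total_time(LZ4_MB, media, LZ4_DECOMP)
    winner = "lz4" if lz4_t < xz_t else "xz"
    print(f"media {media:>4} MB/s: xz {xz_t:6.2f}s  lz4 {lz4_t:6.2f}s  -> {winner}")
```

    With these assumptions, only on very slow media (the CD-ROM case above) does the 25% size penalty outweigh the decompression savings; from HDD speeds upward, LZ4 wins.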



  • skeevy420
    replied
    Huh, here's a link to my benchmarks on this and the patches I made based on kilobyte's work.

    The TL;DR is that ZSTD has ~half the compression speed and the same decompression speed as LZ4 when tuned with --fast=2 to --fast=4, with compression ratios that equal xz. Perfectly good enough for the kernel on boot, and I personally use --fast=2 when I zstd-compress my kernel images.

    Not benchmarked is when one uses insane-sounding values like --fast=500000. IIRC, it starts becoming as fast as or faster than lz4, while still compressing better than lz4, somewhere around --fast=1000 or --fast=10000. Yes, those are real & valid values...

    ZSTD kicks ass in this department.

    Try it for yourself if you don't believe my benchmarks.
    Last edited by skeevy420; 09-10-2019, 08:25 AM.



  • ermo
    replied
    Having experimented with this on an old Core 2 Quad Q9400, my personal experience is that lz4 makes an even bigger difference on systems with less oomph on the CPU side.

    The difference between xz and lz4 in decompression time was obvious, and with a /boot partition of 512 MB, size is no longer the issue it might once have been, now that 128 GB SSDs have become dirt cheap compared to years ago.

    Last I checked (late in the 4.x kernel cycle), the lz4 variant used during the kernel decompression stage appeared to be the "legacy" lz4 format, which is also single-threaded.
    Last edited by ermo; 09-10-2019, 07:37 AM.
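    For reference, the build-time knobs involved are kconfig symbols along these lines (a sketch of a .config fragment; verify the symbol names against your kernel tree):

```ini
# Compress the kernel image itself with LZ4
CONFIG_KERNEL_LZ4=y
# Allow the kernel to unpack an LZ4-compressed initramfs
CONFIG_RD_LZ4=y
```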



  • markg85
    replied
    I can see this being a benefit on SSDs and HDDs, as the decompression speed in memory simply outperforms the time it would have taken to read the raw data from the source.

    But I wonder how that equation holds when looking at NVMe M.2 with 3.2 GB/s!
    Raw throughput is not what one should look at here: 3.2 GB/s is insanely fast, but you only get that with large files.
    You don't get anywhere near that number when you have loads of small files, which initramfs has!
    The more realistic number is the 4k random read on those storage devices, which is closer to ~250 MB/s. Still darn impressive!
    But would using a compressed LZ4 (or zstd) image be faster in this case? I really don't know.
    I'm "guessing" the compressed version could win, but probably not by large margins. This likely depends entirely on the 4k performance of your NVMe.
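    The small-file worry can be bounded numerically. Assume the payload were read as loose files at 4k-random speed versus as one compressed image at sequential speed (the 3.2 GB/s and ~250 MB/s figures are from the post; the payload size, ratio, and decompression speed are made-up assumptions):

```python
# Does a compressed image still win on fast NVMe? Illustrative model only;
# ratio, payload size, and decompression speed are assumptions.

FILES_MB = 60         # assumed total payload size
SEQ_MBPS = 3200       # NVMe sequential read (figure from the post)
RAND_MBPS = 250       # NVMe 4k random read (figure from the post)
LZ4_RATIO = 2.0       # assumed compression ratio
LZ4_DECOMP = 4500     # assumed LZ4 decompression speed, MB/s

# Loads of small files: throughput closer to the 4k-random number.
loose_s = FILES_MB / RAND_MBPS

# One compressed image: a single sequential read, then decompress in RAM.
packed_s = (FILES_MB / LZ4_RATIO) / SEQ_MBPS + FILES_MB / LZ4_DECOMP

print(f"loose files   : {loose_s * 1000:.0f} ms")
print(f"compressed img: {packed_s * 1000:.0f} ms")
```

    (As another reply in this thread points out, the initramfs is read as one file either way, so the small-file penalty mostly doesn't apply in practice; the sketch just bounds the two extremes.)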



  • ms178
    replied
    And I thought there was momentum building around ZSTD for this purpose...

