Zstd-Compressing The Linux Kernel Has Been Brought Up Again


  • Guest
    Guest replied
    Originally posted by dwagner View Post
    Then by all means tell them how to wait even less, using S3 sleep instead of rebooting.
    What if you need to reboot, for instance after upgrading the kernel? I prefer shutting my computer down, because I've experienced high battery drain when suspending to RAM.

    Leave a comment:


  • eva2000
    replied
    Originally posted by omgold View Post
    The results shown match very well with mine (which I posted here quite a while ago, but can't find currently). To answer your question, lz4 is the best choice at low compression levels (fastest for a given compression ratio).

    The conclusion is that the only algos worth keeping are xz, zstd and lz4.
    Indeed, though for me it would be pxz (multi-threaded xz), pigz (multi-threaded gzip), pbzip2 (multi-threaded bzip2) and zstd - these are my go-to tools for compression, as zstd's negative compression levels can also be used for faster compression speed at the expense of compression ratio/size.

    Leave a comment:
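The negative-level trick mentioned above can be sketched on the command line like this (a sketch, assuming the zstd CLI is installed; file names are illustrative):

```shell
# Generate a compressible sample file (name and contents are illustrative).
seq 1 100000 > sample.log

# Default level 3 vs. a negative "fast" level: --fast=3 maps to level -3,
# trading compression ratio for speed.
zstd -3 -q -k -f -o sample.default.zst sample.log
zstd --fast=3 -q -k -f -o sample.fast.zst sample.log

# Both round-trip losslessly; the fast variant is typically larger.
zstd -d -q -f -o sample.roundtrip sample.fast.zst
cmp sample.log sample.roundtrip && echo "round-trip OK"
```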


  • ms178
    replied
    Originally posted by k1e0x View Post

    There is a trend to always chase the "new shiny" and say: oh, the Macintosh is out, the terminal is obsolete, GUI only. Or Plan 9 makes Unix obsolete! Sometimes the new thing really sucks and people don't see the value in the old methods. We don't advance as fast as we think.

    So be careful here.. however with this one I agree with you. Zstd or LZ4 sounds more optimal.
    I understand that the new and shiny thing is not always really the best solution; that is exactly why I want hard facts proving its usefulness (and especially in this case, it is not too hard to measure).

    Leave a comment:


  • omgold
    replied
    Originally posted by fuzz View Post
    I was thinking zstd would be great for compressed zswap too.
    It is available for zram on newer kernels.

    Leave a comment:
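For reference, on kernels whose zram driver supports zstd, switching the compressor is a sysfs setting (requires root, so this is shown for illustration only; device name and sizes are examples, not recommendations):

```shell
# Must be done before the zram device is initialized.
modprobe zram
cat /sys/block/zram0/comp_algorithm        # e.g. "lzo lz4 [zstd]" when supported
echo zstd > /sys/block/zram0/comp_algorithm
echo 2G > /sys/block/zram0/disksize
mkswap /dev/zram0 && swapon -p 100 /dev/zram0
```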


  • omgold
    replied
    Originally posted by Raka555 View Post
    Pity lz4 wasn't in your test.
    The results shown match very well with mine (which I posted here quite a while ago, but can't find currently). To answer your question, lz4 is the best choice at low compression levels (fastest for a given compression ratio).

    The conclusion is that the only algos worth keeping are xz, zstd and lz4.

    Leave a comment:
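The conclusion above is easy to spot-check with a rough size comparison (a sketch, assuming the gzip, xz and zstd CLIs are installed; a proper benchmark should also measure compression and decompression time):

```shell
# Build a compressible corpus (contents are arbitrary).
seq 1 200000 > corpus.txt

gzip -6 -c corpus.txt > corpus.gz
xz -6 -c corpus.txt > corpus.xz
zstd -3 -q -c corpus.txt > corpus.zst

# Compare sizes: on typical text, xz compresses tightest but slowest,
# while zstd beats gzip on both ratio and speed at comparable levels.
wc -c corpus.txt corpus.gz corpus.xz corpus.zst
```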


  • eva2000
    replied
    Originally posted by Raka555 View Post

    I am curious as to what you have decided to use for your backups?
    In my backup scripts I use zstd or pigz (multi-threaded gzip), depending on my needs. I've also tried switching Linux logrotate's default from gzip to zstd https://community.centminmod.com/thr...g-sizes.16371/ and the disk space savings on very large logs are quite noticeable.

    Note though that zstd by default can consume quite a bit more memory than lz4, pigz or pbzip2 (multi-threaded bzip2), so it might not suit all situations out of the box unless you tune zstd's options to use 33-66% less memory, at some expense to compression ratio at higher compression levels.

    Also, when comparing zstd, note the specific version used, as releases from 1.3.4 onward have had very noticeable performance improvements https://github.com/facebook/zstd/releases

    Leave a comment:
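The memory tuning described above maps to zstd's advanced parameters: capping the match-window log shrinks both compressor and decompressor memory at some cost in ratio (a sketch; wlog=23, i.e. an 8 MiB window, is just an example value, not a recommendation):

```shell
seq 1 100000 > big.log

# Level 19 normally wants a large window; wlog=23 caps it at 2^23 bytes.
zstd -19 --zstd=wlog=23 -q -k -f -o big.capped.zst big.log

# The decompressor can likewise be given a hard memory cap.
zstd -d --memory=32MB -q -f -o big.out big.capped.zst
cmp big.log big.out && echo "round-trip OK"
```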


  • stormcrow
    replied
    Originally posted by dwagner View Post
    And look how easily this problem is solved by e.g. Android devices, as they just do not boot that often. My phone certainly takes a lot of time - like 30 seconds - to reboot. Do I mind? No, because that does not happen frequently enough to be of relevance.

    It is precisely optimization of decompression implementations that can easily introduce additional risks, because every safety "if"-clause in the decompression code path costs time, serving only to protect against malicious inputs.
    You're taking a single use case in a very broad market. Android is the most visible case of Linux in the computing-device market, but I rather struggle to call it "embedded" without being very loose with the term. Tablets and smart phones are more like single-slab laptops these days, with almost as much computing power in the latest generation as a laptop from just a few years ago. Most embedded devices do indeed not reboot or cold boot very often, but they do have to every so often, either after an update or to clear technical or mechanical glitches. In those cases it's very desirable to cut the time from power-on to usable from several minutes down to a minute and a half, a minute, or less, depending on which compression system is being used - yes, even on a smart phone.

    As for your security assertions, do you have evidence of such tight timing-integrity attacks being carried out in practice, rather than in theory?

    Leave a comment:


  • tuxd3v
    replied
    Originally posted by stormcrow View Post

    Not really. What may take 1s on the latest Intel/AMD x64 hardware could be much more of a factor (like tens of seconds to minutes) on low-power embedded hardware, which is probably the biggest sector using the Linux kernel. It's not about how fast your laptop boots, um... unless it's like my vintage HP Centrino Duo laptop I use for a terminal, but I don't really care much there either... it's about how long it takes a Blackfin controller, or a low-power ARM32 board (like early R-Pi models), to unpack its kernel before it can even start to boot. Leaving the kernel uncompressed isn't an option either, as non-volatile storage on embedded devices isn't always large enough and/or the boot loader may have kernel size limits.
    (...)
    If you go by that, take a look at the ARM64 speeds with lz4 at the bottom of the page.
    Last edited by tuxd3v; 10 June 2019, 05:36 PM. Reason: typos

    Leave a comment:
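For context, the kernel compression choice this thread is arguing about is a Kconfig option. A .config fragment might look like this (CONFIG_KERNEL_LZ4 is already upstream; CONFIG_KERNEL_ZSTD is the option the patches under discussion would add, so its name here is an assumption until they land):

```
# CONFIG_KERNEL_GZIP is not set
# CONFIG_KERNEL_XZ is not set
CONFIG_KERNEL_LZ4=y
# CONFIG_KERNEL_ZSTD is not set (hypothetical until the patches land)
```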


  • Zan Lynx
    replied
    Arguing about this is dumb. If it is faster, that is good for no other reason than it is faster. No purpose is ever served by going slower.

    Faster! Faster!

    Leave a comment:


  • dwagner
    replied
    Originally posted by stormcrow View Post
    What may take 1s on the latest Intel/AMD x64 hardware, could be much more of a factor (like tens of seconds to minutes) on low power embedded hardware, which is probably the biggest sector using the Linux kernel.
    And look how easily this problem is solved by e.g. Android devices, as they just do not boot that often. My phone certainly takes a lot of time - like 30 seconds - to reboot. Do I mind? No, because that does not happen frequently enough to be of relevance.

    As for Zstd from a security perspective, you do realize zstd is already in the Linux kernel in different spots as it is. Btrfs uses it, as does squashfs. The algorithms (as opposed to the implementation, don't conflate the two) have been around for decades already; the zstd group just put them together in a unique way to produce better performance than previous incarnations.
    It is precisely optimization of decompression implementations that can easily introduce additional risks, because every safety "if"-clause in the decompression code path costs time, serving only to protect against malicious inputs.

    Leave a comment:
