Zstd-Compressing The Linux Kernel Has Been Brought Up Again


  • #11
    Originally posted by dwagner View Post
    If shaving off 1s of boot time is relevant for you, then you clearly have a severe stability issue with your operating system.
    It really is relevant in a lot of cases. Modern laptops can boot in 5-10s, which means a 1s reduction is a massive improvement.
    One of my previous employers had already switched to lz4 for faster boot-up on ARM devices several years ago, because boot time there relates to user safety and user satisfaction.



    • #12
      Originally posted by eva2000 View Post

      FYI, I posted my own gzip vs xz vs zstd and other benchmarks at https://community.centminmod.com/thr...-xz-etc.17259/ - zstd has so many options beyond default which can tune it for either high compression ratios or compression/decompression speed
      Wow, very nicely done!
      It is exactly the graph I wanted to see. Glad you didn't use a log scale for the x-axis.
      Pity lz4 wasn't in your test.
      I am very surprised at how well pigz did at high-speed compression.
      Would have been awesome if you had the same graph for decompression speed vs compression ratio. (Edit: it's hard to get a good picture from the data alone.)
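      For anyone wanting to reproduce this kind of speed-vs-ratio comparison, zstd's built-in benchmark mode does most of the work; a minimal sketch, assuming the zstd CLI is installed (-b/-e pick the level range, -i0 shortens each timing run, -T0 uses all cores):

      ```shell
      # Generate a small, compressible sample file; for realistic numbers
      # use representative data such as a kernel source tarball
      seq 1 200000 > sample.txt

      # Built-in benchmark: reports compression and decompression speed
      # plus the ratio for levels 1 through 9 on the sample file
      zstd -b1 -e9 -i0 sample.txt

      # One level, all cores:
      zstd -b3 -i0 -T0 sample.txt
      ```

      Plotting ratio against speed per level gives a curve like the one in the linked thread.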



      • #13
        I was thinking zstd would be great for compressed zswap too.
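        On that point, zswap's compressor is already runtime-selectable, so once the kernel's zstd support is available it is just a parameter. A sketch, assuming CONFIG_ZSWAP and the zstd crypto module are present (parameter names as in the kernel's zswap documentation):

        ```shell
        # Boot-time, on the kernel command line:
        #   zswap.enabled=1 zswap.compressor=zstd zswap.max_pool_percent=20

        # Or at runtime through sysfs (as root):
        echo zstd > /sys/module/zswap/parameters/compressor
        echo 1 > /sys/module/zswap/parameters/enabled
        cat /sys/module/zswap/parameters/compressor
        ```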



        • #14
          Originally posted by dwagner View Post
          If shaving off 1s of boot time is relevant for you, then …
          Need motivation? Steve Jobs (an expert motivator) used the analogy of saving lives.



          • #15
            Zstd seems to be the new hotness, but is it “suitable for long term archiving” (unlike Xz)?



            • #16
              Originally posted by dwagner View Post
              If shaving off 1s of boot time is relevant for you, then you clearly have a severe stability issue with your operating system.

              BTW: (De-)compression algorithms need very thorough testing against security vulnerabilities. If you put such an algorithm into the kernel, make very, very sure that you have tested it with all kinds of random and maliciously crafted input.
              A second is a lot, considering it doesn't take much to set up your system to boot up in <4s on a cheap TLC SSD. Some people don't like to wait for these things.
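              For context, the kernel image compressor is a build-time choice; with the zstd patch series applied, the relevant .config fragment would look roughly like this (CONFIG_KERNEL_ZSTD is the option name used by the patches, which later landed in mainline 5.9):

              ```shell
              # Kernel image compression method (choose exactly one)
              CONFIG_HAVE_KERNEL_ZSTD=y
              CONFIG_KERNEL_ZSTD=y
              # CONFIG_KERNEL_GZIP is not set
              # CONFIG_KERNEL_LZ4 is not set
              # CONFIG_KERNEL_XZ is not set
              # CONFIG_KERNEL_LZO is not set
              ```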



              • #17
                Originally posted by Raka555 View Post

                Wow, very nicely done!
                It is exactly the graph I wanted to see. Glad you didn't use a log scale for the x-axis.
                Pity lz4 wasn't in your test.
                I am very surprised at how well pigz did at high-speed compression.
                Would have been awesome if you had the same graph for decompression speed vs compression ratio. (Edit: it's hard to get a good picture from the data alone.)
                Yeah, my testing is more focused on compression speed and compression ratio as they relate to backup speed, and lz4 has neither of those strengths, i.e. for the tar + zstd backup speeds I tested
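                For anyone wanting to reproduce that kind of backup run, a minimal tar + zstd sketch, assuming GNU tar and the zstd CLI are installed (-T0 lets zstd use all cores; GNU tar >= 1.31 also accepts --zstd directly):

                ```shell
                # Sample data standing in for a real backup source
                mkdir -p demo && printf 'hello\n' > demo/a.txt

                # Create the archive with multi-threaded zstd at level 3
                tar -I 'zstd -T0 -3' -cf backup.tar.zst demo

                # List the contents back to verify the archive
                tar -I zstd -tf backup.tar.zst
                ```

                Raising the zstd level trades backup speed for archive size, which is exactly the axis the benchmarks above explore.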



                • #18
                  Originally posted by LoveRPi View Post
                  Ubuntu got it right with lz4. Adding support for Zstd isn't a problem. Going forward, IO will continue to scale faster than CPU which is already the case.
                  Indeed, lz4 is good on both compression and decompression times.
                  It's the default in Oracle Database, for example; it doesn't compromise speed too much, and depending on the database content you can gain a lot with it.

                  A comparison of several algorithms



                  • #19
                    Originally posted by eva2000 View Post

                    Yeah, my testing is more focused on compression speed and compression ratio as they relate to backup speed, and lz4 has neither of those strengths, i.e. for the tar + zstd backup speeds I tested
                    I am curious as to what you have decided to use for your backups?



                    • #20
                      Originally posted by dwagner View Post
                      If shaving off 1s of boot time is relevant for you, then you clearly have a severe stability issue with your operating system.

                      BTW: (De-)compression algorithms need very thorough testing against security vulnerabilities. If you put such an algorithm into the kernel, make very, very sure that you have tested it with all kinds of random and maliciously crafted input.
                      Not really. What may take 1s on the latest Intel/AMD x86-64 hardware could be much more of a factor (tens of seconds to minutes) on low-power embedded hardware, which is probably the biggest sector using the Linux kernel. It's not about how long your laptop takes to boot (unless it's like my vintage HP Centrino Duo laptop I use as a terminal, though I don't really care much there either); it's about how long a Blackfin controller or a low-power ARM32 board (like the early R-Pi models) takes to unpack its kernel before it can even start to boot. Leaving the kernel uncompressed isn't an option either, as non-volatile storage on embedded devices isn't always large enough and/or the boot loader may have kernel size limits.

                      As for zstd from a security perspective: you do realize zstd is already in the Linux kernel in several spots as it is. Btrfs uses it, as does squashfs. The algorithms (as opposed to the implementation; don't conflate the two) have been around for decades already; the zstd group just put them together in a unique way to produce better performance than previous incarnations. Other examples of zstd being used in critical places are FreeBSD's version of OpenZFS, which also contains zstd compression similar to btrfs, and Ubuntu's deb packages, which use zstd by default since 18.10.
                      Last edited by stormcrow; 06-10-2019, 04:12 PM. Reason: Edited for a bit more clarity for embedded device issues.
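                      For reference on the btrfs case mentioned above, choosing zstd is just a mount option (supported since kernel 4.14; the explicit level suffix needs 5.1 or later):

                      ```shell
                      # Mount a btrfs filesystem with transparent zstd compression
                      mount -o compress=zstd /dev/sdb1 /mnt/data

                      # Kernel 5.1+ accepts an explicit level (1-15):
                      #   mount -o compress=zstd:3 /dev/sdb1 /mnt/data

                      # fstab equivalent:
                      #   /dev/sdb1  /mnt/data  btrfs  compress=zstd  0 0
                      ```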

