There Is Another Attempt At Allowing Zstd-Compressed Firmware For The Linux Kernel


  • There Is Another Attempt At Allowing Zstd-Compressed Firmware For The Linux Kernel

    Phoronix: There Is Another Attempt At Allowing Zstd-Compressed Firmware For The Linux Kernel

    With Facebook's Zstandard compression algorithm becoming quite popular and well supported across many different environments -- including support for Zstd compressing the Linux kernel, among other uses -- there is a renewed effort in allowing Linux firmware to be compressed via Zstd...
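    To make that concrete, here is a rough sketch of what compressing a firmware blob with Zstd looks like. This is purely illustrative: it uses the third-party `zstandard` Python package and a made-up firmware path, not anything from the kernel patches themselves.

```python
# Illustration only: compress a firmware file with Zstandard and verify the
# round trip. Requires the third-party `zstandard` package (pip install
# zstandard); the firmware path below is a made-up example.
import zstandard as zstd

def compress_firmware(src_path: str, dst_path: str, level: int = 19) -> None:
    with open(src_path, "rb") as f:
        blob = f.read()
    compressed = zstd.ZstdCompressor(level=level).compress(blob)
    with open(dst_path, "wb") as f:
        f.write(compressed)
    # Sanity check: decompressing must reproduce the original bytes.
    assert zstd.ZstdDecompressor().decompress(compressed) == blob
    print(f"{src_path}: {len(blob)} -> {len(compressed)} bytes")

if __name__ == "__main__":
    compress_firmware("/lib/firmware/example.bin", "/tmp/example.bin.zst")
```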


  • #2
    Seems like zstd is slowly becoming the new compression standard on Linux for just about everything. Guess some distros will soon use it for debug symbols etc. as well?



    • #3
      Originally posted by Berniyh View Post
      Seems like zstd is slowly becoming the new compression standard on Linux for just about everything. Guess some distros will soon use it for debug symbols etc. as well?
      I find it fun that Zstandard is becoming standard both in the name and in fact.

      To those who don't know yet, I'd also note that this is not just the work of "Facebook": the original work was done primarily by Yann Collet, the creator not only of Zstd but also of LZ4, Zhuff and other compression codecs. Kudos to him as well as to all the other contributors making Zstd awesome!



      • #4
        Compressing everything is going to make systems less fault tolerant, though. Not that this should be a blocker for adopting compression, but it should be known that a single bit error is far more catastrophic in 100 MB of compressed data than in 100 MB of uncompressed data. Computers nowadays aren't known for their data reliability.
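        A rough way to see this, as a sketch only (third-party `zstandard` Python package, arbitrary sample data): flip one bit in raw data and you corrupt one byte; flip one bit in a checksummed Zstd frame and the whole blob typically refuses to decompress.

```python
# Sketch: a single bit flip corrupts one byte of raw data, but with Zstd
# (checksums enabled) it typically makes the whole frame undecompressable.
# Uses the third-party `zstandard` package; the payload is made up.
import zstandard as zstd

data = bytes(range(256)) * 400                      # ~100 KB of sample data
frame = zstd.ZstdCompressor(level=3, write_checksum=True).compress(data)

corrupted = bytearray(frame)
corrupted[len(corrupted) // 2] ^= 0x01              # flip one bit mid-frame

try:
    zstd.ZstdDecompressor().decompress(bytes(corrupted))
except zstd.ZstdError as exc:
    print("entire frame lost to a single bit flip:", exc)
```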



        • #5
          Originally posted by Yttrium View Post
          Compressing everything is going to make systems less fault tolerant, though. Not that this should be a blocker for adopting compression, but it should be known that a single bit error is far more catastrophic in 100 MB of compressed data than in 100 MB of uncompressed data. Computers nowadays aren't known for their data reliability.
          ZFS compression is why I use ECC these days. While I know the "ZFS needs ECC" thing is half myth, I like the peace of mind it brings.



          • #6
            So we'll have Zstd-compressed firmware in a Zstd-compressed package in a Zstd-compressed container on a Zstd-compressed filesystem on an SSD that transparently compresses using some proprietary codec, which may be Zstd as well... A good way to discover that even a fast algorithm can be made slow by applying it sequentially a sufficient number of times.
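            As an illustration of that stacking cost, here is a sketch using the third-party `zstandard` Python package on made-up data: once the payload is already compressed, every further layer pays the full CPU cost again and saves nothing.

```python
# Sketch of the "Zstd all the way down" scenario: once the data is already
# compressed (here simulated with random bytes), every further layer pays
# the full CPU cost again and saves nothing. Third-party `zstandard` package.
import os
import time
import zstandard as zstd

cctx = zstd.ZstdCompressor(level=3)
payload = os.urandom(8 * 1024 * 1024)   # stand-in for already-compressed data

for layer in range(1, 5):
    start = time.perf_counter()
    payload = cctx.compress(payload)
    elapsed = (time.perf_counter() - start) * 1000
    print(f"layer {layer}: {len(payload):>9} bytes, {elapsed:6.1f} ms")
# Each pass takes roughly the same time and the size never shrinks, which is
# what happens when package, container, filesystem and drive all re-compress.
```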



            • #7
              Originally posted by Black_Fox View Post

              I find it fun that Zstandard is becoming standard both in the name and in fact.

              To those who don't know yet, I'd also note that this is not just the work of "Facebook": the original work was done primarily by Yann Collet, the creator not only of Zstd but also of LZ4, Zhuff and other compression codecs. Kudos to him as well as to all the other contributors making Zstd awesome!
              Zstandard is really good; it deserves to become the standard. It has a good compression ratio and is very fast, you can tune the balance between the two, and the code is portable. Other than for supporting legacy systems, I see no reason to use another compression algorithm.
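              A small sketch of that trade-off, assuming the third-party `zstandard` Python package and whatever sizeable local file happens to be handy:

```python
# Sketch: the same input at several Zstd levels, to show the ratio/speed
# knob. Third-party `zstandard` package; swap in any sizeable local file.
import time
import zstandard as zstd

data = open("/usr/share/dict/words", "rb").read()   # any sample file works

for level in (1, 3, 9, 19):
    start = time.perf_counter()
    out = zstd.ZstdCompressor(level=level).compress(data)
    elapsed = (time.perf_counter() - start) * 1000
    ratio = len(out) / len(data)
    print(f"level {level:>2}: {len(out):>8} bytes ({ratio:5.1%}), {elapsed:7.1f} ms")
# Higher levels squeeze out more bytes but take longer; lower levels do the opposite.
```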



              • #8
                Originally posted by mb_q View Post
                So we'll have Zstd-compressed firmware in a Zstd-compressed package in a Zstd-compressed container on a Zstd-compressed filesystem on an SSD that transparently compresses using some proprietary codec, which may be Zstd as well... A good way to discover that even a fast algorithm can be made slow by applying it sequentially a sufficient number of times.
                I'm blue da ba dee da ba daa



                • #9
                  Originally posted by archkde View Post

                  Zstandard is really good; it deserves to become the standard. It has a good compression ratio and is very fast, you can tune the balance between the two, and the code is portable. Other than for supporting legacy systems, I see no reason to use another compression algorithm.
                  The only time I don't like Zstd is when I don't have access to all the tunables, like in the kernel's implementation. The reason I say that is that in a lot of situations I care more about speed than about compression and would rather use LZ4 or Zstd-fast over Zstd level 1.
                  Last edited by skeevy420; 28 January 2021, 10:11 AM.
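                  As a rough illustration of that preference (assuming the third-party `zstandard` and `lz4` Python packages and an arbitrary sample file; exact numbers will vary):

```python
# Sketch: LZ4 versus Zstd level 1 on the same input. LZ4 usually wins on
# speed and loses on ratio, which is the trade-off the poster cares about.
# Third-party `zstandard` and `lz4` packages; the sample file is arbitrary.
import time
import zstandard as zstd
import lz4.frame

data = open("/usr/share/dict/words", "rb").read()

def bench(name, compress):
    start = time.perf_counter()
    out = compress(data)
    elapsed = (time.perf_counter() - start) * 1000
    print(f"{name:>8}: {len(out):>8} bytes, {elapsed:6.1f} ms")

bench("zstd -1", zstd.ZstdCompressor(level=1).compress)
bench("lz4", lz4.frame.compress)
```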



                  • #10
                    Zstd and Facebook...

                    If I "like" Zstd on Facebook, will I get more followers?
                    GOD is REAL unless declared as an INTEGER.

