Zstd Compression For Btrfs & Squashfs Set For Linux 4.14, Already Used Within Facebook


  • Zstd Compression For Btrfs & Squashfs Set For Linux 4.14, Already Used Within Facebook

    Phoronix: Zstd Compression For Btrfs & Squashfs Set For Linux 4.14, Already Used Within Facebook

    As we've been expecting, Zstd compression for Btrfs is coming with Linux 4.14, along with Zstd support in SquashFS...

  • #2
    Death to BTRFS!

    No, seriously, I would really like to see it succeed. It's really great and offers many possibilities, but development is slow and there are so many negative eyes on it.

    Go SUSE and Synology, make it world champion! Let someone finally defeat Mayweather!

    • #3
      Hm, I had a look at the zstd license, and it seems that 20 days ago they removed the troublesome patent restrictions; now it is a normal BSD + GPLv2 dual license.
      https://github.com/facebook/zstd

      So it seems zstd is now free to be used by anyone.

      • #4
        For those wondering about zstd vs zlib vs ... performance (both speed and compression ratio):
        https://clearlinux.org/blogs/linux-o...aring-behavior
        has a few graphs I made measuring things.

        • #5
          Now I'm wondering if it's worth switching from LZO to zstd... Saving some disk space without losing much speed sounds great!
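
          If the kernel side behaves like the existing zlib/LZO support, the switch should just be a mount-option change. A rough sketch (the compress= value for zstd is assumed from the 4.14 patches, and the device/mount-point names are made up):

            # remount with zstd transparent compression (assumed syntax)
            mount -o compress=zstd /dev/sdb1 /mnt/data
            # extents already written with LZO stay as they are until rewritten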

          • #6
            Originally posted by arjan_intel View Post
            For those wondering about zstd vs zlib vs ... performance (both speed and compression ratio):
            https://clearlinux.org/blogs/linux-o...aring-behavior
            has a few graphs I made measuring things.
            It'd be nice if you could add LZO as well.

            • #7
              Originally posted by arjan_intel View Post
              For those wondering about zstd vs zlib vs ... performance (both speed and compression ratio):
              https://clearlinux.org/blogs/linux-o...aring-behavior
              has a few graphs I made measuring things.
              Thanks for sharing; I've been searching all over the web looking for zstd benchmarks. I haven't really thought about compiler options for compression algorithms, having only focused on them for my nginx and php-fpm compiles.

              Last week I revisited some compression algorithm benchmarks for zstd vs gzip vs bzip2 vs lbzip2 vs pbzip2 vs lzip vs plzip, as I wanted to see how zstd performed after first reading about zstd on the Phoronix web site: https://community.centminmod.com/thr...-xz-etc.12764/. Initial tests were done against the Silesia Compression Corpus, but then I also retested against the kernel 4.13 tar file further down at https://community.centminmod.com/posts/54106/

              edit: FYI, zstd 1.3.1 reduced memory usage dramatically for multi-threaded runs: https://community.centminmod.com/posts/53999/

              Hope these are useful too.
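
              For anyone wanting to reproduce this kind of comparison on a kernel tarball, a minimal sketch (assuming zstd >= 1.3 for the -T multi-threading flag; the file name is just an example):

                # gzip at its maximum level, keeping the input file
                time gzip -k -9 linux-4.13.tar
                # zstd at level 3 (the level discussed in this thread), using all cores
                time zstd -k -3 -T0 linux-4.13.tar
                # compare the resulting sizes
                ls -l linux-4.13.tar.gz linux-4.13.tar.zst
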
              Last edited by eva2000; 09 September 2017, 01:52 AM.

              • #8
                Originally posted by arjan_intel View Post
                For those wondering about zstd vs zlib vs ... performance (both speed and compression ratio):
                https://clearlinux.org/blogs/linux-o...aring-behavior
                has a few graphs I made measuring things.
                Is Intel's new zlib fork still only at version 1.2.8, or is there a 1.2.11 version somewhere? https://github.com/jtkukunas/zlib/issues/16

                • #9
                  Originally posted by geearf View Post
                  It'd be nice if you could add LZO as well
                  In my experience, the overall performance of LZO tends to be "shittier than LZ4": it doesn't compress much better, but it takes a lot more time. So LZ4 is a general ballpark estimate on these graphs.

                  Then there's another thing to keep in mind: data rate. LZO (and even more so LZ4) can be extremely fast at compressing/decompressing. The graphs done at Intel (big thanks, arjan_intel!) were all done in-memory (or in cache? Can you correct me?). The big question is: can your disk (HDD or SSD) actually sustain the data rate at which LZO/LZ4 compress/decompress? Beyond a certain combo (an ultra-fast processor with slower mass storage), LZO and LZ4 won't result in much *faster* compression so much as in *lower CPU use*: they finish decompressing earlier, then sit idle waiting for more compressed data to arrive. That's not necessarily a bad thing, just different. For instance, if the disk sustains 150 MB/s and LZ4 decompresses at around 2 GB/s, the CPU is busy less than a tenth of the time and the disk sets the pace.

                  The few examples of real-world use I've googled up seem to show that Zstd in practice is a bit slower than LZO, but not by much (because in practice the data still needs to flow to your program, and the speed of that data link was the limiting factor in those tests).
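
                  A quick way to check which side is the bottleneck on a given machine (a rough sketch: the device name is made up, and zstd's built-in benchmark runs purely in-memory):

                    # raw sequential read rate of the disk
                    dd if=/dev/sdb of=/dev/null bs=1M count=1024
                    # in-memory compress/decompress rate at level 3
                    zstd -b3 linux-4.13.tar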


                  Now another thing: from what I understand, Facebook seems to have settled on Zstd level 3, which produces slightly better results than zlib's current level 6 at nearly twice the speed.

                  I'd like to see some future version of btrfs-tools support offline, slower/better compression for seldom-written files, i.e. being able to run something like "btrfs fi defrag -czstd=9 {file}".
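
                  For reference, picking the algorithm (though not yet the level) when recompressing in place already looks possible, assuming a btrfs-progs new enough to accept zstd for -c:

                    # recompress existing files in place with a chosen algorithm
                    btrfs filesystem defragment -r -czstd /path/to/seldom-written
                    # a level selector like -czstd=9 would be the wished-for extension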

                  • #10
                    Originally posted by DrYak View Post


                    Now another thing: from what I understand, Facebook seems to have settled on Zstd level 3, which produces slightly better results than zlib's current level 6 at nearly twice the speed.

                    I'd like to see some future version of btrfs-tools support offline, slower/better compression for seldom-written files, i.e. being able to run something like "btrfs fi defrag -czstd=9 {file}".
                    This might be less useful than it sounds. Filesystems that do compression generally do it on small blocks (4 KB, or 32 KB at most) so that random read access remains reasonably possible. But what makes higher compression levels different from lower ones tends to be the history (window) size... I'd not be surprised if zstd level 3 already has a 32 KB history for 32 KB fs blocks, and higher levels just don't help because there isn't any more history to be had.
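
                    That is easy to sanity-check from the shell (a sketch; the tarball is just an example): compress a file as independent 32 KB chunks and compare the totals at a low and a high level.

                      # split into independent 32 KB blocks
                      split -b 32K linux-4.13.tar chunk.
                      # total compressed bytes at level 3 vs level 19
                      for f in chunk.*; do zstd -3 -c "$f"; done | wc -c
                      for f in chunk.*; do zstd -19 -c "$f"; done | wc -c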
