Configurable Zstd Compression Level Support Is Revived For Btrfs


  • Configurable Zstd Compression Level Support Is Revived For Btrfs

    Phoronix: Configurable Zstd Compression Level Support Is Revived For Btrfs

    Since the Linux 4.14 kernel Btrfs has supported Zstd for transparent file-system compression while a revived patch-set would allow that Zstd compression level to become configurable by the end-user...
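    For reference, btrfs compression is enabled per mount. A sketch of the syntax the revived patch-set proposes for picking a level (the exact form may change before it is merged, and the device and mount point below are placeholders):

```shell
# Current behaviour: zstd at its default level.
mount -o compress=zstd /dev/sdb1 /mnt/data

# Proposed by the patch-set: an explicit level appended after a colon.
# Higher levels trade slower writes for a better compression ratio.
mount -o compress=zstd:15 /dev/sdb1 /mnt/data

# Changing the level on a mounted file system would go through remount:
mount -o remount,compress=zstd:15 /mnt/data
```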


  • #2
    Ah, that's wonderful! According to my tests, zstd's decompression speed doesn't vary much with the compression level; even at insane levels like 22 it's in line with what Michael reported. It's still fast on embedded ARM devices: I tested it on some Toradex Colibri T20 boards, and AFAIR in-memory decompression was still above 100 MiB/s (I think close to 200 MiB/s) and always faster than gzip.

    So in the end, you could compress static data at the highest levels and still benefit from fast access!
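    If you want to check this on your own hardware, the zstd command-line tool has a built-in benchmark mode that reports compression and decompression speed per level (the file name is a placeholder):

```shell
# Benchmark compression levels 1 through 19 on a sample file.
# -b1 sets the starting level, -e19 the ending level, and -i1
# limits each measurement to roughly one second. Decompression
# speed stays roughly flat across levels, while compression
# speed drops as the level rises.
zstd -b1 -e19 -i1 sample.bin
```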



    • #3
      More important than the level is a smart heuristic for detecting when to compress and when not. Right now, in non-forced mode, btrfs isn't being too smart about it: if the first chunks don't compress well, it stops compressing the whole file.



      • #4
        Originally posted by shmerl View Post
        More important than the level is a smart heuristic for detecting when to compress and when not. Right now, in non-forced mode, btrfs isn't being too smart about it: if the first chunks don't compress well, it stops compressing the whole file.
        Yes, that's true, but you also have the compress-force option, which can only be applied as a mount option (not with chattr). It ensures that btrfs does not bail out if the first blocks are not compressible and instead runs every block through the compressor. Even with compress-force, btrfs will not store a block compressed if the compressed data is larger than the uncompressed data, which is really a good thing!

        http://www.dirtcellar.net
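        That fallback is easy to reproduce outside btrfs: already-incompressible (e.g. random) data always grows slightly when run through a compressor because of framing overhead, so storing the raw block is the only sensible choice. A quick illustration with gzip:

```shell
# 64 KiB of random data is effectively incompressible; gzip's
# output ends up slightly larger than the input because of the
# header, trailer and stored-block framing overhead.
head -c 65536 /dev/urandom | gzip -9 -c | wc -c
```

        The printed byte count comes out a little above 65536, which is exactly the case where btrfs keeps the uncompressed block instead.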



        • #5
          The "automatic compression detector" in btrfs is total bullshit.
          It just kills compression performance, almost at random.
          The "detector" itself is very poorly done and wrongly believes it can determine compressibility from a few trivial-to-grab statistics.
          That couldn't be more wrong.

          Besides, and that's the funny part, modern compressors such as LZ4, LZO and Zstandard have built-in skippers that know when data is not compressible and intelligently skip past the bad section.
          The only case for this external detector is zlib, yet it's wrongly applied to all of them!

          Try it with `zstd`: it skips the incompressible parts at over 1 GB/s!
          That's faster than this "compressibility detector" can even run.

          Bottom line: on btrfs, always use compress-force on any partition compressed with anything other than zlib.
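          For anyone following that advice, forcing compression is again a mount option (device and mount point are placeholders):

```shell
# Bypass the heuristic and hand every block to the compressor;
# btrfs still falls back to storing a block uncompressed when
# compression would make it larger.
mount -o compress-force=zstd /dev/sdb1 /mnt/data
```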



          • #6
            Originally posted by poorguy View Post
            The "automatic compression detector" in btrfs is total bullshit.
            It just kills compression performance, almost at random.
            The "detector" itself is very poorly done and wrongly believes it can determine compressibility from a few trivial-to-grab statistics.
            That couldn't be more wrong.

            Besides, and that's the funny part, modern compressors such as LZ4, LZO and Zstandard have built-in skippers that know when data is not compressible and intelligently skip past the bad section.
            The only case for this external detector is zlib, yet it's wrongly applied to all of them!

            Try it with `zstd`: it skips the incompressible parts at over 1 GB/s!
            That's faster than this "compressibility detector" can even run.

            Bottom line: on btrfs, always use compress-force on any partition compressed with anything other than zlib.
            Oh, that's very interesting!
            I guess when these patches land, I'll recompress everything and force it.



            • #7
              Originally posted by shmerl View Post
              More important than the level is a smart heuristic for detecting when to compress and when not. Right now, in non-forced mode, btrfs isn't being too smart about it: if the first chunks don't compress well, it stops compressing the whole file.
              That's why I always disable that heuristic.
              ## VGA ##
              AMD: X1950XTX, HD3870, HD5870
              Intel: GMA45, HD3000 (Core i5 2500K)
