Tar Picks Up Support For Zstd Compression

  • Tar Picks Up Support For Zstd Compression

    Phoronix: Tar Picks Up Support For Zstd Compression

    The latest program joining the Zstd bandwagon is Tar...

  • #2
    Where's the lz4 support for tar?
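For what it's worth, GNU tar's generic --use-compress-program (-I) option already covers lz4 today, even without a dedicated flag; a minimal sketch, assuming the lz4 CLI is installed (dir/ is a placeholder directory):

```shell
# Create an lz4-compressed tarball via tar's generic
# compress-program hook (GNU tar):
tar -cf dir.tar.lz4 -I lz4 dir/

# Extract it again; on extraction tar invokes "lz4 -d":
tar -xf dir.tar.lz4 -I lz4
```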

    • #3
      How does zstd compare to xz, lz4, gz etc in file size, compress time/ratio etc for different types of data?
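One quick way to answer this for your own data is to pipe the same file through each compressor and compare output sizes (wrap the commands in `time` to compare speed too); a rough sketch using gzip and xz, where sample.dat stands for any representative input file:

```shell
# Compare compressed sizes of the same input across tools.
# sample.dat is a placeholder; use your own data, and add
# zstd/lz4 to the list if they are installed.
f=sample.dat
for c in gzip xz; do
    s=$("$c" -c "$f" | wc -c)
    echo "$c: $s bytes"
done
```

Results vary a lot with the kind of data, which is exactly why testing on your own files beats generic benchmarks.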

      • #4
No short option for it? My immediate thought was that they're running out of letters.

        • #5
well, people familiar with Unix do not need this built-in support, which is otherwise mostly known from DOS programs; they already use pipes, like: zstd -d < foobar | tar x ... and vice versa, as we just do for our #t2sde source mirror cache and default binary packages: https://t2sde.org
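The manual pipe approach described above can be sketched out like this (dir/ and the archive name are placeholders, and zstd must be installed):

```shell
# Compress: stream a tar archive through zstd.
tar -c dir/ | zstd -q > dir.tar.zst

# Decompress and extract, as in the post:
zstd -d < dir.tar.zst | tar -x
```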

          • #6
            Originally posted by danboid View Post
            How does zstd compare to xz, lz4, gz etc in file size, compress time/ratio etc for different types of data?
            Here's an interesting comparison. Basically, lz4 can be useful if you want speed above all else, and lzma can be useful if you want the best possible compression, and don't care about decompression speed. Otherwise zstd or brotli is probably best. lzo and snappy weren't included though. And I suppose they don't compare different kinds of data, but it's still an interesting comparison.
            Last edited by LinAGKar; 26 March 2018, 04:09 PM.

            • #7
              Originally posted by LinAGKar View Post

              Here's an interesting comparison. Basically, lz4 can be useful if you want speed above all else, and lzma can be useful if you want the best possible compression, and don't care about decompression speed. Otherwise zstd or brotli is probably best. lzo and snappy weren't included though. And I suppose they don't compare different kinds of data, but it's still an interesting comparison.
              lzo is pretty obsolete. It's only marginally better (comp ratio) than lz4 but much slower. The lz4 page has comparisons.

              • #8
                Originally posted by rene View Post
well, people familiar with Unix do not need this built-in support, which is otherwise mostly known from DOS programs; they already use pipes, like: zstd -d < foobar | tar x ... and vice versa, as we just do for our #t2sde source mirror cache and default binary packages: https://t2sde.org
It's not fully built-in. It still relies on the presence of the zstd program. What tar does is identify a compressed archive by its magic number and pipe it through the corresponding decompressor. The same mechanism is used for creating archives, with only the file extension as the identifier:

                Code:
                $ tar -caf dir.tar.zst dir/
                This creates a tar archive compressed with zstd. You can decompress it and extract it with:

                Code:
                $ tar -xaf dir.tar.zst
It makes shell scripts much easier when you don't have to identify the compression tool yourself but can let tar do it for you.
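The same -a auto-detection works for every compressor tar knows about; only the extension changes. For example, with gzip (dir/ is a placeholder):

```shell
# -a picks the compressor from the .gz extension:
tar -caf dir.tar.gz dir/

# On extraction the format is detected from the magic number:
tar -xaf dir.tar.gz -C /some/target/dir
```

The analogous .tar.xz, .tar.bz2, etc. forms work the same way.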



                Originally posted by caligula View Post
                lzo is pretty obsolete. It's only marginally better (comp ratio) than lz4 but much slower. The lz4 page has comparisons.
lzo is actually a bit faster at compression than lz4, but it is slower at decompression. This keeps lzo useful for temporary backup files which you create but will likely never need. lz4 is better than lzo whenever you know you'll have to decompress the data afterwards, possibly even multiple times.
                Last edited by sdack; 28 March 2018, 07:48 AM.

                • #9
                  Originally posted by sdack View Post
                  lzo is actually a bit faster at compression than lz4 is, but it is slower at decompression.
I think you're confusing it with "lz4hc", the special mode of lz4 that's much slower but does a much more thorough search and thus produces smaller files. Those files can still be decompressed amazingly fast with lz4 (even faster, given that the files are smaller and LZ4 is usually I/O-bound), but at the cost of slower compression.
It's basically the equivalent of the "-9" option of other tools.

(It's useful for data that needs to be compressed once and then streamed *and decompressed* as fast as possible to clients, where gunzip wouldn't necessarily be fast enough. Typically for embedded clients with good network links but poor CPUs. Nowadays, Zstd at lower levels could be better suited for most of these use cases, except for the most CPU-starved.)

LZ4HC is slower than LZO (though not slower than LZO at its higher levels).
LZ4 is a bit faster at compression than LZO.

                  source:
                  - https://github.com/lz4/lz4#benchmarks
                  - https://catchchallenger.first-world...._vs_LZ4_vs_LZO
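The "-9" analogy can be seen directly on the lz4 command line, assuming the lz4 CLI is installed (big.dat is a placeholder input):

```shell
# Fast mode (low level):
lz4 -1 -c big.dat > fast.lz4

# HC search, lz4's own "-9": slower compression, smaller output:
lz4 -9 -c big.dat > small.lz4

# Both decompress with the same fast decoder:
lz4 -d -c small.lz4 | cmp - big.dat
```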
                  Last edited by DrYak; 27 March 2018, 12:54 PM.

                  • #10
                    Originally posted by danboid View Post
                    How does zstd compare to xz, lz4, gz etc in file size, compress time/ratio etc for different types of data?
                    just use xz
