Well, people familiar with Unix don't need this funny built-in support, otherwise mostly known from DOS programs; they already use it like: zstd -d < foobar | tar x ... and vice versa, like we just do for our #t2sde support for our source mirror cache and default binary packages: https://t2sde.org
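For reference, that pipe-based approach looks like this end to end (directory and file names are just illustrative):

```shell
set -e
# Demo input (illustrative)
mkdir -p dir && echo "hello" > dir/a.txt

# Pack: tar streams the archive to stdout, zstd compresses the stream
tar -cf - dir/ | zstd > dir.tar.zst

# Unpack: zstd decompresses to stdout, tar extracts from stdin
rm -rf dir
zstd -d < dir.tar.zst | tar -xf -
```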
How does zstd compare to xz, lz4, gz etc in file size, compress time/ratio etc for different types of data?
Here's an interesting comparison. Basically, lz4 can be useful if you want speed above all else, and lzma can be useful if you want the best possible compression, and don't care about decompression speed. Otherwise zstd or brotli is probably best. lzo and snappy weren't included though. And I suppose they don't compare different kinds of data, but it's still an interesting comparison.
lzo is pretty obsolete. It's only marginally better (comp ratio) than lz4 but much slower. The lz4 page has comparisons.
Well, people familiar with Unix don't need this funny built-in support, otherwise mostly known from DOS programs; they already use it like: zstd -d < foobar | tar x ... and vice versa, like we just do for our #t2sde support for our source mirror cache and default binary packages: https://t2sde.org
It's not fully built in. It still relies on the presence of the zstd program. What tar does is identify a compressed archive by its magic number and pipe it through the corresponding decompressor. The same mechanism is used for creating archives, with the file extension as the identifier:
Code:
$ tar -caf dir.tar.zst dir/
This creates a tar archive compressed with zstd. You can decompress and extract it with:
Code:
$ tar -xaf dir.tar.zst
It makes shell scripts much easier when you don't have to identify the compression tool yourself but can let tar do it for you.
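As a sketch of that: the same two tar invocations handle any supported compressor, with the extension driving creation and the magic number driving extraction (paths here are illustrative, and the compressor binary itself — gzip, xz, zstd, ... — still has to be installed):

```shell
set -e
# Demo setup (illustrative paths)
mkdir -p demo/logs restore
echo "hello" > demo/logs/app.log

# -a/--auto-compress: tar picks the compressor from the .tar.* extension
tar -C demo -caf logs.tar.gz logs/

# On extraction, tar identifies the decompressor by the magic number
tar -xaf logs.tar.gz -C restore/
```

Swapping `logs.tar.gz` for `logs.tar.zst` or `logs.tar.xz` is the only change needed to switch compressors.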
lzo is pretty obsolete. It's only marginally better (comp ratio) than lz4 but much slower. The lz4 page has comparisons.
lzo is actually a bit faster at compression than lz4 is, but it is slower at decompression. That keeps lzo useful for temporary backup files which you create but will likely never need to restore. lz4 is the better choice whenever you know you'll have to decompress the data afterwards, possibly even multiple times.
lzo is actually a bit faster at compression than lz4 is, but it is slower at decompression.
I think you're confusing it with "lz4hc", the special mode of lz4 that's much slower but does a much more thorough search and thus produces smaller files. Those files can still be decompressed amazingly fast with lz4 (even faster, given that the files are smaller and LZ4 is usually I/O-bound), but at the cost of slower compression.
It's basically the equivalent of the "-9" option of other tools.
(It's useful for data that needs to be compressed once and then streamed *and decompressed* as fast as possible to clients, where gunzip wouldn't necessarily be fast enough. Typically for embedded clients with good network links but weak CPUs. Nowadays, zstd at lower levels could be better suited for most of these use cases, except for the most CPU-starved.)
LZ4HC is slower than LZO (though not slower than LZO at its higher levels).
Plain LZ4, though, is a bit faster at compression than LZO.
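With the lz4 command-line tool, the HC trade-off described above can be sketched like this (the file name is illustrative):

```shell
set -e
# Demo input (illustrative)
echo "some payload" > asset.bin

# -9 selects the LZ4HC match finder: slower compression, smaller output,
# while decompression speed stays in plain-LZ4 territory
lz4 -9 -q -f asset.bin asset.bin.lz4

# Decompression uses the same fast path regardless of compression level
lz4 -d -q -f asset.bin.lz4 asset.out
```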