Lizard: Yet Another Compression Algorithm Joins The Party

  • Lizard: Yet Another Compression Algorithm Joins The Party

    Phoronix: Lizard: Yet Another Compression Algorithm Joins The Party

    Lizard was previously developed as LZ5 and is a lossless compression algorithm that yields a compression ratio similar to zip/zlib/Zstd/Brotli but at very fast decompression speeds...

    http://www.phoronix.com/scan.php?pag...rd-Compression

  • #2
    Rock, paper, scissors, lizard, Spock anyone?



    • #3
      Michael
      From https://github.com/inikep/lizard/blob/lizard/README.md:
      The high compression/decompression speed is achieved without any SSE and AVX extensions.



      • #4
        Take me to your lizard



        • #5
          It does a fantastic job of beating zstd and Brotli on decompression speed, although at the same ratio both of them compress faster.



          • #6
            Kind of an ass-kicking thing. I gave it a try and guess what? While its highest compression ratio is no match for brotli and zstd, and is more similar to zlib's, its decompression speed beats the crap out of each and every competitor. Brotli isn't exactly fast at decompression, and zstd is faster, but Lizard beats them all by a huge margin when it comes to decompression SPEED. On an interesting note, it isn't using SSE/AVX, being just a masterpiece of coding in plain C. It even beats LZ4 itself in its LZ4-like modes, wow! It's almost magic that one can do things like that. Zlib? Oh well, it has a crappy compression ratio and awkward decompression speed, so zlib is basically obsolete technology these days.

            Either way, if someone wants FAST decompression and an open-source implementation, Lizard is going to be a big win. The modes without Huffman coding can be decompressed with "no memory", in the sense that one holds the compressed data and the decompressed data but no other big state in memory (i.e. no huge dictionaries or tables). This makes these modes quite appealing for low-memory systems like microcontrollers and such.
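That "no extra state" property can be illustrated with a toy LZ77 decoder (a made-up format for illustration, not Lizard's actual bitstream): matches are copied straight out of the already-produced output, so the only memory the decoder touches is the compressed input and the decompressed output themselves.

```python
def toy_decompress(src: bytes) -> bytes:
    # Toy LZ77 stream: repeated records of
    #   [lit_len][lit_len literal bytes][match_offset][match_len]
    # Matches are copied from the already-decoded output, so no
    # dictionary or table exists beyond src and dst.
    dst = bytearray()
    i = 0
    while i < len(src):
        lit_len = src[i]; i += 1
        dst += src[i:i + lit_len]; i += lit_len
        if i >= len(src):
            break  # the stream may end after the final literals
        off, length = src[i], src[i + 1]; i += 2
        for _ in range(length):       # byte-by-byte copy allows overlapping matches
            dst.append(dst[-off])
    return bytes(dst)

# "abcabcabcd": 3 literals "abc", a match (offset 3, length 6), 1 literal "d"
stream = bytes([3]) + b"abc" + bytes([3, 6, 1]) + b"d"
print(toy_decompress(stream))  # b'abcabcabcd'
```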



            • #7
              Originally posted by SystemCrasher View Post
              Kind of an ass-kicking thing. I gave it a try and guess what? While its highest compression ratio is no match for brotli and zstd, and is more similar to zlib's, its decompression speed beats the crap out of each and every competitor. Brotli isn't exactly fast at decompression, and zstd is faster, but Lizard beats them all by a huge margin when it comes to decompression SPEED. On an interesting note, it isn't using SSE/AVX, being just a masterpiece of coding in plain C. It even beats LZ4 itself in its LZ4-like modes, wow! It's almost magic that one can do things like that. Zlib? Oh well, it has a crappy compression ratio and awkward decompression speed, so zlib is basically obsolete technology these days.

              Either way, if someone wants FAST decompression and an open-source implementation, Lizard is going to be a big win. The modes without Huffman coding can be decompressed with "no memory", in the sense that one holds the compressed data and the decompressed data but no other big state in memory (i.e. no huge dictionaries or tables). This makes these modes quite appealing for low-memory systems like microcontrollers and such.
              Lizard uses a neat trick of decoding up to four deflate segments at the same time. This is great for modern CPUs and explains the crazy performance.

              Btw there is an optimized deflate implementation that has almost twice the perf of zlib https://github.com/ebiggers/libdeflate

              I think if people actually cared, zlib and co. would be considerably faster. But it is always more fun to tinker with new stuff, I guess.



              • #8
                Originally posted by log0 View Post
                Lizard is using a neat trick of decoding of up to four deflate segments at the same time. This is great for modern CPUs, explains the crazy performance
                Not sure what you mean by "deflate segments". Lizard has also got pure LZ modes where it is impressive as hell, being both faster than LZ4 in its LZ4-like mode AND compressing better. Its own "LZ5-like" mode is just slightly slower than LZ4 at decompression, but the ratio is much better, so the overall tradeoff is quite a WIN. Well, it started as an LZ4 spinoff, resulting in LZ5, so this was meant to be LZ5 v2, at which point the author also decided to change the name and add (optional) Huffman coding, as well as changing the bitstream to something more fancy (and, I guess, a bit more dense, since LZ4 and LZ5 v1.x used rather inefficient coding of integers).
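For context, the LZ4 length coding being criticized here is a simple unary-extended scheme: lengths below 15 fit in a 4-bit field of the token, and larger values store 15 there and spill the remainder into 0xFF continuation bytes. A minimal sketch of that scheme (illustrative code, not taken from either library):

```python
def encode_len(n: int):
    # LZ4-style length coding: a 4-bit nibble in the token, then
    # extra bytes each holding up to 255, ending on a byte < 255.
    if n < 15:
        return n, b""
    n -= 15
    extra = bytearray()
    while n >= 255:
        extra.append(255)
        n -= 255
    extra.append(n)
    return 15, bytes(extra)

def decode_len(nibble: int, extra: bytes) -> int:
    # Inverse: sum the continuation bytes back onto the nibble.
    n = nibble
    if nibble == 15:
        for b in extra:
            n += b
    return n

print(encode_len(1000))  # (15, b'\xff\xff\xff\xdc'): nibble plus 4 extra bytes
```

So a length of 1000 costs the nibble plus four whole extra bytes (15 + 255 + 255 + 255 + 220), which is the kind of overhead a denser bitstream can avoid.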

                Btw there is an optimized deflate implementation that has almost twice the perf of zlib https://github.com/ebiggers/libdeflate
                It's okay, but you see, zlib also has yet another problem: dictionary size. It's 32 KB maximum, so zlib does not eliminate redundant data if it appears at larger distances. These days file sizes have increased, and a 32 KB dictionary is quite an issue when it comes to compression ratio. Nothing can fix this shortcoming, since it would break bitstream and decoder compatibility, at which point it is no longer zlib and deflate but something else. And if someone wants a thing like that, it's going to be zstd. They also use ANS, which gives a good tradeoff in terms of speed vs ratio. So overall it scores good decompression speed (beating "usual" zlib into the dust) and far better ratios, which is quite an improvement. LZMA and XZ can squeeze out a few more bytes, but are much slower to decompress, which can be painful if one deals with large files or wants some "real-time" compression like filesystems, network traffic and so on. Or, say, compressing the kernel and initrd with LZMA-based tools can cause a noticeable boot delay where the system is basically blank and inoperable, which is unpleasant. It seems zstd could be an improvement here as well.
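That 32 KB limit is easy to see with Python's zlib module: duplicate an incompressible block within the window and deflate encodes the second copy as back-references; move the duplicate beyond 32 KB and it gains nothing. (A rough sketch; exact sizes vary slightly across zlib builds.)

```python
import os
import zlib

def ratio(data: bytes) -> float:
    # Compressed/original size at zlib's maximum level (32 KB window).
    return len(zlib.compress(data, 9)) / len(data)

near = os.urandom(16 * 1024) * 2  # second copy starts 16 KB back: inside the window
far = os.urandom(40 * 1024) * 2   # second copy starts 40 KB back: outside the window

print(round(ratio(near), 2))  # ~0.5: the repeat is found and encoded as matches
print(round(ratio(far), 2))   # ~1.0: distance > 32768, so the repeat is invisible
```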

