Zlib "Next Generation" Preparing Massive Decompression Speed-Up

  • Zlib "Next Generation" Preparing Massive Decompression Speed-Up

    Phoronix: Zlib "Next Generation" Preparing Massive Decompression Speed-Up

    After being in development for two years, a new beta release of Zlib-ng as the "next generation" data compression library is available with much faster data decompression...


  • #2
    Nice improvements but the core algorithm is outdated and ZSTD nowadays runs circles around zlib. It has basically made zlib obsolete.

    I wonder if the PNG standard could add ZSTD compression. That would be fantastic.

    Actually someone did that over five years ago but it's not gained any traction: https://github.com/catid/Zpng
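    For a rough sense of the gap, here is a minimal benchmark sketch, assuming the third-party "zstandard" Python bindings are installed and with "sample.bin" as a placeholder for whatever file you want to test (it is only an illustration, not a rigorous benchmark):

    import time
    import zlib

    import zstandard  # third-party bindings to the zstd library

    def measure(name, compress, decompress, data):
        # Time one compress/decompress round trip and report the ratio.
        t0 = time.perf_counter()
        blob = compress(data)
        t1 = time.perf_counter()
        out = decompress(blob)
        t2 = time.perf_counter()
        assert out == data
        print(f"{name}: ratio {len(data) / len(blob):.2f}, "
              f"compress {t1 - t0:.3f}s, decompress {t2 - t1:.3f}s")

    with open("sample.bin", "rb") as f:  # placeholder input file
        data = f.read()

    measure("zlib level 6", lambda d: zlib.compress(d, 6), zlib.decompress, data)
    cctx = zstandard.ZstdCompressor(level=3)
    dctx = zstandard.ZstdDecompressor()
    measure("zstd level 3", cctx.compress, dctx.decompress, data)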

    Comment


    • #3
      Originally posted by avis View Post
      Nice improvements but the core algorithm is outdated and ZSTD nowadays runs circles around zlib. It has basically made zlib obsolete.

      I wonder if the PNG standard could add ZSTD compression. That would be fantastic.

      Actually someone did that over five years ago but it's not gained any traction: https://github.com/catid/Zpng
      How does it compare to lossless JPEG XL?

      Comment


      • #4
        Originally posted by avis View Post
        Nice improvements but the core algorithm is outdated and ZSTD nowadays runs circles around zlib. It has basically made zlib obsolete.

        I wonder if the PNG standard could add ZSTD compression. That would be fantastic.

        Actually someone did that over five years ago but it's not gained any traction: https://github.com/catid/Zpng
        Could I compress ye olde zlib game files on a ZSTD-compressed Btrfs partition and still get a tangible compression result, CPU cost aside?

        I have to be pretty selective with my compression; I find the whole game changes depending on whether you're on spinning rust, SSD, or NVMe and its variants.

        Exciting times.

        Comment


        • #5
          Originally posted by stiiixy View Post

          Could I compress ye olde zlib game files on a ZSTD-compressed Btrfs partition and still get a tangible compression result, CPU cost aside?

          I have to be pretty selective with my compression; I find the whole game changes depending on whether you're on spinning rust, SSD, or NVMe and its variants.

          Exciting times.
          Data that has already been compressed with most compression algorithms is, in most cases, practically incompressible.
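          You can see this in a few lines; the sketch below is just an illustration (the repeated random block is an arbitrary stand-in for data that compresses well on the first pass):

          import os
          import zlib

          original = os.urandom(1024) * 4096    # repetitive, so the first pass compresses well
          once = zlib.compress(original, 9)
          twice = zlib.compress(once, 9)

          # The second pass typically saves next to nothing (it may even grow slightly),
          # because the first pass already squeezed out most of the redundancy.
          print(len(original), len(once), len(twice))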

          Comment


          • #6
            Originally posted by avis View Post
            Nice improvements but the core algorithm is outdated and ZSTD nowadays runs circles around zlib. It has basically made zlib obsolete.
            That doesn't matter all that much in this case. These improvements target decompression speed, i.e. existing archives that already use zlib can be handled faster. Those legacy archives aren't going to get extracted and re-compressed with anything newer. Familiarity and compatibility are among the reasons things like Zip archives are still very popular even though they have technically been superseded by much better algorithms.
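            To make that concrete: the gzip/DEFLATE stream format itself does not change, so code like the sketch below (the archive name is just a placeholder) keeps working unmodified, and in principle it simply gets faster when the zlib-compatible library underneath it is swapped for something like zlib-ng built in its compat mode.

            import gzip
            import time

            t0 = time.perf_counter()
            # An existing, years-old archive; nothing about it needs to be re-compressed.
            with gzip.open("legacy.tar.gz", "rb") as f:
                payload = f.read()
            print(f"decompressed {len(payload)} bytes in {time.perf_counter() - t0:.3f}s")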

            Comment


            • #7
              Marvellous project; I have been using it for two years now on different machines without problems.

              Comment


              • #8
                Originally posted by avis View Post
                Data that has already been compressed with most compression algorithms is, in most cases, practically incompressible.
                Huh, that doesn't sound logical at all. I just took a random PNG from 15.7 MB to 14.9 MB with 7z, and a random JPEG from 103 KB to 76 KB. You could check such things yourself in a few seconds before posting wild claims on the internet.

                And if you think for a few seconds about the implications of your argument, it would mean that the algorithm is already the best possible. Since every compression algorithm we have had so far got replaced by a better one, we are not at the theoretical optimum, and therefore it is always possible to compress further, especially with a different algorithm.
                Shannon and entropy might be worth googling...
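                For anyone who wants to play with the idea, here is a tiny stdlib sketch that estimates byte-level Shannon entropy in bits per byte (my own toy; byte-level entropy is only a crude estimate that ignores higher-order structure):

                import math
                import zlib
                from collections import Counter

                def entropy_bits_per_byte(data: bytes) -> float:
                    # H = -sum(p * log2(p)) over the byte-value distribution
                    counts = Counter(data)
                    n = len(data)
                    return -sum(c / n * math.log2(c / n) for c in counts.values())

                text = b"the quick brown fox jumps over the lazy dog " * 1000
                print(entropy_bits_per_byte(text))                    # well below 8
                print(entropy_bits_per_byte(zlib.compress(text, 9)))  # close to 8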

                Comment


                • #9
                  Originally posted by Anux View Post
                  Huh, that doesn't sound logical at all. I just took a random PNG from 15.7 MB to 14.9 MB with 7z, and a random JPEG from 103 KB to 76 KB. You could check such things yourself in a few seconds before posting wild claims on the internet.

                  And if you think for a few seconds about the implications of your argument, it would mean that the algorithm is already the best possible. Since every compression algorithm we have had so far got replaced by a better one, we are not at the theoretical optimum, and therefore it is always possible to compress further, especially with a different algorithm.
                  Shannon and entropy might be worth googling...
                  They're not wild claims, and he wasn't claiming that you can never re-compress already compressed data. But in most cases it will not help, and if it does help, you're probably doing something sub-optimal.

                  For instance, if you use the fastest zip compression on a text file, you will be leaving some compression headroom on the table (that's what it means to use the fastest setting), and you can absolutely reclaim some of that space by running a second algorithm on top. But that will be slower and less efficient than simply using a higher compression level in the better of the two algorithms.

                  So in your examples, you probably could have used a higher compression level in PNG and ended up with a file smaller than 14.9 MB, or you could have used a better image format.
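                  Something along these lines shows the trade-off; it is only a rough sketch (standard-library bz2 stands in for "a second algorithm" and the generated word soup stands in for a text file), so the exact numbers will vary with the input:

                  import bz2
                  import random
                  import zlib

                  random.seed(0)
                  words = ["alpha", "beta", "gamma", "delta", "epsilon", "zeta"]
                  text = " ".join(random.choice(words) for _ in range(200000)).encode()

                  fast = zlib.compress(text, 1)          # fastest setting, leaves headroom
                  fast_then_second = bz2.compress(fast)  # second algorithm on top
                  best = zlib.compress(text, 9)          # one pass at a higher level

                  # Compare how much headroom the fast setting left and how much of it
                  # the second pass actually recovered on this particular input.
                  print(len(fast), len(fast_then_second), len(best))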
                  Last edited by ll1025; 28 April 2023, 10:26 AM.

                  Comment


                  • #10
                    I was recently surprised by bzip3, which achieved much better compression ratios than its competitors with some data types, but with other data types quite the opposite.

                    It looks like we are still far away from the perfect all-purpose compressor. If you need the best compression ratio, you have to do some trial and error to find the best match. I wrote a little script which compresses my data with four different tools and then deletes all but the smallest result. I use this at least for data I know I will archive for decades, where trading CPU cycles for some storage savings makes sense. So if the implementations become faster, this approach becomes even more attractive.
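                    The script itself isn't posted here, but the idea is roughly the following sketch, using standard-library codecs instead of the four external tools (the function name and the choice of codecs are mine):

                    import bz2
                    import lzma
                    import sys
                    import zlib
                    from pathlib import Path

                    CODECS = {
                        ".zz":  lambda d: zlib.compress(d, 9),
                        ".bz2": lambda d: bz2.compress(d, 9),
                        ".xz":  lambda d: lzma.compress(d, preset=9),
                    }

                    def archive_smallest(path: Path) -> Path:
                        # Compress with every codec and keep only the smallest result.
                        data = path.read_bytes()
                        results = {ext: fn(data) for ext, fn in CODECS.items()}
                        ext, blob = min(results.items(), key=lambda kv: len(kv[1]))
                        out = Path(str(path) + ext)
                        out.write_bytes(blob)
                        return out

                    if __name__ == "__main__":
                        print(archive_smallest(Path(sys.argv[1])))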

                    Comment
