
Google Unveils "Zopfli" Compression Algorithm


  • Google Unveils "Zopfli" Compression Algorithm

    Phoronix: Google Unveils "Zopfli" Compression Algorithm

    Google has announced Zopfli, a new general purpose data compression library that's open-source. Zopfli implements the Deflate compression algorithm that yields a smaller output size than previous techniques...

  • #2
Didn't 7zip already deliver gains of 2-3% with less CPU time required?


    • #3

      Is this improvement really even worth a Google engineer's time?


      • #4
I'm curious: is it the same format? Can zlib decompressors handle it?
        Ah, yes:
        Originally posted by
The output generated by Zopfli is typically 3-8% smaller compared to zlib at maximum compression, and we believe that Zopfli represents the state of the art in Deflate-compatible compression. Zopfli is written in C for portability. It is a compression-only library; existing software can decompress the data. Zopfli is bit-stream compatible with compression used in gzip, Zip, PNG, HTTP requests, and others.
        Last edited by Ibidem; 03-01-2013, 06:10 PM.
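The "existing software can decompress the data" point is easy to demonstrate. The sketch below uses Python's zlib at level 9 as a stand-in producer of a gzip-framed Deflate stream; a Zopfli-made .gz member is the same kind of bit stream, so the same stock decompressor call would handle it:

```python
import gzip
import zlib

# Zopfli emits standard Deflate/gzip bit streams, so any zlib-based
# decompressor can read them. Here zlib at level 9 stands in as the
# producer; a Zopfli-made .gz member would decompress the same way.
data = b"the quick brown fox jumps over the lazy dog " * 100

# wbits=31 tells zlib to wrap the Deflate stream in a gzip header/trailer.
compressor = zlib.compressobj(level=9, wbits=31)
gz_stream = compressor.compress(data) + compressor.flush()

# Stock gzip support ("existing software") round-trips it unchanged.
assert gzip.decompress(gz_stream) == data
```

The only thing Zopfli changes is how hard the compressor searches for a small Deflate representation; the container and the decoder stay exactly the same.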


        • #5
There's a mistake in the article. It's not 2-3x slower, it's 10-100x slower.

All the same, it looks cool. The great thing about it is that it's an implementation of Deflate, so existing clients can decompress it just the same; it simply shaves an additional 3-8% off serving any static content where you can devote the server resources to it.

For sure, a more modern algorithm such as those sported by 7zip or bzip2, or a myriad of others, is a better choice where you can dictate that the client can decode it, but for mobile especially, a low-complexity decoder that does better than stock zlib is great.


          • #6
            Originally posted by eLDST0RM View Post
There's a mistake in the article. It's not 2-3x slower, it's 10-100x slower.
            Oops, I'm an eejit too. 100-1000x slower.


            • #7
Since I'm already using advdef for these purposes (zlib-compatible, but better compression), I had to run a quick bench.

              gzip and pigz with -9
              advdef -z4
              zopfli defaults (15 rounds)

              Time not included in this chart as only zopfli and advdef were timed.

bytes  ratio    file
24278  1.04809  pigz.gz
24277  1.04805  gzip.gz
23591  1.01843  ad/gzip.gz
23591  1.01843  ad/pigz.gz
23164  1        ad/zop.gz
23164  1        zop.gz
It compresses about 2% better than advdef, while using 30% more time. Advdef uses the 7-zip Deflate algorithm.
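For what it's worth, the ratio column in the chart is just each output size divided by the smallest (zopfli) size; a quick check reproduces the numbers:

```python
# Reproduce the ratio column of the benchmark chart: each output size
# divided by the smallest output (the zopfli one), rounded to 5 decimals.
sizes = {
    "pigz.gz": 24278,
    "gzip.gz": 24277,
    "ad/gzip.gz": 23591,
    "ad/pigz.gz": 23591,
    "ad/zop.gz": 23164,
    "zop.gz": 23164,
}
baseline = min(sizes.values())  # 23164 bytes, the zopfli output
ratios = {name: round(size / baseline, 5) for name, size in sizes.items()}
for name, ratio in ratios.items():
    print(f"{sizes[name]}  {ratio}  {name}")
```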


              • #8
                Can I use it for compressing initrd?


                • #9
                  Originally posted by MartinN View Post
                  Is this improvement really even worth a Google engineer's time?
Well, from what I read, this was done in the 20% of work time a Google developer is allowed to spend on whatever they want. And yes, I can see that if you serve a lot of compressed static content (as in Deflate, which is the compression algorithm all browsers support), you'd likely be happy to shave 3-8% off your bandwidth by compressing said content with Zopfli.

No, it's not going to cause a revolution on the web; it will simply let certain content use less bandwidth. As such, it is a nice tool.
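A quick back-of-envelope shows what that 3-8% means at scale. The 1 TB/day figure below is a made-up example, not from the thread:

```python
# Back-of-envelope: bytes saved per day by a 3-8% smaller Deflate payload.
# The 1 TB/day of static content is a hypothetical figure for illustration.
TB = 10**12
GB = 10**9
daily_static_bytes = 1 * TB

for saving in (0.03, 0.08):
    saved = daily_static_bytes * saving
    print(f"{saving:.0%} smaller -> {saved / GB:.0f} GB/day saved")
```

Since the compression is done once and the content is served many times, the 100x-slower compressor cost amortizes away, which is exactly the static-content niche the thread describes.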


                  • #10
                    Parallel Zopfli compressor

                    Zopfli had some interesting developments recently:
One, its inclusion as compression level 11 in pigz.

Second, Charles Bloom's take on Zopfli here.