LZ4m: Taking LZ4 Compression To The Next Level


  • #11
    Originally posted by microcode View Post
    Some day we may have transparent 32kbps stereo music.
    Well maybe for Lady Gaga and Justin Bieber.



    • #12
      Originally posted by microcode View Post

      Not speaking specifically of this whitepaper; I didn't have a chance to look at it. I'm referring to interesting developments like zstd, lzham, etc. which address different applications much better than things that existed a decade ago. I mean, somebody ported lz4 to the 8086 and it performs way better than anything of the time (including other fast decompressors, not just "LOL LZ4 IZ FASTER THAN PK-Zip"). It's not some rehashing of old work; recent lossless compression work has done a lot of useful things.

      LZMA (and especially LZMA2) can get you high ratios, but it is depressingly slow both to compress and decompress. lzham gets similar ratios with considerably faster decompression.

      The one thing that should be shocking is the lack of movement in lossy still image compression. JPEG encoders are still getting better to this day, and they make JPEG better than most of the "JPEG killer" technologies that have been pushed throughout time. There are a couple of still image compressors which do get reliably better efficiency, but they are usually hideously slow, or of tenuous patent standing.
      Oh, so blind.
      What would you consider to be "interesting" in lossless compression?
      On the ENGINEERING side (i.e. improvements that actually affect people's lives) we have things like Apple's LZFSE, which gives roughly the compression ratio of existing codecs like ZLIB at level 5 but is much faster and uses less energy, or Google's Brotli, which is substantially improving the web experience.
      There is also an interesting field of compressing cache lines in L3 caches, a space that requires both extremely rapid encode+decode and the ability to do something useful with lines that are only 64 or 128 bytes long.
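
      As a concrete illustration of that ZLIB-level-5 reference point and the ratio-versus-speed tradeoff discussed above: a minimal sketch, assuming only the standard one-shot C APIs of zlib and LZ4 (it is not taken from the paper or from either project's docs), compresses the same toy buffer both ways. Timing the two calls on real data is what shows the LZ4-class codecs trading some ratio for a large speed win.

      /* Minimal sketch: compress one buffer with zlib at level 5 and with LZ4.
         Assumes zlib and liblz4 are installed; build with: cc demo.c -lz -llz4 */
      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>
      #include <zlib.h>
      #include <lz4.h>

      int main(void)
      {
          /* Toy input; in practice you'd read a file or a memory page here. */
          static char src[1 << 20];
          memset(src, 'A', sizeof(src));

          /* zlib, one-shot, compression level 5. */
          uLongf zlen = compressBound(sizeof(src));
          Bytef *zbuf = malloc(zlen);
          if (compress2(zbuf, &zlen, (const Bytef *)src, sizeof(src), 5) != Z_OK)
              return 1;

          /* LZ4, one-shot, default (fast) mode. */
          int lcap = LZ4_compressBound((int)sizeof(src));
          char *lbuf = malloc(lcap);
          int llen = LZ4_compress_default(src, lbuf, (int)sizeof(src), lcap);
          if (llen <= 0)
              return 1;

          printf("zlib-5: %lu bytes, LZ4: %d bytes (from %zu input bytes)\n",
                 (unsigned long)zlen, llen, sizeof(src));
          free(zbuf);
          free(lbuf);
          return 0;
      }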

      As for still image compression, look up HEIF. It's not only real (Apple is using it in iOS 11 and macOS High Sierra, and has HW support for it on iOS devices), it provides around 2x the compression of JPEG at equivalent image quality. And the container format (which, sure, is not relevant to the compression per se, but does scope its capabilities) allows for new things like depth maps, along with the obvious things you'd want from a more modern codec, like modern color support.
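
      For what it's worth, getting at a HEIF file from ordinary application code doesn't require the whole video stack. Here is a minimal sketch using the open-source libheif C library (an illustration only, not anything referred to above; treat the exact calls as an assumption from libheif's documented API):

      /* Minimal sketch: decode the primary image of a HEIF file to interleaved
         RGB with libheif (https://github.com/strukturag/libheif); link -lheif. */
      #include <stdio.h>
      #include <stdint.h>
      #include <libheif/heif.h>

      int main(int argc, char **argv)
      {
          if (argc < 2) { fprintf(stderr, "usage: %s file.heic\n", argv[0]); return 1; }

          struct heif_context *ctx = heif_context_alloc();
          struct heif_error err = heif_context_read_from_file(ctx, argv[1], NULL);
          if (err.code != heif_error_Ok) { fprintf(stderr, "%s\n", err.message); return 1; }

          struct heif_image_handle *handle = NULL;
          err = heif_context_get_primary_image_handle(ctx, &handle);
          if (err.code != heif_error_Ok) { fprintf(stderr, "%s\n", err.message); return 1; }

          /* The actual pixel data comes out of the underlying HEVC decoder. */
          struct heif_image *img = NULL;
          err = heif_decode_image(handle, &img, heif_colorspace_RGB,
                                  heif_chroma_interleaved_RGB, NULL);
          if (err.code != heif_error_Ok) { fprintf(stderr, "%s\n", err.message); return 1; }

          int stride = 0;
          const uint8_t *pixels =
              heif_image_get_plane_readonly(img, heif_channel_interleaved, &stride);
          printf("%dx%d, stride %d, first byte %u\n",
                 heif_image_handle_get_width(handle),
                 heif_image_handle_get_height(handle), stride, pixels[0]);

          heif_image_release(img);
          heif_image_handle_release(handle);
          heif_context_free(ctx);
          return 0;
      }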



      • #13
        Originally posted by name99 View Post
        As for still image compression, look up HEIF.
        HEIF uses an h.265 I-frame. It's excellent, and about time someone standardized a still-image format with good intra-compression to exploit the redundancy between blocks.

        But I suspect it's one of the formats microcode was referring to as "hideously slow" and/or having possible patent issues.

        h.265 is even more patent-encumbered than h.264. IDK what the situation is with HEIF, but I wouldn't be surprised if the patent holders want everyone selling devices (like phones) that can decode it to pay them royalties. (And probably extra if you want to encode it). See also https://news.ycombinator.com/item?id=14489987 where there's some discussion of open-source implementations and licensing.

        As far as speed, well obviously it's more CPU-intensive to decode than JPEG, and *much* slower than JPEG to encode *well*. For a still image there's no motion search, but the encoder has to search over intra-prediction modes and block partitionings, which JPEG doesn't.

        Many devices have h.265 decode in hardware, but feeding all the images on a web page through an API with startup overhead designed for video probably loses to just decoding in software. So IDK how easy it would be for a web browser to actually take advantage of that. I think h.265 decode is fairly CPU intensive, or maybe ffmpeg's decoder just isn't very well optimized for old CPUs (without AVX). Maybe I-frame decoding isn't bad, and video playback on my old computer isn't representative of how expensive HEIF is to decode.

        h.265 definitely uses a much more complex entropy coder (CABAC) than JPEG (which usually uses plain Huffman coding, though arithmetic coding is an option).

        h.265 can use an adaptive DCT transform size from 4x4 to 32x32, but JPEG is fixed at 8x8. An encoder has a lot more work to do trying different block sizes. I'm not sure about decode. Does it take more work to do one 32x32 inverse-DCT than to do four 16x16 inverse-DCTs?
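
        (A rough back-of-envelope answer to that, assuming a plain separable implementation rather than whatever a tuned decoder actually does: an NxN inverse transform is N row passes plus N column passes, i.e. 2N one-dimensional transforms of length N. With naive O(N^2) 1-D transforms that's 2*N^3 multiply-adds, so one 32x32 block costs about 2*32^3 = 65,536 versus 4*(2*16^3) = 32,768 for four 16x16 blocks, roughly 2x. With fast O(N log N) 1-D transforms it's about 2*1024*5 = 10,240 versus 4*(2*256*4) = 8,192, only ~25% more. So the bigger transform costs somewhere between ~1.25x and ~2x the arithmetic for the same area, depending on how the 1-D stages are implemented.)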

        h.265 built-in deblocking also takes CPU time (and is a huge improvement over JPEG for very-low-bitrate content, where a mix of blurring and blocking is much better than massive blocking + ringing.)

        ---

        Anyway, the extra CPU time & battery power to decode is probably worth it in a lot of use-cases, especially for images on web pages, where the data has to go over the Internet, so smaller is much better.

        IDK if it's really "hideously" slow, or just a reasonable amount slower. In the future, with more optimized libraries to encode and decode, it will get even better.
        Last edited by Peter_Cordes; 03 July 2017, 05:23 AM. Reason: Point out that the speed downsides are worth it!



        • #14
          Originally posted by phoronix View Post
          Phoronix: LZ4m: Taking LZ4 Compression To The Next Level
          But LZ4m does come up short of the WKdm page compressor's compression ratio and compression speed.
          http://www.phoronix.com/scan.php?pag...4m-Compression
           You're reading the compression-ratio chart wrong. It isn't clear from the label on the diagram, but the Y axis is the size of the compressed data as a fraction of the original, so smaller is better (e.g. 0.35 means the compressed copy is 35% of its original size). The text of the paper confirms this interpretation by saying

          On the other hand, LZ4m outperforms WKdm significantly in compression ratio and decompression speed at the cost of 21% slowdown in compression speed
           So (relative to plain LZ4) they sped it up significantly, at the cost of a small amount of compression efficiency.



          • #15
            Originally posted by Peter_Cordes View Post

            Many devices have h.265 decode in hardware, but feeding all the images on a web page through an API with startup overhead designed for video probably loses to just decoding in software. So IDK how easy it would be for a web browser to actually take advantage of that.
            If you have several images to decode, maybe you could upload them to the hardware decoder as an I-frame only clip, though I think they'd need to be the same resolution.
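
            (A rough, untested way to build such a clip from the command line, purely as an illustration and assuming libx265's keyint parameter: have x265 encode an image sequence with every frame forced to be a keyframe, e.g.

            ffmpeg -framerate 1 -i img%03d.png -c:v libx265 -x265-params keyint=1 out.mp4

            and, as noted, the inputs would all have to be the same resolution.)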

