Zstd Compression Being Eyed For Use Within LLVM


    Phoronix: Zstd Compression Being Eyed For Use Within LLVM

    LLVM developers are eyeing Zstandard "Zstd" use within this compiler stack as a secondary compression method to Zlib. Zstd could be used for compressing ELF debug sections, AST data structures, and other purposes within this open-source compiler stack...

    https://www.phoronix.com/scan.php?pa...-LLVM-Explored

  • #2
    Yeah, let's pile on the dependencies and bloat ...



    • #3
      Originally posted by Raka555 View Post
      Yeah, let's pile on the dependencies and bloat ...
      This word, I don't think it means what you think it means.



      • #4
        Zstd's compression ratio is similar to DEFLATE
        This statement is somewhat misleading. While it is true that in the most common implementations, the default levels of deflate (gzip -6) and zstd (zstd -3) offer similar compression, zstd takes significantly less time to compress. The point where they take similar time is around zstd -9 or zstd -10 (when comparing to gzip -6), where zstd compresses much better (about halfway between gzip and xz in my experience).
        Last edited by archkde; 25 June 2022, 12:17 PM. Reason: An earlier version of the comment claimed that "zstd takes significantly less time to decompress". While this is also true, it's not particularly relevant to what I wanted to say.
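The level-vs-time tradeoff described above can be sketched with Python's stdlib zlib (DEFLATE). Comparing against zstd itself would need the third-party `zstandard` package, so only the DEFLATE side is shown here; the input data is made up for illustration.

```python
# Minimal sketch of the compression level vs. speed tradeoff using
# stdlib zlib (DEFLATE). Higher levels spend more time for smaller
# output; zstd offers the same knob with a much better speed curve.
import time
import zlib

# Synthetic, highly redundant input standing in for real debug sections.
data = b"LLVM debug sections tend to be highly redundant. " * 2000

for level in (1, 6, 9):
    start = time.perf_counter()
    compressed = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    assert zlib.decompress(compressed) == data  # round-trip sanity check
    print(f"level {level}: {len(compressed)} bytes in {elapsed:.4f}s")
```

Higher levels should never produce larger output on redundant input like this, which is the invariant the comment's gzip/zstd comparison relies on.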



        • #5
          Originally posted by intelfx View Post

          This word, I don't think it means what you think it means.
          It means exactly what he says:
          Raka555 has no clue about compression algorithms or how ANS differs from plain Huffman coding. That's why he can't distinguish Zstd from Deflate and why to him it all looks the same.



          • #6
            Originally posted by pkese View Post

            It means exactly what he says:
            Raka555 has no clue about compression algorithms or how ANS differs from plain Huffman coding. That's why he can't distinguish Zstd from Deflate and why to him it all looks the same.
            The difference between zstd and deflate is larger than Huffman vs ANS. It already starts at a faster LZ implementation in zstd (causing a speed gain), and the entropy coding is also subtly different even apart from Huffman vs ANS (causing better compression).
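One part of the Huffman-vs-ANS gap mentioned above can be illustrated with a toy calculation: Huffman coding (used by DEFLATE) assigns whole-bit code lengths, so its expected cost is bounded below by, and often noticeably above, the Shannon entropy that ANS-style coders can approach. The symbol distribution here is made up for illustration.

```python
# Toy comparison: expected Huffman code length vs. Shannon entropy for a
# skewed distribution. Huffman cannot go below one bit per symbol, while
# ANS/arithmetic-style coders can get close to the fractional entropy.
import heapq
import math

probs = {"a": 0.90, "b": 0.05, "c": 0.03, "d": 0.02}  # made-up distribution

def huffman_lengths(p):
    # Standard Huffman construction: repeatedly merge the two lightest
    # subtrees; each merge deepens every symbol in both subtrees by 1.
    heap = [(w, i, {s: 0}) for i, (s, w) in enumerate(p.items())]
    heapq.heapify(heap)
    counter = len(heap)  # unique tie-breaker so dicts are never compared
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)
        w2, _, t2 = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**t1, **t2}.items()}
        heapq.heappush(heap, (w1 + w2, counter, merged))
        counter += 1
    return heap[0][2]

lengths = huffman_lengths(probs)
huffman_bits = sum(probs[s] * lengths[s] for s in probs)
entropy_bits = -sum(w * math.log2(w) for w in probs.values())
print(f"Huffman: {huffman_bits:.3f} bits/symbol, entropy: {entropy_bits:.3f}")
```

For this distribution Huffman needs 1.15 bits per symbol against an entropy of about 0.62 bits, so an entropy coder that can spend fractional bits has real headroom, which is one ingredient in zstd's better ratios.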



            • #7
              Originally posted by pkese View Post

              It means exactly what he says:
              Raka555 has no clue about compression algorithms or how ANS differs from plain Huffman coding. That's why he can't distinguish Zstd from Deflate and why to him it all looks the same.
              My comment has nothing to do with zstd or compression.

              The problem is that next week they will want to add ${insert flavour of the day} feature again, all adding extra dependencies, complexity and resource usage.
              LLVM's slowness is not caused by the use of zlib vs zstd. I bet that even a hypothetical compressor/decompressor running in zero time would make no notable difference to the overall slowness of LLVM.
              They have much bigger problems to solve before adding more dependencies (that won't make much of a difference).
              Last edited by Raka555; 25 June 2022, 01:37 PM.



              • #8
                Originally posted by Raka555 View Post

                My comment has nothing to do with zstd or compression.

                The problem is that next week they will want to add ${insert flavour of the day} feature again, all adding extra dependencies, complexity and resource usage.
                LLVM's slowness is not caused by the use of zlib vs zstd. I bet that even a hypothetical compressor/decompressor running in zero time would make no notable difference to the overall slowness of LLVM.
                They have much bigger problems to solve before adding more dependencies (that won't make much of a difference).
                LLVM and GCC generate code of roughly equal performance, although LLVM consistently compiles faster and has a far cleaner LTO implementation. The LLVM codebase is smaller than GCC's; add to that, GCC is renowned for being a huge and unwieldy codebase.

                Slightly unclear on how LLVM is slow and bloated.



                • #9
                  Originally posted by scottishduck View Post

                  LLVM and GCC generate code of roughly equal performance, although LLVM consistently compiles faster and has a far cleaner LTO implementation. The LLVM codebase is smaller than GCC's; add to that, GCC is renowned for being a huge and unwieldy codebase.

                  Slightly unclear on how LLVM is slow and bloated.
                  Indeed.
                  In the discussion they provided performance measurements for compressing clang++ debug info: although the zstd output was only 7% smaller, zlib compression took 5.1 seconds while zstd took 1.6 seconds.
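A measurement like the one quoted could be reproduced with a small harness along these lines. This is only a sketch: the zstd side needs the third-party `zstandard` package and the real clang++ debug sections, so a synthetic byte string stands in and the zstd call is left as a comment.

```python
# Hedged sketch of a ratio/time measurement harness for comparing
# compressors. Any callable taking bytes and returning bytes can be
# plugged in; the zstd line below assumes the third-party `zstandard`
# package and is therefore commented out.
import time
import zlib

def measure(name, compress, data):
    start = time.perf_counter()
    out = compress(data)
    elapsed = time.perf_counter() - start
    ratio = len(out) / len(data)
    print(f"{name}: ratio {ratio:.3f}, time {elapsed:.3f}s")
    return ratio, elapsed

# Synthetic 1 MiB stand-in for real debug-info bytes.
sample = bytes(range(256)) * 4096

measure("zlib -6", lambda d: zlib.compress(d, 6), sample)
# measure("zstd -5", zstandard.ZstdCompressor(level=5).compress, sample)
```

On real debug info the interesting output is the pair of numbers per compressor, which is exactly the form of the 7% / 5.1s / 1.6s result quoted above.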



                  • #10
                    Originally posted by scottishduck View Post

                    LLVM and GCC generate code of roughly equal performance, although LLVM consistently compiles faster and has a far cleaner LTO implementation.

                    Slightly unclear on how LLVM is slow and bloated.
                    https://www.phoronix.com/scan.php?pa...l-Builds-Clang

                    Not so fast anymore. The better the code Clang generates, the slower it gets.

